Category Archives: Data Security

When It Comes to Cloud Data Protection, Defend Your Information Like a Guard Dog

 

These days, enterprises are increasingly running their business from the cloud. But the portion of your business that’s running in this environment presents numerous security challenges. When it comes to cloud data protection, it’s not just credit card numbers and personally identifiable information (PII) that need protecting, but also the data that represents the majority of your company’s value: your intellectual property. This includes your product designs, marketing strategy, financial plans and more. To add to the complexity, much of that data is stored in disparate repositories.

How do you know if you’re doing enough to protect the cloud-stored data that’s most crucial to your business? To keep malicious actors away from your cloud-bound crown jewels, you need the cybersecurity equivalent of a guard dog — one that knows when to bark, when to bite and when to grant access to those within its circle of trust.

Let’s take a closer look at some challenges related to protecting data in the cloud and outline key considerations when selecting a cloud security provider.

What to Do When Data Is Out of Your Hands

Data that’s stored in the cloud is inherently accessible to other people, including cloud service providers, via numerous endpoints, such as mobile devices and social media applications. You can no longer protect your sensitive data by simply locking down network access.

You need security against outside threats, but you also need it on the inside, all the way down to where the data resides. To address this, look for a provider that offers strong data encryption and data activity monitoring, inside and out.

Data Is Here, There and Everywhere

With the growth of mobile and cloud storage, data is here, there, in the cloud, on premises, and everywhere in between. Some of it is even likely stored in locations you don’t know about. Not only does everyone want access to data, they expect access to it at the click of a mouse. A complete cloud data protection solution should have the following:

  • Mature, proven analytical tools that can scan your environment to automatically discover data sources, classify the critical, sensitive and regulated data they contain, and intelligently surface risks and suspicious behavior.
  • Protection with monitoring across all activity, both network and local, especially the actions of privileged users with access to your most sensitive data. Of course, you should also protect data with strong encryption.
  • Adaptability to your changing and expanding environment, with a security solution that can support hybrid environments and seamlessly adjust to alterations in your IT landscape.

How to Gain Visibility Into Risks and Vulnerabilities

Detecting risks of both internal and external attacks is more challenging as data repositories become more virtualized. Common vulnerabilities include missing patches, misconfigurations and exploitable default system settings.

Best practices suggest authorizing both privileged and ordinary end users according to the principle of least privilege to minimize abuse and errors. A robust cloud data protection solution can help secure your cloud and hybrid cloud infrastructure with monitoring and assessment tools that reveal anomalies and vulnerabilities.

Choose the Right Data-Centric Methodology

A data-centric methodology should go hand in hand with the solutions outlined above to support cloud data protection. Make sure your data security solution can do the following:

  • Automatically and continuously discover data sources that you may not have realized existed, and classify the data in them so you understand where you have sensitive, regulated and high-risk data (a minimal discovery sketch follows this list).
  • Harden data sources and data. For data sources, that means understanding what vulnerabilities exist and who has access to data based on entitlement reports. For the data itself, your solution should let you set policies governing who has access and when access should be blocked, quarantined, or allowed with the data masked.
  • Monitor all users, especially privileged users, to be able to prove to auditors that they are not jeopardizing the integrity of your data.
  • Proactively protect with blocking, quarantining and masking, as well as threat analytics that cover all data sources and use machine learning. Threat analytics can help you understand which activities represent normal, everyday business and which are suspect or anomalous — information that humans can’t possibly uncover on a large scale.
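To make the discovery and classification step concrete, here is a minimal sketch of the idea, assuming a SQLite database. The file name, table layout and regular expressions are hypothetical, and a production tool would cover far more data types and sources:

```python
import re
import sqlite3

# Hypothetical patterns for a few common classes of regulated data.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify_database(path, sample_rows=100):
    """Scan every text column of a SQLite database and report which
    columns appear to hold sensitive, regulated data."""
    findings = []
    conn = sqlite3.connect(path)
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for table in tables:
        columns = [r[1] for r in conn.execute(f"PRAGMA table_info({table})")]
        for column in columns:
            rows = conn.execute(
                f"SELECT {column} FROM {table} LIMIT ?", (sample_rows,)).fetchall()
            for (value,) in rows:
                if not isinstance(value, str):
                    continue
                for label, pattern in PATTERNS.items():
                    if pattern.search(value):
                        findings.append((table, column, label))
                        break
    conn.close()
    return sorted(set(findings))

if __name__ == "__main__":
    # "crm.db" is a placeholder for any non-production database copy.
    for table, column, label in classify_database("crm.db"):
        print(f"{table}.{column}: possible {label}")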

Find a Guard Dog for Your Cloud Data Protection

If your organization is just starting out with data protection, consider a software-as-a-service (SaaS) risk analysis solution that can enable you to quickly get started on the first two steps outlined above. By starting with a solution that supports discovery, classification and vulnerability assessments of both on-premises and cloud-based data sources, you can make demonstrable progress with minimal time and technology investment. Once you have that baseline, you can then start investigating more comprehensive data activity monitoring, protection and encryption technologies for your cloud-bound data.


Microsoft and Imperva Collaboration Bolsters Data Compliance and Security Capabilities

This article explains how Imperva SecureSphere V13.2 has leveraged the latest Microsoft EventHub enhancements to help customers maintain compliance and security controls as regulated or sensitive data is migrated to Azure SQL database instances.

Database as a Service Benefits

Platform as a Service (PaaS) database offerings such as Azure SQL are rapidly becoming a popular option for organizations deploying databases in the cloud.  One of the benefits of Azure SQL, which is essentially a Relational Database as a Service (RDaaS), is that all of the database infrastructure administrative tasks and maintenance are taken care of by Microsoft – and this is proving to be a very compelling value proposition to many Imperva customers.

Security is a Shared Service

What you should remember with any data migration to a cloud service is that while hardware and software platform maintenance is no longer your burden, you still retain responsibility for security and regulatory compliance. Cloud vendors generally implement their services under a shared security model, which Microsoft explains in its Shared Responsibilities for Cloud Computing whitepaper.

To paraphrase in the extreme, Microsoft takes responsibility for the security of the cloud, while customers have responsibility for security in the cloud.   This means Microsoft provides the services and tools (such as firewalls) to secure the infrastructure (such as networking and compute machines), while you are responsible for application and database security.   Though this discussion is about how it works with Azure SQL, the table below from the Microsoft paper referenced above shows the shared responsibility progression across all of their cloud offerings.

Figure 1:  Shared responsibility model from the Microsoft Whitepaper Shared Responsibilities for Cloud Computing

Brief Description of How Continuous Azure SQL Monitoring Works

SecureSphere applies multiple services in the oversight of data hosted by Azure SQL. These services include, but are not limited to, the following:

  • Database vulnerability assessment
  • Sensitive data discovery and classification
  • User activity monitoring and audit data consolidation
  • Audit data analytics
  • Reporting

The vulnerability assessment and data discovery are performed by scanning engines that access database interfaces through a service account. Activity monitoring is handled by a policy engine that is pre-populated with rules for common compliance and security requirements, such as separation of duties, and is fully customizable for company- or industry-specific requirements.

With Azure SQL, SecureSphere monitoring and audit activity leverages the Microsoft EventHub service.  Recent enhancements to EventHub, on which Microsoft and Imperva collaborated, provide a streaming interface to database log records that Imperva SecureSphere ingests, analyzes with its policy engine (and other advanced user behavior analytics), and then takes appropriate action to prioritize, flag, notify, or alert security analysts or database administrators about the issues.

Figure 2:  Database monitoring event flow for a critical security alert
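As a rough illustration of what consuming such a stream can look like, the sketch below reads Azure SQL audit events from an Event Hub using the azure-eventhub Python package and applies one toy policy rule. The connection string, hub name, audit-record field names and rule are placeholders and assumptions, not SecureSphere's actual implementation or schema:

```python
import json
from azure.eventhub import EventHubConsumerClient  # pip install azure-eventhub

# Placeholders: Azure SQL auditing must be configured to stream its audit
# log to this Event Hub; the names and keys below are not real.
CONN_STR = "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<name>;SharedAccessKey=<key>"
EVENTHUB_NAME = "sqldbauditevents"

# A toy policy rule: flag statements issued by privileged accounts against
# tables known to hold regulated data.
PRIVILEGED_USERS = {"sa", "dbadmin"}
SENSITIVE_TABLES = ("dbo.Customers", "dbo.Payments")

def on_event(partition_context, event):
    payload = json.loads(event.body_as_str())
    # Audit events may arrive wrapped in a "records" array; the field names
    # below are assumptions about the audit record shape.
    for record in payload.get("records", [payload]):
        props = record.get("properties", record)
        user = props.get("server_principal_name", "")
        statement = props.get("statement", "")
        if user in PRIVILEGED_USERS and any(t in statement for t in SENSITIVE_TABLES):
            print(f"ALERT: privileged user {user} touched sensitive data: {statement!r}")

client = EventHubConsumerClient.from_connection_string(
    CONN_STR, consumer_group="$Default", eventhub_name=EVENTHUB_NAME)

with client:
    # Read each partition from the beginning; a real deployment would checkpoint.
    client.receive(on_event=on_event, starting_position="-1")
```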

Benefits of Imperva SecureSphere for Azure SQL Customers

A key benefit of a solution such as SecureSphere Database Activity Monitoring (DAM) is that it integrates the oversight of Azure SQL into a broad oversight lifecycle spanning all enterprise databases.  With SecureSphere, here are some things you can do to ensure the security of your data in the cloud:

  • Secure Hybrid enterprise database environments: While many organizations now pursue a “cloud first” policy of locating new applications in the cloud, few are in a position to move all existing databases out of the data center, so they usually maintain a hybrid database estate – which SecureSphere easily supports.
  • Continuously monitor cloud database services: You can migrate data to the cloud without losing visibility and control. SecureSphere covers dozens of on-premises relational database types, mainframe databases, and big data platforms.  It supports Azure SQL and other RDaaS too – enabling you to always know who is accessing your data and what they are doing with it.
  • Standardize and automate security, risk management, and compliance practices: SecureSphere implements a common policy for oversight and security across all on-premises and cloud databases.  If SecureSphere detects that a serious policy violation has occurred, such as unauthorized user activity,  it can immediately alert you.  All database log records are consolidated and made available to a central management console to streamline audit discovery and produce detailed reports for regulations such as SOX, PCI DSS and more.
  • Continuously assess database vulnerabilities: SecureSphere Discovery and Assessment streamlines vulnerability assessment at the data layer. It provides a comprehensive list of over 1500 tests and assessment policies for scanning platform, software, and configuration vulnerabilities. The vulnerability assessment process, which can be fully customized, uses industry best practices such as DISA STIG and CIS benchmarks.

It’s critically important that organizations extend traditional database compliance and security controls as they migrate data to new database architectures such as Azure SQL. Imperva SecureSphere V13.2 provides a platform to incorporate oversight of Azure SQL instances into broad enterprise compliance and security processes spanning both cloud and on-premises data assets.

Facebook Increases Security For Political Campaign Staff

Facebook is introducing new security tools for political campaign staff, concerned about dirty tricks in the run-up to the mid-term elections. On his personal Facebook page, CEO Mark Zuckerberg admitted


Explainer Series: RDaaS Security and Managing Compliance Through Database Audit and Monitoring Controls

As organizations move to cloud database platforms they shouldn’t forget that data security and compliance requirements remain an obligation. This article explains how you can apply database audit and monitoring controls using Imperva SecureSphere V13.2 when migrating to database as a service cloud offering.

Introduction to RDaaS

A Relational Database as a Service (RDaaS) provides the equipment, software, and infrastructure needed for businesses to run their database in a vendor’s cloud, rather than putting something together in-house. Examples of RDaaS include Amazon Relational Database Service (RDS) and Microsoft Azure SQL.

Benefits of RDaaS adoption

The advantages of RDaaS adoption can be fairly substantial. Here are just a few of the benefits:

  • Allows you to preserve capital rather than using it for equipment or software licenses and convert IT costs to an operating expense
  • Requires no additional IT staff to maintain the database system
  • Resiliency and dependability are guaranteed by the cloud provider

Who is responsible for cloud-based DB security?

From a high-altitude viewpoint, cloud security is based on a model of “shared responsibility” in which the concern for security maps to the degree of control any given actor has over the architecture stack.  Using Amazon’s policy as an example, Amazon states that AWS has “responsibility for the security of the cloud,” while customers have “responsibility for security in the cloud.”

What does that mean for you?  It means cloud vendors provide the tools and services to secure the infrastructure (such as networking and compute machines), while you are responsible for things like application or database security. For example, cloud vendors help to restrict access to the compute instances on which a database is deployed (by using security groups/firewalls and other methods); but they don’t restrict who among your users has access to what data.

The onus is on you to establish security measures that allow only authorized users to access your cloud database, just as with a database in your own on-premises data center, and to control what data they can access. Securing data and ensuring compliance in on-premises data centers is typically done through database activity monitoring, and fortunately, similar measures can be deployed in the public cloud as well.
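As a small, hedged example of the "authorized users only" half of that responsibility, the sketch below creates a contained database user on an Azure SQL database and grants it read-only access to a single schema via pyodbc. The server, credentials, user and schema names are placeholders, ODBC Driver 18 is assumed, and real deployments would more likely use Azure AD identities than passwords:

```python
import pyodbc  # pip install pyodbc

# Placeholder connection details for an Azure SQL database.
CONN_STR = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:<server>.database.windows.net,1433;"
    "Database=reporting;Uid=<admin>;Pwd=<password>;Encrypt=yes;"
)

# Least privilege: the reporting analyst gets read access to one schema
# and nothing else; no server-level or write permissions are granted.
STATEMENTS = [
    "CREATE USER report_analyst WITH PASSWORD = '<strong password>';",
    "GRANT SELECT ON SCHEMA::reporting TO report_analyst;",
]

with pyodbc.connect(CONN_STR, autocommit=True) as conn:
    cursor = conn.cursor()
    for sql in STATEMENTS:
        cursor.execute(sql)
```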

How Imperva SecureSphere ensures compliance and security in the cloud

The benefit that a solution such as Imperva SecureSphere Database Activity Monitoring (DAM) provides is integrating the oversight of an RDaaS into a standardized methodology across all enterprise databases.  With SecureSphere, here are some things you can do to ensure the security of your data in the cloud:

Monitor cloud database services

Migrate data to the cloud without losing visibility and control. SecureSphere is a proven, highly scalable system that covers dozens of on-premises relational database types, mainframe databases, and big data platforms.  It has been extended to support Amazon RDS and Azure SQL RDaaS databases too. SecureSphere enables you to always know who is accessing your data and what they are doing with it.

Unify monitoring policy

Implement a common security and compliance policy for consistent oversight and security across all on-premises and cloud databases. SecureSphere uses the policy to continuously assess threats and observe database user activity – and detects when the policy is violated – alerting you of critical events such as risky user behavior or unauthorized database access.

Automate compliance auditing processes

Demonstrate proof of compliance and simplify audits by consolidating audit log collection and reporting across all monitored assets. SecureSphere makes all the log data available to a central management console to streamline audit discovery and produce detailed reports for regulations such as SOX, PCI DSS and more.

Assess vulnerabilities and detect exposed databases

SecureSphere Discovery and Assessment streamlines vulnerability assessment at the data layer. It provides a comprehensive list of over 1500 tests and assessment policies for scanning platform, software, and configuration vulnerabilities. Assessment policies are available for Amazon RDS Oracle and PostgreSQL RDaaS as well as Microsoft Azure SQL, with more to come. The vulnerability assessment process, which can be fully customized, uses industry best practices such as DISA STIG and CIS benchmarks.

Support Hybrid Clouds

While many organizations now pursue a “cloud first” policy of locating new applications in the cloud, few are in a position to move all existing databases out of the data center, so they usually must maintain a hybrid database estate – which SecureSphere gracefully supports.

For some customers, it may be worth deploying SecureSphere on the RDaaS vendor’s infrastructure when monitoring large databases, to optimize for cost and performance. SecureSphere is available as vendor-appropriate virtual instances for both AWS and Azure, deployable individually or in high-availability (HA) configurations.

There is a critical need for visibility across an organization’s entire application and data infrastructure, no matter where it is located. Imperva SecureSphere provides a platform to incorporate oversight of RDaaS instances into a broad enterprise compliance and security lifecycle process.

Learn more about how Imperva solutions can help you ensure the safety of your database and enterprise-wide data.

Reconciling Trust With Security: A Closer Look at Cyber Deception With DcyFS

This article is the second in a three-part series that provides a technical overview of Decoy File System (DcyFS). This original research was recently showcased in a paper titled “Hidden in Plain Sight: Filesystem View for Data Integrity and Deception,” which appeared at the 15th Conference on Detection of Intrusions and Malware & Vulnerability Assessment (DIMVA) in Paris in June 2018.

Our previous blog post introduced the concepts underpinning the overall design of Decoy File System (DcyFS) as part of using cyber deception tactics to protect against attacks on networked environments. Central to its security and deceptive capabilities is DcyFS’s ability to modulate subject trust through a hierarchical file system organization that explicitly encodes trust relations between different execution contexts.

The core principle of DcyFS’s trust model is that of least privilege, which means that legitimate subjects only require access to directories, files and file types relevant to their work and do not need to know about other files on the system. In this post, we detail how a trust model based on this principle is built into DcyFS’s architecture and describe its effects on process execution.

DcyFS’s Architecture

The core component of DcyFS is a set of security domains that provides each process with a customized view of the file system computed as the union of the base file system and its overlay (see Figure 1).


Figure 1: Architectural overview of DcyFS

To alter the resulting union between layers, each overlay has the ability to:

  1. Hide base files;
  2. Modify their content by overlaying a different file with the same name; and
  3. Inject new files into the overlay that are not present in the original host system.

File writes are stored in the overlay, protecting base files from being overwritten. This forms the basis of a stackable file system that can be mounted atop different base file system types (e.g., block, network) to offer data integrity protection and detection of attacks that aim to tamper with or steal data.

To separate file system views, DcyFS transparently combines two file systems, which we term the “base” file system and the “overlay” file system. The base file system is the main host file system and is read-only, while the overlay is a read-write file system that can control what is visible to a running process.

When a file with the same name appears in both file systems, the one in the overlay is visible to the process. When a directory appears in both file systems, both directories’ contents are merged in the process view. To hide a base file or directory, DcyFS marks it as deleted in the overlay by injecting a character device at the corresponding path.

Decoy files are similarly placed in carefully chosen locations inside the overlay mount, and existing files can further be replaced or redacted for cyber deception.
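These union semantics closely mirror Linux's stock overlayfs, which is a convenient way to see the hide/modify/inject behavior in action. The sketch below is not DcyFS itself, just a root-only illustration using standard kernel facilities; the paths and decoy content are hypothetical:

```python
import os
import subprocess

BASE = "/"                      # read-only lower layer (the host file system)
UPPER = "/var/dcyfs/upper"      # per-domain writable layer
WORK = "/var/dcyfs/work"        # overlayfs scratch directory
VIEW = "/var/dcyfs/view"        # merged view presented to the process

for d in (UPPER, WORK, VIEW, f"{UPPER}/etc", f"{UPPER}/home/user"):
    os.makedirs(d, exist_ok=True)

# Hide a base file: overlayfs treats a 0/0 character device in the upper
# layer (a "whiteout") as a deletion of the matching lower-layer file.
subprocess.run(["mknod", f"{UPPER}/etc/secret.conf", "c", "0", "0"], check=True)

# Inject a decoy that does not exist on the host at all.
with open(f"{UPPER}/home/user/passwords.txt", "w") as decoy:
    decoy.write("bait credentials - any access to this file can be logged\n")

# Build the union view: upper files shadow base files, directories are
# merged, and every write lands in the upper layer (copy-on-write).
subprocess.run(
    ["mount", "-t", "overlay", "overlay",
     "-o", f"lowerdir={BASE},upperdir={UPPER},workdir={WORK}", VIEW],
    check=True)
```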

Creating Security Domains to Isolate Views

To implement the separation between the base layer and its overlays, DcyFS creates persistent and reusable security domains to transparently isolate file system views. Security domains enforce coherent views of the file system and form the basis for defining DcyFS’s trust model. Each security domain has its own profile, which contains the list of files and directories that are viewable within that domain. These include files that are deleted, replaced or injected in the domain view.

The Trust Model

DcyFS’s file system view isolation is policy-driven, defined by associations between mount namespaces, file system objects and users with security domains. Similar to data classification models, each security domain sd ∈ (Γ, ≤) is assigned a rank denoting its level of trust relative to the other domains. Security domains therefore comprise a partially ordered lattice (Γ) ordered by trust scores (≤), with the untrusted domain (sd_unt) at the bottom denoting untrusted execution and the root domain (sd_root) at the top denoting trusted execution. The meet operation ⊓ denotes the greatest lower bound, which is used to determine the proper domain of execution for new programs. DcyFS uses this model to determine in which security domain to execute new processes.

This decision point extends the semantics of the kernel’s exec(filename, args) function to compute the following parameters:

  • The target execution domain, computed as sd_filename ⊓ sd_args ⊓ sd_user ⊓ sd_ns.
  • sd_args: the meet of the security domains of filename and args, computed across all arguments denoting file paths.
  • sd_user: the meet of the set of security domains associated with the user.
  • sd_ns: the parent process’ security domain, denoted by the current mount namespace.

Including sd_ns in the security domain determination of a newly launched process limits its execution to its parent process’ security domain, thus preventing lower-ranking domains from accidentally or maliciously spawning child processes in higher-ranking domains. In our research implementation, this property is seamlessly encoded in the security domains’ mount namespace hierarchy.

To illustrate, Figure 2 describes a simple security domain setup for a client desktop. It includes domains to separate internet-facing applications (sd_browser), word processing tools (sd_docs) and programming environments for scripted languages (sd_scripts).

In this context, a web browser running in sd_browser may download a PDF document from the internet, which gets stored in the browser domain. To visualize its contents, a trusted user (sd_root) opens the file in a PDF viewer (sd_docs). As a result, DcyFS executes the viewer in the browser domain — the greatest lower bound of the domains involved in the security domain determination — so that the potentially malicious PDF file has no access to the user’s documents (kept separated in sd_docs).

Similarly, if a process running in sd_scripts spawns a second process not authorized to execute in the scripts domain, DcyFS moves the subprocess task to the untrusted domain (sd_unt). This is to protect against attacks where a trusted process (e.g., Bash) is exploited to install and launch untrusted malware. The rule also prevents malware from gaining entry to another security domain by running trusted applications.


Figure 2: Security domains lattice example
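A toy model makes the meet computation easy to see. In the sketch below, the partial order and domain names are illustrative rather than the paper's exact lattice, with the browser domain placed below the docs domain so the PDF example above plays out as described:

```python
# Toy partial order: each domain maps to its "downset", i.e. every domain
# that is less than or equal to it. The ordering here is illustrative.
DOWNSET = {
    "untrusted": {"untrusted"},
    "browser":   {"untrusted", "browser"},
    "scripts":   {"untrusted", "scripts"},
    "docs":      {"untrusted", "browser", "docs"},
    "root":      {"untrusted", "browser", "scripts", "docs", "root"},
}

def meet(*domains):
    """Greatest lower bound: the largest domain below every argument."""
    common = set.intersection(*(DOWNSET[d] for d in domains))
    # The common downset is a chain in this toy lattice, so the element
    # with the largest downset is its maximum.
    return max(common, key=lambda d: len(DOWNSET[d]))

# The PDF example: a trusted user opens a browser-domain PDF in a viewer
# associated with the docs domain; execution lands in the browser domain.
print(meet("docs",       # sd_filename: the viewer's domain
           "browser",    # sd_args: the PDF lives in the browser domain
           "root",       # sd_user: a trusted user
           "root"))      # sd_ns: the parent process' (trusted) namespace
# -> browser
```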

Root Domain

The root domain is a special mount namespace that fuses together a writable base file system mount with all the read-only overlay file system mounts from the other domains into a single, unified view. This enhances usability by overcoming merging issues that arise from DcyFS’s ability to separate file system views.

The root domain is reserved for a few special programs, such as a file browser, terminal and file copying tools. Object collisions, which occur when multiple overlays share the same fully qualified object path names, are handled by stacking overlays according to the trust order of each domain.

Since the file system is a combination of the base file system and the overlays of the other domains, the file browser can transparently open files and launch applications in their native security domains to help protect the integrity of the root domain. Furthermore, specialized copying tools allow files to be copied or moved between domains as desired.

Blinding Attackers With File System Opacity

DcyFS leverages its overlay infrastructure to conceal its existence from attackers and curtail access to explicit information about its kernel modules, configuration objects and overlay mounts. This is achieved by bootstrapping the file system with configuration rules that hide and redact specific file system objects.

For example, /proc/mounts, /proc/self/mount* and /etc/mtab are redacted to conceal overlay mount point information and bind mounts into the overlays. As a result, DcyFS’s kernel live patch and kernel module are hidden from file system views. Similarly, the file system hides its configuration, usermode helper components (e.g., decoy generation, configuration parsing, forensics data extraction) and the working directory where overlays and logs persist in the base file system.

File System Denial and Cyber Deception

DcyFS provides data integrity by strictly enforcing a policy that all writes are made to the overlay layer and never to the underlying base. Writes to base files are first copied up to the overlay layer using copy-on-write. This has the desirable effect of preserving the integrity of the base file system: changes made by untrusted processes do not affect the base, which protects legitimate users from seeing malicious changes and effectively keeps a pristine copy of the file system that can be restored to the point immediately before the malicious process started.

DcyFS can hide specific files and directories from a user or a process to help protect against sensitive data leaks. Additionally, the file system can generate encrypted files and implant decoys in the overlay to shadow sensitive files in the base file system. DcyFS transparently monitors and logs access to files classified as more sensitive, confidential or valuable. Moreover, only the untrusted process is affected by hidden and decoy files, leaving legitimate users free of any effects or confusion.

It is worth noting that trusted processes can also benefit from security domains. DcyFS can launch a trusted process atop an overlay to hide unnecessary files and directories or inject decoys to catch potential insiders. Furthermore, certain directories can be bind mounted from the base file system to give trusted processes the ability to directly view and modify them. For example, we might run a database server, providing it with a fake overlay view of the entire file system, but giving it direct write access to the directories in which it writes data. As a result, if the database application is compromised, damage is limited to the data directory only.

A Look Forward

We envision security domains being configured using standard operating system policies, similar to how SELinux policies are shipped with Linux, to mitigate potential security weak points that can result from manual configuration. Default policies could also be attached to software installed from app stores, or repositories such as Linux’s package managers. In the future, we plan to investigate ways to automate this process through the application of different notions of trust (e.g., policy-, reputation-, and game-theoretic-based).

Finally, a word about portability. Our initial implementation was developed for Linux to leverage its virtual file system capabilities and mature mount namespace implementation. Recently, Windows Server 2016 has been released with native namespace support and an overlay file system driver mirroring its open-source counterpart. This new release could facilitate the future realization of DcyFS’s architectural blueprint for Windows-based environments.


Why Are Businesses Ignoring Security Threats?

A survey compiled at the RSA security conference shows that many businesses are behind on proper security standards. Some companies are completely ignoring security threats due to lack of


What Are the Risks and Rewards Associated With AI in Healthcare?

The emergence of artificial intelligence (AI) in healthcare is enabling organizations to improve the customer experience and protect patient data from the raging storm of cyberthreats targeting the sector. However, since the primary goal of the healthcare industry is to treat ailing patients and address their medical concerns, cybersecurity is too often treated as an afterthought.

A recent study from West Monroe Partners found that 58 percent of parties that purchased a healthcare company discovered a cybersecurity problem after the deal was done. This may be due to a lack of personnel with in-depth knowledge of security issues. As AI emerges in the sector, healthcare professionals who misuse these technologies risk unintentionally exposing patient data and subjecting their organizations to hefty fines.

What’s Driving the AI Arms Race in Healthcare?

According to Wael Abdel Aal, CEO of telemedicine provider Tele-Med International, healthcare organizations should take advantage of AI to address two critical cybersecurity issues: greater visibility and improved implementation. Abdel Aal’s background includes 21 years as a leading cardiologist, which enables him to understand AI’s impact on healthcare from a provider’s perspective.

“Although AI security systems perform sophisticated protection algorithms, better AI systems are being developed to perform more sophisticated hacks,” he said. “The computer security environment is in a continuous race between offense and defense.”

According to Abdel Aal, the ongoing transformation in the healthcare industry depends not only on AI, but also other game-changing technologies, such as electronic medical records (EMR), online portals, wearable sensors, apps, the Internet of Things (IoT), smartphones, and augmented reality (AR) and virtual reality (VR).

“The combination of these technologies will bring us closer to modern healthcare,” he said. Abdel Aal went on to reference several potential points at which a cybersecurity breach can occur, including remote access to wearables and apps owned by the patient, connectivity with telecom, health provider access, and AI hosting.

“The potential value that these technologies will bring to healthcare is at balance with the potential security hazard it presents to individuals and societies,” Abdel Aal explained. “The laws need continuous and fast updating to keep up with AI and the evolving legal questions of privacy, liability and regulation.”

As innovative technologies proliferate within healthcare systems, cyberattacks and cybercrime targeting healthcare providers are correspondingly on the rise. In May 2017, for example, the notorious WannaCry ransomware infected more than 200,000 victims in 150 countries. In January 2018, a healthcare organization based in Indiana was forced to pay $55,000 to cybercriminals to unlock 1,400 files of patient data, as reported by ZDNet.

In these cases, it was faster and more cost-effective for the hospital to pay the (relatively) small ransom than it would have been to undergo a complex procedure to restore the files. Unfortunately, paying the ransom only encourages threat actors. Ransomware is just the beginning; as malicious AI advances, attacks will only become more devastating.

Why Mutual Education Is Critical to Secure AI in Healthcare

So how can security leaders educate physicians and other healthcare employees to handle these new tools properly and avoid compromising patients’ privacy? Abdel Aal believes the answer is bidirectional education.

“Security leaders need to understand and experience the operational daily workflow protocols performed by individual healthcare providers,” he said. “Accordingly, they need to educate personnel and identify the most vulnerable entry points for threats and secure them.”

While the utilization of AI in healthcare is indeed on the rise and is dramatically changing the industry, according to Abdel Aal, the technology driving it hasn’t evolved as fast as it could. One of the most significant hurdles for the industry to overcome is employees’ overall aversion to new technology.

“Adoption of new technology was and always is a major deterrent, be that CT, MRI or, presently, AI,” he said. “Providers, whether doctors, nurses, technicians and others, usually see new technology as a threat to their job market. They identify with the benefits but would rather stay within their comfort zone.”

Abdel Aal also pointed to legal and regulatory factors as stumbling blocks that might prompt confusion about managing progress.

Thankfully, the American Medical Association (AMA) is prepared to address these changes. According to its recently approved AI policy statement, the association will support the development of healthcare AI solutions that safeguard patient privacy rights and preserve the security and integrity of personal information. The policy states that, among other things, the AMA will actively promote engagement with AI healthcare analytics while exploring their expanding possibilities and educating patients and healthcare providers.

Patient wellness will always be the first priority in healthcare, and this is not lost on threat actors. Just like any other industry, it is increasingly imperative for leaders to understand the progressive intertwining of their primary goals with cybersecurity practices and respond accordingly.


Taking Stock: The Internet of Things, and Machine Learning Algorithms at War

It’s in the news every day: hackers targeting banks, hospitals or, as we’ve come to fear the most, elections.

Suffice to say then that cybersecurity has, in the last few years, gone from a relatively obscure industry – let’s qualify that: not in the sense of importance, but rather how folks have been interacting with it – to one at the forefront of global efforts to protect our data and applications.

 A decade ago, cybersecurity researchers were almost as caliginous as the hackers they were trying to defend folks against, and despite the lack of fanfare, some people still chose it as a career (*gasp).

We spoke to one of our whizz-kids, Gilad Yehudai, to find out what makes him tick and why, of all the possible fields in tech, he chose cybersecurity at a time when it might not have been the sexiest of industries.

Protecting data and applications, a different beast altogether

One of the major challenges facing the industry is the ability to attract new talent; especially when competing against companies that occupy the public sphere from the moment our alarm wakes us up to the moment we lay our phones to rest. Gilad, who has a master’s degree in mathematics and forms part of our team in Israel, offers a pretty interesting perspective,

“The world of cybersecurity is a fascinating one from my point of view, especially when trying to solve machine learning problems related to it. Cybersecurity is adversarial in nature, where hackers try to understand security mechanisms and how to bypass them. Developing algorithms in such environments is much more challenging than algorithms where the data doesn’t try to fool you.”

Never a dull moment

Additionally, our industry is one in flux, as more threats and vulnerabilities are introduced, and hackers find new ways to bypass security mechanisms. The latter was a pretty big draw for Gilad, whose experience in mathematics and serving in the Israeli Army’s cyber defense department made him a great candidate for the Imperva threat research team.

“The research group at Imperva seemed like the perfect fit, as large parts of my day to day job is to develop machine learning algorithms in the domain of cybersecurity, and the data I use is mostly attacks on web applications.”

Speaking of attacks, Gilad and the rest of our research team sure have their hands full.

“In my opinion, the Internet of Things (IoT) security is one of the biggest challenges out there. More and more devices are connected to the internet every day and these devices may be put to malicious use. Hackers may enlist these devices to their botnet in order to launch attacks like DDoS, ATO (account takeover), comment spam and much more.”

Worse still, our growing network of ‘micro-computers’ (smartphones, tablets etc.) could be manipulated and their computational power used to mine cryptocurrencies.

“Protecting these devices the same way we protect endpoint PCs will be one of the biggest challenges.”

Change brings new challenges, and opportunities

On the topic of change, the cybersecurity industry, according to Gilad, is headed increasingly towards machine learning and automation, which serves us well.

“If in the past most security mechanisms were based on hard-coded rules written by security experts, today more and more products are based on rules that are created automatically using artificial intelligent algorithms. These mechanisms can be much more dynamic and adapt better to the ever-changing world of cybersecurity.”

That said, the more the industry relies on machine learning algorithms for defense, the higher the likelihood that hackers will look to manipulate those same algorithms for their own purposes.

“Hackers may try to create adversarial examples to fool machine learning algorithms. Securing algorithms will require more effort, effort that will intensify as these algorithms are used in more sensitive processes. For example, facial recognition algorithms that authorize access to a specific location may be fooled by hackers using an adversarial example in order to gain access to an unauthorized location.”

While the cyber threat landscape continues to evolve, and the bad actors looking to nick our data and compromise our applications get increasingly creative, it’s good to know that there are experts whose sole purpose it is to ‘fight the good fight’, so to speak.

“Research is a bit like walking in the dark: you don’t know in which direction to go next, and you never know what you are going to find. Sometimes you begin to research in one direction, and in the process you find a completely different direction which you hadn’t even thought about at the beginning. Research is not for everybody, but I get really excited about it.”

How Can Media Companies Be More Confident in Their Cybersecurity Strategy and Policy?

While many industries have matured their cybersecurity strategy and policy as the digital landscape has evolved, others — such as media companies — remain unsure how to advance.

With more consumers relying on the internet for their entertainment and information consumption, media enterprises are tasked with providing a flawless user experience and continuous content delivery. But the industry is prey to a growing number of predators. As a result, a recent Akamai study titled “The State of Media Security” found that only 1 percent of media companies are “very confident” with their cybersecurity efforts.

What Challenges Do Media Companies Face?

The threat of a distributed denial-of-service (DDoS) attack, which could slow services or result in downtime, is only one of the many security challenges media companies face. Also of concern is the potential for malicious actors to steal content or breach systems and access customer networks.

“It’s not surprising that media companies aren’t confident about their security levels,” said Elad Shapira, head of research at Panorays. “They are an ongoing target, whether by political activists or nation states … Then there are those hackers just trying to leverage their skills to make money from the content they steal.”

SQL injections, Domain Name System (DNS) attacks, content pirating and DDoS attacks are among the greatest threats to the media industry. The dynamic nature of the digital ecosystem, where digital partners can change by the day, enables bad actors to optimize the reach of their malicious campaigns.

“Media organizations in particular should be afraid of their heavily trafficked digital assets, which not only serve as touch points to prospects and customers, but also provide entry points to bad actors,” said Chris Olson, CEO of The Media Trust. “These miscreants often target third-party code providers and digital advertising partners, who tend to have weaker security measures in place.”

In the past, security discussions at media companies focused largely on piracy, said Shane Keats, director of global industry strategy, media and entertainment at Akamai. It’s now incumbent upon media companies to recognize that security has extended far beyond digital rights management.

Why Do Cybercriminals Target Media Companies?

Cybercriminals rarely discriminate when it comes to their targets — which means that in the eyes of a criminal, media companies look an awful lot like retailers and banks.

“With the rise of subscription-based monetization, media companies are now increasingly capturing personally identifiable information (PII) and payment card information (PCI) that [looks] no different from the PII and PCI captured by an e-commerce company,” said Keats. “Successfully stealing a streaming video on demand (SVOD) customer database with a million customer records yields the same ROI as one stolen from an online retailer.”

Whether protecting against credential stuffing by malicious bots or careless contractors in the vendor landscape, media companies need to practice good security hygiene and be wary of the security practices of partners who have access to their customers’ networks. As has been the case in so many major breaches, all an attacker has to do is compromise one of those partners to gain access behind the firewall and steal content, customer data and executive communications.

How Can Media Companies Improve Cybersecurity Strategy and Policy?

In addition to acquiring a reputable cloud security firm to help investigate the attack surfaces exposing their businesses, media companies also need to ensure that they have solutions to protect each of those points.

“Find a firm that has enough scale to be able to see a ton of threats, both traditional and emerging, and ask the firm to help you understand how to best secure your apps and architecture beyond buzzwords,” Keats advised. “When you do this information session, get your different stakeholders in the room so that you can look at your security posture as a team. This is not the time for turf wars.”

By taking the following steps, media companies can enhance their security strategy and feel more confident that they are protected against current and emerging threats:

  • Discover and prioritize impacts of assets. Not all assets are created equal. An online release of a video prior to its debut screening may create reputational and financial damage to a company, but the credit card details of subscribers are under regulatory control. Each company needs to consider its assets and how they impact the business.

  • Collaborate with direct and indirect third parties. Websites have an average of 140 third parties who execute anywhere from 50 to 95 percent of their code. Most website owners only know, at most, half of the third parties with whom they do business.

  • Vet third parties. Media companies should ask their third and downstream parties the hard questions about security and follow up with frequent audits of security measures. Companies should enforce their digital policies through service-level agreements (SLAs) and contract clauses.

  • Place safety measures around these assets. Safety measures should span various levels, including networks and IT to prevent a DDoS attack, as well as on applications to avoid account breaches. Consider the human element to prevent disgruntled employees from exposing sensitive and proprietary data. Media companies should continuously scan assets in real time to identify and terminate any threats.

  • Create an incident response plan. This is not just a technological approach, but a step that must involve various teams and processes. In case of an attack against the company, there should be an advanced, detailed and well-rehearsed plan to respond.

A data breach poses a significant financial and reputational risk to media companies. To avoid becoming the next headline, businesses need to thoroughly understand not only their own risks, but also the risks that their suppliers pose.

Once media companies understand those risks, they can take measures to continuously protect against emerging threats. Collaboration throughout the organization, as well as with extended partners, will help to enforce strong digital policies and remediate unauthorized activities within the digital ecosystem.


Imperva Recognized as a 2018 Gartner Magic Quadrant WAF Leader, Five Years Running

Gartner has named Imperva as a Leader in the 2018 Gartner Magic Quadrant for Web Application Firewalls (WAF) — for the fifth year in a row!

Our combination of on-premises appliances, cloud WAF, shared threat intelligence and flexible licensing once again cement us as the best choice for companies to protect their websites and applications.

Having recently added attack analytics and role-based administration capabilities to our offering, Imperva offers flexible deployment options to maintain full protection as application environments continue to shift.

Web Application Attacks a Leading Cause of Data Breaches

According to the 2018 Verizon Data Breach Investigations Report (DBIR), web application attacks once again rank as the leading cause of data breaches. Of the more than 2,216 data breaches recorded so far this year, 48 percent resulted from hacking, with denial-of-service (DoS) attacks taking the top spot.

While the numbers sure help us understand the scale of attacks, the DBIR adds that “the focus should be less on the number of incidents and more on realizing that the degree of certainty that they will occur is almost in the same class as death and taxes.”

Your Apps Are Safer in Our Corner

As enterprises move applications to private and public cloud infrastructures, it becomes more important to adopt solutions that can be adapted to any cloud provider or any on-premises deployment. The Imperva WAF product line does exactly that.

Onwards and Upwards

As one of the world’s leading cybersecurity companies, we continue to expand the Imperva WAF offering in a flexible configuration of on-premises and cloud WAF services. In addition, the launch of attack analytics gives our customers greater granularity when analyzing security events, distilling thousands of alerts into a handful of actionable narratives.

Read the 2018 Gartner Magic Quadrant for Web Application Firewalls report to learn more.

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Imperva. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

More Than a Quarter of Executives View Security Investments as Having a Negative ROI

According to a new digital trust report, 27 percent of business executives view security investments as having a negative return on investment (ROI).

Of these respondents, more than three-quarters said they had been involved in a publicly disclosed data breach in the past, according to “The Global State of Online Digital Trust Survey and Index 2018” by CA Technologies.

This finding led the report’s authors to conclude that “over one quarter of executives are tone deaf to modern security challenges and data breach implications, and have not learned from previous mistakes.” By comparison, just 7 percent of cybersecurity staffers said they believe security investments produce a negative ROI.

The Trickiest Metric in Security

ROI is a tricky subject in the context of information security. According to CSO Online, digital security investments don’t produce greater profits, but instead contribute to “loss prevention,” or greater savings in the event of a security incident. This suggests that increased revenues shouldn’t factor into organizations’ decisions on whether to invest in digital security.

Another CSO Online piece proposed that ROI is the wrong metric for evaluating the efficacy of a digital security program. Instead, executives and board members should focus on network defender first principles. To get to the heart of these principles, executives need to determine how network defenders should spend their time and what they hope to achieve.

How to Quantify the ROI of Security Investments

To quantify the ROI of their organizations’ security investments, chief information security officers (CISOs) should consider adopting a zero-trust approach and focusing on people, programs and technology to improve their data security posture. They should also take the lead in improving formal risk management processes that evaluate information assets and vulnerabilities.

Sources: CA Technologies, CSO Online, CSO Online(1)


Report: Nearly Half of Security Professionals Think They Could Execute a Successful Insider Attack on Their Organization

As potential threats and entry points into organizations’ databases keep growing, so does the amount of money folks are throwing at detecting and actioning insider threats. In fact, the ballooning amount of money being spent on cybersecurity overall clearly highlights the seriousness with which businesses are tackling the problem in general.

Identifying and containing data breaches

Insider threats are a major concern for CISOs, and rightly so; professionals are concerned because insiders need legitimate access to data to do their work, so in most cases, they’re already embedded within an organization. Perhaps more terrifying is the time it can take to identify and mitigate breaches.

According to Verizon’s 2018 Data Breach Investigations Report (DBIR), almost 60% of data breaches take months to detect, and even then, more than 20% take several days to begin actioning.

What does an insider threat look like?

Well, for one, it’s not a Tinker, Tailor, Soldier Spy kind of situation, not when it comes to day-to-day data security anyway. As a matter of fact, the very reason insiders are so hard to detect is that these users all have legitimate credentials and data access. They already have access to sensitive database areas and sensitive file shares; and, more importantly for regulations that suggest we rely on encryption, these employees often have access to encrypted data along with the right to decrypt it, much like most applications do.

So, if you were to pull off an insider attack, how would you do it?

In a recent survey of 179 IT professionals, a staggering 43% said they believe they could execute a successful attack on their own organizations. Only a third believe it would be difficult or impossible to carry out a successful insider theft and just 22% say they would have a 50/50 chance.

With the increasingly blurred line between work and home life, and more pressingly, work devices, the ease with which insiders can carry out attacks continues to grow. Many companies issue employees a networked laptop or smartphone as standard. While this is a necessary business function, it can have potentially devastating consequences where data security is concerned. When asked to put themselves in the shoes of a malicious insider, 23% of security professionals said they would use their company-owned laptop to steal information from their company, while 20% said their personal computer, and 19% said their laptop.

The good news is that our survey also suggests that nearly two-thirds of organizations have a solution that allows them to detect malicious insiders, while 79 percent of organizations would have a way to tell if their employees were accessing something they shouldn’t. This is, however, caveated by another finding suggesting that 33% of organizations would take weeks or months to discover that an employee had gone malicious, while 14% would never know.

Insider threats are hard to detect because people deliberately try to fly under the radar. That’s why you need to monitor who is accessing what data and how that user is using it. Data access analytics can create a contextual behavior baseline of user data access activity and pinpoint risky or suspicious data access.
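As a minimal sketch of what such a baseline can look like, assuming only a per-user record of which tables are normally touched (the table names, user names and threshold are hypothetical):

```python
from collections import Counter, defaultdict

class AccessBaseline:
    """Toy data-access analytics: learn which tables each user normally
    touches, then flag access that falls outside that baseline."""

    def __init__(self, min_seen=5):
        self.history = defaultdict(Counter)   # user -> table -> access count
        self.min_seen = min_seen

    def learn(self, user, table):
        """Record one observed access during the baselining period."""
        self.history[user][table] += 1

    def is_suspicious(self, user, table):
        """Flag tables this user has rarely or never touched before."""
        return self.history[user][table] < self.min_seen

baseline = AccessBaseline()
for _ in range(50):
    baseline.learn("jsmith", "orders")            # normal daily activity

print(baseline.is_suspicious("jsmith", "orders"))   # False: part of the baseline
print(baseline.is_suspicious("jsmith", "payroll"))  # True: outside the baseline
```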

Static vs Dynamic Data Masking: Why Are We Still Comparing the Two?

Earlier this month, a leading analyst released its annual report on the state of data masking as a component of the overall data security sector, which included commentary on what’s known as ‘static’ data masking and an alternative solution known as ‘dynamic’ data masking. These two solutions have been considered in unison within the industry for some time now. The question is: should they be?

Dynamic certainly sounds good, but is it what I need?

At face value, the word ‘dynamic’ certainly sounds a lot more exciting than “static” — I mean, who wouldn’t rather purchase a dynamic television, a dynamic smartphone, or a dynamic bicycle, if the alternative was a “static” version?

The problem is, these are not comparable or alternative options. In the world of data masking, dynamic and static solutions continue to be treated as alternative solutions, with vendors being assessed as to whether they offer one or the other or both.  The reality is, the two solutions certainly support data security, but only static data masking actually ‘de-identifies’ or ‘masks’ data. Dynamic masking provides a cursory replacement of data in transit but leaves the original data intact and unaltered.

So, the point here is that both tools offer a solution to organizational challenges around securing data; but they are completely different technologies, solving completely different challenges, and with completely different use-cases and end users involved.

So, what’s the difference between dynamic and static solutions?

Let’s take a closer look at the two technologies. First, what is the definition of data masking? A quick Google search will lead you to a variety of closely aligned definitions, all of which reference the notion of de-identifying specific sensitive data elements within a database. So, how does this differ between the static and dynamic varieties?

Static data masking (SDM) permanently replaces sensitive data by altering data at rest within database copies being provisioned to DevOps environments.  Dynamic data masking (DDM) aims to temporarily hide or replace sensitive data in transit leaving the original at-rest data intact and unaltered. There are use cases for both solutions, but comparing them as alternative options and/or calling them both ‘masking’ is clearly a misnomer of sorts.

SDM is primarily used to provide high-quality (i.e., realistic) data for development and testing of applications within non-production or DevOps environments, without disclosing sensitive information. Realism, rich data patterns, and high utility of the masked data are critical, as they enable end users to be more effective at conducting tests, completing analytics and identifying defects earlier in the development cycle, therefore driving down costs and increasing overall quality. Leveraging SDM also provides critical input into privacy compliance efforts with standards and regulations such as GDPR, PCI and HIPAA that require limits on the use of data that identifies individuals. By leveraging SDM, the organization reduces the volume of ‘real’ sensitive data within its overall data landscape, thereby reducing the risks and costs associated with a data breach while simultaneously supporting compliance efforts.
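To make the "permanently alters data at rest" point concrete, here is a minimal static-masking sketch run against a non-production SQLite copy. The database file, table and column names are hypothetical, and a real SDM product would offer far richer format-preserving and referentially consistent transformations:

```python
import hashlib
import random
import sqlite3

def mask_name(value, salt="2718"):
    """Deterministically map a real name to a fake one so the masked data
    stays realistic and consistent across runs."""
    fake_pool = ["Alex Morgan", "Sam Lee", "Dana Cruz", "Chris Park", "Jordan Fox"]
    digest = hashlib.sha256((salt + value).encode()).hexdigest()
    return fake_pool[int(digest, 16) % len(fake_pool)]

def mask_card(_value):
    """Replace a card number with a random, format-preserving fake."""
    return "4000-" + "-".join(f"{random.randint(0, 9999):04d}" for _ in range(3))

# Mask the non-production copy in place: the change is permanent,
# which is what distinguishes SDM from dynamic masking.
conn = sqlite3.connect("customers_devcopy.db")
rows = conn.execute("SELECT id, full_name, card_number FROM customers").fetchall()
for row_id, full_name, card_number in rows:
    conn.execute(
        "UPDATE customers SET full_name = ?, card_number = ? WHERE id = ?",
        (mask_name(full_name), mask_card(card_number), row_id))
conn.commit()
conn.close()
```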

So, SDM clearly has a role in supporting overall data security efforts and in securing the DevOps environment, but what about DDM? How do they compare? Well, as previously mentioned… they don’t.

The reality is, DDM is primarily used to apply role-based (object-level) security for databases or applications in production environments, and as a means to apply this security to (legacy) applications that don’t have a built-in, role-based security model or to enforce separation of duties regarding access. It’s not intended to permanently alter sensitive data values for use in DevOps functions like SDM.

Okay, how does it work? At a high level, with DDM the sensitive data remains in the reporting database the analyst queries. All SQL issued by the analyst passes through a database proxy, which inspects each packet to determine which user is attempting to access which database objects. The proxy then modifies the SQL before issuing it to the database, so that masked data is returned to the analyst. Because preventing masked data from being written back to the database is complex, DDM should really only be applied in read-only contexts such as reporting or customer service inquiry functions.
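
A heavily simplified sketch of the proxy idea follows. The role names, the column list, and the naive string rewrite are all hypothetical; a real DDM proxy parses the SQL and applies centrally managed masking rules, but the principle is the same: the query is rewritten per user, and the stored data is never altered.

```javascript
// Hypothetical sketch of per-role query rewriting in a DDM-style proxy.
const maskedExpressions = {
  ssn: "CONCAT('XXX-XX-', RIGHT(ssn, 4)) AS ssn", // analyst sees only the last four digits
};

function rewriteSelect(sql, userRole) {
  if (userRole === "dba" || userRole === "app_owner") {
    return sql; // trusted roles get the original query
  }
  // Naive illustration only: swap a masking expression in for the sensitive column.
  return sql.replace(/\bssn\b/g, maskedExpressions.ssn);
}

console.log(rewriteSelect("SELECT name, ssn FROM customers", "analyst"));
// -> SELECT name, CONCAT('XXX-XX-', RIGHT(ssn, 4)) AS ssn FROM customers
```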

Ok, I get the differences, but dynamic still sounds intriguing. Why wouldn’t I start there when looking to mask my data?

First, it’s complex. It’s not as simple as installing a software application and running it. The organization must undertake a detailed mapping of the applications, users, database objects, and access rights required to configure masking rules, and maintaining this matrix of configuration data requires significant effort.

Second, it can be risky. Some organizations we’ve worked with are hesitant to adopt DDM given the inherent risk of corruption or adverse production performance. In addition, relative to SDM, DDM is a less mature technology for which customer success stories are not as well known and use-cases are still being defined.

Finally, the fact remains that the underlying production values and sensitive fields are not actually de-identified or masked, so the risk of exposure remains, particularly if the organization is using this data to provision DevOps environments without a static masking solution in place. So, if your goal is to reduce data breach risk or support compliance efforts, you’re no further ahead with dynamic masking.

All this is not intended to suggest DDM does not have a role in data security, or that it is not as effective as SDM. The point is that they are two fundamentally different solutions operating in differing environments, for varying purposes. The key question organizations must ask is simple: what is the business problem or data security challenge we are trying to solve? That question will help determine which solution makes the most sense.

At the end of the day… can’t both options just get along?

We’ve covered the basics of both dynamic and static data masking with the intent of outlining the unique aspects of each solution and the business challenges they aim to solve, in hopes of supporting the argument that the terms and solutions should not be used interchangeably or compared head-to-head in any meaningful way. Both solutions offer value to organizations in solving different business challenges, and both, if properly implemented, deliver significant protection as part of a layered security strategy.

At the end of the day, however, data de-identification via static data masking is a data security solution recommended by industry analysts as a must-have protection layer for reducing your data risk footprint and the risk of breach by inside or outside threats. Dynamic data masking acts more like a role-based access security layer within the production environment for internal user privilege requirements, and there are options within other solutions to achieve that goal.

While terminology varies across the industry, the term data masking typically refers to the replacement of sensitive data with a realistic, fictional equivalent for the purpose of protecting DevOps data from unwanted disclosure and risk. So, it might be the right time to stop the inaccurate comparisons between “dynamic” and “static” solutions and get back to focusing on solving an access challenge in real-time production environments (i.e., the dynamic world) and addressing sensitive data exposure and compliance risks in DevOps environments (i.e., the static world).

Contact us to learn more about Imperva’s industry-leading data masking capabilities and our broader portfolio of data security solutions for both production and non-production environments. You can also test-drive SCUBA, our free database vulnerability scanner, and CLASSIFIER, our free data classification tool.

A Bug in Chrome Gives Bad Actors License to Play ‘20 Questions’ with Your Private Data

In a 2013 interview with The Telegraph, Eric Schmidt, then CEO of Google, was quoted as saying: “You have to fight for your privacy or lose it.”

Five years later, with the ‘Cambridge Analytica’ data breach scandal fresh in our memory, Eric Schmidt’s statement rings as a self-evident truth. Similarly clear today is the nature of the “fight”: a grapple for transparency and corporate accountability that can only be won through individual vigilance.

With this in mind, in this post, we’ll share with you details of a new browser bug we uncovered, which has the potential to affect the majority of web users. With it, bad actors could play a ‘guessing game’ to uncover private data stored on Facebook, Google, and likely many other web platforms.

The bug in question affects all browsers running the Blink engine, which powers Google Chrome, and exposes users who aren’t running the latest version of Chrome. Currently, over 58 percent of the internet population uses Google Chrome.

Once the vulnerability was identified, Google patched it in the Chrome 68 release, and we strongly recommend that all Chrome users make sure they’re running the latest version.

Mining Private Data

The bug in question makes use of the Audio/Video HTML tags to generate requests to a target resource.

Monitoring the progress events generated by these requests grants visibility into the requested resource’s actual size. As we found out, this information can then be used to “ask” a series of yes-or-no questions about the browser user by abusing filtering functions available on social media platforms like Facebook.

For example, a bad actor can create sizeable Facebook posts for each possible age, using the Audience Restriction option, making Facebook reflect the user’s age through the response size.

The same method can be used to extract the user’s gender, likes, and many other properties we were able to reflect through crafted posts or Facebook’s Graph Search endpoints.

A large response size would indicate that the restriction didn’t apply, while a small one would indicate that the content was restricted, meaning, for instance, that the user’s age or gender was excluded by the restriction. With several scripts running at once, each testing a different restriction, the bad actor can relatively quickly mine a good amount of private data about the user.

In a more serious scenario, the attack script would be running on a site that requires some kind of email registration, such as an e-commerce or SaaS site. In this case, the above-mentioned practices would allow the bad actor to correlate the private data with the login email address for even more extensive and intrusive profiling.

Attack Flow

When a user visits the bad actor’s site, the site injects multiple hidden video or audio tags that request a number of Facebook posts the attacker previously published and restricted using different techniques. The attacker can then analyze each request to determine, for example, the user’s exact age as it’s saved on Facebook, regardless of their privacy settings.

Discovering the Bug

A few months ago I was researching the Cross-Origin Resource Sharing (CORS) mechanism by checking cross-origin communications of different HTML tags. During my research, I noticed an interesting behavior in the video and audio tags: setting the ‘preload’ attribute to ‘metadata’ changed the number of times the ‘onprogress’ event was called, in a way that seemed related to the requested resource’s size.

To check my hypothesis, I created a simple NodeJS HTTP server that generates a response whose size is set by a request parameter. I then used this server endpoint as the resource for the JavaScript sketched below.
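
The original test server isn’t reproduced here; a minimal sketch of the idea, with a hypothetical `bytes` query parameter, might look like this:

```javascript
// Minimal sketch of a size-controlled test endpoint (the ?bytes parameter is hypothetical).
const http = require("http");

http
  .createServer((req, res) => {
    const size = Number(new URL(req.url, "http://localhost").searchParams.get("bytes")) || 0;
    res.writeHead(200, { "Content-Type": "application/octet-stream" });
    res.end(Buffer.alloc(size, "a")); // body of exactly `size` bytes
  })
  .listen(3000, () => console.log("size server listening on :3000"));
```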

The script creates a hidden audio element that:

  • Requests a given resource
  • Tracks the number of times the `onprogress` event was triggered
  • Returns the value of the counter once the audio parsing fails
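
The probe script itself isn’t reproduced in this post either, but a minimal sketch along the lines described above might look like the following (the function name and endpoint are illustrative, not the original proof of concept):

```javascript
// Hypothetical sketch: load a cross-origin resource through a hidden <audio> tag,
// count 'progress' events, and report the count once audio parsing fails.
function probeResourceSize(url) {
  return new Promise((resolve) => {
    let progressEvents = 0;
    const audio = document.createElement("audio");
    audio.style.display = "none";
    audio.preload = "metadata"; // the attribute that changed the event pattern

    audio.addEventListener("progress", () => { progressEvents += 1; });
    // The response isn't valid audio, so parsing eventually fails; at that point
    // the number of progress events hints at the size of the response.
    audio.addEventListener("error", () => {
      audio.remove();
      resolve(progressEvents);
    });

    audio.src = url; // triggers the request
    document.body.appendChild(audio);
  });
}

// Usage: larger responses tend to fire more progress events.
// probeResourceSize("http://localhost:3000/?bytes=100000").then(console.log);
```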

I started experimenting, requesting different response sizes while looking for a correlation between the size and the number of times the `onprogress` event was triggered by the browser.

In my measurements, when the response size is zero only one `onprogress` event is called; for a response of around 100KB the event is called twice; and the number of events continues to increase with size, allowing me to estimate the size of most web pages.

From this, we can see that the number of `onprogress` events correlates with the size of the response, so we can infer whether the restriction criteria were met.

Conclusion

Once we confirmed the vulnerability, we reported it to Google with a proof of concept, and the Chrome team responded by patching it in the Chrome 68 release.

We’re delighted to have contributed to protecting the privacy of the wider user community, as we continuously strive to do.

Onwards and Upwards: Our GDPR Journey and Looking Ahead

At Imperva, our world revolves around data security, data protection, and data privacy.  From our newest recruits to the most seasoned members of the executive team, we believe that customer privacy is key.

For the better part of the last two years, Imperva has laid the foundation for our compliance with the EU General Data Protection Regulation (GDPR).  At roughly ninety pages with 173 recitals and 99 articles, it’s a massive regulation that fundamentally shifts the data privacy and data protection universe.

Also read: Monitoring Data & Data Access to Support Ongoing GDPR Compliance

We at Imperva are proud of what we’ve accomplished in this time. As the lead for Imperva’s Privacy Office, I can candidly say that our success has been made possible only through the combined efforts of the entire organization. Thank you to our many Privacy Champions who have actively engaged within their departments and teams.

And a special thanks to our many critical internal partners, including our CMO David Gee, for his humorous evangelizing of data privacy initiatives, our Director of InfoSec, Noam Lang, our CIO, Bo Kim, who was also our first-ever privacy champion, and our CEO, Chris Hylen, for all having supported and prioritized data privacy initiatives within Imperva.

Just the beginning

Our work to comply with GDPR represents only the start of Imperva’s journey to protect, and to create products that protect, the data privacy of our customers and their users. Already, Imperva is proactively building on our GDPR work and looking to ‘infinity and beyond’. Part of that ‘beyond’ is our monitoring of, and preparation for, other game-changing regulations such as the EU ePrivacy Regulation and the California Consumer Privacy Act.

A Successful Launch

Imperva has launched significant enhancements to our data privacy and data security programs and environments to account for new obligations under GDPR.

  • Governance: We have formalized and expanded the governance structure of the data privacy function within Imperva, including the creation of a dedicated Privacy Office.  This updated governance structure has been integrated into our annual third-party certification audits and reviews.
  • DPIAs:  We have expanded our standard internal Privacy Impact Assessment process to trigger additional Data Protection Impact Assessments when appropriate.
  • Security Environments: We have long maintained several common certification frameworks via third-party audits, including ISO 27001, the PCI Data Security Standard, and SOC 2 Type II reporting.  As part of ensuring that our robust environments remain secure, we mapped our GDPR infosec obligations to our existing control frameworks to ensure we meet all GDPR obligations on an ongoing basis.
  • Updated Privacy Notices:  We updated the privacy policies on our web properties to reflect the changes we’ve adopted under GDPR. Additionally, we’ve refreshed our cookie consent banners and cookie policies for those in the European Union.
  • Customer Agreements: In order to facilitate streamlined customer onboarding, we’ve created ready-to-sign Data Processing Agreements (DPAs) that provide details about what personal data an Imperva product or service collects in order to provide that service.  These DPAs utilize the controller-processor model clauses approved by the EU Commission and address customer concerns about how cross-border data transfers are GDPR-compliant.
  • Data Subject Requests: We’ve rolled out a new data subject request portal on our web properties.  Additionally, we’ve worked with each Imperva department to ensure smooth operational processing of data subject rights, including access, rectification, and erasure.

To Infinity

We here at Imperva have not been satisfied with merely meeting our obligations. We are making data privacy a priority. As a security company, data privacy is mission critical. It’s part of earning and maintaining the trust of our customers and employees.

Even Better Products: Our Product teams have worked hard to re-architect infrastructure to enable regional storage of logs.  This new feature makes compliance with GDPR far easier for customers or their subsidiaries operating primarily within a single geographic region by reducing cross-border data transfers.  Additionally, regional log storage enables genuine conformity with data localization and residence laws, such as those in China, Canada, Germany, Russia, and South Korea.

Embedded Privacy Champions: We’ve ramped up our program to embed mini privacy subject matter experts within each department. Today, three percent of our workforce are privacy champions thinking about how to protect your personal data. And that number is growing.

Privacy Guidance Down to Departments: The Privacy Office has worked with each department to create individual departmental policies and operational guidance to ensure that Imperva employees in every role know how to safeguard and protect personal data.

Vendor Management: We’ve reviewed dozens of vendors across all product lines to ensure we have the appropriate data privacy and security provisions, data processing agreements, and standards in place to safeguard our customers’ personal data.  Our subprocessors page on our web properties provides additional information about third-party service providers.

And Beyond!

Imperva has aimed high when it comes to the obligations created by GDPR, but we’re also looking far beyond.

In particular, Imperva is keeping a close eye on new data privacy laws and updates coming down the line that could impact our customers’ data privacy obligations, and therefore our obligations to you—such as the EU ePrivacy Regulation, which updates the 2009 ePrivacy Directive, as well as the California Consumer Privacy Act, which becomes enforceable on January 1, 2020.

GDPR is a significant milestone in the data privacy universe and so too in Imperva’s journey, yet it’s important to recognize it as a milestone and not as an endpoint.  GDPR represents only the start of Imperva’s journey to protect and to create products that protect the data privacy of our customers and their users.

Hyperbole in Breach Reporting

While reading the news this morning about yet another successful data breach, I couldn't help but wonder if the hyperbole used in reporting about data breaches is stifling our ability to educate key stakeholders on what they really need to know.

Today's example is about a firm that many rely on for security strategy, planning, and execution. The article I read stated that they were "targeted by a sophisticated hack" but later explained that the attacker compromised a privileged account that provided unrestricted "access to all areas". And, according to sources, the account only required a basic password with no two-step or multi-factor authentication. That doesn't sound too sophisticated, does it? Maybe they brute-forced it, or maybe they just guessed the password (or found it written down in an office).

It reminded me of an attack on a security vendor back in 2011. As I recall, there was a lot of talk of the sophistication and complexity of the attack. It was called an Advanced Persistent Threat (and maybe some aspects of it were advanced). But, when the facts came out, an employee had simply opened an email attachment that introduced malware into the environment - again, not overly sophisticated in terms of what we consider a hack to be.

The quantity, availability, and effectiveness of attack techniques are enough to make anyone uncomfortable with their security posture. I previously wrote about a German company that, in a breach response, wrote that it is "virtually impossible to provide viable protection against organized, highly professional hacking attacks." CISOs are being told that they should expect to be breached. The only questions are about when and how to respond. It makes you feel like there's no hope; like there's no point in trying.

However, if you look at the two examples above that were described as highly sophisticated, they may have been avoided with simple techniques such as employee education, malware detection, and multi-factor authentication. I don't mean to over-simplify. I'm not saying it's all easy or that these companies are at fault or negligent. I'm just calling for less hyperbole in the reporting. Call out the techniques that help companies avoid similar attacks. Don't describe an attack as overly sophisticated if it's not. It makes people feel even more helpless when, perhaps, there are some simple steps that can be taken to reduce the attack surface.

I'd also advocate for more transparency from those who are attacked. Companies shouldn't feel like they have to make things sound more complicated or sophisticated than they are. There's now a growing history of reputable companies (including in the security industry) who have been breached. If you're breached, you're in good company. Let's talk in simple terms about the attacks that happen in the real world. An "open kimono" approach will be more effective at educating others in prevention. And again, less hyperbole - we don't need to overplay to emotion here. Everyone is scared enough. We know the harsh reality of what we (as security professionals) are facing. So, let's strive to better understand the real attack surface and how to prioritize our efforts to reduce the likelihood of a breach.

Encryption would NOT have saved Equifax

I read a few articles this week suggesting that the big question for Equifax is whether or not their data was encrypted. The State of Massachusetts, speaking about the lawsuit it filed, said that Equifax "didn't put in safeguards like encryption that would have protected the data." Unfortunately, encryption, as it's most often used in these scenarios, would not have actually prevented the exposure of this data. This breach will have an enormous impact, so we should be careful to get the facts right and provide as much education as possible to law makers and really to anyone else affected.

We know that the attack took advantage of a flaw in Apache Struts (one that should have been patched). Struts is a framework for building applications. It lives at the application tier. The data, obviously, resides at the data tier. Once the application was compromised, it really didn't matter whether the data was encrypted, because the application is allowed to access (and therefore to decrypt) the data.

I won't get into all the various encryption techniques that are possible but there are two common types of data encryption for these types of applications. There's encryption of data in motion so that nobody can eavesdrop on the conversation as data moves between tiers or travels to the end users. And there's encryption of data at rest that protects data as it's stored on disk so that nobody can pick up the physical disk (or the data file, depending on how the encryption is applied) and access the data. Once the application is authenticated against the database and runs a query against the data, it is able to access, view, and act upon the data even if the data was encrypted while at rest.

Note that there is a commonly-applied technique that applies at-rest encryption at the application tier. I don't want to confuse the conversation with too much detail, but it usually involves inserting some code into the application to encrypt/decrypt. I suspect that if the application is compromised then app-tier encryption would have been equally unhelpful.
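
For context, a minimal sketch of application-tier field encryption follows, using Node's built-in crypto module. Key handling is deliberately simplified; a real deployment would pull the key from a KMS or HSM rather than an environment variable. It also illustrates the limitation noted above: a compromised application still controls the decrypt path.

```javascript
// Minimal sketch of app-tier field encryption (AES-256-GCM via Node's crypto).
// Key management is simplified for illustration; use a KMS/HSM in practice.
const crypto = require("crypto");
const key = Buffer.from(process.env.FIELD_KEY_HEX || "00".repeat(32), "hex"); // 32-byte key

function encryptField(plaintext) {
  const iv = crypto.randomBytes(12);
  const cipher = crypto.createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]).toString("base64");
}

function decryptField(encoded) {
  const raw = Buffer.from(encoded, "base64");
  const decipher = crypto.createDecipheriv("aes-256-gcm", key, raw.subarray(0, 12));
  decipher.setAuthTag(raw.subarray(12, 28));
  return Buffer.concat([decipher.update(raw.subarray(28)), decipher.final()]).toString("utf8");
}

// The stored column is useless on its own, but an attacker who controls the
// application controls this decrypt path too.
const stored = encryptField("123-45-6789");
console.log(decryptField(stored));
```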

The bottom line here is that information security requires a broad, layered defense strategy. There are numerous types of attacks. A strong security program addresses as many potential attack vectors as possible within reason. (My use of "within reason" is a whole other conversation. Security strategies should evaluate risk in terms of likelihood of an attack and the damage that could be caused.) I already wrote about a layered approach to data protection within the database tier. But that same approach of layering security applies to application security (and information security in general). You have to govern the access controls, ensure strong enough authentication, understand user context, identify anomalous behavior, encrypt data, and, of course, patch your software and maintain your infrastructure. This isn't a scientific analysis. I'm just saying that encryption isn't a panacea and probably wouldn't have helped at all in this case.

Equifax says that their "security organization was aware of this vulnerability at that time, and took efforts to identify and to patch any vulnerable systems in the company's IT infrastructure." Clearly, humans need to rely on technology to help identify what systems exist in the environment, what software is installed, which versions, etc. I have no idea what tools Equifax might have used to scan their environment. Maybe the tool failed to find this install. But their use of "at that time" bothers me too. We can't rely on point-in-time assessments. We need continuous evaluations on a never ending cycle. We need better intelligence around our IT infrastructures. And as more workloads move to cloud, we need a unified approach to IT configuration compliance that works across company data centers and multi-cloud environments.

100% protection may be impossible. The best we can do is weigh the risks and apply as much security as possible to mitigate those risks. We should also all be moving to a continuous compliance model where we are actively assessing and reassessing security in real time. And again... layer, layer, layer.

Layered Database Security in the age of Data Breaches

We live in a time of daily breach notifications. One recently affected organization in Germany put out a statement which said: "The incident is not attributable to security deficiencies." and "Human error can also be ruled out." They went on to say that it is "virtually impossible to provide viable protection against organized, highly professional hacking attacks." It's a tough climate we find ourselves in. It just feels too hard or impossible at times. And there's some truth to that. There are way too many potential attack vectors for comfort.

Many breaches occur in ways that make it difficult to pinpoint exactly what might have prevented it. Or, the companies involved hide details about what actually happened or how. In some cases, they lie. They might claim there was some Advanced Persistent Threat on the network when in reality, it was a simple phishing attack where credentials were simply handed over.

In one recent case, a third party vendor apparently uploaded a database file to an unsecured Amazon AWS server. A media outlet covering the story called out that it was not hacking because the data was made so easily available. Numerous checkpoints come to mind that each could have prevented or lessened the damage in this scenario. I’d like to paint a picture of the numerous layers of defense that should be in place to help prevent this type of exposure.

Layer 1: Removing Production Data
The data should have been long removed from the database.
Assuming this is a non-production database (and I sure hope it is), it should have been fully masked before it was even saved as a file. Masking data means completely removing the original sensitive data and replacing it with fake data that looks and acts real. This enables safe use of the database for app development, QA, and testing. Data can be masked as it’s exported from the production database (most secure) or in a secure staging environment after the initial export. Had this step been done, the database could safely be placed on an insecure AWS server with limited security concerns because there’s no real data. An attacker could perhaps use the DB schema or other details to better formulate an attack on the production data, so I’m not recommending posting masked databases publicly, but the risk of data loss is severely limited once the data is masked.

Layer 2: Secure Cloud Server Configuration
The researcher should never have been able to get to the file.
A security researcher poking around the web should never have been able to access this database file. Proper server configuration and access controls should prevent unauthorized access to any files (including databases). In addition to documenting proper security configuration, certain Cloud Access Security Brokers (CASBs) can be used to continuously monitor AWS instances to ensure that server configurations match the corporate guidelines. Any instances of configuration drift can be auto-remediated with these solutions to ensure that humans don’t accidentally misconfigure servers or miss security settings in the course of daily administration.
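
Even without a full CASB, simple scheduled checks can catch the most obvious drift. Below is a minimal sketch using the AWS SDK for JavaScript (v2), assuming credentials are already configured; it only flags buckets whose ACL grants access to everyone, which is a small slice of what a real monitoring solution covers.

```javascript
// Minimal sketch: flag S3 buckets whose ACL grants access to all users.
// Assumes the aws-sdk (v2) package and configured credentials.
const AWS = require("aws-sdk");
const s3 = new AWS.S3();
const ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers";

async function findPublicBuckets() {
  const { Buckets } = await s3.listBuckets().promise();
  const flagged = [];
  for (const bucket of Buckets) {
    const acl = await s3.getBucketAcl({ Bucket: bucket.Name }).promise();
    if (acl.Grants.some((g) => g.Grantee && g.Grantee.URI === ALL_USERS)) {
      flagged.push(bucket.Name); // candidate for an alert or auto-remediation
    }
  }
  return flagged;
}

findPublicBuckets().then((names) => console.log("Publicly accessible buckets:", names));
```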

Layer 3: Apply Database Encryption
Even with access to the database file, the researcher should not have been able to access the data.
At-rest data encryption that is built into the database protects sensitive data against this type of scenario. Even if someone has the database file, if it were encrypted, the file would essentially be useless. An attacker would have to mount an advanced crypto attack, which would take enormous resources and time and is, for all intents and purposes, impractical. Encryption is a no-brainer. Some organizations use disk-layer encryption, which is OK in the event of a lost or stolen disk. However, if a database file is moved to an unencrypted volume, it is no longer protected. In-database encryption improves security because the protection stays with the file regardless of where it’s moved or exported. The data remains encrypted and inaccessible without the proper encryption keys regardless of where the database file is moved.

Layer 4: Apply Database Administrative Controls
Even with administrative permissions to the database, the researcher should not have been able to access the sensitive data.
I’m not aware of similar capabilities outside of Oracle database, but Oracle Database Vault would have also prevented this breach by implementing access controls within the database. Database Vault effectively segregates roles (enforces Separation of Duties) so that even an attacker with DBA permissions and access to the database file and encryption keys cannot run queries against the sensitive application data within the database because their role does not allow it. This role-based access, enforced within the database, is an extremely effective control to avoid accidental access that may occur throughout the course of daily database administration.

Layer 5: Protect Data Within the Database
Even with full authorization to application data, highly sensitive fields should be protected within the database.
Assuming all of the other layers break down and you have full access to the unencrypted database file and credentials that are authorized to access the sensitive application data, certain highly sensitive fields should be protected via application-tier encryption. Social Security Numbers and Passwords, for example, shouldn’t be stored in plain text. By applying protection for these fields at the app layer, even fully authorized users wouldn’t have access. We all know that passwords should be hashed so that the password field is only useful to the individual user who enters their correct password. But other fields, like SSN, can be encrypted at the app layer to protect against accidental exposure (human error), intentional insider attack, or exposed credentials (perhaps via phishing attack).
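
As one concrete illustration of the password point, here is a minimal sketch of app-layer password hashing with Node's built-in scrypt; the parameters are illustrative, not a vetted policy.

```javascript
// Minimal sketch of app-layer password hashing using Node's built-in scrypt.
const crypto = require("crypto");

function hashPassword(password) {
  const salt = crypto.randomBytes(16);
  const derived = crypto.scryptSync(password, salt, 64);
  return `${salt.toString("hex")}:${derived.toString("hex")}`; // store salt alongside the hash
}

function verifyPassword(password, stored) {
  const [saltHex, hashHex] = stored.split(":");
  const derived = crypto.scryptSync(password, Buffer.from(saltHex, "hex"), 64);
  return crypto.timingSafeEqual(derived, Buffer.from(hashHex, "hex"));
}

// Only a user who supplies the correct password can make the stored value useful.
const record = hashPassword("correct horse battery staple");
console.log(verifyPassword("correct horse battery staple", record)); // true
```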

Maybe the vendor didn’t follow the proper protocols instituted by the organization. Maybe they made a human error; we all make mistakes. But, that’s why a layered approach to database security is critical on any database instances where sensitive production data resides. Security protocols shouldn’t require humans to make the right decisions. They should apply security best practices by default and without option.

Assuming this was a non-production database, any sensitive data should have been fully masked/replaced before it was even made available. And, if it was a production DB, database encryption and access control protections that stay with the database during export or if the database file is moved away from an encrypted volume should have been applied. The data should have been protected before the vendor's analyst ever got his/her hands on it. Oracle Database Vault would have prevented even a DBA-type user from being able to access the sensitive user data that was exposed here. These are not new technologies; they’ve been around for many years with plentiful documentation and industry awareness.

Unfortunately, a few of the early comments I read on this particular event were declarations or warnings about how this proves that cloud is less secure than on-premises deployments. I don’t agree. Many cloud services are configured with security by default and offer far more protection than company-owned data centers. Companies should seek cloud services that enable security by default and that offer layered security controls; more security than their own data centers. It’s more than selecting the right Cloud Service Provider. You also need to choose the right service; one that matches the specific needs (including security needs) of your current project. The top CSPs offer multiple IaaS and/or PaaS options that may meet the basic project requirements. While cloud computing grew popular because it’s easy and low cost, ease-of-use and cost are not always the most important factors when choosing the right cloud service. When sensitive data is involved, security needs to be weighed heavily when making service decisions.

I'll leave you with this. Today's computing landscape is extremely complex and constantly changing. But security controls are evolving to address what has been called the extended enterprise (which includes cloud computing and user mobility among other characteristics). Don't leave security in the hands of humans. And apply security in layers to cover as many potential attack vectors as possible. Enable security by default and apply automated checks to ensure that security configuration guidelines are being followed.

Note: Some of the content above is based on my understanding of Oracle security products (encryption, masking, CASB, etc.) Specific techniques or advantages mentioned may not apply to other vendors’ similar solutions.

Will you pay $300 and allow scammers remote control of your computer? Child’s play for this BPO

Microsoft customers in Arizona were scammed by a BPO set up by fraudsters whose executives represented themselves as Microsoft employees and convinced customers that, for a $300 charge, they would enhance the performance of their desktop computers.

Once a customer signed up, a BPO technician logged on using remote access software that provided full control over the desktop and proceeded to delete trash and cache files, sometimes scanning for personal information. The unsuspecting customer ended up with a marginal improvement in performance. After one year of operation, the Indian police arrested the three men behind the operation and eleven of their employees.

There were several aspects of this case (“Pune BPO which cheated Microsoft Clients in the US busted”) that I found interesting:

1)    The ease with which customers were convinced to part with money and to allow an unknown third party to take remote control of their computer. With remote control, one can also install malicious files to act as a remote backdoor or spyware, making the machine vulnerable.
2)    The criminals had in their possession a list of 1 million Microsoft customers with updated contact information.
3)    The good fortune that the Indian government is unsympathetic to cybercrime both within and outside its shores, which resulted in the arrests. In certain other countries, crimes like these continue unhindered.

Cybercitizens should ensure that they do not surrender remote access to their computers or install software unless it comes from a trusted source.


5 things you need to know about securing our future

“Securing the future” is a huge topic, but our Chief Research Officer Mikko Hypponen narrowed it down to the two most important issues in his recent keynote address at the CeBIT conference. Watch the whole thing for a Matrix-like immersion into the two greatest needs for a brighter future — security and privacy.

To get started, here are some quick takeaways from Mikko’s insights into data privacy and data security in a threat landscape where everyone is being watched, everything is getting connected, and anything that can make criminals money will be attacked.

1. Criminals are using the affiliate model.
About a month ago, one of the guys running CTB Locker — ransomware that infects your PC and holds your files until you pay to release them in bitcoin — did a Reddit AMA to explain how he makes around $300,000 with the scam. After a bit of questioning, the poster revealed that he isn’t CTB’s author but an affiliate who simply pays for access to a trojan and an exploit kit created by a Russian gang.

“Why are they operating with an affiliate model?” Mikko asked.

Because now the authors are most likely not breaking the law. In the over 250,000 samples F-Secure Labs processes a day, our analysts have seen similar affiliate models used with the largest banking trojans and GameOver ZeuS, which he notes are also coming from Russia.

No wonder online crime is the most profitable IT business.

2. “Smart” means exploitable.
When you think of the word “smart” — as in smart tv, smartphone, smart watch, smart car — Mikko suggests you think of the word exploitable, as it is a target for online criminals.

Why would emerging Internet of Things (IoT) be a target? Think of the motives, he says. Money, of course. You don’t need to worry about your smart refrigerator being hacked until there’s a way to make money off it.

How might the IoT become a profit center? Imagine, he suggests, if a criminal hacked your car and wouldn’t let you start it until you pay a ransom. We haven’t seen this yet — but if it can be done, it will.

3. Criminals want your computer power.
Even if criminals can’t get you to pay a ransom, they may still want into your PC, watch, or fridge for the computing power. The denial-of-service attack against Xbox Live and PlayStation Network last Christmas, for instance, likely employed a botnet that included mobile devices.

IoT devices have already been hijacked to mine for crypto-currencies that could be converted to Bitcoin, then dollars, or “even more stupidly into Rubles.”

4. If we want to solve the problems of security, we have to build security into devices.
Knowing that almost everything will be able to connect to the internet requires better collaboration between security vendors and manufacturers. Mikko worries that companies that have never had to worry about security — like a toaster manufacturer, for instance — are now getting into the IoT game. And given that the cheapest devices will sell the best, they won’t invest in proper design.

5. Governments are a threat to our privacy.
The success of the internet has led to governments increasingly using it as a tool of surveillance. What concerns Mikko most is the idea of “collecting it all.” As Glenn Greenwald and Edward Snowden pointed out at CeBIT the day before Mikko, governments seem to be collecting everything (communications, location data) on everyone, even if you are not a person of interest, just in case.

Who knows how that information may be used a decade from now, given that we all have something to hide?

Cheers,

Sandra

 

Deep Data Governance

One of the first things to catch my eye this week at RSA was a press release by STEALTHbits on their latest Data Governance release. They're a long-time player in DG and, as a former employee, I know them fairly well. And where they're taking DG is pretty interesting.

The company has recently merged its enterprise Data (files/folders) Access Governance technology with its DLP-like ability to locate sensitive information. The combined solution enables you to locate servers, identify file shares, assess share and folder permissions, lock down access, review file content to identify sensitive information, monitor activity to look for suspicious activity, and provide an audit trail of access to high-risk content.

The STEALTHbits solution is pragmatic because you can tune where it looks, how deep it crawls, where you want content scanning, where you want monitoring, etc. I believe the solution is unique in the market, and a number of IAM vendors agree, having chosen STEALTHbits as a partner of choice for gathering Data Governance information into their Enterprise Access Governance solutions.

Learn more at the STEALTHbits website.

IAM for the Third Platform

As more people are using the phrase "third platform", I'll assume it needs no introduction or explanation. The mobile workforce has been mobile for a few years now. And most organizations have moved critical services to cloud-based offerings. It's not a prediction, it's here.

The two big components of the third platform are mobile and cloud. I'll talk about both.

Mobile

A few months back, I posed the question "Is MAM Identity and Access Management's next big thing?" and since I did, it's become clear to me that the answer is a resounding YES!

Today, I came across a blog entry explaining why Android devices are a security nightmare for companies. The pain is easy to see. OS Updates and Security Patches are slow to arrive and user behavior is, well... questionable. So organizations should be concerned about how their data and applications are being accessed across this sea of devices and applications. As we know, locking down the data is not an option. In the extended enterprise, people need access to data from wherever they are on whatever device they're using. So, the challenge is to control the flow of information and restrict it to proper use.

So, here's a question: is MDM the right approach to controlling access for mobile users? Do you really want to stand up a new technology silo that manages end-user devices? Is that even practical? I think certain technologies live a short life because they quickly get passed over by something new and better (think electric typewriters). MDM is one of those. Although it's still fairly new and good at what it does, I would make the claim that MDM is antiquated technology. In a BYOD world, people don't want to turn control of their devices over to their employers. The age of enterprises controlling devices went out the window with Blackberry's market share.

Containerization is where it's at. With App Containerization, organizations create a secure virtual workspace on mobile devices that enables corporate-approved apps to access, use, edit, and share corporate data while protecting that data from escape to unapproved apps, personal email, OS malware, and other on-device leakage points. For enterprise use-case scenarios, this just makes more sense than MDM. And many of the top MDM vendors have validated the approach by announcing MAM offerings. Still, these solutions maintain a technology silo specific to remote access which doesn't make much sense to me.

As an alternate approach, let's build MAM capabilities directly into the existing Access Management platform. Access Management for the third platform must accommodate for mobile device use-cases. There's no reason to have to manage mobile device access differently than desktop access. It's the same applications, the same data, and the same business policies. User provisioning workflows should accommodate for provisioning mobile apps and data rights just like they've been extended to provision Privileged Account rights. You don't want or need separate silos.

Cloud

The same can be said for cloud-hosted apps. Cloud apps are simply part of the extended enterprise and should also be managed via the enterprise Access Management platform.

There's been a lot of buzz in the IAM industry about managing access (and providing SSO) to cloud services. There have even been a number of niche vendors popping up that provide that as their primary value proposition. But the core technologies for these stand-alone solutions are nothing new. In most cases, it's basic federation. In some cases, it's ESSO-style form-fill. But there's no magic to delivering SSO to SaaS apps. In fact, it's typically easier than SSO to enterprise apps because SaaS infrastructures are newer and support newer standards and protocols (SAML, REST, etc.)

My Point

I guess if I had to boil this down, I'm really just trying to dispel the myths about mobile and cloud solutions. When you get past the marketing jargon, we're still talking about Access Management and Identity Governance. Some of the new technologies are pretty cool (containerization solves some interesting, complex problems related to BYOD). But in the end, I'd want to manage enterprise access in one place with one platform. One Identity, One Platform. I wouldn't stand up an IDaaS solution just to have SSO to cloud apps. And I wouldn't want to introduce an MDM vendor to control access from mobile devices.

The third platform simply extends the enterprise beyond the firewall. The concept isn't new and the technologies are mostly the same. As more and newer services adopt common protocols, it gets even easier to support increasingly complex use-cases. An API Gateway, for example, allows a mobile app to access legacy mainframe data over REST protocols. And modern Web Access Management (WAM) solutions perform device fingerprinting to increase assurance and reduce risk while delivering an SSO experience. Mobile Security SDKs enable organizations to build their own apps with native security that's integrated with the enterprise WAM solution (this is especially valuable for consumer-facing apps).

And all of this should be delivered on a single platform for Enterprise Access Management. That's third-platform IAM.

Virtual Directory as Database Security

I've written plenty of posts about the various use-cases for virtual directory technology over the years. But, I came across another today that I thought was pretty interesting.

Think about enterprise security from the viewpoint of the CISO. There are numerous layers of overlapping security technologies that work together to reduce risk to a point that's comfortable. Network security, endpoint security, identity management, encryption, DLP, SIEM, etc. But even when these solutions are implemented according to plan, I still see two common gaps that need to be taken more seriously.

One is control over unstructured data (file systems, SharePoint, etc.). The other is back-door access to application databases. There is a ton of sensitive information exposed through those two avenues that isn't protected by the likes of SIEM solutions or IAM suites. Even DLP solutions tend to focus on perimeter defense rather than who has access. STEALTHbits has solutions to fill the gaps for unstructured data and for Microsoft SQL Server, so I spend a fair amount of time talking to CISOs and their teams about these issues.

While reading through some IAM industry materials today, I found an interesting write-up on how Oracle is using its virtual directory technology to solve the problem for Oracle database customers. Oracle's IAM suite leverages Oracle Virtual Directory (OVD) as an integration point with an Oracle database feature called Enterprise User Security (EUS). EUS enables database access management through an enterprise LDAP directory (as opposed to managing a spaghetti mapping of users to database accounts and the associated permissions.)

By placing OVD in front of EUS, you get instant LDAP-style management (and IAM integration) without a long, complicated migration process. Pretty compelling use-case. If you can't control direct database permissions, your application-side access controls seem less important. Essentially, you've locked the front door but left the back window wide open. Something to think about.

Game-Changing Sensitive Data Discovery

I've tried not to let my blog become a place where I push products made by my employer. It just doesn't feel right and I'd probably lose some portion of my audience. But I'm making an exception today because I think we have something really compelling to offer. Would you believe me if I said we have game-changing DLP data discovery?

How about a data discovery solution that costs zero to install? No infrastructure and no licensing. How about a solution that you can point at specific locations and choose specific criteria to look for? And get results back in minutes. How about a solution that profiles file shares according to risk so you can target your scans according to need? And if you find sensitive content, you can choose to unlock the details by using credits, which are bundle-priced.

Game Changing. Not because it's the first or only solution that can find sensitive data (credit card info, national ID numbers, health information, financial docs, etc.) but because it's so accessible. Because you can find those answers minutes after downloading. And you can get a sense for your problem before you pay a dime. There are even free credits to let you test the waters for a while.

But don't take our word for it. Here are a few of my favorite quotes from early adopters: 
“You seem to have some pretty smart people there, because this stuff really works like magic!”

"StealthSEEK is a million times better than [competitor]."

"We're scanning a million files per day with no noticeable performance impacts."

"I love this thing."

StealthSEEK has already found numerous examples of system credentials, health information, financial docs, and other sensitive information that organizations didn't know about.

If I've piqued your interest, give StealthSEEK a chance to find sensitive data in your environment. I'd love to hear what you think. If you can give me an interesting use-case, I can probably smuggle you a few extra free credits. Let me know.



Data Protection ROI

I came across a couple of interesting articles today related to ROI around data protection. I recently wrote a whitepaper for STEALTHbits on the Cost Justification of Data Access Governance. It's often top of mind for security practitioners who know they need help but have trouble justifying the acquisition and implementation costs of related solutions. Here are today's links:

KuppingerCole -
The value of information – the reason for information security

Verizon Business Security -
Ask the Data: Do “hacktivists” do it differently?

Visit the STEALTHbits site for information on Access Governance related to unstructured data and to track down the paper on cost justification.