Category Archives: Data Security

Unsecured Third-Party Access Puts Personal Data at Risk

Organizations that fail to vet third-party suppliers properly are vulnerable to a threat that steals credit card data over long periods of time, according to a July 2018 IBM X-Force advisory.

The threat alert outlines details about a recent breach against Ticketmaster that affected several of its third-party websites. According to the advisory, a threat group used a tactic called digital skimming to harvest credit card information, login credentials and names from online forms.

The group, dubbed Magecart, has been running the campaign since at least December 2016.

Digital Skimming Threat Exploits Third-Party Access

It’s important to note that Magecart launched its attack not through Ticketmaster itself, but via one of its digital suppliers, Inbenta, and possibly through a second vendor called SocialPlus.

This incident shows how an extended ecosystem of partners and suppliers can significantly expand the perimeter that security professionals must protect. A May 2018 study from Kaspersky Lab found that incidents affecting third-party infrastructure have led to an average loss of $1.47 million for large enterprises.

How Can Organizations Thwart Third-Party Threats?

While malicious actors have been secretly inserting physical devices to skim credit card data at point-of-sale (POS) terminals for years, digital skimming makes this threat much more difficult to contend with. This is especially true for large organizations that oversee dozens of websites, landing pages and other digital properties that prompt customers to enter their personal data.

To keep third-party threats in check, IBM experts recommend:

  • Taking inventory of third-party network connections to understand where they are coming from, where they are going to and who has access;
  • Conducting vulnerability assessments on their external-facing hosts and cloud environments to look for services that are listening for inbound connections; and
  • Using encryption to ensure that their sensitive data is useless to cybercriminals in the event that it is stolen via unsecured third-party access (a minimal sketch of this last point follows the list).
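
On the last point, here is a minimal sketch of what field-level encryption at rest can look like, assuming the open source Python cryptography package; the card number and the key handling are illustrative only, not a prescription for any particular environment.

```python
from cryptography.fernet import Fernet

# Minimal sketch of field-level encryption at rest (illustrative assumption:
# the open source "cryptography" package). Key management -- HSM/KMS storage,
# rotation, access control -- matters at least as much as the cipher itself.

key = Fernet.generate_key()          # in practice, load this from a key vault
cipher = Fernet(key)

card_number = b"4111 1111 1111 1111"
token = cipher.encrypt(card_number)  # persist only the ciphertext

# Even if the record is exfiltrated over an unsecured third-party connection,
# the ciphertext is useless without the key.
assert cipher.decrypt(token) == card_number
```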

The post Unsecured Third-Party Access Puts Personal Data at Risk appeared first on Security Intelligence.

Imperva Cloud Security Now Available Through UK Government’s GCloud 10 Digital Marketplace

Building on the success of listing our market-leading, single-stack Incapsula cloud platform for DDoS protection, CDN, load balancing and WAF on the GCloud 9 framework, Imperva has now added more products to the GCloud 10 portfolio.

As the UK pushes for even greater digital adoption on a national scale, it constantly adds to and updates GCloud 10, a hotlist of preferred products and services for companies that seek to do business with government. Simply put, partnering with Imperva now ticks an important box on the UK government’s procurement checklist.

Imperva SecureSphere data protection solutions protect databases from attack, reduce risk and streamline compliance by enabling organizations to leverage common infrastructure across AWS, Azure, hybrid and on-premises environments.

Imperva SecureSphere Web Application Firewall (WAF) for AWS & Azure provides the industry’s leading WAF technology to protect web apps. It combines multiple defenses to accurately pinpoint and block attacks without blocking your citizens and partners.

Check us out on the Digital Marketplace if you’d like to learn more.

Retail data breaches continue to reach new highs

Thales announced the results of its 2018 Thales Data Threat Report, Retail Edition. According to U.S. retail respondents, 75% of retailers have experienced a breach in the past compared to 52% last year, exceeding the global average. U.S. retail is also more inclined to store sensitive data in the cloud as widespread digital transformation is underway, yet only 26% report implementing encryption – trailing the global average. Year-over-year breach rate takes a turn for the … More

The post Retail data breaches continue to reach new highs appeared first on Help Net Security.

Five network security threats facing retail – and how to fight them

Digital networks are now the backbone of every retail operation. But they are also a very attractive target for cyber criminals. Paul Leybourne of Vodat International examines the key cyber

The post Five network security threats facing retail – and how to fight them appeared first on The Cyber Security Place.

Shrouding IoT Security in the Fog

The world is undergoing the most dramatic overhaul of our information service infrastructure ever, driven by the “connected everything” movement. While the benefits of connected data are indisputable – better decisions

The post Shrouding IoT Security in the Fog appeared first on The Cyber Security Place.

Recent Attack Suggests Ransomware Is Alive and Well in Healthcare

A U.S. hospital disclosed that it suffered a ransomware attack, the latest in a spate of such incidents befalling the industry in recent years. Despite the fact that ransomware has declined in most other industries, these continued attacks highlight the need for healthcare organizations to boost their defenses and adopt strategies to proactively fight against this persistent threat.

Another Hospital, Another Data Breach

The hospital announced that it became aware of a crypto-malware attack on the morning of July 9. The incident affected the organization’s internal communications systems and access to its electronic health record (EHR).

Soon after discovering the malware, the hospital quickly initiated its incident response protocol, and IT professionals worked with law enforcement and forensics experts to investigate the incident. The security team also evaluated the hospital’s digital defense capabilities and decided to divert ambulance patients suffering from trauma or stroke to other institutions.

Although the investigators did not discover any evidence of the attack compromising patient data, they did opt to temporarily shut down the system as a precaution.

Ransomware Rates Remain High in Healthcare

According to Recorded Future, ransomware campaigns began declining in 2017, driven largely by the disappearance of many exploit kits (EKs) on the cybercrime market. At the same time, the remaining EKs made a tactical shift toward distributing crypto-mining malware. Unfortunately for hospitals, the decline in overall ransomware attacks does not apply to the healthcare sector.

Healthcare companies are still prime targets for ransomware because they invest relatively little in IT security. In addition, hospitals are often more willing to pay ransoms due to the criticality of their IT systems and EHRs. As John Halamka, chief information officer (CIO) at Boston’s Beth Israel Deaconess Medical Center, noted in Fierce Healthcare, some of these systems are not up to date, which makes them susceptible to vulnerability-driven attacks.

“Each time a patch is introduced, the act of changing a mission-critical system impacts reliability and functionality,” Halamka explained. “Some mission-critical systems were created years ago and never migrated to modern platforms.”

According to ZDNet, many hospitals have recently paid ransoms of tens of thousands of dollars to regain access to their data. Threat actors view these incidents as evidence that ransomware is still an effective and lucrative tactic to use against healthcare organizations.

How Can Hospitals Protect Their Data?

To protect healthcare data from threat actors looking to hold it for ransom, hospitals should double down on patch management to ensure that all networks, endpoints, applications, databases and medical devices are up to date. They should also implement network segmentation to limit attackers’ lateral movement and regularly back up data so that operations can resume quickly in the event of a breach.

As always, the best defense against threats such as ransomware is continuous training and education throughout the organization. By ensuring that everyone from rank-and-file employees to top leadership can recognize signs of a ransomware attack and act accordingly, these users can serve as the first line of defense against this persistent threat.

The post Recent Attack Suggests Ransomware Is Alive and Well in Healthcare appeared first on Security Intelligence.

Only 65% of organizations have a cybersecurity expert

Despite 95 percent of CIOs expecting cyberthreats to increase over the next three years, only 65 percent of their organizations currently have a cybersecurity expert, according to a survey from Gartner. The survey also reveals that skills challenges continue to plague organizations that undergo digitalization, with digital security staffing shortages considered a top inhibitor to innovation. Gartner’s 2018 CIO Agenda Survey gathered data from 3,160 CIO respondents in 98 countries and across major industries, representing … More

The post Only 65% of organizations have a cybersecurity expert appeared first on Help Net Security.

Why Consumers Demand Greater Transparency Around Data Privacy

Although consumers have a wide range of attitudes toward data privacy, the vast majority are calling for organizations to be more transparent about how they handle customer information, according to a July 2018 survey from the Direct Marketing Association.

Previous research has shown that many companies are not doing enough to communicate and clarify their data-handling policies to customers. Given these findings, what practices can organizations adopt to be more upfront with users and build customer trust?

How Important Is Data Privacy to Consumers?

The Direct Marketing Association survey sorted respondents into three categories:

  1. Data pragmatists (51 percent): Those who are willing to share their data as long as there is a clear benefit.
  2. Data unconcerned (26 percent): Those who don’t care how or why their data is used.
  3. Data fundamentalists (23 percent): Those who refuse to share their personal data under any circumstances.

It’s not just fundamentalists who see room for improvement when it comes to organizations’ data-handling practices. Eighty-two percent of survey respondents said companies should develop a flexible privacy policy — while 84 percent said they should simplify their terms and conditions. Most tellingly, 86 percent said organizations should be more transparent with users about how they engage with customer data.

There Is No Digital Trust Without Transparency

The results of a May 2018 study from Ranking Digital Rights (RDR), Ranking Digital Rights 2018 Corporate Accountability Index, suggest that consumers’ demands for more transparency are justified. Not one of the 22 internet, mobile and telecommunications companies surveyed for the study earned a privacy score higher than 63 percent, indicating that most organizations fail to disclose enough information about data privacy to customers.

Transparency is often a critical factor for consumers when deciding whether to establish digital trust with a company or service provider. According to IBM CEO Ginni Rometty, organizations can and should work to improve their openness by being clear about what they’re doing with users’ data. Those efforts, she said, should originate from companies themselves and not from government legislation.

“This is better for companies to self-regulate,” Rometty told CNBC in March 2018. “Every company has to be very clear about their data principles — opt in, opt out. You have to be very clear and then very clear about how you steward security.”

The post Why Consumers Demand Greater Transparency Around Data Privacy appeared first on Security Intelligence.

26,000 electronic devices are lost on London Transport in one year

Commuters lost over 26,000 electronic devices on the Transport for London (TfL) network last year, new research from the think tank Parliament Street has revealed. The findings reveal that 26,272 devices were reported lost on the network of tubes, trains and buses between April 2017 and April 2018. The report contains further security analysis on the risks lost devices pose for fraudulent activity, identity verification and data security for UK businesses. The data revealed that … More

The post 26,000 electronic devices are lost on London Transport in one year appeared first on Help Net Security.

Staying secure as the IoT tsunami hits

The ubiquitous adoption of devices in virtually every industry is creating a massive, global security gap. Data science can help rein in the risks. Just when we thought we were

The post Staying secure as the IoT tsunami hits appeared first on The Cyber Security Place.

Need for Speed: Optimizing Data Masking Performance and Providing Secure Data for DevOps Users

Let’s start with a pretty common life experience — you identify a need (e.g., transportation), you evaluate your options (e.g., evaluate car manufacturers, various features, pricing, etc.), and you decide to purchase (e.g., vehicle X). This process repeats itself over and over again regardless of the purchase. What typically happens following the purchase decision is also equally likely and transferrable — that is: How do I improve it? Increase efficiency? Can I tailor it to my individual needs?

For most technology purchases, including those related to data security — and data masking in particular — the analogy holds equally true. For many of our data security customers, the desire to optimize the throughput, run-rate, or outputs of the solutions they invest in is becoming increasingly important as they race to achieve regulatory compliance with key data privacy requirements and regulations such as the European Union (EU)-wide General Data Protection Regulation (GDPR), HIPAA, or PCI DSS. Equally important is that most organizations are looking to mitigate the risk of sensitive data exposure and optimize their DevOps function, allowing more end-users to access data for test and development purposes without the risk associated with using sensitive data. And they want all of this to be achieved FASTER.

Imperva offers a variety of data security solutions to support these increasingly common organizational challenges, including Imperva Camouflage. Our industry-leading data masking solution and best practice implementation process offer a one-stop means to achieve compliance support and reduce data risks in DevOps, while meeting end-user processing expectations. Simply put: the process involves the use of a production database copy, upon which the data is classified for masking, and transformation algorithms are then applied to produce fictional — but contextually accurate — data that is substituted for the original source data; all done in an expeditious manner to meet the need for speed.

You’ve decided you need data masking, but your end-users want more

In previous blogs and webinars, we highlighted the value that data masking provides for protecting data privacy and supporting industry and regulatory compliance initiatives. Industry analysts continue to see ongoing and expanding use cases and demand for the technology. This is largely because organizational data capture and storage, and sensitive customer data in particular, continue to grow. Further, changing data applications and database types, migration to the cloud (for DevOps), privacy regulations that require de-identified data, and the growth of big data applications and various data use cases all combine to drive the need for data masking technologies and their diversification and advancement.

So, organizations are seeing the value of data masking, and many have implemented it into their overall data security strategies to provide yet another critical layer. That said, they are also demanding increased speed of masked copy processing and provisioning to ensure their DevOps teams continue to deliver on business-critical processes. How then can data masking be optimized? Let’s first take a look at typical performance considerations, and then review how the process can be optimized for end-users.

How do you measure data masking ‘performance’?

One of the most common questions asked during a sales cycle, POC or implementation relates to the length of time it will take to mask data, and how that performance is measured. The quick answer is ‘it’s complicated’. Data masking run-rate performance is typically measured in rows per second. This is the metric most often cited by our customers, but the underlying size of a row can and does vary significantly depending on how wide the particular tables are, which complicates performance comparisons.

Additionally, the variance in data volumes and data model complexity makes it challenging to provide specific performance numbers, but these are batch processes that modify large amounts of data (excepting discovery) and therefore consume a significant amount of time on large data sets. That said, Imperva offers a number of avenues to optimize performance for a given customer’s requirements, and in most cases the process can be reverse engineered from the desired performance or run-time metric. We’ll get into the specifics on this shortly.
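
As a back-of-the-envelope illustration of how a rows-per-second run rate translates into wall-clock time, consider the sketch below; every figure in it is a hypothetical placeholder, not a benchmark of any product.

```python
# Rough masking runtime estimate. All numbers are hypothetical placeholders;
# real throughput depends on row width, data model complexity, staging
# hardware and the masking configuration applied.

tables = {
    "customers": 25_000_000,   # rows per table containing data to be masked
    "orders": 180_000_000,
    "payments": 95_000_000,
}

rows_per_second = 50_000       # assumed average run rate

total_rows = sum(tables.values())
estimated_hours = total_rows / rows_per_second / 3600
print(f"{total_rows:,} rows at {rows_per_second:,} rows/s ~= {estimated_hours:.1f} hours")
```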

Aside from the inherent capabilities of the software solution itself, there are several factors that influence the performance of data masking that we discuss with our customers. We also explain that the various combinations of these make it challenging for any vendor to pinpoint exact masking run times. We’ve consolidated the performance-impacting variables into three key areas: database characteristics, hardware requirements, and masking configurations. Let’s review each of these:

  1. Database characteristics:

In general, a large database takes longer to mask than a small one (pretty simple!). To be more specific, the height and width (row count and columns per row) of the tables being masked directly impact the runtime. Tall tables, with high row counts, have more data elements to process than shorter tables.

In contrast, wide tables containing extraneous non-sensitive information introduce I/O overhead because that non-sensitive information is carried along, in part, during the data transformation process. This makes the input of DevOps SMEs key to assessing the underlying databases and helping scope the performance requirements.

  2. Hardware requirements:

Data masking can be a relatively heavy process on the database tier in that it copies and moves a significant amount of data in many cases. Because we employ our best practice implementation process using a secure staging server, this introduces yet another variable that influences throughput, but also an opportunity. The processing power and I/O speed of the staging tier greatly influence performance, regardless of the vendor or solution being deployed. When we provide base hardware specifications to customers for the staging server, we make this clear. We also help by providing a range of hardware options depending on the underlying environment characteristics and end-user requirements, noting that where SLA windows for masking are tight, appropriate hardware should be provisioned to accommodate them. The good news is that this is an easy configuration, and most customers already have access to all the tools they need to tailor their staging server to their specific requirements.

  3. Masking configurations within the projects:

The details of the security and data masking requirements, which are driven by the organization, also influence performance. In particular, the amount and complexity of the masking being applied have an impact on masking run times. From our experience, in cases where typical sensitive data element types require masking, the data resides in 15% – 20% of the tables. If higher security requirements are imposed and additional data elements are included, this could expand to include as many as 30% – 40% of the tables.

In addition to the volume of data being masked, the specific data transformations also influence the runtimes. There are different types of data transformers, and they each have different performance characteristics based on the manner in which they manipulate data. For example, ‘Data Generators’ synthesize fictitious numbers such as credit card and phone numbers, whereas ‘Data Loaders’ substitute data, such as names and addresses, drawn from defined sets. The business needs usually dictate which transformer should be used for a given data element, but sometimes the business requirement can be met with more than one transformer. In those cases, the option chosen can have an impact on performance.
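
To make the distinction concrete, here is a toy sketch of the two transformer styles described above; it is purely illustrative and does not reflect Imperva Camouflage’s actual transformers or API.

```python
import random

# "Generator"-style transformer: synthesizes fictitious, format-preserving values.
def generate_phone() -> str:
    return f"555-{random.randint(100, 999)}-{random.randint(1000, 9999)}"

# "Loader"-style transformer: substitutes values drawn from a predefined set.
FIRST_NAMES = ["Alex", "Dana", "Jordan", "Morgan", "Sam"]

def load_name(_original: str) -> str:
    return random.choice(FIRST_NAMES)

row = {"name": "Jane Smith", "phone": "212-555-0187"}
masked = {"name": load_name(row["name"]), "phone": generate_phone()}
print(masked)   # fictional but contextually plausible replacement values
```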

For each of these key variables, it’s important to assess the business requirements and then balance appropriately with regards to the complexity of the underlying data model, the staging server horsepower, and the methods applied for the masking process. Imperva’s depth of experience in this regard provides additional value to customers when they are looking to understand the best implementation approach to meet both data security and end-user requirements. It’s a critical piece of the puzzle.

We know what impacts performance and what to consider beforehand. Now, how do we make it even better?

While we’ve focused on the more tool-agnostic variables that impact masking performance, there are also considerations within the tool itself that can help fine-tune the end result. For Imperva’s solution, there are a variety of levers that can be used to customize and optimize high-volume/high-throughput masking. For example, performance settings within the solution can be adjusted at multiple levels of the application stack including (a) the database server, (b) the Imperva Camouflage application server, and (c) within the masking engine itself to maximize performance. Settings for parallelization of operations, flexible allocation of hardware resources (RAM) and the use of bulk-SQL commands during masking operations, are some of the ways in which the performance and scalability of the Imperva Camouflage solution can also be configured.

A number of approaches for maximum scalability and performance are also available within the Imperva Camouflage solution that can be considered depending on the environment and requirements, including:

  • Multi-Threading – parallelization is used throughout Imperva Camouflage to enable masking to scale to the largest of databases and masking targets. This includes the capability to process many database columns at the same time while accounting for dependencies within certain databases.
  • Optimized SQL – although invisible to the user, Imperva Camouflage refines the SQL used to affect the masking depending on the database type as well as the particular masking operation being performed. No configuration changes are necessary to take advantage of bulk-SQL and other commands that minimize database logging overhead.
  • Execution on the Database Tier – many operations are performed directly on the database server(s) which has the effect of minimizing data movement over the network thereby maximizing performance. It also leverages the hardware resources that are typically dedicated to database servers.
  • Parallelization on the Database Tier – wherever possible, operations are performed in parallel using multiple instances on the database tier as well. For some environments, this is a combination of database engine settings as well as Imperva Camouflage configuration. By scaling up or down, the masking process can be made to conform to the needs and constraints of the given masking operation. This is one area where Imperva typically spends time with its customers and masking end users to ensure they are maximizing the tool’s performance (a generic sketch of chunked parallel masking with bulk-SQL writes follows this list).
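
For illustration only, the sketch below shows the general shape of two of these ideas, chunked parallel transformation and bulk-SQL writes, using Python and sqlite3 so it stays self-contained; it is not a depiction of how Imperva Camouflage implements them.

```python
import sqlite3
from concurrent.futures import ProcessPoolExecutor

def mask_chunk(chunk):
    """Mask a chunk of (rowid, email) pairs outside the database."""
    return [(f"user{rowid}@example.invalid", rowid) for rowid, _email in chunk]

def mask_emails(db_path: str, chunk_size: int = 10_000) -> None:
    conn = sqlite3.connect(db_path)
    rows = conn.execute("SELECT rowid, email FROM users").fetchall()
    chunks = [rows[i:i + chunk_size] for i in range(0, len(rows), chunk_size)]

    # Parallelize the transformation work across CPU cores.
    with ProcessPoolExecutor() as pool:
        masked_chunks = list(pool.map(mask_chunk, chunks))

    # Write back with bulk SQL to keep per-row overhead down.
    for masked in masked_chunks:
        conn.executemany("UPDATE users SET email = ? WHERE rowid = ?", masked)
    conn.commit()
    conn.close()
```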

It’s also important to reinforce that regardless of the solution, the storage architecture and configuration have a significant impact on performance. Faster storage with reads/writes spread across multiple disks will result in better performance overall. In many cases, database and storage tiers are configured for transactional workloads which are different from the bulk/batch workload that masking represents. Better performance will be found with faster storage that is in the same data center as the database server being masked, period!

Slow down to Speed up!

So, there are clearly a variety of factors that impact the run-rate of data masking, and yet there are also a variety of levers that, once understood, can be used to optimize performance and meet end-user expectations. Additionally, leveraging industry-recognized, purpose-built solutions and best-practice implementation expertise offers a much more efficient and effective way to optimize data masking run-rates, and a more scalable and sustainable process over the long term.

The key is to slow down during the implementation phase. Understand your requirements. Understand your data model. Understand what resources you need to apply to your staging server and your masking processes, and you’re well on your way to optimizing the resulting output. At the end of the day, Imperva can in most cases reverse engineer a customer’s desired performance requirements and configure the solution, processes, and recommended hosting architecture to achieve the desired result.

Imperva offers a variety of data security solutions to support organizational data security efforts, with Imperva Camouflage offering industry-leading support for masking (and/or pseudonymizing) data to help achieve privacy compliance requirements (e.g., GDPR) and mitigate the risk of a costly data breach within the various DevOps environments.

Get in touch to learn more about Imperva Camouflage and Imperva’s broader portfolio of data security solutions. Also, feel free to test-drive SCUBA, our free database vulnerability scanner tool, and/or CLASSIFIER, our free data classification tool.

Think You’ve Got Nothing to Hide? Think Again — Why Data Privacy Affects Us All

We all hear about privacy, but do we really understand what this means? According to privacy law expert Robert B. Standler, privacy is “the expectation that confidential personal information disclosed in a private place will not be disclosed to third parties when that disclosure would cause either embarrassment or emotional distress to a person of reasonable sensitivities.”

It’s important to remember that privacy is about so much more than money and advertisements — it ties directly to who we are as individuals and citizens.

What Is the Price of Convenience?

Most users willingly volunteer personal information to online apps and services because they believe they have nothing to hide and nothing to lose.

When I hear this reasoning, it reminds me of stories from World War II in which soldiers sat on the sideline when the enemy was not actively pursuing them. When the enemy did come, nobody was left to protect the soldiers who waited around. That’s why it’s essential for all users to take a stand on data privacy — even if they’re not personally affected at this very moment.

Some folks are happy to disclose their personal information because it makes their lives easier. I recently spoke to a chief information security officer (CISO) and privacy officer at a major unified communications company who told me about an employee who willingly submitted personal data to a retail company because it streamlined the online shopping experience and delivered ads that were targeted to his or her interests.

This behavior is all too common today. Let’s dive deeper into some key reasons why privacy should be top of mind for all users — even those who think they have nothing to hide.

How Do Large Companies Use Personal Data?

There is an ongoing, concerted effort by the largest technology companies in the world to gather, consume, sell, distribute and use as much personal information about their customers as possible. Some organizations even market social media monitoring tools designed to help law enforcement and authoritarian regimes identify protesters and dissidents.

Many of these online services are free to users, and advertising is one of their primary sources of revenue. Advertisers want high returns per click, and the best way to ensure high conversion rates is to directly target ads to users based on their interests, habits and needs.

Many users knowingly or unknowingly provide critical personal information to these companies. In fact, something as simple as clicking “like” on a friend’s social media post may lead to new ads for dog food.

These services track, log and store all user activity and share the data with their advertising partners. Most users don’t understand what they really give up when technology firms consume and abuse their personal data.

Advanced Technologies Put Personal Data in the Wrong Hands

Many DNA and genomics-analysis services collect incredibly detailed personal information about customers who provide a saliva-generated DNA sample.

On the surface, it’s easy to see the benefit of submitting biological data to these companies — customers get detailed reports about their ancestry and information about potential health risks based on their genome. However, it’s important to remember that when users volunteer data about their DNA, they are also surrendering personal information about their relatives.

Biometrics, facial recognition and armed drones present additional data-privacy challenges. Governments around the world have begun using drones for policing and crowd control, and even the state of North Dakota passed a law in 2015 permitting law enforcement to arm drones with nonlethal weapons.

Facial recognition software can also be used for positive identification, which is why travelers must remove their sunglasses and hats when they go through immigration control. Law enforcement agencies recently started using drones with facial recognition software to identify “potential troublemakers” and track down known criminals.

In the U.S., we are innocent until proven guilty. That’s why the prospect of authorities using technology to identify potential criminals should concern us all — even those who don’t consider privacy to be an important issue in our daily lives.

Who Is Responsible for Data Privacy?

Research has shown that six in 10 boards consider cybersecurity risk to be an IT problem. While it’s true that technology can go a long way toward helping organizations protect their sensitive data, the real key to data privacy is ongoing and companywide education.

According to Javelin Strategy & Research, identity theft cost 16.7 million victims $16.8 billion in the U.S. last year. Sadly, this has not been enough to push people toward more secure behavior. Since global regulations and company policies often fall short of protecting data privacy, it’s more important than ever to understand how our personal information affects us as consumers, individuals and citizens.

How to Protect Personal Information

The data privacy prognosis is not all doom and gloom. We can all take steps to improve our personal security and send a strong message to governments that we need more effective regulations.

The first step is to lock down your social media accounts to limit the amount of personal information that is publicly available on these sites. Next, find your local representatives and senators online and sign up to receive email bulletins and alerts. While data security is a global issue, it’s important to keep tabs on local legislation to ensure that law enforcement and other public agencies aren’t misusing technology to violate citizens’ privacy.

Lastly, don’t live in a bubble: Even if you’re willing to surrender your data privacy to social media and retail marketers, it’s important to understand the role privacy plays in day-to-day life and society at large. Consider the implications to your friends and family. No one lives alone — we’re all part of communities, and we must act accordingly.

The post Think You’ve Got Nothing to Hide? Think Again — Why Data Privacy Affects Us All appeared first on Security Intelligence.

Departing Employees Should Not Mean Departing Data

Empowering your employees to do their best work means providing them access to physical and digital assets in the company network that can help them scale their initiatives. But when

The post Departing Employees Should Not Mean Departing Data appeared first on The Cyber Security Place.

What is the Tor Browser? How it works and how it can help you protect your identity online

Move over “dark web,” the Tor Browser will keep you safe from snoops. The Tor Browser is a web browser that anonymizes your web traffic using the Tor network, making it

The post What is the Tor Browser? How it works and how it can help you protect your identity online appeared first on The Cyber Security Place.

Understanding SIEM Technology: How to Add Value to Your Security Intelligence Implementation

Security information and event management (SIEM) technology has been around for more than a decade — and the market is growing by the minute.

So, it may seem strange that so many organizations lack a proper understanding of what a security intelligence and analytics solution can do, what type of data it ingests and where to begin when it comes to implementation.

As the threat environment expands in both diversity and volume, IT skills are becoming increasingly scarce, and point solutions are increasingly flooding the market. As a result, many security leaders are at a loss when it comes to selecting the right SIEM solutions to serve their unique needs.

Clear the Fog Surrounding SIEM Technology

Why all the confusion? For one thing, many companies just throw money at a SIEM platform to solve all their security use cases or as a silver bullet for compliance. These are ill-advised strategies because customers are often left to their own devices to both define and implement the system.

So, how should these companies proceed? The first step is to identify the primary security challenges they are trying to solve and the outcomes they hope to achieve.

To shed light on their SIEM implementation, security leaders need a single pane of glass across the organization’s infrastructure to detect and investigate threats, both internal and external. In both cases, these threats are typically after the enterprise’s critical data, whether they aim to steal or destroy it. Since more and more of this data is being moved off premises, cloud security has become a critical function of security operations.

Threat actors will do anything they can to gain access to the enterprise’s crown jewels — and, when they do, security teams need a rapid and efficient incident-response process that enables analysts to take action quickly and confidently.

Finally, and perhaps most crucially, organizations must be able to prove all of the above to various compliance and regulatory auditors.

How to Optimize Your SIEM Implementation

To clear up the uncertainty surrounding SIEM technology — and to maximize the value of their implementation — security leaders should:

  • Understand the outcomes their SIEM solution can deliver against common use cases;
  • Create a road map for SIEM maturity;
  • Understand how adding different types of data to the SIEM can improve outcomes (see the sketch after this list); and
  • Continuously review their processes and educate staff and stakeholders accordingly.
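
As a small, hedged illustration of the third point, the sketch below normalizes a raw audit line into a flat event and forwards it to a collector over UDP; the hostname, port and field names are assumptions, not any particular SIEM’s schema.

```python
import json
import socket
from datetime import datetime, timezone

COLLECTOR = ("siem.example.internal", 514)   # hypothetical collector address

def normalize(raw_line: str, source: str) -> dict:
    """Turn a 'user, action, target' audit line into a flat, consistent event."""
    user, action, target = (field.strip() for field in raw_line.split(","))
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source": source,
        "user": user,
        "action": action,
        "target": target,
    }

def forward(event: dict) -> None:
    """Ship the normalized event to the collector as JSON over UDP."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(json.dumps(event).encode("utf-8"), COLLECTOR)

forward(normalize("jdoe, SELECT, billing.cards", source="db-audit"))
```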

By following these basic steps, chief information security officers (CISOs) can demonstrate the value of their SIEM implementation in a way that is easily communicable to business leaders and lead the way toward smarter, more prudent investments.

Download the 2017 Gartner Magic Quadrant for SIEM

The post Understanding SIEM Technology: How to Add Value to Your Security Intelligence Implementation appeared first on Security Intelligence.

How Local Privacy Regulations Influence CISO Spending Around the World

As local privacy regulations take effect in places like California and the U.K., security leaders around the world are sensing a shift toward stronger data privacy and transparency — and are using these laws as guidelines to help them make budgetary decisions.

The California Consumer Privacy Act was signed into law on June 28, 2018, and will take effect by 2020. The law will take an approach similar to the General Data Protection Regulation (GDPR) regarding transparency and consent around personal information. GDPR went into effect across the European Union (EU) just one month before the new law’s signing.

As with other privacy regulations, organizations in California must now ensure their customers know what kind of information they are collecting and sharing with third parties, such as advertisers and marketers. Consumers can choose to opt out of having their information collected, and companies that fail to comply risk incurring fines from the state’s attorney general.

Local Privacy Regulations Guide Private Sector Security Strategies

While GDPR and the California Consumer Privacy Act focus on how companies gather and manage data, other legislators are trying to ensure that the systems they use don’t fall prey to cybercriminals.

The U.K.’s Cabinet Office, for instance, published the first iteration of its “Minimum Cyber Security Standard” in June 2018. Though it is designed as a checklist for government agencies, private sector organizations can adopt some of its practices — such as checking websites and applications for common vulnerabilities — to keep ahead of further privacy legislation. As with more traditional privacy regulations, it outlines several mandatory requirements, including support for Transport Layer Security (TLS) encryption.
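
A minimal sketch of one such check, confirming that a public endpoint negotiates a modern TLS version, is shown below; the hostname is a placeholder, and a real assessment would cover far more than this single control.

```python
import socket
import ssl

def negotiated_tls_version(host: str, port: int = 443) -> str:
    """Connect with Python's default (secure) settings and report the TLS version."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()   # e.g. "TLSv1.2" or "TLSv1.3"

# With the default context, hosts limited to older protocols typically fail
# the handshake outright rather than returning a version string.
print(negotiated_tls_version("example.com"))
```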

Regulatory Activity Impacts Security Budgets Around the World

These new laws and regulations reveal that chief information security officers (CISOs) from California to the U.K. are starting to use privacy regulations as a guide to determine what resources they will need to be effective.

For instance, according to a February 2018 report from consulting group Ankura, The Shifting Cybersecurity Landscape: How CISOs and Security Leaders Are Managing Evolving Global Risks to Safeguard Data, 73 percent of CISOs said regulatory activity drives their decision-making around security budgets — and all respondents said they had to comply with at least one such framework.

Even if privacy regulations like GDPR don’t directly pertain to their organizations, the Ankura report suggested that security leaders are paying close attention because they recognize that one piece of legislation can influence what other governments may demand in the future.

In other words: The effects of cybersecurity legislation in places like the E.U., the U.K. and California are reaching far past their own borders. As data privacy laws proliferate around the world, security leaders everywhere will be impacted by the shift toward greater protection and transparency.

The post How Local Privacy Regulations Influence CISO Spending Around the World appeared first on Security Intelligence.

Optimizing A Monitoring System: Three Methods for Effective Incident Management

Picture this: You’ve just returned from a well-deserved vacation and, upon opening your security monitoring system, you’re faced with the prospect of analyzing thousands of events.

This isn’t an imaginary scenario: the security monitoring world (actually, monitoring in general) is full of anomalies that trigger events. These may represent a real problem or just a slight deviation in someone’s day-to-day behavior.

Regardless of the cause, it forces you to sift through large numbers of incidents to figure out which are high priority and which aren’t.

In this post, we’ll highlight three effective methods that can be used to alleviate this problem, based on real-world examples.

Real Value Incidents

The biggest question in the monitoring world is which anomalies should trigger an incident. One of the challenges security operations teams face is finding relevant and meaningful incidents; there are simply too many false positives. To answer this question, we also need to ask ourselves how we define an incident. Well, that depends on the system domain. The actual decision requires high-level knowledge of the domain and may require the use of complex algorithms that, based on that definition, highlight what is really interesting.

For example, in the insider threat domain, a system identifies that a user has performed an action on a database for the first time. This is an anomaly since it never occurred before, but is it a real security incident? In order to answer this question, we have to classify the user as well as the database and correlate these two. This allows insights you wouldn’t get at face value.
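
A hypothetical sketch of that correlation step follows; the classification labels and the rule are illustrative, not CounterBreach’s actual logic.

```python
# Hypothetical classifications used to decide whether a "first time action"
# anomaly is worth raising as an incident.
USER_CLASS = {"svc_backup": "service", "jdoe": "dba", "mlee": "business"}
DB_CLASS = {"hr_payroll": "sensitive", "qa_sandbox": "non-sensitive"}

def is_real_incident(user: str, database: str) -> bool:
    user_type = USER_CLASS.get(user, "unknown")
    db_type = DB_CLASS.get(database, "unknown")
    # A business user touching a sensitive database for the first time is
    # interesting; a DBA touching a sandbox is routine.
    return db_type == "sensitive" and user_type != "dba"

print(is_real_incident("mlee", "hr_payroll"))   # True  -> raise an incident
print(is_real_incident("jdoe", "qa_sandbox"))   # False -> suppress the anomaly
```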

Grouping of Incidents

Once the real-value incidents are identified, one way of reducing the number of incidents that need to be managed is by grouping them into narratives that describe a specific phenomenon security engineers can handle as one. Although each individual incident is valid, when grouped together, a larger, more manageable narrative appears that can be dealt with as a whole; the sum is greater than its parts (a minimal grouping sketch follows the list below).

There are two types of grouping:

  1. By incident type. For example, ‘a service account was abused by multiple users’.
    This implies that this service account is accessible to a community, which is bad practice. Handling of this phenomenon can be to change the permission of this account.
  2. Grouping of different types of incidents that represent a certain narrative. For example, a user has abused a specific database account, accessed several application tables and accessed a large number of files. This implies that the user may be compromising the data of the enterprise. Handling this could mean assessing the user and their behavior.
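
Here is a minimal grouping sketch along the lines of the two types above; the incident fields are hypothetical.

```python
from collections import defaultdict

incidents = [
    {"type": "service_account_abuse", "account": "svc_app", "user": "u1"},
    {"type": "service_account_abuse", "account": "svc_app", "user": "u2"},
    {"type": "db_account_abuse", "user": "jdoe"},
    {"type": "app_table_access", "user": "jdoe"},
    {"type": "excessive_file_access", "user": "jdoe"},
]

groups = defaultdict(list)
for incident in incidents:
    # Type 1: the same incident type against the same service account.
    # Type 2: different incident types tied to the same user (a narrative).
    key = ("account", incident["account"]) if "account" in incident else ("user", incident["user"])
    groups[key].append(incident)

for key, members in groups.items():
    print(key, "->", f"{len(members)} incidents handled as one group")
```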

An example from Imperva CounterBreach customer data shows how grouping reduces the number of events to deal with: the number of incidents grows continuously, whereas the number of groups grows ever more slowly until it levels off.

Figure 1: 13 groups instead of 377 incidents

Incident Priority Scoring

Traditional prioritization of security incidents is usually done by classification into severities (critical, high, medium, low). This type of classification doesn’t provide a clear decision on what should be done first. Let’s say there are 10 incidents classified as critical. All of them must be treated immediately, but which should be first?

The suggested solution is to set a priority score for each incident on a range of 0 to 100. Different criteria within the incident add scores — different calculation methods can be used — and the priority score is the end result.

Example: The traditional severity for incident ‘Excessive Database Record Access’ is high as this implies data theft.

Two incidents of this type are raised which, at first glance, might be treated with the same urgency, but are they really the same?

Let’s now look more closely at the details:

  1. A human user has accessed 105,000 records in a database in a production environment.
  2. A human user has accessed 100,000 records in a regular database in a staging environment.

The details clearly indicate that the first incident should be treated before the second, as it poses a greater threat.

Using the new method:

  • Incident type: excessive database record access = 70.
  • Number of records accessed > 100,000 – Add +5.
  • Database is in a production environment – Add +10.

Based on the above, the first incident’s final priority score is 85 whereas the second incident’s final priority score is 70.
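
Expressed as a small sketch, the same arithmetic looks like this; the base score and modifiers simply mirror the example above and are not a standard scoring scheme.

```python
# Priority scoring that mirrors the worked example above: a base score per
# incident type plus modifiers for record volume and environment.
BASE_SCORES = {"excessive_database_record_access": 70}

def priority(incident: dict) -> int:
    score = BASE_SCORES[incident["type"]]
    if incident["records"] > 100_000:
        score += 5
    if incident["environment"] == "production":
        score += 10
    return score

first = {"type": "excessive_database_record_access", "records": 105_000, "environment": "production"}
second = {"type": "excessive_database_record_access", "records": 100_000, "environment": "staging"}
print(priority(first), priority(second))   # 85 70
```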

Scoring can be done on groups as well.

Deciding on the score criteria and values is a fundamental factor in whether the ordering of incidents leads to the correct prioritization. It requires in-depth knowledge of the subjects being monitored.

Applying the described methods

Each of these methods reduces the number of incidents you need to deal with; however, it is best to implement all three.

As seen in our examples, the number of incidents with real value may still be high, especially if you have broad coverage. Grouping incidents can dramatically reduce the number of issues to deal with, but you will still want to know which incidents or groups of incidents to handle first. Setting scores takes care of that.

Conclusion

Security monitoring systems provide a very important layer of protection; however, as the number of incidents raised increases, they become harder to manage and more time-consuming to handle. This may even lead to abandoning a system altogether.

Focusing on the important stuff (real-value incidents), providing the big picture (grouping incidents) and defining a clear priority (incident priority scoring) allows a faster, more effective investigation. As such, the real value of monitoring is achieved. Imperva CounterBreach addresses all of these requirements; get in touch and let’s see where we can help.

Cloud Security For The Healthcare Industry: A No-Brainer

The healthcare industry has become one of the likeliest to suffer cyber-attacks, and there’s little wonder why. Having the financial and personal information of scores of patients makes it a very appetizing target for attackers.

Just over a year ago, the WannaCry ransomware attack wreaked havoc on the UK National Health Service (NHS), ultimately disrupting a third of its facilities and causing a rash of canceled appointments and operations.

As healthcare organizations face the prospect of increasing attack, their security teams look to cybersecurity experts with comprehensive, tested products to protect the sensitive information they hold. ALYN Woldenberg Family Hospital, Israel’s only pediatric rehabilitation facility, is no exception.

With a database of more than 70,000 patients and a website hosted in four languages and across three different domains, ALYN Hospital’s IT team was concerned that their content management system (CMS) could be vulnerable. The team didn’t feel their cybersecurity vendor was updating the security on their CMS as often as they should, leading them to go looking for a new vendor.

ALYN’s team initially checked out on-premises WAF systems, but kept coming up against the cost of securing their sites and, because of strict government regulations, they were hesitant to move to a cloud-based system. Ultimately, however, they decided that the Imperva Incapsula cloud-based WAF was just the thing, as it meets the most stringent enterprise-grade security criteria.

“We looked at community reviews and talked with colleagues at other hospitals and got the impression that Incapsula is one of the best in terms of cost-benefit ratio, which is important to us, in addition to robustness, ease-of-use, and integration, which was very smooth. It all proved to be correct, for which I am very glad,” said Uri Inbar, Director of IT for ALYN Hospital.

Setting up the system took less than a day and ALYN Hospital still manages its servers in-house, with a staff member who is now dedicated to security. Imperva Incapsula has been low maintenance from the start, so, while customer support was with them every step of the way at the beginning, they haven’t needed any for the last few years because the automatic system, managed and tuned by a team of Imperva security experts, has been running smoothly on its own.

“It gives us peace of mind to know that someone has dedicated themselves to the subject and keeps us updated. It’s one less worry to take care of.”

Since making the switch, ALYN Hospital has seen some significant improvements:

  • Increased visibility for monitoring security threats: The Imperva Incapsula dashboard is easy to use and provides information that helps ALYN Hospital keep its systems secure. And for their special projects, they can even see which countries are generating the most traffic.
  • Good cost-benefit ratio: One of the most important aspects of any new security system for ALYN, the costs were reasonable, especially given the security benefits they received from the Incapsula system.
  • Faster content delivery: While no formal studies were done, the IT staff has heard from some users that their CDN is delivering content faster than before.

Imperva Incapsula offers a single stack solution that integrates content delivery, website security, DDoS protection, and load balancing. Incapsula is PCI-compliant, has customizable security rules and offers 24/7 support.

Back to Basics: Let’s Forget About the GDPR… For A Moment

At this point it’s fairly safe to assume that almost everyone in the business of “data” has heard of the European Union (EU)-wide General Data Protection Regulation (GDPR), which was signed into law in late April 2016, with the compliance deadline having come into effect on May 25, 2018. Clearly, this new regulation has significant implications for organizations across the globe as it relates to data capture, storage, transfer and use. In previous blogs, whitepapers, and webinars, we outlined various ways companies can “get started” with their GDPR compliance efforts, as well as how to “maintain” compliance over the long term.

With all the focus on GDPR compliance, however, we may have glossed over the importance of executing some key data processes for purposes much broader than GDPR. So, let’s forget about GDPR – only for a moment – and level-set on a few critical steps organizations must take to protect their sensitive data and mitigate the risk of accidental or malicious exposure, along with the cost and reputational damage that follows.

A major challenge, and one that goes well beyond privacy compliance

In the rush to meet industry expectations or compliance cut-off dates, organizations may skip some foundational steps critical to ensuring long-term data security and reducing organizational cost and administrative burden. Far too many organizations spend far too much money trying to prevent data breaches by simply throwing the full force of technology at their environments. And, just like the old saying “half the money I spend on advertising is wasted; the trouble is I don’t know which half,” half the money spent on data protection is likely also wasted.

To that end, you can’t protect what you don’t know about. Many organizations labor under a false sense of security with respect to knowing what data they have, and where, and that’s just their production data environments. In many cases, lower level non-production or DevOps environments are similar to “wild west” organizational data stores with multiple copies, users, and risk points, usually with fewer security controls in place. In fact, according to an IDC report titled “Copy Data Management”, 82% of organizations surveyed had more than 10 copies of each database, a number that’s no doubt growing as data capture, storage, and use continues to grow.

Furthermore, organizations can waste a lot of time and money on a multitude of unnecessary technologies and resources aimed at protecting complex yet unclassified data environments. Understanding where all your data is stored, classifying relevant sensitive data to align with security and privacy requirements, and assessing vulnerabilities all play a critical role in priming the organization for long-term data security success; and in supporting process and technology enhancements in a more strategic and cost-effective manner.

This leads me to offer what many believe to be the number one data security priority for any organization, regardless of industry compliance requirements. That is, data discovery and classification.

Executing your number one data security priority could also save you millions.

By identifying all databases, including archived, forgotten or rogue databases (yes, these exist!), cataloguing and classifying sensitive data, and assessing databases for vulnerabilities and misconfigurations, your organization is in a much better position to make educated decisions on what investments need to be made, and where, in terms of data security. It also allows you to find weak points in the chain and make decisions on what data is being captured, why, and where opportunities exist to remove databases and sensitive data, ultimately limiting your risk footprint and overall data protection costs.

Too many high-profile, high-value organizations skip this critical step in the haste to tick the compliance checkbox and are therefore under the false assumption that they are protected against the cost of a data breach because they have either (a) purchased and installed various security software and related applications, and/or (b) grossly inflated their IT budgets with additional internal hires or an army of consultants. They don’t really know what they have, where it is, or why they are protecting it (and at what cost).

This is clearly a risky scenario from a variety of perspectives. So, where do you go from here? Some organizations attempt manual discovery efforts to achieve a greater understanding of their data environments, and while better than nothing, these manual processes are fraught with inefficiencies, risk, and opportunity costs when looking at the resources required.

Leveraging industry-recognized, purpose-built solutions and expertise offers a much more efficient and effective way to conduct data discovery and classification, and provides a more scalable and sustainable process over the long term. Imperva offers a variety of data security solutions and managed services to support organizational data discovery and classification efforts, the two key solutions being Imperva SecureSphere and Imperva Camouflage.

Mapping your data landscape with Imperva Data Security solutions

The Imperva SecureSphere solution offers Discovery and Assessment, which provides an automated and reliable way to locate ALL sensitive data. It easily identifies where databases are on the network, and across complex environments. It also surfaces rogue databases and finds sensitive data pertinent to all major privacy and compliance regulations. In addition, SecureSphere also streamlines vulnerability assessment at the data level. It provides a comprehensive list of over 1500 tests and assessment policies for scanning platform, software, and configuration vulnerabilities.
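
As a toy illustration of the kind of classification work such a tool automates at scale, the sketch below applies simple patterns to sampled column values; the patterns and samples are simplified assumptions, and real discovery engines go well beyond regular expressions.

```python
import re

# Simplified, assumption-laden patterns for a handful of sensitive data types.
PATTERNS = {
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[A-Za-z]{2,}$"),
    "ssn": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    "credit_card": re.compile(r"^\d{13,16}$"),
}

def classify_column(samples: list[str]) -> set[str]:
    """Return the sensitive-data categories matched by a column's sampled values."""
    labels = set()
    for value in samples:
        for label, pattern in PATTERNS.items():
            if pattern.match(value):
                labels.add(label)
    return labels

print(classify_column(["jane@example.com", "bob@example.org"]))  # {'email'}
print(classify_column(["123-45-6789"]))                          # {'ssn'}
```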

To make life even easier, SecureSphere produces automated and detailed reports that help provide an understanding of an organization’s overall security posture and risk footprint. In addition to graphical dashboards, it includes pre-defined assessment test reports as well as the ability to create custom reports. Assessment test reports also provide concrete recommendations to mitigate identified vulnerabilities and strengthen the security posture of a data repository. Imperva Camouflage, meanwhile, offers discovery and classification capabilities embedded within a purpose-built data masking solution, and many organizations take advantage of those capabilities when choosing to de-identify or mask sensitive data for use in DevOps or test and development environments.

Regardless of the solution, it’s clear that using a purpose-built tool, with years of expertise built right in, can provide significant value to an organization when compared to native or manual efforts, and helps ensure a long-term sustainable model for data security.

The ease with which you can achieve critical data security outcomes provides significant value to the organization and a compelling cost/benefit analysis for the ultimate decision makers. These outcomes include:

  • Uncovering new, forgotten or rogue databases
  • Discovering where sensitive data is stored across your database infrastructure
  • Detecting database vulnerabilities based on the latest research from the Imperva Defense Center
  • Automating database discovery, sensitive data classification, and database vulnerability assessment
  • Auditing database configurations and measuring compliance with industry standards
  • Streamlining regulatory compliance efforts

Armed with this information, an organization can quickly identify the level of security required for each application/database and determine both the appropriate technologies and the priority of deployment and investment required.

Ultimately, the organization will be primed to reduce their risk of internal or external data breaches, while at the same time enabling secure data use and copy provisioning. Oh, and let’s not forget the value in supporting compliance with privacy regulations such as HIPAA, FERPA, and GDPR!

Contact us to learn more about Imperva’s data security solutions and our managed discovery and classification services. Also, feel free to test-drive SCUBA, our free database vulnerability scanner, and/or CLASSIFIER, our free data classification tool.

Read: Our Top Picks for 2018’s Biggest Cybersecurity Stories… So Far

Our threat research team’s been burning the candle at both ends this year, what with the sheer number of nasties out there at any given time. But with so many to choose from, how did we populate a list with just seven cybersecurity threats, and why? For one, it’ll take the rest of the year to catalog the number of threats we’ve seen in just the first six months, and secondly… well, we’ll do another one of these in time.

So, we went ahead and picked the brains of a handful of our researchers and came up with a ‘cybersecurity’s most wanted’ list, to give you an overview of what’s been driving security teams up the wall. While this list is by no means exhaustive, it should give you some insight into the current application and data risks out there and what you should keep an eye on. Let’s crack on.

First off, we look at misconfiguration and incorrect deployment, which can leave resources unguarded and sensitive data up for grabs.

  • March 2018’s PostgreSQL Monero report is a great example of how database servers left wide open to the internet were vulnerable to attack.
  • Another one for the list is a recent report showing how open Redis servers were exposed to hackers; the culprit, again, was servers reachable from the internet with no authentication required. A minimal check for this kind of exposure is sketched below.
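
For illustration, here is a minimal sketch (standard library only; the hostnames are placeholders, and a real assessment would cover far more services) that tests whether a database port answers connections from the internet and, for Redis, whether it replies to PING without authentication.

```python
import socket

# Hypothetical hosts to audit -- replace with your own external-facing addresses.
TARGETS = [("db.example.com", 6379), ("db.example.com", 5432)]

def redis_allows_unauthenticated(host, port=6379, timeout=3):
    """Return True if a Redis server answers PING without requiring AUTH."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(b"PING\r\n")          # inline Redis command
            reply = sock.recv(64)
            return reply.startswith(b"+PONG")  # "-NOAUTH" means auth is enforced
    except OSError:
        return False

def port_is_reachable(host, port, timeout=3):
    """Return True if the TCP port accepts connections from this network."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in TARGETS:
    if not port_is_reachable(host, port):
        print(f"{host}:{port} not reachable -- good (or firewalled from here)")
    elif port == 6379 and redis_allows_unauthenticated(host, port):
        print(f"{host}:{port} Redis answers without authentication -- exposed!")
    else:
        print(f"{host}:{port} is reachable from the internet -- review its config")
```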

A second and equally devastating threat emerges when security teams aren’t able to patch systems fast enough to counter the increasing pace of new threats popping up.

  • One of the year’s biggest ‘patch fails’ came when unpatched Drupal applications were hit by Drupalgeddon, leaving scores of sites vulnerable.
  • RedisWannaMine — which took aim at unpatched Windows machines — also made a splash earlier this year.

Thankfully, however, there are ways to defend against these kinds of threats. Adopting a layered security approach, together with a solid patch management process, is a strong defense against unpatched vulnerabilities.
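
At its simplest, patch management means continuously comparing what is installed against the versions your vendors’ advisories list as fixed. Below is a minimal sketch of that comparison; the host inventory is hypothetical, and the Drupal floor version comes from the public SA-CORE-2018-002 advisory for CVE-2018-7600.

```python
# Illustrative inventory only -- in practice this would come from your CMDB
# or a vulnerability scanner, and the "fixed in" versions from vendor advisories.
INSTALLED = {
    "web01": {"drupal": "8.5.0"},
    "web02": {"drupal": "8.5.1"},
}

# Example advisory data (Drupal 8.5.1 shipped the fix for CVE-2018-7600).
MINIMUM_SAFE = {"drupal": "8.5.1"}

def version_tuple(version):
    """Turn '8.5.1' into (8, 5, 1) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

def unpatched_hosts(installed, minimum_safe):
    """Yield every host/package combination that sits below the patched version."""
    for host, packages in installed.items():
        for package, version in packages.items():
            floor = minimum_safe.get(package)
            if floor and version_tuple(version) < version_tuple(floor):
                yield host, package, version, floor

for host, package, version, floor in unpatched_hosts(INSTALLED, MINIMUM_SAFE):
    print(f"{host}: {package} {version} is below patched version {floor}")
```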

Not to be left off a threat list, 2018 saw an increase in both the scale and severity of DDoS attacks.

Finally, as cryptocurrencies show no signs of slowing in terms of popularity, cryptomining – sometimes referred to as cryptojacking – attacks follow the same trajectory.

  • A favorite method for hackers is remote code execution – driving almost 90% of all cryptomining attacks globally.

The cybersecurity landscape is one of increasing complexity, and security teams have to equip themselves with tools that are scalable, accurate and make it easy to home in on and take action against real threats. Pair this with financial constraints and a lack of skilled personnel in the industry as a whole, and you begin to understand the mammoth challenge so many face in securing their applications and data.

Hyperbole in Breach Reporting

While reading the news this morning about yet another successful data breach, I couldn't help but wonder if the hyperbole used in reporting about data breaches is stifling our ability to educate key stakeholders on what they really need to know.

Today's example is about a firm that many rely on for security strategy, planning, and execution. The article I read stated that they were "targeted by a sophisticated hack" but later explains that the attacker compromised a privileged account that provided unrestricted "access to all areas". And, according to sources, the account only required a basic password with no two-step or multi-factor authentication. That doesn't sound too sophisticated, does it? Maybe they brute-forced it, or maybe they just guessed the password (or found it written down in an office?)

It reminded me of an attack on a security vendor back in 2011. As I recall, there was a lot of talk of the sophistication and complexity of the attack. It was called an Advanced Persistent Threat (and maybe some aspects of it were advanced). But, when the facts came out, an employee simply opened an email attachment that introduced malware into the environment - again, not overly sophisticated in terms of what we think a hack to be.

The quantity, availability, and effectiveness of attack techniques are enough to make anyone uncomfortable with their security posture. I previously wrote about a German company who, in a breach response, wrote that it is "virtually impossible to provide viable protection against organized, highly professional hacking attacks." CISOs are being told that they should expect to be breached. The only questions are about when and how to respond. It makes you feel like there's no hope; like there's no point in trying.

However, if you look at the two examples above that were described as highly sophisticated, they may have been avoided with simple techniques such as employee education, malware detection, and multi-factor authentication. I don't mean to over-simplify. I'm not saying it's all easy or that these companies are at fault or negligent. I'm just calling for less hyperbole in the reporting. Call out the techniques that help companies avoid similar attacks. Don't describe an attack as overly sophisticated if it's not. It makes people feel even more helpless when, perhaps, there are some simple steps that can be taken to reduce the attack surface.

I'd also advocate for more transparency from those who are attacked. Companies shouldn't feel like they have to make things sound more complicated or sophisticated than they are. There's now a growing history of reputable companies (including in the security industry) who have been breached. If you're breached, you're in good company. Let's talk in simple terms about the attacks that happen in the real world. An "open kimono" approach will be more effective at educating others in prevention. And again, less hyperbole - we don't need to overplay to emotion here. Everyone is scared enough. We know the harsh reality of what we (as security professionals) are facing. So, let's strive to better understand the real attack surface and how to prioritize our efforts to reduce the likelihood of a breach.

Encryption would NOT have saved Equifax

I read a few articles this week suggesting that the big question for Equifax is whether or not their data was encrypted. The State of Massachusetts, speaking about the lawsuit it filed, said that Equifax "didn't put in safeguards like encryption that would have protected the data." Unfortunately, encryption, as it's most often used in these scenarios, would not have actually prevented the exposure of this data. This breach will have an enormous impact, so we should be careful to get the facts right and provide as much education as possible to law makers and really to anyone else affected.

We know that the attack took advantage of a flaw in Apache Struts (that should have been patched). Struts is a framework for building applications. It lives at the application tier. The data, obviously, resides at the data tier. Once the application was compromised, it really doesn't matter if the data was encrypted because the application is allowed to access (and therefore to decrypt) the data.

I won't get into all the various encryption techniques that are possible but there are two common types of data encryption for these types of applications. There's encryption of data in motion so that nobody can eavesdrop on the conversation as data moves between tiers or travels to the end users. And there's encryption of data at rest that protects data as it's stored on disk so that nobody can pick up the physical disk (or the data file, depending on how the encryption is applied) and access the data. Once the application is authenticated against the database and runs a query against the data, it is able to access, view, and act upon the data even if the data was encrypted while at rest.

Note that there is a commonly-applied technique that applies at-rest encryption at the application tier. I don't want to confuse the conversation with too much detail, but it usually involves inserting some code into the application to encrypt/decrypt. I suspect that if the application is compromised then app-tier encryption would have been equally unhelpful.
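
To illustrate why, here is a minimal sketch using Python's cryptography package (not a description of any particular company's stack): the stored ciphertext is useless on its own, but any code running inside the application tier, including attacker-injected code, holds the same key and can decrypt at will.

```python
from cryptography.fernet import Fernet

# The application holds the key (config file, KMS call, environment variable, etc.).
# Anything running with the application's privileges can therefore decrypt.
key = Fernet.generate_key()
fernet = Fernet(key)

ciphertext = fernet.encrypt(b"123-45-6789")   # what lands in the database
print(ciphertext)                             # useless to someone holding only the file

# But code executing inside a compromised application tier runs with the same
# key and database credentials, so it can simply do what the app does:
print(fernet.decrypt(ciphertext))             # b'123-45-6789'
```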

The bottom line here is that information security requires a broad, layered defense strategy. There are numerous types of attacks. A strong security program addresses as many potential attack vectors as possible within reason. (My use of "within reason" is a whole other conversation. Security strategies should evaluate risk in terms of likelihood of an attack and the damage that could be caused.) I already wrote about a layered approach to data protection within the database tier. But that same approach of layering security applies to application security (and information security in general). You have to govern the access controls, ensure strong enough authentication, understand user context, identify anomalous behavior, encrypt data, and, of course, patch your software and maintain your infrastructure. This isn't a scientific analysis. I'm just saying that encryption isn't a panacea and probably wouldn't have helped at all in this case.

Equifax says that their "security organization was aware of this vulnerability at that time, and took efforts to identify and to patch any vulnerable systems in the company's IT infrastructure." Clearly, humans need to rely on technology to help identify what systems exist in the environment, what software is installed, which versions, etc. I have no idea what tools Equifax might have used to scan their environment. Maybe the tool failed to find this install. But their use of "at that time" bothers me too. We can't rely on point-in-time assessments. We need continuous evaluations on a never-ending cycle. We need better intelligence around our IT infrastructures. And as more workloads move to cloud, we need a unified approach to IT configuration compliance that works across company data centers and multi-cloud environments.

100% protection may be impossible. The best we can do is weigh the risks and apply as much security as possible to mitigate those risks. We should also all be moving to a continuous compliance model where we are actively assessing and reassessing security in real time. And again... layer, layer, layer.

Layered Database Security in the age of Data Breaches

We live in a time of daily breach notifications. One recently affected organization in Germany put out a statement which said: "The incident is not attributable to security deficiencies." and "Human error can also be ruled out." They went on to say that it is "virtually impossible to provide viable protection against organized, highly professional hacking attacks." It's a tough climate we find ourselves in. It just feels too hard or impossible at times. And there's some truth to that. There are way too many potential attack vectors for comfort.

Many breaches occur in ways that make it difficult to pinpoint exactly what might have prevented it. Or, the companies involved hide details about what actually happened or how. In some cases, they lie. They might claim there was some Advanced Persistent Threat on the network when in reality, it was a simple phishing attack where credentials were simply handed over.

In one recent case, a third party vendor apparently uploaded a database file to an unsecured Amazon AWS server. A media outlet covering the story called out that it was not hacking because the data was made so easily available. Numerous checkpoints come to mind that each could have prevented or lessened the damage in this scenario. I’d like to paint a picture of the numerous layers of defense that should be in place to help prevent this type of exposure.

Layer 1: Removing Production Data
The data should have been long removed from the database.
Assuming this is a non-production database (and I sure hope it is), it should have been fully masked before it was even saved as a file. Masking data means completely removing the original sensitive data and replacing it with fake data that looks and acts real. This enables safe use of the database for app development, QA, and testing. Data can be masked as it’s exported from the production database (most secure) or in a secure staging environment after the initial export. Had this step been done, the database could safely be placed on an insecure AWS server with limited security concerns because there’s no real data. An attacker could perhaps use the DB schema or other details to better formulate an attack on the production data, so I’m not recommending posting masked databases publicly, but the risk of data loss is severely limited once the data is masked.
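
As a toy illustration of what "fake data that looks and acts real" means, the sketch below swaps sensitive fields for format-preserving random values. Real masking products go much further, preserving referential integrity and value distributions across tables; this is only the core idea.

```python
import random
import string

def mask_ssn(_original):
    """Replace an SSN with a random, format-preserving fake value."""
    return f"{random.randint(100, 899):03d}-{random.randint(1, 99):02d}-{random.randint(1, 9999):04d}"

def mask_name(_original):
    """Replace a name with a random placeholder of similar shape."""
    return ("".join(random.choices(string.ascii_uppercase, k=1)) +
            "".join(random.choices(string.ascii_lowercase, k=6)))

row = {"name": "Jane Smith", "ssn": "123-45-6789", "order_total": 42.50}
masked = {
    "name": mask_name(row["name"]),
    "ssn": mask_ssn(row["ssn"]),
    "order_total": row["order_total"],   # non-sensitive fields pass through untouched
}
print(masked)   # looks and behaves like real data, but the originals are gone
```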

Layer 2: Secure Cloud Server Configuration
The researcher should never have been able to get to the file.
A security researcher poking around the web should never have been able to access this database file. Proper server configuration and access controls should prevent unauthorized access to any files (including databases). In addition to documenting proper security configuration, certain Cloud Access Security Brokers (CASBs) can be used to continuously monitor AWS instances to ensure that server configurations match the corporate guidelines. Any instances of configuration drift can be auto-remediated with these solutions to ensure that humans don’t accidentally misconfigure servers or miss security settings in the course of daily administration.
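
As one small example of the kind of check such a tool automates continuously, the sketch below uses the AWS SDK for Python (boto3) to flag S3 buckets whose ACLs grant access to everyone. It assumes AWS credentials are already configured and covers only one of many possible misconfigurations.

```python
import boto3

# Group URIs that mean "anyone" or "any authenticated AWS user".
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def publicly_readable_buckets():
    """Yield (bucket, permission) for every bucket whose ACL grants public access."""
    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        acl = s3.get_bucket_acl(Bucket=name)
        for grant in acl["Grants"]:
            if grant["Grantee"].get("URI") in PUBLIC_GRANTEES:
                yield name, grant["Permission"]
                break

if __name__ == "__main__":
    for name, permission in publicly_readable_buckets():
        print(f"Bucket {name} grants {permission} to the public -- review immediately")
```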

Layer 3: Apply Database Encryption
Even with access to the database file, the researcher should not have been able to access the data.
At-rest data encryption that is built into the database protects sensitive data against this type of scenario. Even if someone has the database file, if it were encrypted, the file would essentially be useless. An attacker would have to implement an advanced crypto attack which would take enormous resources and time to conduct and is, for all intents and purposes, impractical. Encryption is a no-brainer. Some organizations use disk-layer encryption, which is OK in the event of lost or stolen disk. However, if a database file is moved to an unencrypted volume, it is no longer protected. In-database encryption improves security because the security stays with the file regardless of where it’s moved or exported. The data remains encrypted and inaccessible without the proper encryption keys regardless of where the database file is moved.

Layer 4: Apply Database Administrative Controls
Even with administrative permissions to the database, the researcher should not have been able to access the sensitive data.
I’m not aware of similar capabilities outside of Oracle database, but Oracle Database Vault would have also prevented this breach by implementing access controls within the database. Database Vault effectively segregates roles (enforces Separation of Duties) so that even an attacker with DBA permissions and access to the database file and encryption keys cannot run queries against the sensitive application data within the database because their role does not allow it. This role-based access, enforced within the database, is an extremely effective control to avoid accidental access that may occur throughout the course of daily database administration.

Layer 5: Protect Data Within the Database
Even with full authorization to application data, highly sensitive fields should be protected within the database.
Assuming all of the other layers break down and you have full access to the unencrypted database file and credentials that are authorized to access the sensitive application data, certain highly sensitive fields should be protected via application-tier encryption. Social Security Numbers and Passwords, for example, shouldn’t be stored in plain text. By applying protection for these fields at the app layer, even fully authorized users wouldn’t have access. We all know that passwords should be hashed so that the password field is only useful to the individual user who enters their correct password. But other fields, like SSN, can be encrypted at the app layer to protect against accidental exposure (human error), intentional insider attack, or exposed credentials (perhaps via phishing attack).
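
The password-hashing piece is simple enough to sketch with Python's standard library alone; field-level encryption of something like an SSN would follow the same app-tier encryption pattern mentioned above. Treat the parameters here as illustrative rather than a recommendation.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Store only a salted, slow hash -- never the password itself."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("password123", salt, digest))                   # False
```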

Maybe the vendor didn’t follow the proper protocols instituted by the organization. Maybe they made a human error; we all make mistakes. But, that’s why a layered approach to database security is critical on any database instances where sensitive production data resides. Security protocols shouldn’t require humans to make the right decisions. They should apply security best practices by default and without option.

Assuming this was a non-production database, any sensitive data should have been fully masked/replaced before it was even made available. And if it was a production DB, encryption and access control protections that stay with the database, even during export or when the file is moved off an encrypted volume, should have been applied. The data should have been protected before the vendor's analyst ever got his/her hands on it. Oracle Database Vault would have prevented even a DBA-type user from being able to access the sensitive user data that was exposed here. These are not new technologies; they’ve been around for many years with plentiful documentation and industry awareness.

Unfortunately, a few of the early comments I read on this particular event were declarations or warnings about how this proves that cloud is less secure than on-premises deployments. I don’t agree. Many cloud services are configured with security by default and offer far more protection than company-owned data centers. Companies should seek cloud services that enable security by default and that offer layered security controls; more security than their own data centers. It’s more than selecting the right Cloud Service Provider. You also need to choose the right service; one that matches the specific needs (including security needs) of your current project. The top CSPs offer multiple IaaS and/or PaaS options that may meet the basic project requirements. While cloud computing grew popular because it’s easy and low cost, ease-of-use and cost are not always the most important factors when choosing the right cloud service. When sensitive data is involved, security needs to be weighed heavily when making service decisions.

I'll leave you with this. Today's computing landscape is extremely complex and constantly changing. But security controls are evolving to address what has been called the extended enterprise (which includes cloud computing and user mobility among other characteristics). Don't leave security in the hands of humans. And apply security in layers to cover as many potential attack vectors as possible. Enable security by default and apply automated checks to ensure that security configuration guidelines are being followed.

Note: Some of the content above is based on my understanding of Oracle security products (encryption, masking, CASB, etc.) Specific techniques or advantages mentioned may not apply to other vendors’ similar solutions.

Will you pay $300 and allow scammers remote control of your computer? Child’s play for this BPO

Microsoft customers in Arizona were scammed by a BPO set up by fraudsters whose executives represented themselves as Microsoft employees and managed to convince them that, for a $300 charge, they would enhance the performance of their desktop computers.

Once a customer signed up, a BPO technician logged on using remote access software that provided full remote control over the desktop and proceeded to delete trash and cache files, sometimes scanning for personal information. The unsuspecting customer ended up with a marginal improvement in performance. After one year of operation, Indian police arrested the three men behind the operation and eleven of their employees.

There were several aspects to this case, “Pune BPO which cheated Microsoft Clients in the US busted”, that I found interesting:

1)    The ease with which customers were convinced to part with money and to allow an unknown third party to take remote control of their computers. With remote control, an attacker can also install malicious files that act as a backdoor or spyware, leaving the machine vulnerable.
2)    The criminals had in their possession a list of one million Microsoft customers with updated contact information.
3)    The good fortune that the Indian authorities take cybercrime seriously, both within and beyond their borders, which resulted in the arrests. In certain other countries, crimes like these continue unhindered.

Cybercitizens should ensure that they do not surrender remote access to their computers or install software unless they come from trusted sources.


5 things you need to know about securing our future

“Securing the future” is a huge topic, but our Chief Research Officer Mikko Hypponen narrowed it down to the two most important issues in his recent keynote address at the CeBIT conference. Watch the whole thing for a Matrix-like immersion into the two greatest needs for a brighter future — security and privacy.

To get started here are some quick takeaways from Mikko’s insights into data privacy and data security in a threat landscape where everyone is being watched, everything is getting connected and anything that can make criminals money will be attacked.

1. Criminals are using the affiliate model.
About a month ago, one of the guys running CTB Locker — ransomware that infects your PC to hold your files until you pay to release them in bitcoin — did a Reddit AMA to explain how he makes around $300,000 with the scam. After a bit of questioning, the poster revealed that he isn’t CTB’s author but an affiliate who simply pays for access to a trojan and an exploit kit created by a Russian gang.

“Why are they operating with an affiliate model?” Mikko asked.

Because now the authors are most likely not breaking the law. Among the more than 250,000 samples F-Secure Labs processes each day, our analysts have seen similar affiliate models used with the largest banking trojans, including GameOver ZeuS, which he notes are also coming from Russia.

No wonder online crime is the most profitable IT business.

2. “Smart” means exploitable.
When you think of the word “smart” — as in smart tv, smartphone, smart watch, smart car — Mikko suggests you think of the word exploitable, as it is a target for online criminals.

Why would emerging Internet of Things (IoT) be a target? Think of the motives, he says. Money, of course. You don’t need to worry about your smart refrigerator being hacked until there’s a way to make money off it.

How might the IoT become a profit center? Imagine, he suggests, if a criminal hacked your car and wouldn’t let you start it until you pay a ransom. We haven’t seen this yet — but if it can be done, it will.

3. Criminals want your computer power.
Even if criminals can’t get you to pay a ransom, they may still want into your PC, fridge or watch for the computing power. The denial-of-service attack against Xbox Live and PlayStation Network last Christmas, for instance, likely employed a botnet that included mobile devices.

IoT devices have already been hijacked to mine for crypto-currencies that could be converted to Bitcoin, then dollars or “even more stupidly into Rubles.”

4. If we want to solve the problems of security, we have to build security into devices.
Knowing that almost everything will be able to connect to the internet requires better collaboration between security vendors and manufacturers. Mikko worries that companies that have never had to worry about security — like a toaster manufacturer, for instance — are now getting into the IoT game. And given that the cheapest devices will sell the best, they won’t invest in proper design.

5. Governments are a threat to our privacy.
The success of the internet has led to governments increasingly using it as a tool of surveillance. What concerns Mikko most is the idea of “collecting it all.” As Glenn Greenwald and Edward Snowden pointed out at CeBIT the day before Mikko spoke, governments seem to be collecting everything — communication, location data — on everyone, even if you are not a person of interest, just in case.

Who knows how that information may be used a decade from now, given that we all have something to hide?

Cheers,

Sandra

 

Deep Data Governance

One of the first things to catch my eye this week at RSA was a press release by STEALTHbits on their latest Data Governance release. They're a long-time player in DG and, as a former employee, I know them fairly well. And where they're taking DG is pretty interesting.

The company has recently merged its enterprise Data (files/folders) Access Governance technology with its DLP-like ability to locate sensitive information. The combined solution enables you to locate servers, identify file shares, assess share and folder permissions, lock down access, review file content to identify sensitive information, monitor activity to look for suspicious activity, and provide an audit trail of access to high-risk content.

The STEALTHbits solution is pragmatic because you can tune where it looks, how deep it crawls, where you want content scanning, where you want monitoring, etc. I believe the solution is unique in the market, and a number of IAM vendors agree, having chosen STEALTHbits as a partner of choice for gathering Data Governance information into their Enterprise Access Governance solutions.

Learn more at the STEALTHbits website.

IAM for the Third Platform

As more people are using the phrase "third platform", I'll assume it needs no introduction or explanation. The mobile workforce has been mobile for a few years now. And most organizations have moved critical services to cloud-based offerings. It's not a prediction, it's here.

The two big components of the third platform are mobile and cloud. I'll talk about both.

Mobile

A few months back, I posed the question "Is MAM Identity and Access Management's next big thing?" and since I did, it's become clear to me that the answer is a resounding YES!

Today, I came across a blog entry explaining why Android devices are a security nightmare for companies. The pain is easy to see. OS Updates and Security Patches are slow to arrive and user behavior is, well... questionable. So organizations should be concerned about how their data and applications are being accessed across this sea of devices and applications. As we know, locking down the data is not an option. In the extended enterprise, people need access to data from wherever they are on whatever device they're using. So, the challenge is to control the flow of information and restrict it to proper use.

So, here's a question: is MDM the right approach to controlling access for mobile users? Do you really want to stand up a new technology silo that manages end-user devices? Is that even practical? I think certain technologies live a short life because they quickly get passed over by something new and better (think electric typewriters). MDM is one of those. Although it's still fairly new and good at what it does, I would make the claim that MDM is antiquated technology. In a BYOD world, people don't want to turn control of their devices over to their employers. The age of enterprises controlling devices went out the window with Blackberry's market share.

Containerization is where it's at. With App Containerization, organizations create a secure virtual workspace on mobile devices that enables corporate-approved apps to access, use, edit, and share corporate data while protecting that data from escape to unapproved apps, personal email, OS malware, and other on-device leakage points. For enterprise use-case scenarios, this just makes more sense than MDM. And many of the top MDM vendors have validated the approach by announcing MAM offerings. Still, these solutions maintain a technology silo specific to remote access which doesn't make much sense to me.

As an alternate approach, let's build MAM capabilities directly into the existing Access Management platform. Access Management for the third platform must accommodate for mobile device use-cases. There's no reason to have to manage mobile device access differently than desktop access. It's the same applications, the same data, and the same business policies. User provisioning workflows should accommodate for provisioning mobile apps and data rights just like they've been extended to provision Privileged Account rights. You don't want or need separate silos.

Cloud

The same can be said for cloud-hosted apps. Cloud apps are simply part of the extended enterprise and should also be managed via the enterprise Access Management platform.

There's been a lot of buzz in the IAM industry about managing access (and providing SSO) to cloud services. There have even been a number of niche vendors popping up that provide that as their primary value proposition. But the core technologies for these stand-alone solutions are nothing new. In most cases, it's basic federation. In some cases, it's ESSO-style form-fill. But there's no magic to delivering SSO to SaaS apps. In fact, it's typically easier than SSO to enterprise apps because SaaS infrastructures are newer and support newer standards and protocols (SAML, REST, etc.)

My Point

I guess if I had to boil this down, I'm really just trying to dispel the myths about mobile and cloud solutions. When you get past the marketing jargon, we're still talking about Access Management and Identity Governance. Some of the new technologies are pretty cool (containerization solves some interesting, complex problems related to BYOD). But in the end, I'd want to manage enterprise access in one place with one platform. One Identity, One Platform. I wouldn't stand up an IDaaS solution just to have SSO to cloud apps. And I wouldn't want to introduce an MDM vendor to control access from mobile devices.

The third platform simply extends the enterprise beyond the firewall. The concept isn't new and the technologies are mostly the same. As more and newer services adopt common protocols, it gets even easier to support increasingly complex use-cases. An API Gateway, for example, allows a mobile app to access legacy mainframe data over REST protocols. And modern Web Access Management (WAM) solutions perform device fingerprinting to increase assurance and reduce risk while delivering an SSO experience. Mobile Security SDKs enable organizations to build their own apps with native security that's integrated with the enterprise WAM solution (this is especially valuable for consumer-facing apps).

And all of this should be delivered on a single platform for Enterprise Access Management. That's third-platform IAM.

Virtual Directory as Database Security

I've written plenty of posts about the various use-cases for virtual directory technology over the years. But, I came across another today that I thought was pretty interesting.

Think about enterprise security from the viewpoint of the CISO. There are numerous layers of overlapping security technologies that work together to reduce risk to a point that's comfortable. Network security, endpoint security, identity management, encryption, DLP, SIEM, etc. But even when these solutions are implemented according to plan, I still see two common gaps that need to be taken more seriously.

One is control over unstructured data (file systems, SharePoint, etc.). The other is back door access to application databases. There is a ton of sensitive information exposed through those two avenues that isn't protected by the likes of SIEM solutions or IAM suites. Even DLP solutions tend to focus on perimeter defense rather than who has access. STEALTHbits has solutions to fill the gaps for unstructured data and for Microsoft SQL Server, so I spend a fair amount of time talking to CISOs and their teams about these issues.

While reading through some IAM industry materials today, I found an interesting write-up on how Oracle is using its virtual directory technology to solve the problem for Oracle database customers. Oracle's IAM suite leverages Oracle Virtual Directory (OVD) as an integration point with an Oracle database feature called Enterprise User Security (EUS). EUS enables database access management through an enterprise LDAP directory (as opposed to managing a spaghetti mapping of users to database accounts and the associated permissions.)

By placing OVD in front of EUS, you get instant LDAP-style management (and IAM integration) without a long, complicated migration process. Pretty compelling use-case. If you can't control direct database permissions, your application-side access controls seem less important. Essentially, you've locked the front door but left the back window wide open. Something to think about.
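
To make the centralization idea concrete, here is a hedged sketch (using the ldap3 package against a hypothetical directory and a hypothetical group-to-role mapping) of resolving a user's database roles from a single enterprise LDAP entry instead of from per-database account mappings. This is not how OVD or EUS are implemented; it only shows the shape of the benefit: disable one directory entry and the database access goes with it.

```python
from ldap3 import ALL, Connection, Server

# Hypothetical mapping from directory groups to database roles.
GROUP_TO_DB_ROLE = {
    "cn=dba,ou=groups,dc=example,dc=com": "hr_admin",
    "cn=analysts,ou=groups,dc=example,dc=com": "hr_readonly",
}

def database_roles_for(username):
    """Look up a user's groups in the enterprise directory and map them to DB roles."""
    server = Server("ldap://ldap.example.com", get_info=ALL)       # hypothetical host
    conn = Connection(server, user="cn=svc,dc=example,dc=com",     # hypothetical bind DN
                      password="change-me", auto_bind=True)
    conn.search("dc=example,dc=com", f"(uid={username})", attributes=["memberOf"])
    if not conn.entries:
        return []          # unknown in the directory -> no database access at all
    groups = conn.entries[0].entry_attributes_as_dict.get("memberOf", [])
    return [GROUP_TO_DB_ROLE[g] for g in groups if g in GROUP_TO_DB_ROLE]

print(database_roles_for("jsmith"))
```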

Game-Changing Sensitive Data Discovery

I've tried not to let my blog become a place where I push products made by my employer. It just doesn't feel right and I'd probably lose some portion of my audience. But I'm making an exception today because I think we have something really compelling to offer. Would you believe me if I said we have game-changing DLP data discovery?

How about a data discovery solution that costs zero to install? No infrastructure and no licensing. How about a solution that you can point at specific locations and choose specific criteria to look for? And get results back in minutes. How about a solution that profiles file shares according to risk so you can target your scans according to need? And if you find sensitive content, you can choose to unlock the details by using credits, which are bundle-priced.

Game Changing. Not because it's the first or only solution that can find sensitive data (credit card info, national ID numbers, health information, financial docs, etc.) but because it's so accessible. Because you can find those answers minutes after downloading. And you can get a sense for your problem before you pay a dime. There's even free credits to let you test the waters for a while.

But don't take our word for it. Here are a few of my favorite quotes from early adopters: 
“You seem to have some pretty smart people there, because this stuff really works like magic!”

"StealthSEEK is a million times better than [competitor]."

"We're scanning a million files per day with no noticeable performance impacts."

"I love this thing."

StealthSEEK has already found numerous examples of system credentials, health information, financial docs, and other sensitive information that weren't known about.

If I've piqued your interest, give StealthSEEK a chance to find sensitive data in your environment. I'd love to hear what you think. If you can give me an interesting use-case, I can probably smuggle you a few extra free credits. Let me know.



Data Protection ROI

I came across a couple of interesting articles today related to ROI around data protection. I recently wrote a whitepaper for STEALTHbits on the Cost Justification of Data Access Governance. It's often top of mind for security practitioners who know they need help but have trouble justifying the acquisition and implementation costs of related solutions. Here's today's links:

KuppingerCole -
The value of information – the reason for information security

Verizon Business Security -
Ask the Data: Do “hacktivists” do it differently?

Visit the STEALTHbits site for information on Access Governance related to unstructured data and to track down the paper on cost justification.