Category Archives: Data Security

The Journey to Data Integrity

In 2017, ‘Fake News’ was crowned word of the year thanks in part to a deteriorating relationship between politicians and the media. Claims and counterclaims could be challenged without the

The post The Journey to Data Integrity appeared first on The Cyber Security Place.

How organizations handle disruptive data sources

In the 2018 Data Connectivity Survey by Progress, more than 1,400 business and IT professionals in various roles across industries and geographies shared their insights on the latest trends within the rapidly changing enterprise data market. The findings revealed five data-related areas of primary importance for organizations as they migrate to the cloud: data integration, real-time hybrid connectivity, data security, standards-based technology and open analytics. Significant findings from the survey include: Data integration has become … More

The post How organizations handle disruptive data sources appeared first on Help Net Security.

Catastrophe, Not Compromise: VFEmail Attack Destroys Decades of Data

The email provider VFEmail suffered a “catastrophic” hack that destroyed the company’s primary and backup data servers in the U.S.

As reported by Krebs on Security, the attack began on the morning of Feb. 11, when the company’s official Twitter account warned that all external-facing systems across multiple data centers were down. Hours later, VFEmail tweeted that it “caught the perp in the middle of formatting the backup server.” Just after 1 p.m., the company reported that all disks on every server had been formatted with every VM, file server and backup server lost.

Only a small, Netherlands-based backup server was left untouched. VFEmail founder Rick Romero (@Havokmon) tweeted on Feb. 12 that the company is “effectively gone” and will likely not return.

VFEmail’s Exceptional Circumstances

Most email attacks aren’t looking to destroy data. As reported by Healthcare IT News, healthcare email fraud attacks are up by nearly 500 percent over the last two years, while IT Pro Portal noted that threat actors are now leveraging compromised accounts to gain email access and steal confidential data. Even ransomware attacks — which include the threat of data destruction — are typically used as leverage to generate corporate payouts.

The VFEmail hack, meanwhile, had no clear aim: No ransom message was reported, and there’s no evidence that data was exfiltrated before being destroyed. Romero managed to track the attacker to an IP address hosted in Bulgaria — likely just a virtual machine (VM) that was used as a launch pad for the attack.

He also noted that to compromise VFEmail’s mail hosts, VM hosts and SQL server clusters, the attacker would have needed multiple passwords, as reported by Ars Technica. While some of the mail service is back up and running, there’s only a slim chance that U.S. email data will be recovered.

Back Up Your Mission-Critical Email Data

Email clients come with inherent risks and no guarantees. While layered email security can help reduce the risk of malware infections and ransomware attacks, it can’t prevent host-side attacks like the one VFEmail experienced.

Security teams should follow best practices for defending against threats that destroy data, such as ransomware attacks. According to experts, data backups are key to reducing the risk of complete data loss — while this typically applies to local files, enterprises using hosted email providers to send and receive mission-critical data should consider creating an on- or off-site email backup to combat the threat of catastrophic data destruction.
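As a rough illustration of that last point, here is a minimal sketch of a local email archive step, assuming raw messages have already been fetched from the provider (the function name and directory layout are hypothetical). It names each stored message by content hash so repeated backup runs never duplicate data:

```python
import hashlib
from pathlib import Path

def archive_message(raw_message: bytes, backup_dir: str) -> Path:
    """Write one raw RFC 822 message to a local backup folder.

    Files are named by content hash, so re-running the backup job
    never stores the same message twice.
    """
    out_dir = Path(backup_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    digest = hashlib.sha256(raw_message).hexdigest()
    target = out_dir / f"{digest}.eml"
    if not target.exists():  # skip duplicates across runs
        target.write_bytes(raw_message)
    return target
```

In practice, a scheduled job would pull each raw message from the hosted provider (for instance with Python's imaplib, fetching RFC822 bodies) and pass it to a function like this, with the backup directory itself kept on- or off-site per the advice above.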

The post Catastrophe, Not Compromise: VFEmail Attack Destroys Decades of Data appeared first on Security Intelligence.

Why You Need a Security-First Culture to Deliver on Your Customer-First Goals

I’ve been working in the security industry for mumble mumble years, and one recurring problem I’ve noticed is that security is often considered an add-on to business initiatives. This is neither new, nor surprising. And while the “customer-first” approach is not really a new talking point for most companies, “customer-obsessed” became a major business initiative for many in 2018. This is due to a number of factors — increased brand visibility via social media, changing buyer behaviors and evolving data privacy legislation, to name a few — and doesn’t show any signs of changing in 2019.

What Does It Mean to Be Customer-First?

Contrary to what many businesses seem to believe, customer obsession doesn’t mean sending six emails in two weeks to make sure your customer is happy with his or her purchase and requesting a good review or rating. Being customer-first simply means listening to your customers’ needs. It requires you to quickly adjust and react to meet those needs — or, ideally, anticipate them and proactively offer solutions to your customers’ issues.

Most of all, customer obsession requires trust. To build trust among your end users, security must be the foundation of every customer-first initiative. In fact, I’d argue that organizations must be security-obsessed to effectively deliver on their customer-first plans.

Prioritize Security to Build Customer Trust

The benefits of a customer-first business approach are clear: increased loyalty to your brand, revenue gains and more. It is also apparent why security is so important: No organization wants to suffer the consequences of a data breach. However, look deeper into what a security-first approach to a customer-first culture entails and you'll quickly uncover the complexity of the issue.

First, there is a distinct difference between checking the boxes of your security requirements (i.e., compliance) and truly making your customers’ welfare a top priority. Of course, adherence to security and privacy regulations is essential. Without standardized compliance policies, every company would measure security success in its own way. And if we’re being honest, meeting compliance regulations is often more about avoiding penalties than improving your business.

Second, your brand is more than just your product or service; it encompasses the way your company looks, feels, talks and spends money and is representative of its culture and beliefs. In other words, your brand is about the way people feel when they interact with your company. According to Forrester Research, today’s buyers are increasingly looking at these other characteristics when they make decisions about the products or services they use.

This is where security becomes essential. If you want to instill trust among your end users, you need to go beyond standard compliance measures. Security must become a foundation of your company culture and your customer-first initiatives. It must be threaded into every business initiative, corporate policy, department and individual. This means technology purchases should be made with the security of your end users, your employee data and your corporate assets in mind.

It also means evaluating your business partners and the policies they have in place to ensure they fall within your standards. For example, are you considering moving critical business technology to the cloud as part of your digital transformation initiatives? If so, what do you know about your cloud provider’s security precautions? Are you working with advertisers or marketing organizations that interact with your end users? If so, do you know how they handle your customers’ and prospects’ personal data?

How to Develop a Strong Security Culture

Operating a business that is customer-first is ambitious. It’s also really, really hard. By making security a cultural tenet throughout your organization, you communicate to your customers that your brand is trustworthy, your business has integrity and that they matter to you. So how do you do it?


Design Collaboration Into Your Security Strategy

Use open solutions to design collaboration into your security strategy. The threat-solution cycle is a familiar one: A new security event occurs, the news covers it, a new company emerges to solve the problem, your company deploys the solution and then a new security event occurs. The entire industry is stuck in a vicious cycle that we, as vendors, have created. To break it, we need to take a page from our adversaries’ book: share intelligence with our peers and competitors, learn from other industries, and use open technology that integrates multiple sources of data. Only then are we equipped to uncover the risks to our customers hiding amid the chaos.

Build Security Muscle Memory

Many organizations are spending a lot of money on security awareness training, which is great. However, the best training is useless if employees are bypassing security measures for convenience. Make security processes required, enforceable and, above all, easily incorporated into the daily life of your users.

Shift Your Perspective

Security strategy is often an afterthought to business initiatives that cut costs, increase revenue and improve efficiency. Security is, after all, a cost. But a good security culture can set your company apart. It can make or break your brand, particularly in an era when customers’ buying motivations have shifted.

Right now, brand loyalty is an asset. A recent Harris Poll survey found that 75 percent of respondents would not buy from a company, no matter how great its products, if they don’t trust it to protect their data. Stability, integrity and corporate responsibility are key factors in purchasing decisions. Making security a strategic pillar of your company’s brand is a tremendous responsibility, but one that will go a long way toward establishing trust among your users.

The Best Way to Grow Your Business

A customer-first approach is, arguably, the business initiative that can impact your bottom line the most. Understanding and proactively addressing your customers’ security and privacy concerns shows that you’re not just trying to sell a product or service, but that you are responsible with their data and operate with integrity. In an era where brand integrity matters, security-first is the best way to grow your business.

The post Why You Need a Security-First Culture to Deliver on Your Customer-First Goals appeared first on Security Intelligence.

The inside track on protecting intellectual property (IP)

Dr Darren Williams, CEO and Founder of BlackFog, discusses the need for firms to protect their IP from cyber attack and provides advice on how to stop hackers from removing

The post The inside track on protecting intellectual property (IP) appeared first on The Cyber Security Place.

Cyber Security Risk in Retail and How to Handle It

Hackers and their tactics are continually evolving but one thing remains the same: retailers are prime targets for a cyber-attack. This is such a widespread issue that in nearly every

The post Cyber Security Risk in Retail and How to Handle It appeared first on The Cyber Security Place.

Does artificial intelligence mean artificial security?

Wouldn’t you like to stop losing sleep worrying about what your AI team, partners and suppliers are doing with your data?

Marvin Minsky is recognised as one of the founding fathers of artificial intelligence. Increasingly eccentric in his later years, he made anyone who spent time with him aware that individuals are unpredictable. What makes humans predictable is behaviour in large numbers. Yes, we’re talking about big data and analytics. Data, analytics and AI go together like eggs, toast and coffee. The price you pay for this breakfast increases radically with poor security. When designing AI into your business, ask yourself three questions.

1. Are your AI plans supported by an encryption strategy?

A global law firm may want to install software that predicts the type of contract and clauses it needs for a new client depending on the client’s profile or legal situation. The firm’s AI team says the software contains encryption. However, in this scenario, where are the encryption keys? Do the standards comply with an auditor’s policies? Who will manage the lifecycle of the keys? It’s important to understand that encryption will affect data in other related applications and processes as well. And siloed encryption for every application will add cost and complexity across the business.

2. Is your AI strategy fit for compliance?

As I implied above, AI systems rely on a phenomenal amount of data to make sense of situations and predict an outcome. You may be a retailer predicting the mood of a potential buyer using deep learning for sentiment analysis, or maybe you run an aircraft lease company consuming diagnostics to make critical decisions about engine component failures. In both cases, data originates and is consumed in multiple geographies, within a variety of regulatory environments. It is critical that you plan for data security that upholds your security posture in all geographies. This can only be accomplished with encryption, data masking, key management and access policies working in unison with governance recognised by individual territories. To be fit for compliance, a global security partner with an enterprise data security platform should be part of the team executing your AI vision.

3. How will you secure AI data in the cloud?

Your data provides insight and fuels good decision making within AI systems. Your business data combined with that of your competitive peer group fuels even better AI outcomes. For this reason, analytics and AI platforms may aggregate industry data from multiple companies within cloud-based systems for independent analysis, resource efficiency and computing power. This can be the perfect recipe for CIO insomnia. Who holds your data: the cloud provider or the AI platform provider? How are they segregating it? Are they returning it to the right company every time? Are they encrypting it? If the cloud provider encrypts your data and has the ability to decrypt it, what happens when a subpoena arrives requesting your data? The answer is to obfuscate the data in a way that preserves its format and its ability to be processed by third parties, but renders it completely useless if stolen or accidentally revealed. At all times the data should remain within your control and never be converted to clear text without the permission of your business or approved AI applications. Data security platforms exist to perform this function.
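A minimal sketch of that idea follows. It is illustrative only: commercial products use standardized format-preserving encryption (such as NIST FF1), whereas this keyed, one-way masking is not reversible, and the function name and key handling are hypothetical. It replaces each digit deterministically while keeping length and punctuation intact, so a downstream system expecting a pattern like NNN-NN-NNNN can still parse the masked value:

```python
import hashlib
import hmac

def mask_value(value: str, key: bytes) -> str:
    """Deterministically mask the digits in a string while
    preserving its length and punctuation (format-preserving
    masking, not reversible encryption)."""
    out = []
    for i, ch in enumerate(value):
        if ch.isdigit():
            # Derive a replacement digit from the key, the position and the digit
            digest = hmac.new(key, f"{i}:{ch}".encode(), hashlib.sha256).digest()
            out.append(str(digest[0] % 10))
        else:
            out.append(ch)  # keep separators such as "-" as-is
    return "".join(out)
```

Because the mapping is keyed and deterministic, the same input always masks to the same output for a given key, so third parties can join and aggregate records without ever seeing clear text.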

Rather than the artificial security provided by native cloud encryption or AI platform proprietary encryption, true enterprise data security platforms provide real security for artificial intelligence so you can sleep easy.

Please visit our website to learn more about high-assurance encryption solutions, and click here to see if your business is Fit for Compliance.



The post Does artificial intelligence mean artificial security? appeared first on Data Security Blog | Thales eSecurity.

There’s a growing disconnect between data privacy expectations and reality

There is a growing disconnect between how companies capitalize on customer data and how consumers expect their data to be used, according to a global online survey commissioned by RSA Security. Consumer backlash in response to the numerous high-profile data breaches in recent years has exposed one of the hidden risks of digital transformation: loss of customer trust. According to the study, which surveyed more than 6,000 adults across France, Germany, the United Kingdom and … More

The post There’s a growing disconnect between data privacy expectations and reality appeared first on Help Net Security.

AI, cloud and security — top priorities for enterprise legal departments

A report released today indicates that legal professionals are at the forefront of piloting emerging technologies, such as AI and cloud, in the enterprise. Are you surprised? Legal departments are

The post AI, cloud and security — top priorities for enterprise legal departments appeared first on The Cyber Security Place.

The Benefits of Correctly Deploying a PKI Solution

With new threats to data emerging every day, public key infrastructure (PKI) has become an increasingly larger part of enterprises’ information security and risk management strategies. Research has found that 43% of

The post The Benefits of Correctly Deploying a PKI Solution appeared first on The Cyber Security Place.

Cyber risk management: There’s a disconnect between business and security teams

Business managers want real-time cyber risk management metrics, but cybersecurity teams can only deliver technical data and periodic reports. That gap needs to close.A few years ago, cybersecurity professionals often

The post Cyber risk management: There’s a disconnect between business and security teams appeared first on The Cyber Security Place.

Moving to the Hybrid Cloud? Make Sure It’s Secure by Design

Many organizations have such a positive first experience with cloud computing that they quickly want to move to a hybrid cloud environment with data and workloads shared between private and public clouds. The flexibility and control that a hybrid cloud provides are why it is expected to be the dominant cloud computing model for the foreseeable future.

However, companies often don’t think about security issues until after they are well along in the process of building a hybrid cloud. This can lead to nasty surprises when they realize this environment introduces some unique security considerations that don’t exist in traditional infrastructure. That’s why a hybrid cloud needs to be secure by design.

Cloud Security Is a Shared Responsibility

Public cloud providers offer enterprise-class security, but that doesn’t absolve customers of responsibility for protecting data, enforcing access controls and educating users. Private cloud security is complicated because private clouds can take many forms: They may be hosted entirely on-site, entirely in the public cloud or some combination of the two. Private cloud infrastructure can also be dedicated to a single tenant or shared among multiple tenants, with isolated zones providing dedicated resources. Each environment has different security demands.

The scale and dynamism of cloud computing complicates visibility and control. Many customers incorrectly believe that cloud providers take care of security. In fact, security is a shared responsibility. In my experience, most cloud security failures occur because customers don’t live up to their part of the bargain.

No single cloud security mechanism does the entire job. There is also little consensus about what the ideal cloud security environment should look like. As a result, most product offerings in this market are still evolving. Secure by design starts with assessing risk and building a framework for technology.

A New Way of Computing

Moving to the cloud doesn’t mean relinquishing total control, but it does require embracing a new security mindset based on identity, data and workloads rather than underlying platforms. Security professionals who can reorient themselves around business enablement rather than device protection are particularly well-suited to securing public clouds.

Cloud computing is highly distributed and dynamic, with workloads constantly spinning up and down. Visibility is essential for security. According to Gartner, cloud security should address three core topics that have not traditionally been IT disciplines: multitenancy risk, virtualization security and software-as-a-service (SaaS) control.

Multitenancy risk is inherent to cloud architectures because multiple virtual machines (VMs) share the same physical space. Major public cloud providers go to great lengths to mitigate the possibility that one tenant could access data in another VM, but on-premises infrastructure is susceptible if the servers are not configured properly. Changes made to one hybrid cloud environment may also inadvertently affect another.

Virtualization security refers to the unique risks of virtualized environments. While hypervisors and VMs are in many ways more secure than bare-metal environments because the operating system is isolated from the hardware, the use of shared resources like storage and networking also introduces potential vulnerabilities that don’t exist on dedicated servers.

SaaS environments require greater attention to authentication and access control because the user doesn’t own the network. Governance standards need to be put in place to ensure that users take appropriate precautions with data and that all necessary regulatory and compliance guidelines are met.

Without these new competencies, organizations will struggle to gain visibility into their hybrid cloud environments, making it almost impossible to determine which computing and storage tasks are taking place where, using which data and under whose direction. In that situation, provisioning and enforcement of policy can quickly become impractical. But if organizations practice secure-by-design principles using new cloud-native tools, they can get a single-pane-of-glass view into activity that enables policy enforcement.

Three Keys to Secure Hybrid Cloud Deployments

Three areas merit special attention: encryption, endpoint security and access control.

Encryption is the best form of data protection. Data moving to and from the public cloud should be encrypted at all stages, and sensitive data should never be left unencrypted. All cloud providers support encryption, but not necessarily by default. Customers need to choose the most appropriate type of encryption and secure their encryption keys.
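As a concrete example of "not necessarily by default": on AWS, a bucket's default server-side encryption rule is configured per bucket. The sketch below uses the AWS CLI, where `my-bucket` and `alias/my-data-key` are placeholders for your own bucket and KMS key:

```shell
# Set default server-side encryption for an S3 bucket using a KMS key
# ("my-bucket" and "alias/my-data-key" are placeholders).
aws s3api put-bucket-encryption \
  --bucket my-bucket \
  --server-side-encryption-configuration '{
    "Rules": [{
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "alias/my-data-key"
      }
    }]
  }'

# Verify the configuration took effect
aws s3api get-bucket-encryption --bucket my-bucket
```

Using a customer-managed KMS key, rather than provider-managed defaults, keeps control of the key lifecycle with the customer, which is the point of the shared-responsibility model above.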

When public cloud services are accessed over the public internet, special attention needs to be paid to endpoint security to prevent the risk of creating access points for attackers or becoming targets of malware. For example, an attacker who compromises a PC and logs on as an administrator for the company’s public cloud effectively has the keys to the kingdom. Hardware firewalls aren’t protection enough.

Secure web gateways (SWGs) use URL filtering, advanced threat defense (ATD) and malware detection to protect organizations and enforce internet policy compliance. SWGs are delivered as physical or virtual on-premises appliances, cloud-based services, or hybrid cloud/on-premises solutions. They provide an additional layer of protection against destructive attacks such as ransomware and enable safer, more efficient adoption of cloud-based services.

Finally, cloud-specific access control is a necessity if employees, contractors and vendors are to use both public and private clouds. Single sign-on (SSO) and federated access controls can minimize inconvenience while maintaining control and security monitoring.

Identity and access management-as-a-service (IDaaS) works in both multitenant and dedicated environments. It provides identity governance and administration, access management, and analytics functions that span the organization’s entire cloud environment. IDaaS can also be integrated with existing access management software to manage access to legacy applications.

The Cloud Security Alliance has an extensive library of resources that cover practices for hybrid cloud security. Organizations should familiarize themselves with these guidelines before beginning the migration process. Building security into hybrid infrastructure from the beginning minimizes the pain and delay of backfilling later.

The post Moving to the Hybrid Cloud? Make Sure It’s Secure by Design appeared first on Security Intelligence.

Converged IT and OT to Advance Security Maturity

The convergence of IT, operational technology (OT) and industrial internet of things (IIoT) has raised concerns about cybersecurity, safety and data privacy for many organizations, according to a new Ponemon Institute study. Released

The post Converged IT and OT to Advance Security Maturity appeared first on The Cyber Security Place.

Cybersecurity: Billions Pour In, Basics Languish

2018’s headlines only underscored the need for robust data security. As if to add a whopping exclamation point to the end of the year, the massive Marriott/Starwood Resorts data breach (announced

The post Cybersecurity: Billions Pour In, Basics Languish appeared first on The Cyber Security Place.

Safer Internet Day: Are you where you think you are?

Safer Internet Day is an excellent opportunity for users of all kinds to brush up on their cyber safety knowledge — although security practice should be maintained on all days, it

The post Safer Internet Day: Are you where you think you are? appeared first on The Cyber Security Place.

Google Apologetically Shuts Down Its iPhone Data Collection App

After the recent outcry about Facebook, Google has also been found to supposedly violate Apple’s policies. As discovered recently, one

Google Apologetically Shuts Down Its iPhone Data Collection App on Latest Hacking News.

Data Breach Fatigue Makes Every Day Feel Like Groundhog Day

The constant string of data breaches isn’t what I’d call funny, but it does make me think about one of my favorite cinematic comedies. The film “Groundhog Day” stars Bill Murray as a grumpy weatherman who travels to the little town of Punxsutawney, Pennsylvania, where a famous rodent supposedly predicts when spring will arrive.

According to some unexplained movie logic, Murray’s character ends up caught in a time warp so that he wakes up the day after Groundhog Day and it’s — you guessed it — Groundhog Day once again. No matter what he does, he wakes up day after day and the same events happen again and again. As you can imagine, the poor weatherman starts to lose his mind and, for a time, gives up trying to change his fate.

In the world of cybersecurity, things don’t appear to be much different. If it feels like there’s a new data breach reported every day, that’s because it’s more or less true. According to the Privacy Rights Clearinghouse, there have been 9,033 data breaches made public since 2005 — and those are just breaches that were reported in the U.S. or affected U.S. consumers. Spread out over the last 14 years, that averages out to about 1.77 breaches a day.
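That average is easy to sanity-check with a couple of lines, using the figures cited above from the Privacy Rights Clearinghouse:

```python
breaches = 9_033  # breaches made public since 2005 (Privacy Rights Clearinghouse)
years = 14        # roughly 2005 through 2018
per_day = breaches / (years * 365)
print(f"{per_day:.2f} breaches per day")  # ≈ 1.77
```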

All told, there were at least 11.6 billion records lost in those breaches. The consequences for the economy and individual businesses and consumers are mounting, and the cost of these breaches is staggering if you consider the average cost per lost record, which was $148 in the U.S. last year.

These data points raise other questions about the human impact of data breach Groundhog Day, if you will. How does the daily barrage of data breaches affect our behavior? Are we responding with urgency to this growing problem as consumers, businesses and security professionals? Or have we given a collective shrug, accepting that this is the new normal?

What Does Data Breach Fatigue Look Like?

One apparent consequence of constant breaches is data breach fatigue — the idea that consumers have become inured to the effects of data breaches and are less motivated to do anything to protect themselves. The data breach fatigue effect is a little hard to calculate, but there is some evidence it exists, and the fallout is harmful to both consumers and the breached organizations.

In one study, researchers measured consumer sentiment on social media in the aftermath of a breach at the U.S. Office of Personnel Management that affected 21.5 million people. According to the study, overall sentiment about the breach was tinged with anxiety and anger, but victims of the breach showed higher levels of sadness. Moreover, social media chatter about the breach dropped off significantly over time. Two months after the breach, engagement was almost nonexistent, which the researchers said showed acceptance, apathy and the onset of breach fatigue.

While there isn’t a lot of data on how people respond to having their personal information breached, there is some evidence in consumer surveys that data breach fatigue is setting in. For example, a significant proportion of users don’t take proactive steps to improve their security after a breach, such as changing their passwords or checking their credit score. Although almost 50 percent of respondents to a 2016 Experian survey said they were taking more precautions to protect their personal information, just 33 percent check their credit scores regularly and only 36 percent review the privacy policies of the companies they do business with.

In another study conducted by RAND Corporation, only half (51 percent) of survey respondents said they changed their password or PIN after a breach, and a scant 4 percent said they started using a password manager. While 24 percent said they became “more diligent” in response to a breach, 22 percent took no action whatsoever.

Finally, a survey conducted by Ponemon Institute in 2014 on behalf of Experian found that many consumers were taking a passive approach to data breach notifications. Of the 32 percent of consumers who had received at least one data breach notification in the prior two years, their concern about breaches didn’t necessarily produce an urgent response. Although 45 percent of breach victims said they were “very concerned” or “extremely concerned” about the potential for identity theft, 32 percent said they ignored the breach notification or took no action, and 55 percent said they did nothing to protect themselves from identity theft.

If data breach fatigue contributes to consumers failing to take the necessary precautions to protect themselves, it could leave those consumers at greater risk of identity theft, damaged credit, financial loss and privacy violations. But before we start blaming the victims for being irresponsible, it’s clear from the Ponemon/Experian study that many breach victims feel powerless or even trapped because the products and services they depend on from breached companies can’t easily be replaced, and nothing they can do as individuals will change the likelihood that their data will be breached.

The Dangers of Data Breach Fatigue

There’s another risk from data breach fatigue that is perhaps underappreciated: that organizations will assume their security and privacy practices won’t matter to consumers. We know from surveys that consumers are very concerned about cybersecurity, but constant breaches have caused a steady erosion of trust between businesses and customers.

In another consumer survey from 2018, conducted by The Harris Poll on behalf of IBM Security, only 20 percent of respondents said they “completely trust” organizations they interact with to maintain the privacy of their data, and 73 percent said it is extremely important that companies take swift action to stop a data breach.

People do care about the security and privacy of their information, and some will take their business elsewhere. In the 2014 Ponemon survey for Experian, 29 percent of respondents said they stopped doing business with a company after a breach.

There are some things organizations can do to start rebuilding trust. Consumers expect a certain baseline of activity in a company’s response that includes identity theft protection and credit monitoring, access to customer service to handle questions and, perhaps most importantly, a sincere apology.

According to Michael Bruemmer, a vice president of consumer protection at the Experian Data Breach Resolution Group, the following steps are crucial to effective communications after a breach:

  • Provide timely notification explaining what happened and why.
  • Explain the risks or impact to the customer as a result of the breach.
  • Explain all the facts and don’t sugarcoat the message.
  • Make the communications more personal with less technical and legal jargon.
  • Describe easy-to-follow steps for customers to protect themselves from identity theft and fraud.
  • Consider using other communication channels to reach customers, including social media and a secure website to answer frequently asked questions and a way for customers to enroll in identity theft protection services.

Practice Your Incident Response Plan

Communicating with customers after a breach is just one element of an effective incident response (IR) plan. But most organizations don’t have any plan for responding to a breach.

Caleb Barlow, vice president of threat intelligence at IBM Security, said having an incident response playbook is “just the beginning.” Organizations need to practice for a full-business response and hone the crisis leadership and communication skills of executives, board members and heads of key departments, such as PR and HR.

“In the heat of the moment, there’s no time to fumble through the playbook and figure out what to do next,” Barlow wrote in a blog post. “That’s when your training and muscle memory kicks in and you execute your plan. If you don’t practice it, you are exposed to an avoidable disadvantage.”

To stop the cycle of data breaches and data breach fatigue, organizations and consumers alike need to shake off our fatalism and reluctance to change. Cyberattacks and breaches may be inevitable, but we have control over the way we respond, and we can’t afford to accept the status quo.

We can’t keep doing the same things and expect different results. If data breach fatigue keeps organizations stuck in a pattern of passive and uncoordinated breach responses — and if consumers remain reluctant to take security into their own hands — then every day is going to feel like just another Groundhog Day.


The post Data Breach Fatigue Makes Every Day Feel Like Groundhog Day appeared first on Security Intelligence.

Is your organization ready for the data explosion?

“Data is the new oil” and its quantity is growing at an exponential rate, with IDC forecasting a 50-fold increase from 2010 to 2020. In fact, by 2020, an estimated 1.7 megabytes of new information will be generated every second for every human being. This creates bigger operational issues for organizations, with both NetOps and SecOps teams grappling to achieve superior performance, security, speed and network visibility. This delicate balancing act will become even … More

The post Is your organization ready for the data explosion? appeared first on Help Net Security.

Data Security Blog | Thales eSecurity: The Standards Race of the Future is On

The National Institute of Standards and Technology (NIST)’s Post-Quantum Cryptography Standardization project second round candidates have just been announced, a thinned-down selection of the 71 entries submitted by November 2017. In the past year, NIST has been assessing the submissions, and scrutinizing them for security and efficiency. The goal of the project is to create a set of standards for protecting electronic information from attack by the computers of today and in the future.

The competition has seen great engagement from the cryptographic community, with the large number of entries and lively analysis/debate being promising signs that highly-secure, studied, trusted and efficient algorithms will emerge in 2022-2024.

The second-round candidates comprise a varied and diverse selection of the original submissions, covering all of the main algorithm families, but with a few surprises.

Quantum-resistant algorithms rely on one of four main types of difficult problems, against which quantum computers are thought to offer no benefit: lattices; hashes; codes; or multivariate quadratic polynomial problems.

The selected algorithms fall into two categories: those for public-key encryption and key establishment (17 selected), and those for creating digital signatures (9 selected).

In the public-key encryption and key establishment selections, 53% of the second-round candidates are based on lattice problems. These lattice problems have unique theoretical security properties, leading to them being the most popular of the quantum-resistant public-key schemes, hence it’s not a surprise they occupy the lion’s share of the selections. In second place are code-based schemes with 41%.

A surprise inclusion is the Classic McEliece scheme, a code-based scheme that dates back to 1978 but has rarely seen use due to its large key sizes. It would be ironic if the McEliece scheme were finally to come into use some 45 years after its invention. Had McEliece become the standard public-key algorithm in place of (quantum-vulnerable) RSA in the late 1970s, quantum computing would not be such a threat to modern cryptography.

The final scheme in the public-key category is a supersingular isogeny scheme, a relatively new category but one that’s seen experimentation by Microsoft and Cloudflare. This is one to watch.
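The stated percentages are consistent with a simple breakdown of the 17 public-key candidates. A quick arithmetic check (note: the per-family counts of 9 lattice-based, 7 code-based and 1 isogeny-based are our inference from the percentages, not figures stated by NIST):

```python
# Sanity-check the stated family percentages for the 17 public-key/
# key-establishment candidates. The per-family counts (9 lattice,
# 7 code, 1 isogeny) are inferred from the percentages, not stated by NIST.
lattice, code, isogeny = 9, 7, 1
total = lattice + code + isogeny
assert total == 17

print(f"lattice-based: {lattice / total:.0%}")  # 53%
print(f"code-based:    {code / total:.0%}")     # 41%
```

Nine of 17 is roughly 53 percent and seven of 17 roughly 41 percent, matching the proportions reported above.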

In the signature category, only three of the nine are lattice-based, while almost half are Multivariate Quadratic (MQ) based. MQ schemes, while attractive in terms of efficiency, have a long and chequered history of catastrophic mathematical vulnerabilities, with no MQ-based scheme ever seeing meaningful real-world use – the inclusion of so many MQ schemes is a surprise. Another surprise comes in the inclusion of only a single hash-based scheme – but maybe this is a sign that hash-based schemes have reached an optimal state, with only a single scheme being required.

A significant number of the round two candidates are encumbered by patents: 47% of the public-key entries and 22% of the signature entries. So far, this doesn’t seem to have affected NIST’s choices, with a similar proportion of round one entries falling under patents.

NIST will hold a standardization conference in August, followed by the beginning of a third round as early as next year, if enough viable candidates remain.

We at Thales eSecurity will continue to closely follow the progress of the standardization effort but think that the NIST effort is yielding great results so far, particularly in bringing together the worldwide cryptographic community to ensure that we’re ready for the arrival of large-scale quantum computing. Our in-house research team has been experimenting with several of the candidate algorithms since 2017 to prepare our products and services for the coming quantum computing revolution, and will continue to do so as the effort progresses and the best algorithms are selected.

For more information on the candidates that will be moving on to the second round of the NIST PQC Standardization Process, click here.

To learn more about the innovative projects and thought leadership from our worldwide research teams at Thales eSecurity, visit our Horizons research portal for more information.

The post The Standards Race of the Future is On appeared first on Data Security Blog | Thales eSecurity.


Data security being left behind in digital transformation

Some companies looking to digitally transform are trying to run before walking, putting themselves and their customers at grave cybersecurity risk.

The post Data security being left behind in digital transformation appeared first on The Cyber Security Place.

Enterprises are struggling with cloud complexity and security

The rush to digital transformation is putting sensitive data at risk for organizations worldwide according to the 2019 Thales Data Threat Report – Global Edition with research and analysis from IDC. As organizations embrace new technologies, such as multi-cloud deployments, they are struggling to implement proper data security. Greatest data security threats “Our research shows that no organization is immune from data security threats and, in fact, we found that the most sophisticated organizations are … More

The post Enterprises are struggling with cloud complexity and security appeared first on Help Net Security.

How to Build a System Hardening Program From the Ground Up

Commercial and open-source systems such as Windows, Linux and Oracle do not always ship with all the necessary security measures in place for immediate deployment into production. These systems often have features and functionalities enabled by default, which can make them less secure, especially given the sophistication and resourcefulness of today’s cybercriminals.

A system hardening program can help address this issue by disabling or removing unnecessary features and functionalities. This enables security teams to proactively minimize vulnerabilities, enhance system maintenance, support compliance and, ultimately, reduce the system’s overall attack surface.

Unfortunately, many companies lack a formal system hardening program because they have neither an accurate IT asset inventory nor the resources to holistically maintain or even begin a program. An ideal system hardening program can successfully track, inventory and manage the various platforms and assets deployed within an IT environment throughout their life cycles. Without this information, it is nearly impossible to fully secure configurations and verify that they are hardened.

Planning and Implementing Your System Hardening Program

System hardening is more than just creating configuration standards; it also involves identifying and tracking assets in an environment, establishing a robust configuration management methodology, and configuring and maintaining system parameters to expected values. To manage and promote system hardening throughout your organization, start by initiating an enterprisewide program plan. Most companies are engaged in various stages of a plan, but suffer from inconsistent approaches and execution.

A plan builds on the premise that hardening standards will address the most common platforms, such as Windows, Linux and Oracle, and IT asset classes, such as servers, databases, network devices and workstations. These standards will generally address approximately 80 percent of the platforms and IT asset classes deployed in an environment; the remaining 20 percent may be unique and require additional research or effort to validate the most appropriate hardening standard and implementation approach. By adopting the 80/20 rule, hardening will become more consistent, provide better coverage and increase the likelihood of continued success.

Let’s take a closer look at the components of a system hardening program plan and outline the steps you can take to get started on your hardening journey, gain companywide support of your strategy and see the plan through to completion.

1. Confirm Platforms and IT Asset Classes

First things first: Determine the types of platforms and IT asset classes deployed within your environment. For example, list and document the types of server versions, such as Windows Server 2016, Windows Server 2012 R2, Red Hat Enterprise Linux or Ubuntu, and the types of desktop versions, such as Windows 7 and Apple macOS. Then, list the types of database versions, such as MySQL, Oracle 12c and MongoDB. The IT asset inventory should be able to report on the data needed to create the platform and IT asset class list. However, some companies struggle to maintain an IT asset inventory that accurately accounts for, locates and tracks the IT assets in their environment.

If there isn’t an up-to-date IT asset inventory to report from, review network vulnerability scan reports to create a list of platforms and asset classes. The scan reports will help verify and validate existing platforms and IT asset classes in your environment, as well as devices that may be unique to your company or industry. Interviewing IT tower leads can also support this information-gathering exercise, as can general institutional knowledge about what is deployed.
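As a rough illustration of this step, a scan export can be aggregated into a platform and asset-class inventory with a few lines of scripting. The field names and rows below are hypothetical; real scanners such as Nessus or Qualys use their own export schemas:

```python
from collections import Counter

# Hypothetical rows from a vulnerability scan export; real scanners
# (Nessus, Qualys, etc.) use their own field names and schemas.
scan_rows = [
    {"host": "srv-01", "os": "Windows Server 2016",        "asset_class": "server"},
    {"host": "srv-02", "os": "Red Hat Enterprise Linux 7", "asset_class": "server"},
    {"host": "db-01",  "os": "Oracle 12c",                 "asset_class": "database"},
    {"host": "wks-07", "os": "Windows 7",                  "asset_class": "workstation"},
    {"host": "wks-08", "os": "Windows 7",                  "asset_class": "workstation"},
]

# Count each (asset class, platform) pair to seed the in-scope list.
inventory = Counter((r["asset_class"], r["os"]) for r in scan_rows)
for (asset_class, platform), count in sorted(inventory.items()):
    print(f"{asset_class:12} {platform:28} x{count}")
```

Each distinct pair in the output becomes a line item on the platform and IT asset class list, with the count indicating how widely that platform is deployed.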

2. Determine the Scope of Your Project

Once you’ve documented the platforms and IT asset classes, you can determine the full scope of the system hardening program. From a security perspective, all identified platforms and IT asset classes should be in scope, but if any platform or IT asset class is excluded, document a formalized rationale or justification for the exception.

Any platform or IT asset class not included in the hardening scope will likely increase the level of risk within the environment unless compensating controls or countermeasures are implemented.

3. Establish Configuration Standards

Next, develop new hardening builds or confirm existing builds for all in-scope platforms and IT asset classes. Create this documentation initially from industry-recognized, authoritative sources. The Center for Internet Security (CIS), for example, publishes hardening guides for configuring more than 140 systems, and the Security Technical Implementation Guides (STIGs) — the configuration standards for U.S. Department of Defense systems — can be universally applied. Both of these sources are free to the public. It is generally best to apply one set of hardening standards from an industry-recognized authority across all applicable platforms and IT asset classes whenever possible.

This is the step in the plan where you’ll reference the in-scope listing of all platforms and IT asset classes. For each line item on the list, there should be a corresponding hardening standard document. Start with the industry-recognized source hardening standards and customize them as necessary with the requisite stakeholders.

As an example, let’s say the Microsoft Windows Server 2008 platform needs a hardening standard and you’ve decided to leverage the CIS guides. First, download the Microsoft Windows Server 2008 guide from the CIS website. After orienting the Windows Server team to the overall program plan objectives, send the hardening guide for review in advance of scheduled meetings. Then, walk through the hardening guide with the Windows Server team to determine whether the configuration settings are appropriate.

During these discussions, the team should be able to verify which configuration settings are currently in place, what is not in place, and what may violate company policy for pre- and postproduction server images. If there are hardening guide configuration settings that are not already in place, conduct formal testing to ensure that these changes will not degrade performance, lead to outages or cause other problems within the production environment.

Let’s take the configuration setting “Cryptographic Services to Automatic,” a Microsoft Windows Server 2008 hardening standard from the CIS guide, for example. If this configuration setting is not already in place, can it be implemented? If it cannot be implemented, document the reason why it causes problems as determined through testing, whether it violates company policy or anything else that’s applicable. Note this particular configuration setting as an exception in the overall hardening standard documentation for future reference.

4. Implement Your System Hardening Standards

After you’ve established the hardening build and maintenance documentation and conducted any necessary configuration testing, implement the hardening standards accordingly. The preproduction “golden,” or base, images should be hardened initially to proactively disable or remove unnecessary features prior to deploying in production. Starting with the preproduction images should be less time- and labor-intensive because only one image per platform typically needs to be hardened, removing the need for a change management process or scheduled downtime.

Once a particular platform image is hardened, that image can be used to re-image the postproduction platforms already deployed in the environment. The hardened configuration changes can be deployed with configuration management tools, depending on the platform. For example, the Windows team can implement a vast array of configuration settings throughout the environment it manages with Group Policy. If you cannot make automated hardening changes globally for some or all platforms, you’ll need to physically visit these systems individually and manually apply the configuration changes.

5. Monitor and Maintain Your Program

An effective system hardening program requires the support of all IT and security teams throughout the company. The success of such a program has as much to do with people and processes as it does with technology. Since system hardening is inherently interdisciplinary and interdepartmental, a variety of skill sets are needed to carry it out. Hardening is a team effort that requires extensive coordination.

It’s important to appoint a hardening lead to ensure accountability and responsibility for the management and oversight of the program. This individual should possess the drive to achieve results, a knack for problem-solving, and the ability to direct others in collaboration and teamwork. The system hardening lead is ultimately responsible for the success of the program and should provide the focus, support and momentum necessary to achieve its objectives.

Still, accountability for hardening-related activities should be formally assigned to the teams best suited to ensure their completion and maintenance. The information security team should help facilitate improvements when gaps are identified and serve in a governance role by monitoring the hardening practices of all teams, challenging poor processes and approaches, and verifying compliance against hardening standards. If configuration management tools are not available, verify compliance using vulnerability scans.
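Where configuration management tooling is absent, even a small script that compares collected settings against the hardening standard can surface drift. A simplified sketch, with setting names and values that are purely illustrative:

```python
# Compare a host's collected settings against a hardening baseline and
# report drift. Setting names and values here are illustrative only.
baseline = {
    "password_min_length": "14",
    "smbv1_enabled": "false",
    "rdp_nla_required": "true",
}

collected = {  # as gathered from the host by a script or scan
    "password_min_length": "8",
    "smbv1_enabled": "false",
    "rdp_nla_required": "true",
}

drift = {
    name: (expected, collected.get(name, "<missing>"))
    for name, expected in baseline.items()
    if collected.get(name) != expected
}

for name, (expected, actual) in sorted(drift.items()):
    print(f"NON-COMPLIANT: {name}: expected {expected}, found {actual}")
```

Run periodically against every in-scope host, even a simple check like this gives the information security team a concrete compliance report to govern against.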

All this complexity demands a great deal of synchronization. The roles and responsibilities must be clearly delineated so teams can focus their efforts on activities that truly advance the hardening program plan.

System Hardening Has Never Been So Crucial

Implementing and managing an effective system hardening program requires leadership, security knowledge and execution. Obtaining executive commitment, management support and sufficient investment for the program is also crucial. If you carefully choose a combination of easy-to-implement platforms and IT asset classes and more challenging, longer-term hardening efforts, you’ll see incremental improvements in program execution and support.

Companies everywhere and across industries face an ever-accelerating rate of change in both the threat and technology landscapes, making system hardening more crucial than ever. A hardening program isn’t built in a day, but an effective, thoughtfully constructed plan can significantly lower your company’s risk posture.

The post How to Build a System Hardening Program From the Ground Up appeared first on Security Intelligence.

Cybersecurity Experts Share Insight For Data Privacy Day 2019

You’ll have to forgive my ignorance—but what is an appropriate gift for Data Privacy Day? Perhaps an encrypted portable drive? That might not be a bad idea, but what I have

The post Cybersecurity Experts Share Insight For Data Privacy Day 2019 appeared first on The Cyber Security Place.

The most effective security strategies to guard sensitive information

Today’s enterprise IT infrastructures are not largely hosted in the public cloud, nor are they SaaS-based, with security being the single largest barrier when it comes to cloud and SaaS adoption. With the recent rise in breaches and privacy incidents, enterprises are prioritizing the protection of their customers’ personally identifiable information, according to Ping Identity. Most infrastructure is hybrid Less than one quarter (21%) of IT and security professionals say that more than one half … More

The post The most effective security strategies to guard sensitive information appeared first on Help Net Security.

Seven Must-Dos to Secure MySQL 8.0

Most database breaches are blamed on insiders such as employees who are either malicious or whose security has been compromised. In fact, most of these breaches are actually caused by poor security configuration and privilege abuse. Every new database version brings security upgrades. Use them appropriately and your organization can secure its data and keep you out of trouble.

MySQL 8.0 was released in April 2018, bringing improvements across the board, including important security enhancements. In this blog, we will review the new security features and guide you on how to use them appropriately.

1. Use a Strong Authentication Plugin

When a client connects to the MySQL server, the server checks which authentication plugin is configured for that user and invokes the plugin to verify the password the client provides.

Starting with MySQL 8.0, the default authentication plugin has changed from ‘mysql_native_password’ to ‘caching_sha2_password.’ The former implements the SHA-1 algorithm, which is more susceptible than expected to collision attacks that produce the same hash value for different inputs. In addition, the hash is computed over only the user’s plain password, with no salt. As a result, two users with the same password will have the same hash value, so an attacker who gains access to the mysql.user table can take a user’s hash value and use a rainbow table to recover the plain password.

The new default authentication plugin, ‘caching_sha2_password,’ implements SHA-256 hashing, which is much stronger because it 1) is not susceptible to the collision attacks that affect SHA-1, and 2) applies 5,000 rounds of SHA-256 transformation to a salted password (a combination of the password with random data), preventing the use of rainbow tables to recover the password.
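The practical difference is easy to demonstrate: unsalted hashing gives identical digests for identical passwords, while a salted, iterated hash does not. The sketch below uses Python’s hashlib to illustrate the principle only; it is not MySQL’s exact internal format:

```python
import hashlib
import os

# Unsalted hashing (the mysql_native_password weakness): identical
# passwords always produce identical digests, so a precomputed rainbow
# table maps digests straight back to passwords.
h1 = hashlib.sha1(b"S3cret!").hexdigest()
h2 = hashlib.sha1(b"S3cret!").hexdigest()
print(h1 == h2)  # True

# Salted, iterated SHA-256 (the principle behind caching_sha2_password,
# not MySQL's exact internal format): a per-user random salt makes equal
# passwords hash differently, defeating rainbow tables.
def salted_hash(password: bytes, salt: bytes, rounds: int = 5000) -> str:
    digest = salt + password
    for _ in range(rounds):
        digest = hashlib.sha256(digest).digest()
    return digest.hex()

s1 = salted_hash(b"S3cret!", os.urandom(20))
s2 = salted_hash(b"S3cret!", os.urandom(20))
print(s1 == s2)  # False
```

Because each user gets a different random salt, an attacker who dumps the hash table cannot use one precomputed lookup against all accounts, and the 5,000 iterations make brute-force guessing proportionally more expensive.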

If you use MySQL native plugins – and not external authentication such as PAM, Windows login IDs, LDAP, or Kerberos – we encourage you to use ‘caching_sha2_password’ for all new users.

(Another authentication plugin, ‘sha256_password’, which was introduced in version 5.6, also uses SHA-256, and from a security perspective it can also be used. However, the advantage of the new default ‘caching_sha2_password’ plugin is that it uses server-side caching for better performance.)

Configuration Guidelines:

Here’s how to create new users with the new default authentication plugin, ‘caching_sha2_password’:

mysql> CREATE USER 'user_1'@'' IDENTIFIED BY 'Hpdclim1@#$';

Query OK, 0 rows affected (3.26 sec)


mysql> SELECT host, user, plugin FROM mysql.user WHERE user = 'user_1';


+------+--------+-----------------------+
| host | user   | plugin                |
+------+--------+-----------------------+
|      | user_1 | caching_sha2_password |
+------+--------+-----------------------+
1 row in set (0.01 sec)

For current users authenticated with ‘mysql_native_password’, here’s how to update them to ‘caching_sha2_password’:

mysql> SELECT host, user, plugin FROM mysql.user WHERE plugin = 'mysql_native_password';


+------+-------+-----------------------+
| host | user  | plugin                |
+------+-------+-----------------------+
|      | usr_2 | mysql_native_password |
+------+-------+-----------------------+
1 row in set (0.03 sec)


mysql> ALTER USER 'usr_2'@'' IDENTIFIED WITH caching_sha2_password BY 'Hpdclim1@#$';

Query OK, 0 rows affected (0.15 sec)


mysql> SELECT host, user, plugin FROM mysql.user WHERE plugin = 'mysql_native_password';

Empty set (0.00 sec)

2. Use Database Roles to Enforce Least Privilege

The Principle of Least Privilege mandates that each user can access only the information they need. Role-based security enables this: privileges are granted to roles, and users are assigned to roles according to their needs. This approach is so fundamental that it is required by security benchmarks and regulations such as CIS, DISA STIGs and GDPR. Without role-based access, you may end up with non-administrative users holding more privileges than they need, such as an application (an applicative user) connecting to the database with an account that has DBA privileges.

Such negligence creates a large attack surface. As reported in the 2018 Verizon Data Breach Investigations Report, of the 2,216 confirmed data breaches analyzed, more than 200 were attributed to privilege abuse.

Most other major relational databases implemented role-based security many years before MySQL 8.0 did. We encourage you to adopt MySQL 8.0 and start managing your privileges using roles as soon as possible.

3. Apply the Change Current Password Policy

MySQL 8.0.13 now lets you require your non-privileged users to enter their current password when they set a new one. Such a policy can protect you in several ways. For example, if an attacker who has compromised your application host machine uses a web shell to hijack a user’s database session, the attacker cannot change the password and lock the user out without knowing the user’s current password.

The Change Current Password policy is off by default. You can control it globally for all non-privileged users or enforce it for individual users. We encourage you to set this policy globally. If not, set it for all your non-privileged users, especially your applicative users.

Configuration Guidelines:

To enable the Change Current Password policy globally, put this line in the server my.cnf file:

password_require_current=ON

4. Apply the Password Reuse Policy

MySQL 8.0 enables restrictions on the reuse of previous passwords. Reuse restrictions can be established based on the number of password changes, time elapsed, or both.

Reuse policy can be global, with individual accounts set to either defer to the global policy or override the global policy. We encourage you to establish the reuse policy globally, and set the values according to your security benchmark requirements.
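The policy logic can be sketched in a few lines, a conceptual model only: MySQL enforces this server-side against its own password history, and the depth and interval below mirror the two kinds of restriction described above:

```python
from datetime import datetime, timedelta

# Conceptual model of the reuse policy (MySQL enforces this server-side
# via its own password history; this sketch only illustrates the logic).
HISTORY_DEPTH = 6                      # last N passwords
REUSE_INTERVAL = timedelta(days=365)   # time-based restriction

def reuse_allowed(new_pw: str, history: list, now: datetime) -> bool:
    """history: list of (password, changed_at) tuples, newest first."""
    for i, (old_pw, changed_at) in enumerate(history):
        within_depth = i < HISTORY_DEPTH
        within_interval = now - changed_at < REUSE_INTERVAL
        if (within_depth or within_interval) and old_pw == new_pw:
            return False
    return True

now = datetime(2019, 3, 1)
history = [("Winter2018!", now - timedelta(days=90))]
print(reuse_allowed("Winter2018!", history, now))  # False: reused too soon
print(reuse_allowed("Spring2019!", history, now))  # True: new password
```

A password is rejected if it matches any entry within the history depth or any entry changed inside the reuse interval; a genuinely new password passes both checks.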

Configuration Guidelines:

To prohibit reusing any of the last six passwords or any password used within the last 365 days, put these lines in the server my.cnf file:

password_history=6
password_reuse_interval=365


5. Enable FIPS Mode

FIPS Mode on the server side applies to U.S. government-approved cryptographic operations performed by the server. This includes replication (master/slave and Group Replication) and X Plugin, which run within the server, as well as the use of certain encryption ciphers during client-server communication.

MySQL 8.0 supports FIPS Mode if compiled using OpenSSL, and if an OpenSSL library and FIPS Object Module are available at runtime.

However, nothing about FIPS Mode prevents establishing an unencrypted connection. (To prevent that, you can use the REQUIRE clause of CREATE USER or ALTER USER for specific user accounts, or set the require_secure_transport system variable to affect all accounts.)

We encourage you to enable FIPS Mode and always establish encrypted connections.

Configuration Guidelines:

To enable FIPS Mode, put this line in the server my.cnf file:

ssl_fips_mode=ON


6. Migrate Accounts from SUPER to Dynamic Privileges

In MySQL 8.0, many operations that previously required the SUPER privilege are also possible with Dynamic Privileges of more limited scope. Migrating from SUPER to Dynamic Privileges improves security by enabling DBAs to avoid granting SUPER, and tailoring user privileges more closely to the operations permitted. SUPER is now deprecated and will be removed in a future version of MySQL.

Configuration Guidelines:

Execute the following query to identify accounts granted with SUPER privilege:

mysql> SELECT GRANTEE FROM INFORMATION_SCHEMA.USER_PRIVILEGES WHERE PRIVILEGE_TYPE = 'SUPER';

For each account listed, grant only the privileges it needs and then revoke the SUPER privilege. For example, if user 'admin_log'@'' requires SUPER only for binary log purging, execute the following:

mysql> GRANT BINLOG_ADMIN ON *.* TO 'admin_log'@'';

Query OK, 0 rows affected (0.03 sec)


mysql> REVOKE SUPER ON *.* FROM 'admin_log'@'';

Query OK, 0 rows affected, 1 warning (0.14 sec)

7. Enable Redo Log and Undo Log Data Encryption

Redo data and undo data contain sensitive information about operations performed in the database. MySQL 8.0 adds encryption of redo and undo log data as it is written to and read from disk. Encryption at rest for redo and undo files is important: there is little reason to use the tablespace encryption feature introduced in MySQL 5.7 without also encrypting the redo and undo data. We encourage you to configure these files to be encrypted.

Configuration Guidelines:

Redo log encryption and undo log encryption are disabled by default. After you configure the encryption prerequisites (a keyring plugin must be installed and configured), enable them by putting these lines in the server my.cnf file:

innodb_redo_log_encrypt=ON
innodb_undo_log_encrypt=ON


MySQL 8.0 provides important new security enhancements. We encourage you to adopt the new release and follow our guidelines in order to use them appropriately. With most database breaches today resulting from poor configuration and privilege abuse, adopting and configuring the new security enhancements properly will help secure your data and most likely keep you out of trouble.

The post Seven Must-Dos to Secure MySQL 8.0 appeared first on Blog.

As BYOD Adoption and Mobile Threats Increase, Can Enterprise Data Security Keep Up?

While most security professionals have come to embrace — or, at least, accept — bring-your-own-device (BYOD) policies, leadership still often lacks confidence in the data security of employees’ personal phones, tablets and laptops.

In a recent study from Bitglass, 30 percent of the 400 IT experts surveyed were hesitant to adopt BYOD due to security concerns such as data leakage, shadow IT and unauthorized data access. As the General Data Protection Regulation (GDPR) and other data privacy mandates go into full swing, it’s more important than ever for organizations to monitor and protect enterprise data on mobile devices. However, BYOD may still be the Wild West of network access, especially given the rapid proliferation of new endpoints.

All these moving parts raise the question: Is BYOD security any better today than it was when personal devices first entered the workplace?


Growing Acceptance of Personal Devices in the Enterprise

It wasn’t long ago that corporate leadership balked at the idea of their employees using personal devices for work. While workers had been using their personal computers and laptops to access company networks, it wasn’t until smartphones and digital tablets were introduced that the concept of BYOD caught on. Security for these devices wasn’t very mature back then, and IT and security decision-makers had well-founded concerns.

Over the past decade, of course, phones have evolved into personal hand-held computers. According to Comscore, only 17 percent of consumers were using smartphones in 2009, compared to 81 percent in 2016. That irreversible trend, along with the rise of the internet of things (IoT) and wearable devices, linked personal technology inextricably with enterprise networks.

Employees believe they are more productive and efficient when using not only their device of choice, but also their preferred software and apps. Apparently, leadership agrees: The same Bitglass study found that 85 percent of companies now allow not only employees, but even contractors, customers and suppliers to access enterprise data from their personal devices. Despite this shift, more than half of those surveyed believe mobile threats have gotten worse.

Mobile Threats Are Rising, but Security Hasn’t Changed Much

Given the ubiquity and relative insecurity of mobile devices in the workplace, it’s no surprise that criminals are targeting them. Threat actors can gain access to both corporate data and personal data from one easy-to-breach device. Basic mobile security protections, such as remote wiping and mobile device management tools, are deployed in just over half of the organizations surveyed by Bitglass. In addition, many security teams lack visibility into apps used on personal devices.

Most threat actors who attack mobile devices are after passwords, according to mobile security expert Karen Scarfone, as quoted by Wired.

“A lot of email passwords still go back and forth in the clear,” she said. “That’s a big problem.”

Passwords remain the keys to the data castle, and they are largely unencrypted and unprotected on mobile devices. This, coupled with the password reuse epidemic, means that threat actors can gain virtually unlimited access to corporate networks through personal devices.

Clearly, there’s plenty of room for improvement when it comes to mobile security. A U.S. Department of Homeland Security (DHS) study mandated by the Cybersecurity Act of 2015 found that while the federal government’s use of mobile technology is improving, “many communication paths remain unprotected and leave the overall ecosystem vulnerable to attacks.”

Similar security holes exist in the private sector. According to SyncDog, mobile devices are the most dangerous point of intrusion to corporate networks. In large enterprises in particular, “mobile devices are looked at as toys with games on them, and protecting them comes last in line to application management, network security, mainframes and other larger IT concerns.”

BYOD Security Starts With Smart Policies

How can chief information security officers (CISOs) and IT leaders ensure that employees use their personal devices in a smart, secure way? First, determine whether the employee needs to use personal devices for work at all. If there are jobs within the organization that don’t require regular access to networks, or if employees are working remotely, these users should not be allowed to participate in a BYOD program because their devices are neither authorized nor consistently monitored.

Second, employees should be required — or, at least, highly encouraged — to update their device software, especially operating systems and any security software. Consider requiring all employees who use personal devices to install the corporate security software and use the company’s security protocols if they are connecting to enterprise networks.

Third, communicate BYOD policies to employees and implement effective measures to enforce them. Policies should include the most basic data security best practices, such as implementing multifactor authentication (MFA), creating strong and unique passwords, using virtual private networks (VPNs) over public WiFi, and locking devices with biometric controls. In addition to protecting enterprise networks, these steps will help secure employees’ personal data on devices. But remember, a policy is useless if you don’t enforce it. People will break the rules if they know there are no consequences to pay.
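To make the policy checks above concrete, here is a minimal sketch of a BYOD compliance gate that evaluates a device record against policy requirements before granting network access. All field names, platforms and version thresholds are hypothetical illustrations, not a specific MDM product's API:

```python
# Hypothetical BYOD compliance gate: check a personal device against the
# policy requirements (up-to-date OS, security software, MFA, device lock)
# before allowing it onto the corporate network. Field names and minimum
# versions are illustrative assumptions.

MIN_OS_VERSIONS = {"ios": (16, 0), "android": (13, 0)}  # assumed policy minimums

def is_compliant(device: dict) -> tuple[bool, list[str]]:
    """Return (allowed, reasons-for-denial) for a device record."""
    problems = []
    min_version = MIN_OS_VERSIONS.get(device.get("platform", ""))
    if min_version is None:
        problems.append("unsupported platform")
    elif tuple(device.get("os_version", (0, 0))) < min_version:
        problems.append("operating system out of date")
    if not device.get("security_software_installed", False):
        problems.append("corporate security software missing")
    if not device.get("mfa_enrolled", False):
        problems.append("multifactor authentication not enrolled")
    if not device.get("screen_lock_enabled", False):
        problems.append("device lock not enabled")
    return (not problems, problems)

allowed, reasons = is_compliant({
    "platform": "android",
    "os_version": (12, 0),           # below the assumed minimum
    "security_software_installed": True,
    "mfa_enrolled": True,
    "screen_lock_enabled": True,
})
print(allowed, reasons)  # device denied: OS out of date
```

A gate like this also supports the enforcement point: a denied device gets a list of concrete reasons the user can act on, rather than a silent block.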

When it comes to worker productivity, the embrace of BYOD has been a good thing for businesses. But in a world where cyberthreats loom large and data loss could result in huge fines and reputational damage, enterprises need to prioritize the security of their critical assets — and that of the thousands of endpoints that access them.

To learn more, read the IBM white paper, “The Ten Rules of Bring Your Own Device (BYOD).”

Read the white paper

The post As BYOD Adoption and Mobile Threats Increase, Can Enterprise Data Security Keep Up? appeared first on Security Intelligence.

Organizations waste money storing useless IT hardware

A survey of 600 data center experts from APAC, Europe and North America reveals that two in five organizations that store their data in-house spend more than $100,000 storing useless IT hardware that could pose a security or compliance risk. Astonishingly, 54 percent of these companies have been cited at least once or twice by regulators or governing bodies for noncompliance with international data protection laws. Fines of up to $1.5 million could be issued … More

The post Organizations waste money storing useless IT hardware appeared first on Help Net Security.

Should enterprises delay efforts to remediate most vulnerabilities?

Companies today appear to have the resources needed to address all of their high-risk vulnerabilities. The research demonstrates that companies are getting smarter in how they protect themselves from today’s

The post Should enterprises delay efforts to remediate most vulnerabilities? appeared first on The Cyber Security Place.

Embrace the Intelligence Cycle to Secure Your Business

Regardless of where we work or what industry we’re in, we all have the same goal: to protect our most valuable assets. The only difference is in what we are trying to protect. Whether it’s data, money or even people, the harsh reality is that it’s difficult to keep them safe because, to put it simply, bad people do bad things.

Sometimes these malicious actors are clever, setting up slow-burning attacks to steal enterprise data over several months or even years. Sometimes they’re opportunistic, showing up in the right place at the wrong time (for us). If a door is open, these attackers will just waltz on in. If a purse is left unattended on a table, they’ll quickly swipe it. Why? Because they can.

The Intelligence Cycle

So how do we fight back? There is no easy answer, but the best course of action in any situation is to follow the intelligence cycle. Honed by intelligence experts across industries over many years, this method can be invaluable to those investigating anything from malware to murders. The process is always the same.

Stage 1: Planning and Direction

The first step is to define the specific job you are working on, find out exactly what the problem is and clarify what you are trying to do. Then, work out what information you already have to deduce what you don’t have.

Let’s say, for example, you’ve discovered a spate of phishing attacks — that’s your problem. This will help scope subsequent questions, such as:

  • What are the attackers trying to get?
  • Who is behind the attacks?
  • Where are attacks occurring?
  • How many attempts were successful?

Once you have an idea of what you don’t know, you can start asking the questions that will help reveal that information. Use the planning and direction phase to define your requirements. This codifies what you are trying to do and helps clarify how you plan on doing it.

Stage 2: Collection

During this stage, collect the information that will help answer your questions. If you cannot find the answers, gather data that will help lead to those answers.

Where this comes from will depend on you and your organization. If you are protecting data from advanced threats, for instance, you might gather information internally from your security information and event management (SIEM) tool. If you’re investigating more traditional organized crime, by contrast, you might knock on doors and whisper to informants in dark alleys to collect your information.

You can try to control the activity of collection by creating plans to track the process of information gathering. These collection plans act as guides to help information gatherers focus on answering the appropriate questions in a timely manner. Thorough planning is crucial in both keeping track of what has been gathered and highlighting what has not.
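A collection plan can be as simple as a checklist mapping each requirement from the planning stage to candidate sources and a status flag, so gaps are visible at a glance. The sketch below is one hypothetical way to structure it; the fields and sources are illustrative:

```python
# A minimal collection-plan tracker: map each intelligence requirement from
# the planning stage to candidate sources and a collection status, so it is
# easy to see what has been gathered and what has not. Structure and field
# names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class CollectionItem:
    requirement: str       # question from the planning stage
    sources: list          # where the answer might come from
    collected: bool = False
    notes: str = ""

plan = [
    CollectionItem("What are the attackers trying to get?", ["phishing email bodies", "SIEM alerts"]),
    CollectionItem("Who is behind the attacks?", ["mail headers", "threat intel feeds"]),
    CollectionItem("How many attempts were successful?", ["mail gateway logs", "helpdesk tickets"]),
]

# Mark one requirement as answered as information arrives.
plan[2].collected = True
plan[2].notes = "3 confirmed credential submissions"

outstanding = [item.requirement for item in plan if not item.collected]
print(outstanding)  # the two questions still lacking answers
```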

Stage 3: Processing and Exploitation

Collected information comes in many forms: handwritten witness statements, system logs, video footage, data from social networks, the dark web, and so on. Your task is to make all the collected information usable. To do this, put it into a consistent format. Extract pertinent information (e.g., IP addresses, telephone numbers, asset references, registration plate details), place some structure around those items of interest and make it consistent. It often helps to load it into a schematized database.

If you do this, your collected information will be in a standard shape and ready for you to actually start examining it. The value is created by putting this structure around the information. It gives you the ability to make discoveries, extract the important bits and understand your findings in the context of all the other information. If you can, show how attacks are connected, link them to bad actors and collate them against your systems. It helps to work with the bits that are actually relevant to the specific thing you’re working on. And don’t forget to reference this new data you collected against all the old stuff you already knew; context is king in this scenario.
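As a small sketch of what this processing step can look like in practice, the following pulls one kind of item of interest (IP addresses) out of free-text collected information and normalizes every hit into a single consistent record shape, ready for analysis. The regex and record fields are illustrative assumptions, not a complete extraction pipeline:

```python
# Processing-stage sketch: extract items of interest (here, IPv4 addresses)
# from raw collected text and emit records in one consistent format.
# Pattern and record fields are illustrative.

import re

IP_PATTERN = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def process(source: str, raw_text: str) -> list:
    """Turn one piece of raw collected text into structured records."""
    return [
        {"source": source, "indicator_type": "ip", "value": ip}
        for ip in IP_PATTERN.findall(raw_text)
    ]

# Different collection sources, one output shape.
records = []
records += process("siem", "Blocked login from 203.0.113.7 at 09:14")
records += process("witness_statement", "He mentioned the server 198.51.100.23 twice")

print(records)
```

Because every record carries its source, later analysis can still trace any conclusion back to the original collected material.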

This stage helps you make the best decisions you can against all the available information. Standardization is great; it is hard to work with information when it’s in hundreds of different formats, but it’s really easy when it’s in one.

Of course, the real world isn’t always easy. Sometimes it is simply impossible to normalize all of your collected information into a single workable pot. Maybe you collected too much, or the data arrived in too many varied formats. In these cases, your only hope is to invest in advanced analytical tools and analysts that will allow you to fuse this cacophony of information into some sensible whole.

Stage 4: Analysis and Production

The analysis and production stage begins when you have processed your information into a workable state and are ready to conduct some practical analysis — in other words, you are ready to start producing intelligence.

Think about the original task you planned to work on. Look at all the lovely — hopefully standardized — information you’ve collected, along with all the information you already had. Query it. Ask questions of it. Hypothesize. Can you find the answer to your original question? What intelligence can you draw from all this information? What stories can it tell? If you can’t find any answers — if you can’t hypothesize any actions or see any narratives — can you see what is missing? Can you see what other information you would need to collect that would help answer those questions? This is the stage where you may be able to draw new conclusions out of your raw information. This is how you produce actionable intelligence.

Actionable intelligence is an important concept. There’s no point in doing all this work if you can’t find something to do at the end of it. The whole aim is to find an action that can be performed in a timely manner that will help you move the needle on your particular task.

Finding intelligence that can be acted upon is key. Did you identify that phishing attack’s modus operandi (MO)? Did you work out how that insider trading occurred? It’s not always easy, but it is what your stakeholders need. This stage is where you work out what you must do to protect whatever it is you are safeguarding.

Stage 5: Dissemination

The last stage of the intelligence cycle is to go back to the stakeholders and tell them what you found. Give them your recommendations, write a report, give a presentation, draw a picture — however you choose to do it, convey your findings to the decision-makers who set the task to begin with. Back up your assertions with your analysis, and let the stakeholders know what they need to do in the context of the intelligence you have created.

Timeliness is very important. Everything ages, including intelligence. There’s no point in providing assessments for things that have already happened. You will get no rewards for disseminating a report on what might happen at the London Marathon a week after the last contestant finished. Unlike fine wine, intelligence does not improve with age.

To illustrate how many professionals analyze and subsequently disseminate intelligence, below is an example of an IBM i2 dissemination chart:

[Image: IBM i2 dissemination chart]

The analysis has already happened and, in this case, the chart is telling your boss to go talk to that Gene Hendricks chap — he looks like one real bad egg.

Then what? If you found an answer to your original question, great. If not, then start again. Keep going around the intelligence cycle until you do. Plan, collect, process, analyze, disseminate and repeat.

Gain an Edge Over Advanced Threats

We are all trying to protect our valued assets, and using investigation methodologies such as the intelligence cycle could help stop at least some malicious actors from infiltrating your networks. The intelligence cycle can underpin the structure of your work both with repetitive processes, such as defending against malware and other advanced threats, and targeted investigations, such as searching for the burglars who stole the crown jewels. Embrace it.

Whatever it is you are doing — and whatever it is you are trying to protect — remember that adopting this technique could give your organization the edge it needs to fight back against threat actors who jealously covet the things you defend.

To learn more, read the interactive white paper, “Detect, Disrupt and Defeat Advanced Physical and Cyber Threats.”

Read the white paper

The post Embrace the Intelligence Cycle to Secure Your Business appeared first on Security Intelligence.

Data Security Blog | Thales eSecurity: Securing data in the hybrid cloud

IDG’s 2018 Cloud Computing Study tells us:

Seventy-three percent of organizations have at least one application, or a portion of their computing infrastructure already in the cloud – 17% plan to do so within the next 12 months.

But IDG also points out:

Organizations are utilizing a mix of cloud delivery models. Currently the average environment is 53% non-cloud, 23% SaaS, 16% IaaS and 9% PaaS….

We at Thales see many new on-premises installs happening, as well as a massive base of legacy on-premises applications that are delivering ROI and will live on. So, reality for the foreseeable future is not solely the cloud, but a hybrid cloud environment.

Unfortunately, none of these environments – on premises, cloud or hybrid cloud — is immune to cyberattacks or increasingly strict compliance and regulatory mandates. To protect their customers, their intellectual property, their reputations and their share prices, enterprises need best practice data security that is easy to use throughout this integrated environment.

VMware Cloud on AWS

VMware Cloud on AWS is an integrated cloud offering that allows organizations to create vSphere data centers on Amazon Web Services. Highly scalable, secure and innovative, this solution allows organizations to seamlessly migrate and extend their on-premises VMware vSphere-based environments to the AWS Cloud. VMware Cloud on AWS is a great solution for enterprise IT infrastructure and operations organizations looking to migrate their on-premises vSphere-based workloads to the public cloud, consolidate and extend their data center capacities, and optimize, simplify and modernize their disaster recovery solutions.

But what about security? Enter Thales’s Vormetric Transparent Encryption.

Vormetric Transparent Encryption for VMware Cloud on AWS

VMware Cloud on AWS can leverage Vormetric Transparent Encryption (VTE) from Thales to deliver data-at-rest encryption with centralized key management, privileged user access control and detailed data access audit logging that helps enterprises meet compliance reporting and best practice requirements for protecting data, wherever it resides. VTE protects structured databases, unstructured files, and linked cloud storage accessible from systems on premises, across multiple cloud environments, and even within big data and container implementations. Designed to meet data security requirements with minimal disruption, effort and cost, VTE implementation is seamless, keeping both business and operational processes working without changes even during deployment and rollout.

The bottom line is that hybrid cloud environments are the new reality, and enterprises need to look at best practice data security in order to protect data from cyberattacks as well as achieve and maintain compliance.

For more information on VTE for VMware Cloud on AWS, please visit our website.

The post Securing data in the hybrid cloud appeared first on Data Security Blog | Thales eSecurity.


Mis-valuation of data poses a huge threat to businesses

A business must fully understand the value of its data if it is to protect it properly. IT Security safeguards corporate data. It’s a widely accepted practice and commonplace in

The post Mis-valuation of data poses a huge threat to businesses appeared first on The Cyber Security Place.

Why Compliance Does Not Equal Security

A company can be 100% compliant and yet 100% owned by cyber criminals. Many companies document every cybersecurity measure and check all appropriate compliance boxes. Even after all that, they

The post Why Compliance Does Not Equal Security appeared first on The Cyber Security Place.

Encryption is key to protecting information as it travels outside the network

A new Vera report reveals stark numbers behind the mounting toll of data breaches triggered by cybercrime and accidents. One of the most recognized and mandated security controls, installed encryption tools protect

The post Encryption is key to protecting information as it travels outside the network appeared first on The Cyber Security Place.

Imperva Increases Self-Service Capability Fourfold with Custom Security Rules

Back in 2014, we introduced Rules (previously IncapRules) to give our customers advanced control over their application security.

Today we’re putting even more of this custom tuning power in the hands of our customers by quadrupling the number of filters available via self-service.

Rules Basics

Rules is an extensive policy engine developed in response to the emergence of increasingly advanced threats, as well as the growing demand to add more logic at the edge. Advanced threats continue to drive the need for adaptive security solutions that enable real-time response and flexible, custom security policy enforcement.

Rules are built using filters, operators, and values. Filters are the core function which helps tune manual policies. Customers use filters for advanced bot protection, security against brute-force attacks, and more. Although about 90% of our customers use our out-of-the-box policies in their default preventative settings, leaving the burden of tuning to the Imperva security experts, there are times when some of our customers need to make specific adjustments to their security policies.

The Existing Filter Set

Traditionally, Imperva limited the set of filters exposed to customers to 20 parameters available for self-service use when defining Rules.

Filters for Rules are divided into three logical groups:

  1. Clients: Information about the connecting client
  2. Requests: Information about the current request
  3. Counters/rates: A running count of the number of actions performed

As one example, there’s an existing filter for the client application ID:

  • Googlebot (Search bot) (6)
  • cURL (Developer Tool) (47)

When adding or editing a rule in the Management Console, you start entering text in the value field to display a list of available values.

Example: ClientId == 15

The New Filters

Behind the scenes, the Imperva Support team and SOC have until now had much more custom tuning power, with access to approximately 100 different filters available upon customer request.

But by expanding the set of Rules available via self-service to a total of 48 filters, even more policy customization is possible without ever needing to contact the Support team.  

To complement existing self-service Rules for bot protection, protection against Account Takeover (ATO), application hardening, rate limiting, and Advanced Access Control (ACL), the new filters offer the following and more:

  • Advanced optionality when handling advanced bot scenarios
  • Logic based on popular technologies such as Drupal, PHP, Joomla, WordPress and others
    (the idea here is to simplify complex expressions by encapsulating the tech detection and rates within a single command)
  • Client certificate info such as CN, SHA1 and Serial Number


Here are three examples of new filters unveiled to customers:

(1) session-creation-ip_rate

If you set the value here to, say, 800, you will be able to capture sessions with more than 800 requests from a certain IP in 1 minute – a likely indicator of a DDoS attack.

(2) request-rate-ip_rate

This filter measures the rate of requests per IP over a 1-minute period. If you expect no more than 100 requests, you can set the action to “block” or “alert,” since a higher rate could indicate that an attacker is trying to scrape your website.

(3) login-bf-drupal_rate

Though we patch by default from the backend to mitigate known exposures such as the latest critical RCE vulnerabilities affecting Drupal, this filter may be useful for our more advanced customers who wish to add an extra layer of logic and protection.

The filter allows you to measure the rate of requests per IP over a period of time to a Drupal login page, helping in detection and prevention of brute-force attacks on Drupal login pages.
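To illustrate the idea behind per-IP rate filters like request-rate-ip_rate and login-bf-drupal_rate, here is a simple sliding-window counter that flags an IP once it exceeds a threshold of requests per minute. This is a conceptual sketch of the detection logic, not Imperva's actual engine; the window size and threshold are the hypothetical values from the examples above:

```python
# Conceptual sliding-window rate counter: flag an IP whose request rate
# inside the last minute exceeds a threshold. Not Imperva's implementation;
# window and threshold values are illustrative.

from collections import defaultdict, deque

WINDOW_SECONDS = 60
THRESHOLD = 100  # e.g. "no more than 100 requests" per minute per IP

class RateTracker:
    def __init__(self):
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def record(self, ip: str, now: float) -> bool:
        """Record one request; return True if the IP is over the threshold."""
        window = self.hits[ip]
        window.append(now)
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()  # drop requests older than the window
        return len(window) > THRESHOLD

tracker = RateTracker()
flagged = False
for i in range(150):
    # 150 requests arriving 0.1s apart, all inside one 60-second window
    flagged = tracker.record("198.51.100.7", now=float(i) * 0.1)
print(flagged)  # True: well over 100 requests inside one minute
```

The same shape of counter, keyed on requests to a login page, is the essence of brute-force detection: only the event being counted changes.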

The Demand for Logic at the Edge

The customer policy rule engine has proven to be a powerful tool when you need to perform specialized adjustments for specific use cases. Advanced options for building custom security policies complement the default prevention settings in our cloud WAF for complete protection.  

But as organizations of all sizes shift to the cloud and our enterprise customer list has expanded rapidly in turn, the demand for more edge functionality has also increased. We’ve heard time and again that our customers find it very appealing to be able to add advanced logic rules at the edge.

Last year we announced the rollout of advanced Application Delivery Rules. And while we invested in building and improving the engine which drives both of these rule sets, we homed in on the need to (1) develop advanced filters which can help address emerging threats and exploitation and (2) expose logical parts of our classification engine and bot protection layers previously hidden from customer view.

The Move to More Control and Visibility

The newly visible Rules will provide a much more powerful tool for savvy customers who need to tune their policies in order to handle complex cases. This approach is yet another step we’re taking to put enhanced visibility and extended control in our customers’ hands and to stay ahead of the curve in handling the most sophisticated and automated attacks out there.

The post Imperva Increases Self-Service Capability Fourfold with Custom Security Rules appeared first on Blog.

A week in security (January 7 – 13)

Last week on the Malwarebytes Labs blog, we took a look at the Ryuk ransomware attack causing trouble over the holidays, as well as a ransom threat for an Irish transportation company. We explored the realm of SSN scams, and looked at what happens when an early warning system is attacked.

Other cybersecurity news

  • Password reuse problems. Multiple Reddit accounts reported being locked out after site admins blamed “password reuse” for the issue. (Source: The Register)
  • 85 rogue apps pulled from Play Store. Sadly, not before some 9 million downloads had already taken place. (Source: Trend Micro)
  • Home router risk. It seems many home routers aren’t doing enough in the fight against hackers. (Source: Help Net Security)
  • Deletion not allowed. Some people aren’t happy they can’t remove Facebook from their Samsung phones. (Source: Bloomberg)
  • Takedown: How a system admin brought down the notorious “El Chapo.” (Source: USA Today)
  • 2FA under fire. A new pentest tool called Mantis can be used to assist in the phishing of OTP (one time password) codes. (Source: Naked Security) 
  • Facebook falls foul of new security laws in Vietnam. New rules have brought a spot of bother for Facebook, accused of not removing certain types of content and handing over data related to “fraudulent accounts.” (Source: Vietnam News)
  • Trading site has leak issue. A user on the newly set up trading platform was able to grab a lot of potentially problematic snippets, including authentication tokens and password reset links. (Source: Ars Technica)
  • Local risk to card details. A researcher discovered payment info was being stored locally on machines, potentially exposing them to anyone with physical access. (Source: Hacker One)
  • Facebook exec swatted. The dangerous “gag” of sending armed law enforcement to an address ends up causing problems for a “cybersecurity executive,” after bogus calls claimed they had “pipe bombs all over the place.” (Source: PA Daily Post)

Stay safe, everyone!

The post A week in security (January 7 – 13) appeared first on Malwarebytes Labs.

The Dark Overlord Claims to Have Stolen Secrets of 9/11 Attacks in Law Firm Data Breach

The threat group known as The Dark Overlord has claimed responsibility for a law firm data breach involving files allegedly related to the 9/11 terrorist attacks.

The Dark Overlord first announced on New Year’s Eve that it had stolen files belonging to Lloyd’s of London, Silverstein Properties and Hiscox Syndicates Ltd., according to Motherboard. Although the group’s announcement on the Pastebin messaging service has been deleted, Motherboard confirmed the hack with Hiscox.

The stolen information reportedly includes email and voicemail messages as well as legal files such as non-disclosure strategies and expert witness testimonies.

9/11 Data Held for Ransom

In a Dec. 31 tweet, The Dark Overlord claimed it had managed to steal more than 18,000 secret documents that would provide answers about 9/11 conspiracy theories. Twitter has since suspended the group’s account.

SC Magazine reported that the law firm paid an initial ransom, but then violated the terms of the agreement by reporting the incident to law enforcement. The threat group is now demanding a second ransom be paid in bitcoin and said it will also sell information obtained in the breach to interested third parties on the dark web.

According to a post on Engadget, The Dark Overlord also attempted to prove it had committed the data breach by publishing nonsensitive material from other law firms as well as organizations such as the U.S. Transportation Security Administration (TSA) and Federal Aviation Administration (FAA).

How to Limit the Threat of Groups Like The Dark Overlord

This latest attack from The Dark Overlord is further proof that data breaches can not only create a PR nightmare, but also put organizations’ survival and, in some cases, national security at risk.

Unfortunately, the exact details around how The Dark Overlord accessed the law firm’s network are unknown. Security experts recommend conducting a short but comprehensive 15-minute self-assessment to gauge the organization’s IT security strengths and weaknesses. The results can be benchmarked against similar firms, and security leaders can gain access to the expertise they need to keep groups like The Dark Overlord away from their data.

The post The Dark Overlord Claims to Have Stolen Secrets of 9/11 Attacks in Law Firm Data Breach appeared first on Security Intelligence.

HHS Publishes Voluntary Healthcare Cybersecurity Practices for Medical Organizations

The U.S. Department of Health and Human Services (HHS) released voluntary healthcare cybersecurity practices to help medical organizations strengthen their security posture.

On December 28, HHS released “Health Industry Cybersecurity Practices (HICP): Managing Threats and Protecting Patients” in response to a mandate to develop healthcare cybersecurity standards laid out by the Cybersecurity Act of 2015. More than 150 cybersecurity and healthcare experts from the private and public sectors worked together for two years to fulfill this directive.

The publication is broken down into three sections. The first examines cybersecurity threats confronting the healthcare industry. The second portion identifies weaknesses that render healthcare organizations vulnerable to threats, and the third and final segment outlines strategies that medical entities can use to defend against digital threats.

Healthcare Data Breaches on the Rise

Healthcare data breaches are on the rise. In a study published by the JAMA Network, researchers analyzed all the data security incidents reported to the Office of Civil Rights at HHS between January 2010 and December 2017. They found a total of 2,149 breaches affecting 176.4 million patient records. The annual number of data breaches increased each year during the analyzed time period except 2015, starting with 199 in 2010 and growing to 344 in 2017.

Of the incidents that exposed patients’ personal health information (PHI), 53 percent originated inside the organization. That’s consistent with the Office of the Australian Information Commissioner’s (OAIC) quarterly statistics for Q3 2018. OAIC received 45 data breach notifications from healthcare organizations during the quarter, 56 percent of which resulted from human error.

Healthcare Cybersecurity Best Practices

Security professionals can begin enforcing healthcare cybersecurity best practices by producing creative employee awareness content that specifically appeals to the company’s workforce. Healthcare organizations should also adopt a security immune system strategy that, among other things, uses artificial intelligence (AI) and automation to mitigate risk across the network.

The post HHS Publishes Voluntary Healthcare Cybersecurity Practices for Medical Organizations appeared first on Security Intelligence.

McAfee Named a 2018 Gartner Peer Insights Customers’ Choice for Enterprise Data Loss Prevention

I am excited to announce that McAfee has been recognized as a 2018 Gartner Peer Insights Customers’ Choice for Enterprise Data Loss Prevention. I believe this recognition is a testament that our device-to-cloud DLP integration of enterprise products helps our customers stay on top of evolving security needs, with solutions that are simple, flexible, comprehensive and fast, so they can act decisively and mitigate risks. McAfee takes great pride in being recognized by our customers on Gartner Peer Insights.

In its announcement, Gartner explains, “The Gartner Peer Insights Customers’ Choice is a recognition of vendors in this market by verified end-user professionals, considering both the number of reviews and the overall user ratings.” To ensure fair evaluation, Gartner maintains rigorous criteria for recognizing vendors with a high customer satisfaction rate.

For this distinction, a vendor must have a minimum of 50 published reviews with an average overall rating of 4.2 stars or higher during the sourcing period. McAfee met these criteria for McAfee Data Loss Prevention.

Here are some excerpts from customers that contributed to the distinction:

“McAfee DLP Rocks! Easy to implement, easy to administer, pretty robust”

Security and Privacy Manager in the Services Industry

“Flexible solution. Being able to rapidly deploy additional Discover systems as needed as the company expanded was a huge time saving. Being able to then recover the resources while still being able to complete weekly delta discovery on new files being added or changed saved us tens of thousands of dollars quarterly.”

IT Security Manager in the Finance Industry

“McAfee DLP Endpoint runs smoothly even in limited resource environments and it supports multiple platforms like windows and mac-OS. Covers all major vectors of data leakages such as emails, cloud uploads, web postings and removable media file sharing.”

Knowledge Specialist in the Communication Industry

“McAfee DLP (Host and Network) are integrated and provide a simplified approach to rule development and uniform deployment.”

IT Security Engineer in the Finance Industry

 “Using ePO, it’s easy to deploy and manage the devices with different policies.”

Cyber Security Engineer in the Communication Industry


And those are just a few. You can read more reviews for McAfee Data Loss Prevention on the Gartner site.

On behalf of McAfee, I would like to thank all of our customers who took the time to share their experiences. We are honored to be a 2018 Gartner Peer Insights Customers’ Choice for Enterprise Data Loss Prevention and we know that it is your valuable feedback that made it possible. To learn more about this distinction, or to read the reviews written about our products by the IT professionals who use them, please visit Gartner Peer Insights’ Customers’ Choice.


  • Gartner Peer Insights’ Customers’ Choice announcement December 17, 2018
The GARTNER PEER INSIGHTS CUSTOMERS’ CHOICE badge is a trademark and service mark of Gartner, Inc., and/or its affiliates, and is used herein with permission. All rights reserved. Gartner Peer Insights Customers’ Choice constitute the subjective opinions of individual end-user reviews, ratings, and data applied against a documented methodology; they neither represent the views of, nor constitute an endorsement by, Gartner or its affiliates.

The post McAfee Named a 2018 Gartner Peer Insights Customers’ Choice for Enterprise Data Loss Prevention appeared first on McAfee Blogs.

The Year Ahead: Cybersecurity Trends To Look Out for In 2019

A Proven Record Tracking Cybersecurity Trends

This time of the year is always exciting for us, as we get to take a step back, analyze how we did throughout the year, and look ahead at what the coming year will bring. Taking full advantage of our team’s expertise in data and application security, and mining insights from our global customer base, we’ve decided to take a different approach this time around and focus on three key, and overriding trends we see taking center stage in 2019.

2018 brought with it the proliferation of both data and application security events and, as we predicted, data breaches grew in size and frequency and cloud security took center stage globally. With that in mind, let’s take a look at what next year holds.

Data breaches aren’t going away anytime soon, which will bolster regulation and subsequent compliance initiatives

Look, there’ll be breaches, and the result of that is going to be more regulation and, therefore, more compliance. This is a given. In fact, the average cost of a data breach in the US in 2018 exceeded $7 million.

Whether it’s GDPR, the Australian Privacy Law, Thailand’s new privacy laws or Turkey’s KVKK, it doesn’t matter where you are: regulation is becoming the standard, whether at a regional, group, or individual country level.

Traditionally when we looked at data breaches, the United States lit up the map, but as regulatory frameworks and subsequent compliance measures expand globally, we’re going to see a change.

The annual number of data breaches and exposed records in the United States from 2005 to 2018 (in millions) [Statista]

What you’ll see in 2019, and certainly as we move forward, is a red rosy glow covering the entire globe. In 2019 you’ll hear more of “It’s not just the United States. This happens everywhere.”


Let’s unpack this for a second. If you were going to steal private data or credit card details, why would you do it in an environment that has world-class, or even mediocre, cybersecurity measures in place? If everyone else is even slightly less protected, that’s where you’re going to find people targeting data; we simply hear more about it in regions where regulation and compliance are a major focus.


To that end, we don’t necessarily see 2019 as the year when regulators start hitting companies with massive fines for non-compliance. Maybe by the end of the year, or in cases of outright egregious negligence. By and large, though, you’ll find that companies have put in the legwork when it comes to compliance.

Having your head in the cloud(s) when it comes to managing risk… not a bad idea

McKinsey reports that, by 2020, organizations will be spending more than six times as much on cloud-specific products as on general IT services; and according to a survey by LogicMonitor, up to 83% of all enterprise workloads will be in the cloud around that same time.

LogicMonitor’s Future of the Cloud Study [Forbes]

Organizations continue to capitalize on the business benefits of the digital economy and, as such, end up moving more and more data into the cloud. Now, we’re not saying that this is being done without some forethought, but are they classifying that data as they go along and increasingly open their businesses up to the cloud?


Teams need to recognize that, as they transition their data to the cloud, they must also transition their awareness of what’s in the cloud: who is using it, when they’re using it, and why they’re using it. 2019 isn’t going to be the year that businesses figure out they need to do that. What we will see, however, is increasingly cloud-friendly solutions hitting the market to solve these challenges.

Social Engineering and the rise of AI and machine learning in meeting staffing issues

One of 2019’s most critical developments will be how the cybersecurity industry steps up to meet the increasing pressure on security teams to perform. According to the Global Information Security Workforce Study, the shortage of cybersecurity professionals will hit 1.8 million by 2022, but at the same time, a report by ESG shows just nine percent of millennials are interested in a career in cybersecurity.


What we’re going to see is how AI and machine learning in cybersecurity technology will close the gaps in both numbers and diversity of skills.


Organizations today have to solve the problem of cybersecurity by hiring for a host of specialized competencies: network security, application security, data security, email security and now, cloud security. Whatever the specialty, if it ends in “security,” those skills are crucial to any organization’s security posture.


Here’s the thing: there aren’t a lot of people who can claim to know cloud security, database security, application security, data security, or file security. There just aren’t many. We know that, and we know businesses are trying to solve that problem, most commonly by doing the same old things they’ve always done: more antimalware, more antivirus, more things that don’t work. In some cases, however, they’re turning to AI and trying to solve the problem by leveraging technology. The latter will lead to a shift where organizations dive into subscription services.


There are two facets driving this behavior: the first is the fact that, yes, they realize that they are not the experts, but that there are experts out there. Unfortunately, they just don’t work for them, they work for the companies that are offering this as a service.


Secondly, companies are recognizing that there’s an advantage in going to the cloud, because, and this is a major determining factor, it’s an OpEx, not CapEx. The same thing is true of subscription services whether that be in the cloud or on-prem, it doesn’t matter. Driven by skills shortages and cost, 2019 will see an upswing in subscription services, where organizations are actually solving cybersecurity problems for you.


We should add here, however, that as more organizations turn to AI and machine learning-based decision making for their security controls, attackers will try to leverage that to overcome those same defenses.

Special mention: The ‘trickledown effect’ of Cyberwarfare

The fact is, cyber attacks between nations do happen, and it’s a give-and-take situation. This is the world we live in: right now, quite frankly, these are treated as acceptable types of behavior that won’t necessarily lead to war. But someone still stands to gain.


Specifically, they’re attacking third-party businesses, contractors and financial institutions. That’s why cybersecurity is so important: there needs to be an awareness that somebody might be stealing your data for monetary gain. It might be somebody stealing your data for political gain too, and protecting that data is just as critical, regardless of who’s taking it.


Now, while state-hacking isn’t necessarily an outright declaration of war these days, it doesn’t end there. The trickledown effect of nation-state hacking is particularly concerning, as sophisticated methods used by various governments eventually find their way into the hands of resourceful cybercriminals, typically interested in attacking businesses and individuals.

Repeat offenders

No cybersecurity hit list would be complete without the things that go bump in the night and, while all of them might not necessarily be ballooning, they’ll always be a thorn in security teams’ sides.

  • Following the 2017 Equifax breach, API security made it onto the OWASP Top 10 list and remains there for a good reason. With the expanding use of APIs and challenges in detecting attacks against them, we’ll see attackers continuing to take aim at APIs as a great target for a host of different threats; including brute force attacks, App impersonation, phishing and code injection.
  • Bad actors already understand that crypto mining is the shortest path to making a profit, and continue to hone their techniques to compromise machines in the hope of mining crypto-coins or machines that can access and control crypto-wallets.
  • Low effort, easy money, full anonymity and potentially huge damage to the victim… what’s not to like when it comes to ransomware? It’s unlikely that we’ll see these types of attacks go away anytime soon.


If there’s one overriding theme we’d like to carry with us into 2019 it’s the concept of general threat intelligence, the idea that it’s better to have some understanding of the dangers out there and to do something, rather than nothing at all.


We often talk about the difference between risk and acceptable risk or reasonable risk, and a lot of companies make the mistake of trying to boil the ocean… trying to solve every single problem they can, ultimately leaving teams feeling overwhelmed and short on budget.


Acceptable risk isn’t, “I lost the data because I wasn’t blocking it. I get it. And it wasn’t a huge amount of data because at least I have some controls in place to prevent somebody from taking a million records, because nobody needs to read a million records. Nobody’s going to read a million records. So, why did I let it happen in the first place?”


Acceptable risk is “I know it happened, I accept that it happened, but it’s a reasonable number of events, it’s a reasonable number of records, because the controls I have in place aren’t so specific, aren’t so granular that they solve the whole problem of risk, but they take me to a world of acceptable risk.”


It’s better to begin today, and begin at the size and relevance that you can, even if that only takes you from high to medium risk, or reasonable to acceptable risk.

The post The Year Ahead: Cybersecurity Trends To Look Out for In 2019 appeared first on Blog.

Read: New Attack Analytics Dashboard Streamlines Security Investigations

Attack Analytics, launched this May, aimed to crush the maddening pace of alerts that security teams were receiving. For security analysts unable to triage this avalanche of alerts, Attack Analytics condenses thousands upon thousands of alerts into a handful of relevant, investigable incidents.  Powered by artificial intelligence, Attack Analytics is able to automate what would take a team of security analysts days to investigate and to cut that investigation time down to a matter of minutes.

Building upon the success of our launch, we are now introducing the Attack Analytics Dashboard. Aimed at SOC (Security Operations Center) analysts, managers, and WAF administrators, it provides a high-level summary of the types of security attacks hitting their web applications, helping to speed up security investigations and quickly zoom in on abnormal behaviors.

The WAF admin or the SOC can use the Dashboard to get a high-level summary of the security attacks that have happened over a period of time (the last 24 hours, 7 days, 30 days, 90 days or other customized time range):

  • Attack Trends: Incidents and events
  • Top Geographic Areas: Where attacks have originated
  • Top Attacked Resources
  • Breakdown of Attack Tool Types
  • Top Security Violations (Bad Bots, Illegal Resource Access, SQL injections, Cross-Site Scripting, etc.)

Events vs. incidents

Upon entering the Attack Analytics Dashboard, you can see the Incidents tab, which shows the attack trends across time, classified according to severity (critical, major and minor).  A quick scan allows you to understand if a sudden jump in incidents may deserve immediate attention.

In the Events tab, you can see the number of events vs. incidents which have occurred over a specific period of time. For example – the marked point in the graph shows that on October 4th there were 2,142 alerts that were clustered into 19 security incidents. If you want to understand what happened on this day, you can drill down and investigate these 19 incidents.
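To make the alerts-to-incidents relationship concrete, here is a deliberately simplified sketch of event clustering: raw alerts sharing an attack signature are grouped into a single investigable incident. The grouping key and field names are illustrative assumptions only; Attack Analytics itself uses machine learning rather than a fixed key.

```python
from collections import defaultdict

def cluster_alerts(alerts):
    """Toy clustering: group raw WAF alerts into incidents by a shared
    signature (source, attack type, target host). Illustrative only --
    not Imperva's actual algorithm."""
    incidents = defaultdict(list)
    for alert in alerts:
        key = (alert["source_ip"], alert["attack_type"], alert["host"])
        incidents[key].append(alert)
    return list(incidents.values())

alerts = [
    {"source_ip": "203.0.113.7", "attack_type": "SQLi", "host": "shop.example.com"},
    {"source_ip": "203.0.113.7", "attack_type": "SQLi", "host": "shop.example.com"},
    {"source_ip": "198.51.100.9", "attack_type": "XSS", "host": "blog.example.com"},
]
incidents = cluster_alerts(alerts)
print(len(alerts), "alerts ->", len(incidents), "incidents")  # 3 alerts -> 2 incidents
```

The same idea, scaled up with smarter similarity measures, is how thousands of alerts collapse into a handful of incidents.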

Next, you can see the Top Attack Origin countries which have attacked your websites over a specified period of time. This again could help identify any abnormal behavior from a specific country. In the snapshot below, you can see the “Distributed” incidents. This means that this customer experienced 4 distributed attacks, with no dominant country, and could imply the attacks originated from botnets spread across the world.

Top attacked resources

Top Attacked Resources provides a snapshot of your most attacked web resources by percentage of critical incidents and the total number of incidents. In this example, singular assets are examined as well as a distributed attack across the customer’s assets. In the 3rd row, you can see that the customer (in this case, our own platform) experienced 191 distributed attacks. This means that each attack targeted a few hosts under our brand name; for example, it may have been a scanning attack aimed at finding vulnerable hosts.

Attack tool types

A SOC Manager/WAF admin might also want to understand the type of attack tools that are being used.  In the example below, on the left, you see the distribution of incidents according to the tool types and on the right, you see the drill-down into the malicious tools, so you can better understand your attack landscape. Over the last 90 days, there were 2.38K incidents that used malicious tools. On the right we can see the breakdown of the different tools and the number of incidents for each one – for example, there were 279 incidents with a dominant malicious tool called LTX71.

We think you’ll quickly discover the benefits which the new Attack Analytics Dashboard provides as it helps you pinpoint abnormal behaviors and speed up your security investigations. It should also assist you in providing other stakeholders within your company a high-level look at the value of your WAF.

And right now, we have even more dashboard insight enrichments in the works, such as:

  • False Positives Suspects: Incidents our algorithms predict to be highly probable of being false positives.
  • Community Attacks (Spray and Pray Attacks): Provide a list of incidents that are targeting you as part of a larger campaign – based on information gathered from our crowdsourced customer data.

Stay tuned for more!

The post Read: New Attack Analytics Dashboard Streamlines Security Investigations appeared first on Blog.

Imperva Integration With AWS Security Hub: Expanding Customer Security Visibility

This article explains how Imperva application security integrates with AWS Security Hub to give customers better visibility and feedback on the security status of their AWS hosted applications.

Securing AWS Applications

Cost reduction, simplified operations, and other benefits are driving organizations to move more and more applications onto AWS delivery platforms, since AWS takes care of all the infrastructure maintenance.  As with any migration to a cloud service, however, it’s important to remember that cloud vendors generally implement their services in a Shared Security Responsibility Model.  AWS explains this in a whitepaper available here.

Imperva solutions help diverse enterprise organizations maintain consistent protection across all applications in their IT domain (including AWS) by combining multiple defenses against Layer 3, 4 and 7 Distributed Denial of Service (DDoS) attacks, OWASP Top 10 application security risks, and even zero-day attacks.  Imperva application security is a top-rated solution by both Gartner and Forrester for both WAF and DDoS protection.

Visibility Leads to Better Outcomes

WAF security is further enhanced through Imperva Attack Analytics, which uses machine learning technology to correlate millions of security events across Imperva WAF assets and group them into a small number of prioritized incidents, making security teams more effective by giving them clear and actionable insights.

AWS Security Hub is a new web service that provides a consolidated security view across AWS Services as well as 3rd party solutions.  Imperva has integrated its Attack Analytics platform with AWS Security Hub so that the security incidents Attack Analytics generates can be presented by the Security Hub Console.

Brief Description of How the Integration Works

The integration works by utilizing an interface developed for AWS Security Hub for what is essentially an “external data connector” called a Findings Provider (FP). The FP enables AWS Security Hub to ingest standardized information from Attack Analytics so that the information can be parsed, sorted and displayed. This FP is freely available to Imperva and AWS customers on Imperva’s GitHub page listed at the end of this article.

Figure 1: Screen Shot of Attack Analytics Incidents in AWS Security Hub

The way the data flows between Attack Analytics and AWS Security Hub is that Attack Analytics exports the security incidents into an AWS S3 bucket within a customer account, where the Imperva FP can make it available for upload.

Figure 2: Attack Analytics to AWS Security Hub event flow

To activate AWS Security Hub to use the Imperva FP, customers must configure several things described in the AWS Security Hub documentation. As part of the activation process, the FP running in the customer’s environment needs to acquire a product-import token from AWS Security Hub. Upon activation, the FP is authorized to import findings into the customer’s AWS Security Hub account in the AWS Security Finding Format (ASFF), which happens at configurable time intervals.
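To illustrate the shape of that import, here is a hedged sketch of mapping an exported incident to an ASFF finding dict. The incident field names (`id`, `severity`, `title`) are assumptions, not Imperva's actual export schema; the `batch_import_findings` call noted in the comment is the standard boto3 Security Hub operation.

```python
import datetime

def incident_to_asff(incident, account_id, region):
    """Map an exported Attack Analytics incident (field names here are
    illustrative assumptions) to an AWS Security Finding Format (ASFF)
    dict suitable for BatchImportFindings."""
    now = datetime.datetime.now(datetime.timezone.utc).isoformat()
    return {
        "SchemaVersion": "2018-10-08",
        "Id": incident["id"],
        # Default product ARN for custom integrations in your own account.
        "ProductArn": f"arn:aws:securityhub:{region}:{account_id}:product/{account_id}/default",
        "GeneratorId": "imperva-attack-analytics",
        "AwsAccountId": account_id,
        "Types": ["TTPs"],  # finding-type taxonomy namespace
        "CreatedAt": now,
        "UpdatedAt": now,
        "Severity": {"Label": incident["severity"].upper()},
        "Title": incident["title"],
        "Description": incident.get("description", ""),
    }

finding = incident_to_asff(
    {"id": "inc-42", "severity": "critical", "title": "SQLi burst against /login"},
    account_id="123456789012", region="us-east-1",
)
# To import: boto3.client("securityhub").batch_import_findings(Findings=[finding])
print(finding["Severity"]["Label"])  # CRITICAL
```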

It’s critically important that organizations maintain robust application security controls as they build or migrate applications to AWS architectures.  Imperva helps organizations ensure every application instance can be protected against both known and zero-day threats, and through integration with AWS Security Hub, Imperva Attack Analytics can ensure organizations always have the most current and most accurate status of their enterprise application security posture.


Security Hub is initially being made available as a public preview.  We are currently looking for existing Attack Analytics customers who are interested in working with us to refine our integration. If you’re interested in working with us on this, please get in touch.  Once Security Hub becomes generally available, we intend to release our Security Hub integration as an open source project on Imperva’s GitHub account.

The post Imperva Integration With AWS Security Hub: Expanding Customer Security Visibility appeared first on Blog.

Hyperbole in Breach Reporting

While reading the news this morning about yet another successful data breach, I couldn't help but wonder if the hyperbole used in reporting about data breaches is stifling our ability to educate key stakeholders on what they really need to know.

Today's example is about a firm that many rely on for security strategy, planning, and execution. The article I read stated that they were "targeted by a sophisticated hack" but later explains that the attacker compromised a privileged account that provided unrestricted "access to all areas". And, according to sources, the account only required a basic password with no two-step or multi-factor authentication. That doesn't sound too sophisticated, does it? Maybe they brute-forced it, or maybe they just guessed the password (or found it written down in an office?)

It reminded me of an attack on a security vendor back in 2011. As I recall, there was a lot of talk of the sophistication and complexity of the attack. It was called an Advanced Persistent Threat (and maybe some aspects of it were advanced). But, when the facts came out, an employee simply opened an email attachment that introduced malware into the environment - again, not overly sophisticated in terms of what we think a hack to be.

The quantity, availability, and effectiveness of attack techniques are enough to make anyone uncomfortable with their security posture. I previously wrote about a German company who, in a breach response, wrote that it is "virtually impossible to provide viable protection against organized, highly professional hacking attacks." CISOs are being told that they should expect to be breached. The only questions are about when and how to respond. It makes you feel like there's no hope; like there's no point in trying.

However, if you look at the two examples above that were described as highly sophisticated, they may have been avoided with simple techniques such as employee education, malware detection, and multi-factor authentication. I don't mean to over-simplify. I'm not saying it's all easy or that these companies are at-fault or negligent. I'm just calling for less hyperbole in the reporting. Call out the techniques that help companies avoid similar attacks. Don't describe an attack as overly sophisticated if it's not. It makes people feel even more helpless when, perhaps, there are some simple steps that can be taken to reduce the attack surface.

I'd also advocate for more transparency from those who are attacked. Companies shouldn't feel like they have to make things sound more complicated or sophisticated than they are. There's now a growing history of reputable companies (including in the security industry) who have been breached. If you're breached, you're in good company. Let's talk in simple terms about the attacks that happen in the real world. An "open kimono" approach will be more effective at educating others in prevention. And again, less hyperbole - we don't need to overplay to emotion here. Everyone is scared enough. We know the harsh reality of what we (as security professionals) are facing. So, let's strive to better understand the real attack surface and how to prioritize our efforts to reduce the likelihood of a breach.

Encryption would NOT have saved Equifax

I read a few articles this week suggesting that the big question for Equifax is whether or not their data was encrypted. The State of Massachusetts, speaking about the lawsuit it filed, said that Equifax "didn't put in safeguards like encryption that would have protected the data." Unfortunately, encryption, as it's most often used in these scenarios, would not have actually prevented the exposure of this data. This breach will have an enormous impact, so we should be careful to get the facts right and provide as much education as possible to lawmakers and really to anyone else affected.

We know that the attack took advantage of a flaw in Apache Struts (that should have been patched). Struts is a framework for building applications. It lives at the application tier. The data, obviously, resides at the data tier. Once the application was compromised, it really doesn't matter if the data was encrypted because the application is allowed to access (and therefore to decrypt) the data.

I won't get into all the various encryption techniques that are possible but there are two common types of data encryption for these types of applications. There's encryption of data in motion so that nobody can eavesdrop on the conversation as data moves between tiers or travels to the end users. And there's encryption of data at rest that protects data as it's stored on disk so that nobody can pick up the physical disk (or the data file, depending on how the encryption is applied) and access the data. Once the application is authenticated against the database and runs a query against the data, it is able to access, view, and act upon the data even if the data was encrypted while at rest.

Note that there is a commonly-applied technique that applies at-rest encryption at the application tier. I don't want to confuse the conversation with too much detail, but it usually involves inserting some code into the application to encrypt/decrypt. I suspect that if the application is compromised then app-tier encryption would have been equally unhelpful.

The bottom line here is that information security requires a broad, layered defense strategy. There are numerous types of attacks. A strong security program addresses as many potential attack vectors as possible within reason. (My use of "within reason" is a whole other conversation. Security strategies should evaluate risk in terms of likelihood of an attack and the damage that could be caused.) I already wrote about a layered approach to data protection within the database tier. But that same approach of layering security applies to application security (and information security in general). You have to govern the access controls, ensure strong enough authentication, understand user context, identify anomalous behavior, encrypt data, and, of course, patch your software and maintain your infrastructure. This isn't a scientific analysis. I'm just saying that encryption isn't a panacea and probably wouldn't have helped at all in this case.

Equifax says that their "security organization was aware of this vulnerability at that time, and took efforts to identify and to patch any vulnerable systems in the company's IT infrastructure." Clearly, humans need to rely on technology to help identify what systems exist in the environment, what software is installed, which versions, etc. I have no idea what tools Equifax might have used to scan their environment. Maybe the tool failed to find this install. But their use of "at that time" bothers me too. We can't rely on point-in-time assessments. We need continuous evaluations on a never ending cycle. We need better intelligence around our IT infrastructures. And as more workloads move to cloud, we need a unified approach to IT configuration compliance that works across company data centers and multi-cloud environments.

100% protection may be impossible. The best we can do is weigh the risks and apply as much security as possible to mitigate those risks. We should also all be moving to a continuous compliance model where we are actively assessing and reassessing security in real time. And again... layer, layer, layer.

Layered Database Security in the age of Data Breaches

We live in a time of daily breach notifications. One recently affected organization in Germany put out a statement which said: "The incident is not attributable to security deficiencies." and "Human error can also be ruled out." They went on to say that it is "virtually impossible to provide viable protection against organized, highly professional hacking attacks." It's a tough climate we find ourselves in. It just feels too hard or impossible at times. And there's some truth to that. There are way too many potential attack vectors for comfort.

Many breaches occur in ways that make it difficult to pinpoint exactly what might have prevented it. Or, the companies involved hide details about what actually happened or how. In some cases, they lie. They might claim there was some Advanced Persistent Threat on the network when in reality, it was a simple phishing attack where credentials were simply handed over.

In one recent case, a third party vendor apparently uploaded a database file to an unsecured Amazon AWS server. A media outlet covering the story called out that it was not hacking because the data was made so easily available. Numerous checkpoints come to mind that each could have prevented or lessened the damage in this scenario. I’d like to paint a picture of the numerous layers of defense that should be in place to help prevent this type of exposure.

Layer 1: Removing Production Data
The data should have been long removed from the database.
Assuming this is a non-production database (and I sure hope it is), it should have been fully masked before it was even saved as a file. Masking data means completely removing the original sensitive data and replacing it with fake data that looks and acts real. This enables safe use of the database for app development, QA, and testing. Data can be masked as it’s exported from the production database (most secure) or in a secure staging environment after the initial export. Had this step been done, the database could safely be placed on an insecure AWS server with limited security concerns because there’s no real data. An attacker could perhaps use the DB schema or other details to better formulate an attack on the production data, so I’m not recommending posting masked databases publicly, but the risk of data loss is severely limited once the data is masked.
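A minimal sketch of what masking on export can look like, assuming a simple row format; production masking tools preserve formats, referential integrity, and uniqueness far more rigorously than this:

```python
import hashlib
import random

def mask_row(row: dict, seed: str = "masking-salt") -> dict:
    """Replace sensitive fields with realistic-looking fake values.
    Deterministic per input value, so the same original SSN always maps
    to the same fake one and joins across tables still work.
    Illustrative sketch only."""
    rng = random.Random(hashlib.sha256((seed + row["ssn"]).encode()).digest())
    masked = dict(row)
    # 900-series SSNs are never issued, so fakes can't collide with real ones.
    masked["ssn"] = f"{rng.randint(900, 999)}-{rng.randint(10, 99)}-{rng.randint(1000, 9999)}"
    masked["email"] = f"user{rng.randint(100000, 999999)}@example.com"
    return masked

row = {"name": "J. Doe", "ssn": "078-05-1120", "email": "jdoe@corp.example"}
masked = mask_row(row)
print(masked["ssn"] != row["ssn"])  # True: original SSN is gone
```

Run at export time, every row that leaves production carries only fake data, so a leaked database file exposes nothing real.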

Layer 2: Secure Cloud Server Configuration
The researcher should never have been able to get to the file.
A security researcher poking around the web should never have been able to access this database file. Proper server configuration and access controls should prevent unauthorized access to any files (including databases). In addition to documenting proper security configuration, certain Cloud Security Access Brokers can be used to continuously monitor AWS instances to ensure that server configurations match the corporate guidelines. Any instances of configuration drift can be auto-remediated with these solutions to ensure that humans don’t accidentally misconfigure servers or miss security settings in the course of daily administration.
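As a sketch of the configuration-drift idea, the check below compares an instance's settings against a hypothetical corporate baseline; a real CASB/CSPM product would pull live settings from the AWS APIs (e.g. via boto3) and auto-remediate rather than just report.

```python
# Hypothetical corporate baseline -- the keys and expected values here
# are illustrative, not any vendor's actual policy schema.
BASELINE = {
    "public_read": False,
    "encryption_enabled": True,
    "logging_enabled": True,
}

def config_drift(instance_config: dict) -> list:
    """Return (setting, actual, expected) for every baseline violation."""
    return [
        (key, instance_config.get(key), expected)
        for key, expected in BASELINE.items()
        if instance_config.get(key) != expected
    ]

drifted = config_drift(
    {"public_read": True, "encryption_enabled": True, "logging_enabled": False}
)
for key, actual, expected in drifted:
    print(f"DRIFT: {key} is {actual}, expected {expected}")
```

Run continuously, a check like this catches the misconfigured server before a researcher (or attacker) does.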

Layer 3: Apply Database Encryption
Even with access to the database file, the researcher should not have been able to access the data.
At-rest data encryption that is built into the database protects sensitive data against this type of scenario. Even if someone has the database file, if it were encrypted, the file would essentially be useless. An attacker would have to implement an advanced crypto attack which would take enormous resources and time to conduct and is, for all intents and purposes, impractical. Encryption is a no-brainer. Some organizations use disk-layer encryption, which is OK in the event of lost or stolen disk. However, if a database file is moved to an unencrypted volume, it is no longer protected. In-database encryption improves security because the security stays with the file regardless of where it’s moved or exported. The data remains encrypted and inaccessible without the proper encryption keys regardless of where the database file is moved.

Layer 4: Apply Database Administrative Controls
Even with administrative permissions to the database, the researcher should not have been able to access the sensitive data.
I’m not aware of similar capabilities outside of Oracle database, but Oracle Database Vault would have also prevented this breach by implementing access controls within the database. Database Vault effectively segregates roles (enforces Separation of Duties) so that even an attacker with DBA permissions and access to the database file and encryption keys cannot run queries against the sensitive application data within the database because their role does not allow it. This role-based access, enforced within the database, is an extremely effective control to avoid accidental access that may occur throughout the course of daily database administration.

Layer 5: Protect Data Within the Database
Even with full authorization to application data, highly sensitive fields should be protected within the database.
Assuming all of the other layers break down and you have full access to the unencrypted database file and credentials that are authorized to access the sensitive application data, certain highly sensitive fields should be protected via application-tier encryption. Social Security Numbers and Passwords, for example, shouldn’t be stored in plain text. By applying protection for these fields at the app layer, even fully authorized users wouldn’t have access. We all know that passwords should be hashed so that the password field is only useful to the individual user who enters their correct password. But other fields, like SSN, can be encrypted at the app layer to protect against accidental exposure (human error), intentional insider attack, or exposed credentials (perhaps via phishing attack).
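A minimal sketch of the app-layer password hashing described above, using only Python's standard library (PBKDF2 with a per-user salt). The iteration count and salt handling here are illustrative; a production system would follow current password-storage guidance.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    """Derive a one-way PBKDF2 hash; the stored value is useless on its own."""
    salt = salt or os.urandom(16)  # fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, stored_digest, iterations=200_000):
    """Re-derive the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

Fields like SSN that must be recoverable would instead use reversible application-tier encryption, with the keys held outside the database, rather than a one-way hash.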

Maybe the vendor didn’t follow the proper protocols instituted by the organization. Maybe they made a human error; we all make mistakes. But, that’s why a layered approach to database security is critical on any database instances where sensitive production data resides. Security protocols shouldn’t require humans to make the right decisions. They should apply security best practices by default and without option.

Assuming this was a non-production database, any sensitive data should have been fully masked or replaced before it was ever made available. And if it was a production DB, database encryption and access controls that stay with the database (during export, or if the file is moved off an encrypted volume) should have been applied. The data should have been protected before the vendor's analyst ever got his or her hands on it. Oracle Database Vault would have prevented even a DBA-type user from accessing the sensitive user data that was exposed here. These are not new technologies; they've been around for many years with plentiful documentation and industry awareness.

Unfortunately, a few of the early comments I read on this particular event were declarations or warnings about how this proves that cloud is less secure than on-premises deployments. I don't agree. Many cloud services are configured with security by default and offer far more protection than company-owned data centers. Companies should seek cloud services that enable security by default and offer layered security controls, providing more protection than their own data centers could. It's about more than selecting the right Cloud Service Provider. You also need to choose the right service, one that matches the specific needs (including security needs) of your current project. The top CSPs offer multiple IaaS and/or PaaS options that may meet the basic project requirements. While cloud computing grew popular because it's easy and low cost, ease of use and cost are not always the most important factors when choosing the right cloud service. When sensitive data is involved, security needs to be weighed heavily when making service decisions.

I'll leave you with this. Today's computing landscape is extremely complex and constantly changing. But security controls are evolving to address what has been called the extended enterprise (which includes cloud computing and user mobility among other characteristics). Don't leave security in the hands of humans. And apply security in layers to cover as many potential attack vectors as possible. Enable security by default and apply automated checks to ensure that security configuration guidelines are being followed.

Note: Some of the content above is based on my understanding of Oracle security products (encryption, masking, CASB, etc.) Specific techniques or advantages mentioned may not apply to other vendors’ similar solutions.

Will you pay $300 and give scammers remote control of your computer? Child's play for this BPO

Microsoft customers in Arizona were scammed by a BPO set up by fraudsters, whose executives represented themselves as Microsoft employees and convinced victims that, for a $300 charge, they would enhance the performance of their desktop computers.

Once a customer signed up, a BPO technician logged on using remote access software that provided full control over the desktop, then proceeded to delete trash and cache files, sometimes also scanning for personal information. The unsuspecting customer ended up with, at best, a marginal improvement in performance. After a year of operation, the Indian police nabbed the three men behind the scheme along with eleven of their employees.

There were several aspects of this case, “Pune BPO which cheated Microsoft Clients in the US busted”, that I found interesting:

1) The ease with which customers were convinced to part with money and allow an unknown third party to take remote control of their computers. With remote control, an attacker can also install malicious files that act as a backdoor or spyware, leaving the machine vulnerable.
2) The criminals had in their possession a list of one million Microsoft customers with up-to-date contact information.
3) Fortunately, the Indian government takes a hard line on cybercrime both within and beyond its shores, which resulted in the arrests. In certain other countries, crimes like these continue unhindered.

Cybercitizens should ensure that they never surrender remote access to their computers or install software unless it comes from a trusted source.

Deep Data Governance

One of the first things to catch my eye this week at RSA was a press release by STEALTHbits on their latest Data Governance release. They're a long-time player in DG and, as a former employee, I know them fairly well. And where they're taking DG is pretty interesting.

The company has recently merged its enterprise Data (files/folders) Access Governance technology with its DLP-like ability to locate sensitive information. The combined solution enables you to locate servers, identify file shares, assess share and folder permissions, lock down access, review file content to identify sensitive information, monitor activity to look for suspicious activity, and provide an audit trail of access to high-risk content.

The STEALTHbits solution is pragmatic because you can tune where it looks, how deep it crawls, where you want content scanning, where you want monitoring, etc. I believe the solution is unique in the market, and a number of IAM vendors agree, having chosen STEALTHbits as a partner of choice for gathering Data Governance information into their Enterprise Access Governance solutions.

Learn more at the STEALTHbits website.

IAM for the Third Platform

As more people are using the phrase "third platform", I'll assume it needs no introduction or explanation. The mobile workforce has been mobile for a few years now. And most organizations have moved critical services to cloud-based offerings. It's not a prediction, it's here.

The two big components of the third platform are mobile and cloud. I'll talk about both.


A few months back, I posed the question "Is MAM Identity and Access Management's next big thing?" and since I did, it's become clear to me that the answer is a resounding YES!

Today, I came across a blog entry explaining why Android devices are a security nightmare for companies. The pain is easy to see. OS Updates and Security Patches are slow to arrive and user behavior is, well... questionable. So organizations should be concerned about how their data and applications are being accessed across this sea of devices and applications. As we know, locking down the data is not an option. In the extended enterprise, people need access to data from wherever they are on whatever device they're using. So, the challenge is to control the flow of information and restrict it to proper use.

So, here's a question: is MDM the right approach to controlling access for mobile users? Do you really want to stand up a new technology silo that manages end-user devices? Is that even practical? I think certain technologies live a short life because they quickly get passed over by something new and better (think electric typewriters). MDM is one of those. Although it's still fairly new and good at what it does, I would make the claim that MDM is antiquated technology. In a BYOD world, people don't want to turn control of their devices over to their employers. The age of enterprises controlling devices went out the window with Blackberry's market share.

Containerization is where it's at. With App Containerization, organizations create a secure virtual workspace on mobile devices that enables corporate-approved apps to access, use, edit, and share corporate data while protecting that data from escape to unapproved apps, personal email, OS malware, and other on-device leakage points. For enterprise use-case scenarios, this just makes more sense than MDM. And many of the top MDM vendors have validated the approach by announcing MAM offerings. Still, these solutions maintain a technology silo specific to remote access which doesn't make much sense to me.

As an alternate approach, let's build MAM capabilities directly into the existing Access Management platform. Access Management for the third platform must accommodate mobile device use-cases. There's no reason to manage mobile device access differently than desktop access. It's the same applications, the same data, and the same business policies. User provisioning workflows should accommodate provisioning of mobile apps and data rights just as they've been extended to provision Privileged Account rights. You don't want or need separate silos.


The same can be said for cloud-hosted apps. Cloud apps are simply part of the extended enterprise and should also be managed via the enterprise Access Management platform.

There's been a lot of buzz in the IAM industry about managing access (and providing SSO) to cloud services. There have even been a number of niche vendors pop up that provide that as their primary value proposition. But the core technologies of these stand-alone solutions are nothing new. In most cases, it's basic federation. In some cases, it's ESSO-style form-fill. But there's no magic to delivering SSO to SaaS apps. In fact, it's typically easier than SSO to enterprise apps because SaaS infrastructures are newer and support newer standards and protocols (SAML, REST, etc.).

My Point

I guess if I had to boil this down, I'm really just trying to dispel the myths about mobile and cloud solutions. When you get past the marketing jargon, we're still talking about Access Management and Identity Governance. Some of the new technologies are pretty cool (containerization solves some interesting, complex problems related to BYOD). But in the end, I'd want to manage enterprise access in one place with one platform. One Identity, One Platform. I wouldn't stand up an IDaaS solution just to have SSO to cloud apps. And I wouldn't want to introduce an MDM vendor to control access from mobile devices.

The third platform simply extends the enterprise beyond the firewall. The concept isn't new and the technologies are mostly the same. As more and newer services adopt common protocols, it gets even easier to support increasingly complex use-cases. An API Gateway, for example, allows a mobile app to access legacy mainframe data over REST protocols. And modern Web Access Management (WAM) solutions perform device fingerprinting to increase assurance and reduce risk while delivering an SSO experience. Mobile Security SDKs enable organizations to build their own apps with native security that's integrated with the enterprise WAM solution (this is especially valuable for consumer-facing apps).

And all of this should be delivered on a single platform for Enterprise Access Management. That's third-platform IAM.

Virtual Directory as Database Security

I've written plenty of posts about the various use-cases for virtual directory technology over the years. But, I came across another today that I thought was pretty interesting.

Think about enterprise security from the viewpoint of the CISO. There are numerous layers of overlapping security technologies that work together to reduce risk to a point that's comfortable. Network security, endpoint security, identity management, encryption, DLP, SIEM, etc. But even when these solutions are implemented according to plan, I still see two common gaps that need to be taken more seriously.

One is control over unstructured data (file systems, SharePoint, etc.). The other is back-door access to application databases. There is a ton of sensitive information exposed through those two avenues that isn't protected by the likes of SIEM solutions or IAM suites. Even DLP solutions tend to focus on perimeter defense rather than who has access. STEALTHbits has solutions to fill the gaps for unstructured data and for Microsoft SQL Server, so I spend a fair amount of time talking to CISOs and their teams about these issues.

While reading through some IAM industry materials today, I found an interesting write-up on how Oracle is using its virtual directory technology to solve the problem for Oracle database customers. Oracle's IAM suite leverages Oracle Virtual Directory (OVD) as an integration point with an Oracle database feature called Enterprise User Security (EUS). EUS enables database access management through an enterprise LDAP directory (as opposed to managing a spaghetti mapping of users to database accounts and the associated permissions).

By placing OVD in front of EUS, you get instant LDAP-style management (and IAM integration) without a long, complicated migration process. Pretty compelling use-case. If you can't control direct database permissions, your application-side access controls seem less important. Essentially, you've locked the front door but left the back window wide open. Something to think about.

Game-Changing Sensitive Data Discovery

I've tried not to let my blog become a place where I push products made by my employer. It just doesn't feel right and I'd probably lose some portion of my audience. But I'm making an exception today because I think we have something really compelling to offer. Would you believe me if I said we have game-changing DLP data discovery?

How about a data discovery solution that costs zero to install? No infrastructure and no licensing. How about a solution that you can point at specific locations, choosing specific criteria to look for, and get results back in minutes? How about a solution that profiles file shares according to risk so you can target your scans according to need? And if you find sensitive content, you can choose to unlock the details using credits, which are bundle-priced.

Game Changing. Not because it's the first or only solution that can find sensitive data (credit card info, national ID numbers, health information, financial docs, etc.) but because it's so accessible. Because you can find those answers minutes after downloading. And you can get a sense of your problem before you pay a dime. There are even free credits to let you test the waters for a while.

But don't take our word for it. Here are a few of my favorite quotes from early adopters: 
“You seem to have some pretty smart people there, because this stuff really works like magic!”

"StealthSEEK is a million times better than [competitor]."

"We're scanning a million files per day with no noticeable performance impacts."

"I love this thing."

StealthSEEK has already found numerous examples of system credentials, health information, financial docs, and other sensitive information that wasn't previously known about.

If I've piqued your interest, give StealthSEEK a chance to find sensitive data in your environment. I'd love to hear what you think. If you can give me an interesting use-case, I can probably smuggle you a few extra free credits. Let me know.

Data Protection ROI

I came across a couple of interesting articles today related to ROI around data protection. I recently wrote a whitepaper for STEALTHbits on the Cost Justification of Data Access Governance. It's often top of mind for security practitioners who know they need help but have trouble justifying the acquisition and implementation costs of related solutions. Here are today's links:

KuppingerCole -
The value of information – the reason for information security

Verizon Business Security -
Ask the Data: Do “hacktivists” do it differently?

Visit the STEALTHbits site for information on Access Governance related to unstructured data and to track down the paper on cost justification.