Category Archives: Data Security

McAfee Blogs: McAfee Named a 2018 Gartner Peer Insights Customers’ Choice for Enterprise Data Loss Prevention

I am excited to announce that McAfee has been recognized as a 2018 Gartner Peer Insights Customers’ Choice for Enterprise Data Loss Prevention. I believe this recognition is a testament that our device-to-cloud DLP integration of enterprise products helps our customers stay on top of evolving security needs, with solutions that are simple, flexible, comprehensive and fast, so they can act decisively and mitigate risks. McAfee takes great pride in being recognized by our customers on Gartner Peer Insights.

In its announcement, Gartner explains, “The Gartner Peer Insights Customers’ Choice is a recognition of vendors in this market by verified end-user professionals, considering both the number of reviews and the overall user ratings.” To ensure fair evaluation, Gartner maintains rigorous criteria for recognizing vendors with a high customer satisfaction rate.


For this distinction, a vendor must have a minimum of 50 published reviews with an average overall rating of 4.2 stars or higher during the sourcing period. McAfee met these criteria for McAfee Data Loss Prevention.

Here are some excerpts from customers that contributed to the distinction:

“McAfee DLP Rocks! Easy to implement, easy to administer, pretty robust”

Security and Privacy Manager in the Services Industry

“Flexible solution. Being able to rapidly deploy additional Discover systems as needed as the company expanded was a huge time saving. Being able to then recover the resources while still being able to complete weekly delta discovery on new files being added or changed saved us tens of thousands of dollars quarterly.”

IT Security Manager in the Finance Industry

“McAfee DLP Endpoint runs smoothly even in limited resource environments and it supports multiple platforms like windows and mac-OS. Covers all major vectors of data leakages such as emails, cloud uploads, web postings and removable media file sharing.”

Knowledge Specialist in the Communication Industry

“McAfee DLP (Host and Network) are integrated and provide a simplified approach to rule development and uniform deployment.”

IT Security Engineer in the Finance Industry

 “Using ePO, it’s easy to deploy and manage the devices with different policies.”

Cyber Security Engineer in the Communication Industry

 

And those are just a few. You can read more reviews for McAfee Data Loss Prevention on the Gartner site.

On behalf of McAfee, I would like to thank all of our customers who took the time to share their experiences. We are honored to be a 2018 Gartner Peer Insights Customers’ Choice for Enterprise Data Loss Prevention and we know that it is your valuable feedback that made it possible. To learn more about this distinction, or to read the reviews written about our products by the IT professionals who use them, please visit Gartner Peer Insights’ Customers’ Choice.

 

  • Gartner Peer Insights’ Customers’ Choice announcement December 17, 2018
The GARTNER PEER INSIGHTS CUSTOMERS’ CHOICE badge is a trademark and service mark of Gartner, Inc., and/or its affiliates, and is used herein with permission. All rights reserved. Gartner Peer Insights Customers’ Choice constitute the subjective opinions of individual end-user reviews, ratings, and data applied against a documented methodology; they neither represent the views of, nor constitute an endorsement by, Gartner or its affiliates.

The post McAfee Named a 2018 Gartner Peer Insights Customers’ Choice for Enterprise Data Loss Prevention appeared first on McAfee Blogs.



McAfee Blogs


The Year Ahead: Cybersecurity Trends To Look Out for In 2019

A Proven Record Tracking Cybersecurity Trends

This time of the year is always exciting for us, as we get to take a step back, analyze how we did throughout the year, and look ahead at what the coming year will bring. Taking full advantage of our team’s expertise in data and application security, and mining insights from our global customer base, we’ve decided to take a different approach this time around and focus on three key, and overriding trends we see taking center stage in 2019.

2018 brought with it the proliferation of both data and application security events and, as we predicted, data breaches grew in size and frequency and cloud security took center stage globally. With that in mind, let’s take a look at what next year holds.

Data breaches aren’t going away anytime soon, which will bolster regulation and subsequent compliance initiatives

Look, there will be breaches, and the result will be more regulation and, therefore, more compliance; this is a given. In fact, the average cost of a data breach in the US in 2018 exceeded $7 million.

Whether it’s GDPR, the Australian Privacy Law, Thailand’s new privacy laws or Turkey’s KVKK; it doesn’t matter where you are, regulation is becoming the standard whether it be a regional, group, or an individual country standard.

Traditionally when we looked at data breaches, the United States lit up the map, but as regulatory frameworks and subsequent compliance measures expand globally, we’re going to see a change.

The annual number of data breaches and exposed records in the United States from 2005 to 2018 (in millions) [Statista]

What you’ll see in 2019, and certainly as we move forward, is a red, rosy glow covering the entire globe. In 2019 you’ll hear more of, “It’s not just the United States. This happens everywhere.”

 

Let’s unpack this for a second. If you were going to steal private data or credit card details, why would you do it in an environment that has world-class, or even mediocre, cybersecurity measures in place? If everyone else is even slightly less protected, that’s where you’re going to find people targeting data, but we hear more about it in regions where regulation and compliance are a major focus.

 

To that end, we don’t necessarily see 2019 as the year regulators start hitting companies with massive fines for noncompliance. Maybe by the end of the year, or in cases of outright egregious negligence. For the most part, though, you’ll find that companies have put in the legwork when it comes to compliance.

Having your head in the cloud(s) when it comes to managing risk… not a bad idea

McKinsey reports that, by 2020, organizations will be spending more than six times as much on cloud-specific products as on general IT services; and according to a survey by LogicMonitor, up to 83% of all enterprise workloads will be in the cloud around that same time.

LogicMonitor’s Future of the Cloud Study [Forbes]

Organizations continue to capitalize on the business benefits of the digital economy and, as such, end up moving more and more data into the cloud. Now, we’re not saying that this is being done without some forethought, but are they classifying data as they go along, as they increasingly open their businesses up to the cloud?

 

Teams need to recognize that, as they transition their data to the cloud, they must also transition their awareness of what’s in the cloud: who is using it, when they’re using it, and why they’re using it. 2019 isn’t going to be the year that businesses figure out they need to do that. What we will see, however, is increasingly cloud-friendly solutions hitting the market to solve these challenges.

Social Engineering and the rise of AI and machine learning in meeting staffing issues

One of 2019’s most critical developments will be how the cybersecurity industry steps up to meet the increasing pressure on security teams to perform. According to the Global Information Security Workforce Study, the shortage of cybersecurity professionals will hit 1.8 million by 2022, but at the same time, a report by ESG shows just nine percent of millennials are interested in a career in cybersecurity.

 

What we’re going to see is how AI and machine learning in cybersecurity technology will close the gaps in both numbers and diversity of skills.

 

Organizations today have to solve the problem of cybersecurity by hiring for a host of specialized competencies: network security, application security, data security, email security and, now, cloud security. Whatever the domain, those security skills are crucial to any organization’s security posture.

 

Here’s the thing: there aren’t a lot of people who can claim to know cloud security, database security, application security, data security, or file security. There just aren’t many. We know that, and we know businesses are trying to solve that problem, often by doing the same old things they’ve always done, which is the most common response: more antimalware, more antivirus, more of the things that don’t work. In some cases, however, they’re turning to AI and trying to solve the problem by leveraging technology. The latter will lead to a shift where organizations dive into subscription services.

 

There are two facets driving this behavior. The first is the fact that, yes, organizations realize they are not the experts, but that there are experts out there. Unfortunately, those experts just don’t work for them; they work for the companies offering this as a service.

 

Secondly, companies are recognizing that there’s an advantage in going to the cloud because, and this is a major determining factor, it’s OpEx, not CapEx. The same is true of subscription services, whether in the cloud or on-prem. Driven by skills shortages and cost, 2019 will see an upswing in subscription services, where providers are actually solving cybersecurity problems for you.

 

We should add here, however, that as more organizations turn to AI and machine learning-based decision making for their security controls, attackers will try to leverage that to overcome those same defenses.

Special mention: The ‘trickledown effect’ of Cyberwarfare

The fact is, cyber attacks between nations do happen, and it’s a give-and-take situation. This is the world we live in; right now, quite frankly, these are accepted types of behavior that won’t necessarily lead to war. But someone still stands to gain.

 

Specifically, they’re attacking third-party businesses, contractors and financial institutions. That’s why cybersecurity is so important: there needs to be an awareness that somebody might be stealing your data for monetary gain. It might also be somebody stealing your data for political gain, and protecting that data is just as critical, regardless of who’s taking it.

 

Now, while state-hacking isn’t necessarily an outright declaration of war these days, it doesn’t end there. The trickledown effect of nation-state hacking is particularly concerning, as sophisticated methods used by various governments eventually find their way into the hands of resourceful cybercriminals, typically interested in attacking businesses and individuals.

Repeat offenders

No cybersecurity hit list would be complete without the things that go bump in the night and, while all of them might not necessarily be ballooning, they’ll always be a thorn in security teams’ sides.

  • Following the 2017 Equifax breach, API security made it onto the OWASP Top 10 list and remains there for good reason. With the expanding use of APIs and the challenges in detecting attacks against them, we’ll see attackers continue to take aim at APIs as a prime target for a host of different threats, including brute-force attacks, app impersonation, phishing and code injection.
  • Bad actors already understand that crypto mining is the shortest path to making a profit, and continue to hone their techniques to compromise machines in the hope of mining crypto-coins or machines that can access and control crypto-wallets.
  • Low effort, easy money, full anonymity and potentially huge damage to the victim… what’s not to like when it comes to ransomware? It’s unlikely that we’ll see these types of attacks go away anytime soon.
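To make the API brute-force point above concrete, here is a minimal sketch of one common first-line defense: per-client rate limiting over a sliding window. This is an illustrative example only; the class name and thresholds are our own, not tied to any particular product.

```python
import time
from collections import defaultdict, deque

class SlidingWindowRateLimiter:
    """Allow at most `limit` requests per `window` seconds for each client key."""

    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # client key -> timestamps of recent requests

    def allow(self, client_key, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[client_key]
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # throttle: likely brute-force or scripted abuse
        q.append(now)
        return True
```

In practice the client key might be an API key or source IP, and a rejected call would return HTTP 429; the point is simply that a few lines of state per client can blunt a brute-force attempt.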

 

If there’s one overriding theme we’d like to carry with us into 2019 it’s the concept of general threat intelligence, the idea that it’s better to have some understanding of the dangers out there and to do something, rather than nothing at all.

 

We often talk about the difference between risk and acceptable risk or reasonable risk, and a lot of companies make the mistake of trying to boil the ocean… trying to solve every single problem they can, ultimately leaving teams feeling overwhelmed and short on budget.

 

Acceptable risk isn’t, “I lost the data because I wasn’t blocking it. I get it. And it wasn’t a huge amount of data because at least I have some controls in place to prevent somebody from taking a million records, because nobody needs to read a million records. Nobody’s going to read a million records. So, why did I let it happen in the first place?”

 

Acceptable risk is “I know it happened, I accept that it happened, but it’s a reasonable number of events, it’s a reasonable number of records, because the controls I have in place aren’t so specific, aren’t so granular that they solve the whole problem of risk, but they take me to a world of acceptable risk.”

 

It’s better to begin today, and begin at the size and relevance that you can, even if that only takes you from high to medium risk, or reasonable to acceptable risk.

The post The Year Ahead: Cybersecurity Trends To Look Out for In 2019 appeared first on Blog.

How to Check for Blind Spots in Your Security Program

There are so many delegated operations in any business — finance, legal, physical plant functions, etc. — that any number of them can be easily overlooked. Without checking over every minute detail, the overall business appears to function with minimal involvement.

Of course, there are a thousand invisible hands working in the background to keep everything running as smoothly as possible, and these people often don’t get the recognition they deserve, especially when it comes to your enterprise security program.

What You See Versus What You Get

At a high level, we tend to hold some of the same assumptions about cybersecurity as our personal health: If problems aren’t showing on the outside, everything must be good on the inside.

Management and employees alike are guilty of this approach — it’s just human nature. If money is spent, actions are taken and IT says all is well with security, the rest of the organization simply goes about its business, presumably in good health. But there’s almost always something lurking behind the scenes that the experts have overlooked or chosen to ignore. Therein lies the real challenge with security: The situation is often not what it appears to be.

How Confirmation Bias Works Against Your Security Program

It’s easy to go through the motions to evoke the image of a functioning cybersecurity strategy. That’s what confirmation bias trains our minds to look for. We see what we want to see — such as positive security outcomes — and then we seek the evidence necessary to prove our case.

Below are just a few aspects of security that might create an illusion of protection and resiliency if not appropriately supplemented:

  • A formal security committee that meets periodically to review network events and security projects and develop and implement new policies.
  • Well-written policies and procedures that are communicated to users and enforced where necessary.
  • Oversight of the aspects of security visibility, control and response by managed security services providers, security auditors and consultants.
  • Network, application and endpoint security controls that work to protect users from themselves and keep external attackers away.
  • Cyber insurance coverage that provides a safety net for when a security incident takes place.

While these are indeed elements of a well-run security program, all business leaders must face the reality that their organization might not be quite as resilient as they assume, no matter how much money and resources they’ve devoted to it.

Elements of a Well-Planned Security Program

If not properly formed and managed, security committees can be dysfunctional and end up impeding rather than promoting security. Consider what can be done better or differently to improve communication and oversight.

Likewise, security policies can be too heavily relied upon. Although security documentation is a necessary element that auditors, regulators and others will ask for, it can create more security problems than it solves if it’s not backed up by technical controls and a culture of privacy and security.

It’s essential to communicate your security documentation — standards, policies, procedures — regularly and proactively and enforce it thoroughly. Consider bringing on users and executive management to be part of the solution and help fill security gaps. Conducting security awareness training and soliciting program feedback are two ways to keep people engaged with your security program.

Furthermore, security controls, when not properly integrated, can be a challenge to interface with your network and security management environment. Vulnerability and penetration testing are only as good as your response to the lessons learned. Review those takeaways along with your information risk assessment to determine what gaps exist and attain a reasonable level of security.

When All Else Fails, Trust But Verify

If the proper implementation and oversight are lacking in a security program, the risks can fester, whether leadership fully understands them or not. With the constant, multifaceted activity in security, it’s easy to get distracted and leave stones unturned.

That’s why you should trust but verify: Keep operations running smoothly, but watch for common indicators that a device accessing your network is operated by a malicious actor. Artificial intelligence (AI)-enabled security monitoring can help fill security gaps in your organization.
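As a small illustration of what “watch for common indicators” can mean in practice, here is a minimal sketch that flags devices in an access log against a few simple indicators. The record shape, indicator lists and threshold are assumptions made for the example, not a reference to any specific monitoring product.

```python
from collections import Counter

# Illustrative indicator data; a real deployment would pull these from a
# threat-intelligence feed and tune the threshold to its own traffic.
KNOWN_BAD_IPS = {"203.0.113.9"}
SUSPICIOUS_AGENTS = ("sqlmap", "nikto")   # common attack-tool user-agent strings
REQUEST_THRESHOLD = 100                   # requests per window before flagging

def flag_suspicious(records):
    """records: iterable of (source_ip, user_agent, path) tuples.
    Returns the set of source IPs matching any indicator."""
    flagged = set()
    per_ip = Counter()
    for ip, agent, path in records:
        per_ip[ip] += 1
        if ip in KNOWN_BAD_IPS:
            flagged.add(ip)
        if any(sig in agent.lower() for sig in SUSPICIOUS_AGENTS):
            flagged.add(ip)
    # Abnormally chatty clients get flagged too.
    flagged.update(ip for ip, count in per_ip.items() if count > REQUEST_THRESHOLD)
    return flagged
```

Even a toy triage loop like this demonstrates the principle: verification is cheap relative to the cost of the risks that fester unobserved.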

There’s no such thing as perfect security, but if you closely examine its various functions individually and as a whole, you will almost certainly find room for improvement in areas involving people, process and technology. The important thing is to constantly seek new knowledge, do the best with what you’ve got, think for yourself and hold others accountable. Security has to be an enterprisewide team effort.

The post How to Check for Blind Spots in Your Security Program appeared first on Security Intelligence.

Quantum Cryptography: The next-generation of secure data transmission

Quantum cryptography is secure from all future advances in mathematics and computing, including from the number-crunching abilities of a quantum computer. Data proliferation continues to take place at an ever-accelerating rate.

The post Quantum Cryptography: The next-generation of secure data transmission appeared first on The Cyber Security Place.

New Shamoon Malware Variant Targets Italian Oil and Gas Company

Shamoon is back. One of the most destructive malware families, which caused damage to Saudi Arabia's largest oil producer in 2012, has this time targeted energy sector organizations primarily operating in the Middle East. Earlier this week, Italian oil drilling company Saipem was attacked, and sensitive files on about 10 percent of its servers, mainly in the Middle East, were destroyed.

10 Tips for Protecting Your Company’s Data Against Insider Threats in 2019

Perhaps because of their incredible scope or their shocking prevalence, data breaches are creating a lot of buzz right now. It seems that a new event happens every week.

The post 10 Tips for Protecting Your Company’s Data Against Insider Threats in 2019 appeared first on The Cyber Security Place.

Read: New Attack Analytics Dashboard Streamlines Security Investigations

Attack Analytics, launched this May, aimed to crush the maddening pace of alerts that security teams were receiving. For security analysts unable to triage this avalanche of alerts, Attack Analytics condenses thousands upon thousands of alerts into a handful of relevant, investigable incidents.  Powered by artificial intelligence, Attack Analytics is able to automate what would take a team of security analysts days to investigate and to cut that investigation time down to a matter of minutes.

Building upon the success of our launch, we are now introducing the Attack Analytics Dashboard. Aimed at SOC (Security Operations Center) analysts, managers, and WAF administrators, it provides a high-level summary of the types of security attacks hitting their web applications, helping to speed up security investigations and quickly zoom in on abnormal behaviors.

The WAF admin or the SOC can use the Dashboard to get a high-level summary of the security attacks that have happened over a period of time (the last 24 hours, 7 days, 30 days, 90 days or other customized time range):

  • Attack Trends: Incidents and events
  • Top Geographic Areas: Where attacks have originated
  • Top Attacked Resources
  • Breakdown of Attack Tool Types
  • Top Security Violations (Bad Bots, Illegal Resource Access, SQL injections, Cross-Site Scripting, etc.)

Events vs. incidents

Upon entering the Attack Analytics Dashboard, you can see the Incidents tab, which shows the attack trends across time, classified according to severity (critical, major and minor).  A quick scan allows you to understand if a sudden jump in incidents may deserve immediate attention.

In the Events tab, you can see the number of events vs. incidents which have occurred over a specific period of time. For example – the marked point in the graph shows that on October 4th there were 2,142 alerts that were clustered into 19 security incidents. If you want to understand what happened on this day, you can drill down and investigate these 19 incidents.
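The clustering step described above, where thousands of alerts are condensed into a handful of incidents, can be sketched in miniature. To be clear, this is not the Attack Analytics algorithm; it is an illustrative rule of our own that groups alerts sharing a source and attack type when they arrive within a fixed time gap.

```python
def cluster_alerts(alerts, gap=300):
    """Cluster raw alerts into incidents.

    alerts: iterable of (timestamp, source, attack_type) tuples.
    Alerts with the same (source, attack_type) that arrive within `gap`
    seconds of the previous one join the same incident; otherwise a new
    incident is opened. Returns a list of incidents (lists of alerts).
    """
    incidents = []
    open_incidents = {}  # (source, attack_type) -> index of its latest incident
    for ts, source, attack in sorted(alerts):
        key = (source, attack)
        if key in open_incidents and ts - incidents[open_incidents[key]][-1][0] <= gap:
            incidents[open_incidents[key]].append((ts, source, attack))
        else:
            incidents.append([(ts, source, attack)])
            open_incidents[key] = len(incidents) - 1
    return incidents
```

With a rule like this, a burst of 2,142 alerts from a few sources collapses into a short, investigable list of incidents, which is the effect the dashboard is surfacing.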

Next, you can see the Top Attack Origin countries which have attacked your websites over a specified period of time. This again could help identify any abnormal behavior from a specific country. In the snapshot below, you can see the “Distributed” incidents. This means that this customer experienced 4 distributed attacks, with no dominant country, and could imply the attacks originated from botnets spread across the world.

Top attacked resources

Top Attacked Resources provides a snapshot of your most attacked web resources by percentage of critical incidents and the total number of incidents. In this example, singular assets are examined as well as a distributed attack across the customer’s assets. In the 3rd row, you can see that the customer (in this case, our own platform) experienced 191 distributed attacks. This means that each attack targeted a few hosts under our brand name; for example, it may have been a scanning attack aimed at finding vulnerable hosts.

Attack tool types

A SOC Manager/WAF admin might also want to understand the type of attack tools that are being used.  In the example below, on the left, you see the distribution of incidents according to the tool types and on the right, you see the drill-down into the malicious tools, so you can better understand your attack landscape. Over the last 90 days, there were 2.38K incidents that used malicious tools. On the right we can see the breakdown of the different tools and the number of incidents for each one – for example, there were 279 incidents with a dominant malicious tool called LTX71.
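A tool-type breakdown like the one described here is, at its core, a frequency count over incidents. A minimal sketch follows; the `dominant_tool` field name is an assumption for illustration, not the product's actual schema.

```python
from collections import Counter

def tool_breakdown(incidents):
    """incidents: iterable of dicts, each with a 'dominant_tool' field (assumed shape).
    Returns (tool name, incident count) pairs, most common first."""
    counts = Counter(incident["dominant_tool"] for incident in incidents)
    return counts.most_common()
```

Fed 90 days of incident records, a count like this directly yields the "279 incidents with a dominant malicious tool called LTX71" style of summary shown in the dashboard.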

We think you’ll quickly discover the benefits which the new Attack Analytics Dashboard provides as it helps you pinpoint abnormal behaviors and speed up your security investigations. It should also assist you in providing other stakeholders within your company a high-level look at the value of your WAF.

And right now, we have even more dashboard insight enrichments in the works, such as:

  • False Positives Suspects: Incidents our algorithms predict to be highly probable of being false positives.
  • Community Attacks (Spray and Pray Attacks): Provide a list of incidents that are targeting you as part of a larger campaign – based on information gathered from our crowdsourced customer data.

Stay tuned for more!

The post Read: New Attack Analytics Dashboard Streamlines Security Investigations appeared first on Blog.

Professionally Evil Insights: Professionally Evil CISSP Certification: Breaking the Bootcamp Model

ISC2 describes the CISSP as a way to prove “you have what it takes to effectively design, implement and manage a best-in-class cybersecurity program.”  It is one of the primary certifications used as a stepping stone in a cybersecurity career.  Traditionally, students have had two options for gaining this certification: self-study or a bootcamp.  Both options have pros and cons, but neither is ideal.

Bootcamps are a popular way to cram for the certification test.  Students spend five days in total immersion in the topics of the CBK.  This is an easy way to pass the exam for many students because it focuses them on the CISSP study materials for the duration of the bootcamp.  But there are a few negatives to this model.  First is the significant cost: typical prices run between $3,500 and $5,000, with outliers as high as almost $7,000.  Second, it takes students away from their lives for the week.  Finally, most people finish a bootcamp with the knowledge to pass the exam, but because it is crammed in, they quickly forget most of the information.

Self-study is the other common way to prepare for the CISSP exam.  It allows a dedicated student to learn the material at their own pace and on their own schedule.  It also lets them decide how much to spend; from books to online videos and practice exams, the costs vary.  The main problem with this method is that students often get distracted by life and work while trying to complete it.

But there is an answer that combines the benefits of both previous options.  Secure Ideas has developed a mentorship program designed to provide the knowledge necessary to pass the certification while working through the common body of knowledge (CBK), all done in a manner that encourages retention.  And it is #affordabletraining!

The mentorship program is designed as a series of weekly, mentor-led discussion and review sessions, along with various student support and communication channels, spanning a total of nine weeks.  Together, these give the student a solid foundation, not only to help in passing the certification but also to serve as a collection of information for everyday work.  The class covers the 8 domains of the ISC2 CBK:

  • Security and Risk Management
  • Asset Security
  • Security Architecture and Engineering
  • Communication and Network Security
  • Identity and Access Management (IAM)
  • Security Assessment and Testing
  • Security Operations
  • Software Development Security

The Professionally Evil CISSP Mentorship program uses multiple communication and knowledge sharing paths to build a comprehensive learning environment focused on both passing the CISSP certification and gaining a deep understanding of the CBK.

The program consists of the following parts:

  • Official study guide book
  • Weekly live session with instructor(s)
    • Live session will also be recorded
  • Private Slack team for students and instructors to communicate regularly
  • Practice exams
  • While we believe students will pass on their first try, we also include the option for students to take the program as many times as they want, any time we offer it.  🙂

You can sign up for the course over at https://attendee.gototraining.com/r/2538511060126445313 for only $1000.  Our early bird pricing is $800 and is good until January 31.  Just use the Coupon code EARLYBIRD at checkout.  Veterans, active duty military and first responders also get a significant discount.  Email info@secureideas.com for more information.



Professionally Evil Insights

Data Security Blog | Thales eSecurity: It’s time to think twice about retail loyalty programs

As I was starting to write this blog, yet another retail program data breach occurred, this time affecting Marriott’s Starwood loyalty program. In this case, it looks as though the attackers had been on the Starwood network for around three years, mining their reservations database (keep in mind that Marriott only acquired Starwood in 2016). Since those of us in tech often travel “for a living,” I found in my bag an older Starwood preferred guest card, not used in years. But it looks like my own personal data has been breached – again.

Lack of Perceived Need

What I’d originally planned to write about was a topic that directly applies – why retailers of all stripes are not investing in data security. We had some results this year from the 100+ US retail IT security professionals that were surveyed for the 2018 Thales Data Threat Report that differed from every other segment we polled (healthcare, federal government, financial services). To make a long story short – the top reason that they didn’t invest in data security was “lack of perceived need” at 52%.

In other segments, the top reasons were legacy concerns that don’t apply to modern data security solutions (like those from Thales): complexity, possible performance impacts, lack of resources to manage, and lack of budget (if it’s complex and resource-intensive, then sure, it’s probably expensive). I call these “legacy” concerns because modern data security solutions can be far less complex than in the past (take a look at our Vormetric Transparent Encryption solution, which offers strong protection with minimal impact on applications, operations and systems). They typically use the hardware encryption built into today’s CPUs for minimal overhead, and they are available across platforms so that resource loads stay minimal even as you add solutions to secure data in new applications and environments as your needs grow.

Data Breaches

But none of these reasons rose to the top in retail. “Lack of perceived need” was the number one reason they didn’t deploy.

This “lack of perceived need” response comes against a backdrop of lamentable breach findings for retail, also highlighted in the report: 75% have had a data breach at some point, 50% had a data breach in the last year, and 26% were breached both this year and in the past (half of those breached this year!).

This had me asking a simple question – Why?

  • Doing the math perhaps? Has someone been doing the math, and decided it was cheaper to take the hit of a nearly certain data breach rather than reduce their attack surfaces and increase their vigilance on internal data stores and networks, as well as cloud-based environments? Are they just convinced it won’t happen on their watch (also referred to as “visiting de Nile” at my house)? It’s true that prices for basic remediation (offering customers a year or two of free credit reporting) seem to be falling. Since that plus notifications are the only consequences in most cases, it is certainly a possibility.
  • Not worried about customer churn? Have too many retailers looked around at other retailers with recent breaches and noticed no shortage of customers? When the Target and Home Depot breaches happened, there was a sizeable hit to financial results for several quarters, if I recall correctly; perhaps that’s no longer the case.

Whatever the reason, it’s an appalling attitude.

Which brings us back to our title: “Retail loyalty programs – it’s time to think twice”. Are you really going to let an organization put an app on your phone, backed by a big data analytics store or database holding lots of personal information about your preferences, personal history, addresses, credit/debit cards and more, when it won’t take the protection of your information seriously? My answer is “No”.

As a result I’ve become picky. Retailers need to pretend that I’m from Missouri and “show me” they are serious about data security before I’m ready to let them that far into my life.

You should consider it too.

It’s just a waiting game. Once enough of your personal information has been breached, it’s only a matter of time before someone decides your name and identity are ideal next targets. That makes a cavalier attitude about data security much less forgivable.

For more information about optimal data security solutions for retailers, please visit Thales eSecurity’s dedicated landing page.

The post It’s time to think twice about retail loyalty programs appeared first on Data Security Blog | Thales eSecurity.



Data Security Blog | Thales eSecurity

Not all data collection is evil: Don’t let privacy scandals stall cybersecurity

Facebook continues to be criticized for its data collection practices. The media is hammering Google over how it handles data. JPMorgan Chase & Company was vilified for using Palantir software to allegedly invade the privacy of employees. This past June marked the five-year anniversary of The Guardian’s first story about NSA mass surveillance operations. These incidents and many others have led to an era where the world is more heavily focused on privacy and trust. … More

The post Not all data collection is evil: Don’t let privacy scandals stall cybersecurity appeared first on Help Net Security.

A New Privacy Frontier: Protect Your Organization’s Gold With These 5 Data Risk Management Tips

This is the third and final blog in a series about the new digital frontier for data risk management. For the full picture, be sure to read part 1 and part 2.

Mining customer information for valuable nuggets that enable new business opportunities gets riskier by the day — not only because cyberthieves constantly find new ways to steal that gold, but also due to the growing number of privacy regulations for corporations that handle increasingly valuable data.

The enactment of the European Union (EU)’s General Data Protection Regulation (GDPR) in May of this year was just the start. Beginning in early 2020, the California Consumer Privacy Act of 2018 (CCPA) will fundamentally change the way businesses manage the personal information they collect from California residents. Among other changes, organizations will find a much broader definition of personal information in the CCPA compared to other state data breach regulations. Pundits expect this legislation to be followed by a wave of additional data privacy laws aimed at shoring up consumers’ online privacy.

One major factor behind these new regulations is the widely perceived mishandling of personal information, whether intentionally or unintentionally as a result of a serious data breach perpetrated by cybercriminals or malicious insiders.

Taming the Wild West With New Privacy Laws

The first GDPR enforcement action happened in September, when the U.K. Information Commissioner’s Office charged Canadian data analytics firm AggregateIQ with violating the GDPR in its handling of personal data for U.K. political organizations. This action highlights the consequences that come with GDPR enforcement beyond the regulation’s potential penalty of up to 20 million euros, or 4 percent of a company’s annual revenues worldwide, whichever is higher. It can also require the violator to cease processing the personal information of affected EU citizens.
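The “whichever is higher” rule means a company’s exposure scales with its revenue. As a back-of-the-envelope sketch (the revenue figures below are hypothetical, purely for illustration):

```python
def gdpr_max_penalty(annual_revenue_eur: float) -> float:
    """Upper bound of a GDPR fine: the greater of EUR 20 million
    or 4 percent of worldwide annual revenue."""
    return max(20_000_000.0, 0.04 * annual_revenue_eur)

# EUR 300M revenue: 4% is EUR 12M, so the EUR 20M floor applies.
print(gdpr_max_penalty(300_000_000))    # 20000000.0
# EUR 2B revenue: 4% is EUR 80M, which exceeds the floor.
print(gdpr_max_penalty(2_000_000_000))  # 80000000.0
```

The floor means even smaller firms face the full 20 million euro ceiling; only above 500 million euros in revenue does the percentage term dominate.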

Although the CCPA does not take effect until January 2020, companies that handle the personal information of Californians will need to begin keeping records no later than January 2019 to comply with the new mandate, thanks to a 12-month look-back requirement. The act calls for new transparency and disclosure processes to address consumer rights, including the ability to opt in and out, access and erase personal data, and prevent its sale. It applies to most organizations that handle the data of California residents, even if the business does not reside in the state, and greatly expands the definition of personal information to include IP addresses, geolocation data, internet activity, households, devices and more.

While it’s called the Consumer Privacy Act, it really applies to any resident, whether a consumer, employee or business contact. There may still be corrections or clarifications to come for the CCPA, possibly including some exclusions for smaller organizations as well as for health and financial information, but the basic tenets are expected to hold.

Watch the on-demand webinar to learn more

Potential Civil Lawsuits and Statutory Penalties

The operational impact of these new regulations will be significant for businesses. For example, unlike other regulations, companies will be required to give consumers a “do not sell” button at the point of collecting personal information. Companies will also be required to include at least two methods to submit requests, including a toll-free number, in their privacy statements.

The cost of failure to comply with data privacy regulations is steep. Organizations could face the prospect of civil penalties levied by the attorney general, from $2,500 for each unintentional violation up to $7,500 for each intentional violation, with no upper limit. Consumers can also sue organizations that fail to implement and maintain reasonable security procedures and practices and receive statutory payments between $100 and $750 per California resident and incident or actual damages, whichever is greater. As one of the most populous states in the nation, representing the fifth-largest economy in the world, a major breach affecting California residents could be disastrous.
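The statutory-damages range alone makes the scale of the risk easy to estimate. A quick sketch (the breach size below is hypothetical):

```python
def ccpa_statutory_exposure(residents_affected: int) -> tuple[int, int]:
    """Low and high ends of CCPA statutory damages:
    $100 to $750 per California resident, per incident."""
    return residents_affected * 100, residents_affected * 750

# A hypothetical breach touching one million California residents:
low, high = ccpa_statutory_exposure(1_000_000)
print(f"${low:,} to ${high:,}")  # $100,000,000 to $750,000,000
```

Even at the $100 floor, a million-record breach implies nine-figure exposure before the attorney general’s civil penalties are counted.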

5 Tips to Help Protect Your Claim

The need to comply with data privacy regulations has obviously taken on greater urgency. To do it effectively requires a holistic approach, rather than one-off efforts aimed at each specific set of regulations. Organizations need a comprehensive program that spans multiple units, disciplines and departments. Creating such a program can be a daunting, multiyear effort for larger organizations, one that requires leadership from the executive suite to be successful. The following five tips can help guide a coordinated effort to comply with data privacy regulations.

1. Locate All Personal and Sensitive Data

This information is not just locked up in a well-secured, centralized database. It exists in a variety of formats, endpoints and applications as both structured and unstructured data. It is handled in a range of systems, from human resources (HR) to customer relationship management (CRM), and even in transactional systems if they contain personally identifiable data.

Determining where this information exists and its usage, purpose and business context will require the help of the owners or custodians of the sensitive data. This phase can take a significant amount of time to complete, so take advantage of available tools to help discover sensitive data.
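The post doesn’t name specific discovery tools, but the core idea of pattern-based scanning over unstructured text can be sketched in a few lines. The regular expressions below are deliberately simplistic stand-ins; real discovery tools use far richer detection (checksums, context, classifiers):

```python
import re

# Illustrative patterns only; production tools validate matches much further.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def scan_text(text: str) -> dict[str, list[str]]:
    """Return the matches found per PII category, omitting empty categories."""
    hits = {name: pat.findall(text) for name, pat in PII_PATTERNS.items()}
    return {name: found for name, found in hits.items() if found}

sample = "Contact jane.doe@example.com, SSN 123-45-6789."
print(scan_text(sample))
```

In practice the same scan would run across file shares, databases and application exports, feeding an inventory of where personal data actually lives.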

2. Assess Your Security Controls

Once personal data is identified, stakeholders involved in creating a risk management program must assess the security controls applied to that data to learn whether they are adequate and up-to-date. As part of this activity, it is crucial to proactively conduct threshold assessments to determine whether the business and operating units are under the purview of the CCPA.

At the same time, it’s important to assess how personal information is handled and by whom to determine whether processes for manipulating the data need to change and whether the access rights of data handlers are appropriate.

3. Collaborate Across the Enterprise

Managing data risk is a team effort that requires collaboration across multiple groups within the organization. The tasks listed here require the involvement of data owners, line-of-business managers, IT operations and security professionals, top executives, legal, HR, marketing, and even finance teams. Coordination is required between data owners and custodians, who must establish appropriate policies for who can access data, how it should be handled, the legal basis for processing, where it should be stored, and how IT security professionals should be responsible for enforcing those policies.

4. Communicate With Business Leaders

Effectively communicating data risk, including whether existing controls are adequate or require additional resources and how effectively the organization is protecting customer and other sensitive data, requires a common language that can be understood by business executives. Traditional IT security performance metrics, such as block rates, vulnerabilities patched and so on, don’t convey what the real business risks are to C-level executives or board members. It’s critical to use the language of risk and convey data security metrics in the context of the business.

5. Develop a Remediation Plan

Once the business’s compliance posture with the CCPA is assessed, organizations should develop risk remediation plans that account for all the processes that need to change and all the relevant stakeholders involved in executing the plan.

Such a plan should include a map of all relevant personal information that takes into account where the data is stored, how it is used and what controls around that data need to be updated. It should also describe how the organization will safely enable access, deletion and portability requests of California residents, as well as process opt-out requests for sharing their data.

Automate Your Data Risk Management Program

Thankfully, there are tools available to help automate some of the steps required in developing and maintaining a holistic data risk management initiative. Useful data from security information and event management (SIEM), data loss prevention (DLP), application security, and other IT tools can be combined with advanced integration platforms to streamline efforts.

Privacy mandates such as the GDPR and the CCPA are just the start; a California-style gold rush of data privacy regulations is on the horizon. Countries such as Brazil and India are already at work on new data privacy laws. A comprehensive data risk management program established before more regulations go into effect is well worth its weight in gold.

Watch the on-demand webinar

The post A New Privacy Frontier: Protect Your Organization’s Gold With These 5 Data Risk Management Tips appeared first on Security Intelligence.

How to Future-Proof Your Enterprise With Quantum-Safe Cryptography

Quantum computers are poised to solve currently intractable problems for traditional technology. At some point in the next 10 or 15 years, quantum computers may be powerful enough to put your data at risk by compromising your cryptography. Data protected by today’s encryption methods may become susceptible to decryption by the unprecedented processing power of the emerging quantum computer.

Act Today to Prepare for the Future

The urgency to act now is based on a data risk timeline. Data stored today may need to remain confidential or valid for up to 30 years. There are four factors that influence the data risk timeline:

  1. The strength of your current cryptographic algorithms. Weaker algorithms may be at risk before stronger algorithms. The challenge is to know your complete cryptographic inventory.
  2. The security time value of data being protected. How long must the data be protected throughout the life cycle of a product?
  3. Crypto-agility. How quickly can an enterprise upgrade existing cryptographic deployments? For some organizations, it may take years.
  4. The pace of quantum technology improvements.

What Is Quantum-Safe Cryptography?

Quantum-safe cryptography refers to algorithms that run on today’s classical computers but are secure against quantum adversaries. The implication is that we can protect data today.

IBM develops and standardizes quantum-safe cryptographic algorithms in an open and collaborative fashion. Cryptographic standards are important to facilitate the widespread, interoperable adoption of security. IBM believes that lattice-based cryptography has the best combination of quantum-resistant properties, and it is part of three lattice-based consortium submissions to the National Institute of Standards and Technology (NIST)’s call for post-quantum standards.

Why Crypto-Agility Is Crucial

Few enterprises know the full range of cryptographic solutions they have deployed. For some, it may take years to upgrade their cryptography, as with migrations from SHA-1 to SHA-2 or Triple Data Encryption Standard (TDES) to Advanced Encryption Standard (AES). The transition from today’s cryptography to quantum-safe technology offers an opportunity to rethink how applications consume cryptography. Cryptographic agility is a key aspect of cybersecurity, and organizations would be wise to leverage it as part of their quantum-safe journey.
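One common reading of crypto-agility is to stop hard-coding algorithms and resolve them through a single point of configuration, so a migration like SHA-1 to SHA-2 becomes a policy change rather than a code rewrite. A minimal sketch using Python’s hashlib (the policy names are illustrative):

```python
import hashlib

# Central policy: change this one mapping to migrate algorithms fleet-wide.
HASH_POLICY = {"default": "sha256", "legacy": "sha1"}

def digest(data: bytes, profile: str = "default") -> str:
    """Hash data with whatever algorithm the current policy selects,
    rather than naming an algorithm at every call site."""
    algorithm = HASH_POLICY[profile]
    return hashlib.new(algorithm, data).hexdigest()

print(digest(b"hello"))              # SHA-256 today...
HASH_POLICY["default"] = "sha3_256"  # ...a one-line migration tomorrow
print(digest(b"hello"))
```

The same indirection applies to key sizes, TLS cipher suites and signature schemes; the harder work is inventorying every call site that bypasses the policy.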

If you’re interested in setting up a Quantum Risk Assessment for your organization, please get in touch.

The post How to Future-Proof Your Enterprise With Quantum-Safe Cryptography appeared first on Security Intelligence.

How Daniel Gor Helps Protect the World — and His Grandparents — From Financial Fraud

Daniel Gor might be “just a regular guy” by his own account, but he’s doing important work that shouldn’t be overlooked. As a solution engineer on IBM Trusteer’s fraud analyst team, Daniel spends his days helping to protect our hard-earned cash from fraudsters.

“There’s a nice feeling knowing that you’re with the good guys,” Daniel said as he talked about social engineering and automated hacking from his office in Tel Aviv, Israel. And, as the product of two cultures, Daniel has a more global view of financial fraud than most.

Born in New York and raised in Miami through his early years, Daniel moved to Israel at the age of seven when his parents decided they wanted to be closer to their families. Today Daniel has a family of his own — a wife and seven-month-old daughter — and still lives close to his extended family in Ra’anana, a suburb not far from Tel Aviv.

He said the impact of two very different cultures sometimes comes out in his work style: A combination of American diligence and persistence with a hint of the typical Israeli “chutzpah.” He said his experiences in the army, as part of Unit 8200 in the Israeli Intelligence Corps, and at university gave him “perspective about how to get things done and how to approach tasks.”

Namely, he said, there’s an element of searching for the truth, “even if you don’t go by all the rules.” That comes in handy when writing policies for his fraud analyst colleagues.

Humble Beginnings as a Financial Fraud Analyst

Daniel graduated from university less than two years ago and went straight to work at IBM Trusteer. He started as a fraud analyst, conducting research to determine the rules the team needed to establish to protect financial data for a range of banks. The team writes rules and policies that are applied behind the scenes for the banks’ different applications; these, in turn, help identify behavioral anomalies that may indicate a fraud attempt.

Each analyst is responsible for monitoring the performance of the policies and rules at several banks; this often constitutes hundreds of rules and reams of data. Daniel’s firsthand experience as an analyst informs his current work as a solution engineer to automate processes designed to assist analysts in this monitoring and, in addition, implement machine learning algorithms that can strengthen the policies even more.

But rules and policies are just one part of the equation. Banks also need to build a picture of what each customer’s “digital identity” looks like so they can detect fraud sooner and more efficiently. Without an idea of how Joe from Jacksonville regularly interacts with his accounts, the bank will never know whether Joe’s profile has been compromised. This is an entirely new research field that Daniel is a part of.

Daniel Gor

Automated Behavioral Analysis Is a Game-Changer

In his present role as a solution engineer, Daniel partners with the team to analyze behavior indicators using machine learning models. He trains the models to identify behavioral anomalies and then writes those models as rules in the bank’s policies.

So that phone call you got from the bank asking about a transaction where you seemed hesitant or suspiciously stalled? It likely came because, thanks to work like Daniel’s, your bank identified an anomaly in your normal behavior patterns.

Daniel believes automation technology and AI have had a “great impact” on security in the financial sector.

“The machine learning algorithms are so smart now, they can detect anomalies only by mouse movement or the time that the fraudster spends on a page inside the account,” he explained. “The AI allows us to detect those anomalies in the user’s behaviors.”
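The specifics of Trusteer’s models aren’t public, but the underlying idea of flagging a measurement that deviates sharply from a user’s baseline can be illustrated with a simple z-score check. This is a deliberately toy stand-in for the machine learning described above, with made-up timing data:

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], observed: float,
                 threshold: float = 3.0) -> bool:
    """Flag the observed value if it lies more than `threshold` standard
    deviations from the user's historical baseline (e.g. seconds spent
    on a payment page)."""
    mu, sigma = mean(history), stdev(history)
    return abs(observed - mu) > threshold * sigma

# A user who normally spends ~30 seconds on the transfer page suddenly
# completes it in 4 seconds, a pattern more typical of automation:
baseline = [28.0, 31.0, 30.0, 29.0, 32.0, 30.0]
print(is_anomalous(baseline, 4.0))   # True
print(is_anomalous(baseline, 29.5))  # False
```

Real systems combine many such signals (mouse movement, typing cadence, navigation order) into a composite risk score rather than thresholding one feature.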

Standing Up for Good Values

Unfortunately, fraudsters continue to exploit human trust, using manipulative tactics such as social engineering to target vulnerable banking customers and steal their credentials. Daniel said he’s been surprised at the sophistication of these fraudsters’ methods; they can go so far as calling customers while posing as bank personnel offering to help them recover money.

“In a way, I was surprised at how people can exploit people’s good natures and vulnerabilities,” he said.

In light of this threat, Daniel noted that he works in cybersecurity so his grandparents can live their lives without fear of being deceived every time the phone rings. And to those who are considering following in his footsteps, Daniel encouraged aspiring cybersecurity professionals to “just do it.” While tech careers are becoming more and more coveted, he believes the goal of working in a company “where you feel you’re adding to the world with good values” is worth aspiring to.

“In a way, I can say that I’m working for myself,” he said. “I want my money to be safe in a place only people I trust have access to, and it’s very important for the world to have these kinds of shields from people that are eventually trying to steal our money, to steal credentials. The world needs companies that are here to prevent those kinds of cases.”

Meet Fraud Analyst Shir Levin

The post How Daniel Gor Helps Protect the World — and His Grandparents — From Financial Fraud appeared first on Security Intelligence.

Data Security Blog | Thales eSecurity: Perspectives on the ‘Paris Call’

“We the People of the United States, in Order to form a more perfect Union”

“Four score and seven years ago”

“I have a dream”

These are very well-known quotes to every American. They were opening salvos by great leaders who knew we had to come together for change and for good. Although the quotes I know off the top of my head are provincial, I also know that when a time requires change, a time when people must come together for good, we should be listening to great leaders around the world.

Earlier this month, French President Emmanuel Macron made the call to come together and address a global challenge, the need for data security in cyberspace. Without data security there can be no trust, bad actors can wreak havoc, and we the people can have our lives quickly turned upside down by hackers. There isn’t a day that goes by without news of how hackers, terrorists, and nation states are infiltrating the foundations of what President Macron defines as “information and communication technologies (ICT).”

Perspectives on the Paris Call

Macron made the opening salvo to address this problem, globally and together, not only through piecemeal regulations. He rolled out the “Paris Call for Trust and Security in Cyberspace”. He called for leaders to reaffirm “our support to an open, secure, stable, accessible and peaceful cyberspace, which has become an integral component of life in all its social, economic, cultural and political aspects.”

Essentially, he is asking us to apply the best practices society learned from world wars and large-scale disasters to the new world of cyberspace. The document calls for leaders to condemn malicious cyber activities in peacetime, just as we condemn traditional invasions, attacks on infrastructure, and indiscriminate attacks on individuals. He asks that we support victims of the malicious use of ICTs and that stakeholders cooperate to protect against and respond to such attacks.

The Paris Call lists out nine norms, all of which you can find in the link above. Here’s a sampling of three:

  • Strengthen our capacity to prevent malign interference by foreign actors aimed at undermining electoral processes through malicious cyber activities
  • Prevent ICT (information and communication technologies) enabled theft of intellectual property, including trade secrets or other confidential business information, with the intent of providing competitive advantages to companies or commercial sector
  • Strengthen the security of digital processes, products and services, throughout their lifecycle and supply chain

The U.K., Canada, and New Zealand have all signed on, along with leadership from Microsoft, Google, IBM, and HP. The United States is reportedly in talks but has not yet signed onto the initiative. We should all hope that China and Russia join this effort too. What is important is that the call has been made and has seen early success. I’m hopeful that this is the start of more collaboration and, ultimately, a safer environment for working, living and playing in cyberspace. Incredible changes for good often take time and may never be fully realized, but they always start with a call to move together toward a dream, with perfection as the goal. It is time for us to start this journey, globally and together.

Have questions? Leave a comment below, or follow Thales eSecurity on Twitter, LinkedIn and Facebook.

The post Perspectives on the ‘Paris Call’ appeared first on Data Security Blog | Thales eSecurity.



Data Security Blog | Thales eSecurity

Imperva Integration With AWS Security Hub: Expanding Customer Security Visibility

This article explains how Imperva application security integrates with AWS Security Hub to give customers better visibility and feedback on the security status of their AWS hosted applications.

Securing AWS Applications

Cost reduction, simplified operations, and other benefits are driving organizations to move more and more applications onto AWS delivery platforms, since AWS takes care of all the infrastructure maintenance. As with any migration to a cloud service, however, it’s important to remember that cloud vendors generally implement their services under a Shared Security Responsibility Model. AWS explains this in a whitepaper available here.

Imperva solutions help diverse enterprise organizations maintain consistent protection across all applications in their IT domain (including AWS) by combining multiple defenses against network (Layer 3-4) and application (Layer 7) Distributed Denial of Service (DDoS) attacks, the OWASP Top 10 application security risks, and even zero-day attacks. Imperva application security is rated a top solution by both Gartner and Forrester for both WAF and DDoS protection.

Visibility Leads to Better Outcomes

WAF security is further enhanced by Imperva Attack Analytics, which uses machine learning to correlate millions of security events across Imperva WAF assets and group them into a small number of prioritized incidents, making security teams more effective by giving them clear, actionable insights.

AWS Security Hub is a new web service that provides a consolidated security view across AWS services as well as third-party solutions. Imperva has integrated its Attack Analytics platform with AWS Security Hub so that the security incidents Attack Analytics generates can be presented in the Security Hub console.

Brief Description of How the Integration Works

The integration works by utilizing an interface developed for AWS Security Hub for what is essentially an “external data connector” called a Findings Provider (FP). The FP enables AWS Security Hub to ingest standardized information from Attack Analytics so that the information can be parsed, sorted and displayed. This FP is freely available to Imperva and AWS customers on Imperva’s GitHub page listed at the end of this article.

Figure 1: Screen Shot of Attack Analytics Incidents in AWS Security Hub

Data flows from Attack Analytics to AWS Security Hub as follows: Attack Analytics exports its security incidents into an AWS S3 bucket within the customer’s account, from which the Imperva FP makes them available for upload.

Figure 2: Attack Analytics to AWS Security Hub event flow

To activate AWS Security Hub with the Imperva FP, customers must complete several configuration steps described in the AWS Security Hub documentation. As part of the activation process, the FP running in the customer’s environment needs to acquire a product-import token from AWS Security Hub. Once activated, the FP is authorized to import findings into the customer’s AWS Security Hub account in the AFF format, at configurable time intervals.
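Imperva hasn’t published the FP’s internals in this post, but the shape of the translation step (turning an exported incident into an importable finding) can be sketched as follows. The incident fields, account ID and region below are hypothetical, and the field names only loosely follow Security Hub’s finding schema:

```python
import datetime

def incident_to_finding(incident: dict, account_id: str, region: str) -> dict:
    """Map a hypothetical Attack Analytics incident dict onto the rough
    shape of a Security Hub finding, ready for a batch-import call."""
    now = datetime.datetime.utcnow().isoformat() + "Z"
    return {
        "SchemaVersion": "2018-10-08",
        "Id": incident["id"],
        "ProductArn": (f"arn:aws:securityhub:{region}:{account_id}"
                       f":product/{account_id}/default"),
        "GeneratorId": "imperva-attack-analytics",
        "AwsAccountId": account_id,
        "Types": ["TTPs/Attack Analytics Incident"],
        "CreatedAt": now,
        "UpdatedAt": now,
        "Severity": {"Normalized": incident["severity"]},  # 0-100 scale
        "Title": incident["title"],
        "Resources": [{"Type": "Other", "Id": incident["site"]}],
    }

finding = incident_to_finding(
    {"id": "inc-001", "severity": 70,
     "title": "SQL injection burst", "site": "app.example.com"},
    account_id="123456789012", region="us-east-1")
print(finding["Id"], finding["Severity"]["Normalized"])
```

In the real integration, batches of such findings would be submitted to Security Hub’s import API on the FP’s configured schedule.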

It’s critically important that organizations maintain robust application security controls as they build or migrate applications to AWS architectures.  Imperva helps organizations ensure every application instance can be protected against both known and zero-day threats, and through integration with AWS Security Hub, Imperva Attack Analytics can ensure organizations always have the most current and most accurate status of their enterprise application security posture.


Security Hub is initially being made available as a public preview. We are currently looking for existing Attack Analytics customers interested in working with us to refine our integration; if that’s you, please get in touch. Once Security Hub becomes generally available, we intend to release our Security Hub integration as an open source project on Imperva’s GitHub account.

The post Imperva Integration With AWS Security Hub: Expanding Customer Security Visibility appeared first on Blog.

Phishers Up Their Game to Combat User Awareness

In an attempt to undermine the security industry’s effort to educate end users about phishing campaigns, malicious actors are evolving in their tactics, according to Zscaler. In a recent blog

The post Phishers Up Their Game to Combat User Awareness appeared first on The Cyber Security Place.

Third parties: Fast-growing risk to an organization’s sensitive data

The Ponemon Institute surveyed more than 1,000 CISOs and other security and risk professionals across the US and UK to understand the challenges companies face in protecting sensitive and confidential information shared with third-party vendors and partners. According to the findings, 59 percent of companies said they have experienced a data breach caused by one of their vendors or third parties. In the U.S., that percentage is even higher at 61 percent — up 5 … More

The post Third parties: Fast-growing risk to an organization’s sensitive data appeared first on Help Net Security.

The Top Security Breaches in History

By Telemessage, Technology has indeed enabled companies of all sizes and nature to rethink and innovate the way they do business. Through the power of the internet, digital platforms, and

The post The Top Security Breaches in History appeared first on The Cyber Security Place.

Cloud interoperability and app mobility outrank cost and security for primary hybrid cloud benefits

Enterprises plan to increase hybrid cloud usage, with 91% stating hybrid cloud as the ideal IT model, but only 18% stating they have that model today, according to Nutanix. Application mobility across any cloud is a top priority for 97% of respondents – with 88% of respondents saying it would “solve a lot of my problems.” IT decision makers ranked matching applications to the right cloud environment as a critical capability, and 35% of organizations … More

The post Cloud interoperability and app mobility outrank cost and security for primary hybrid cloud benefits appeared first on Help Net Security.

Imperva and Amazon Partner to Help Mitigate Risks Associated With Cloud Migration

Helping our customers reduce the risks associated with migrating to the cloud, and preventing availability and security incidents, has been a major development focus for Imperva over the last several years.  

Why the partnership matters

Although cloud service providers take a host of IT management burdens off of your shoulders when using their platforms, service level agreements (SLA) for platform availability and security don’t cover what runs on the platform. While they protect the platform itself, they are very clear that management, compliance and security responsibilities for your applications and data are yours alone.  Amazon calls this a Shared Responsibility Model.

What we do

For applications, Imperva helps customers defend against network-layer (3 and 4) and application-layer (7) Distributed Denial of Service (DDoS) attacks, and protects against all OWASP Top 10 application security risks and even zero-day attacks.  Imperva application security is rated a top solution by both Gartner and Forrester for WAF and DDoS protection.

Additionally, for cloud database migrations, Imperva helps ensure customers don’t leave gaps in their compliance and security controls as they migrate their database to the AWS EC2 Infrastructure as a Service (IaaS) platform.  As of December 2017, we also cover Platform as a Service (PaaS) offerings such as Amazon RDS.

Staying Agile

Most organizations operate hybrid IT environments, hosting some applications and data in on-premises data centers, and some on public cloud platforms – or multiple vendor cloud platforms. Imperva supports these configurations and provides solutions to integrate security into Continuous Integration and Continuous Deployment (CI/CD) processes used by DevOps project teams.

Imperva recently acquired the Prevoty Runtime Application Self Protection (RASP) solution; so our customers can automate security deployment in DevOps project delivery processes, to ensure applications and data are always protected.  

Stop by the Imperva booth at re:Invent 2019 and get a personal update on our solutions for AWS, and don’t miss our subject matter expert, Peter Klimek, speak about strategies for a proactive and preventative security approach in session: DEM44: Security Challenges in a DevOps World in the Expo Pilvi Theatre.

The post Imperva and Amazon Partner to Help Mitigate Risks Associated With Cloud Migration appeared first on Blog.

Cybersecurity and ethical data management: Getting it right

Data can provide information, information can lead to insight and knowledge, and knowledge is power. It’s no wonder, then, that seemingly everybody in this modern, computerized world of ours loves

The post Cybersecurity and ethical data management: Getting it right appeared first on The Cyber Security Place.

Patched Facebook Vulnerability Could Have Exposed Private Information About You and Your Friends

In a previous blog we highlighted a vulnerability in Chrome that allowed bad actors to steal Facebook users’ personal information; while digging around for more bugs, we thought it prudent to see if there were any other loopholes that bad actors might be able to exploit.

What popped up was a bug that could have allowed other websites to extract private information about you and your contacts.

Having reported the vulnerability to Facebook under their responsible disclosure program in May 2018, we worked with the Facebook Security Team to mitigate regressions and ensure that the issue was thoroughly resolved.

Identifying the Threat

Throughout the research process for the Chrome piece, I browsed Facebook’s online search results and noticed in the HTML that each result contained an iframe element — probably used for Facebook’s own internal tracking. Being pretty familiar with the unique cross-origin behavior of iframes, I came up with the following technique:

To start, let’s take a look at the Facebook search page: it exposes an endpoint that expects a GET request with a number of search parameters. The endpoint, like most search endpoints, is not cross-site request forgery (CSRF) protected, which normally allows users to share the search results page via a URL.

This is fine in most cases, since no action is performed on the user’s behalf, making the CSRF issue meaningless by itself. The catch is that iframes, unlike most web elements, are partly exposed to cross-origin documents; combine that with the search CSRF issue and you have a real problem on your hands.

Check out the proof of concept here:

Attack Flow

For this attack to work, we need to trick a Facebook user into opening our malicious site (any site on which we can run JavaScript) and clicking anywhere on it. That click lets us open a popup or a new tab pointed at the Facebook search page, forcing the user to execute any search query we want.

Since the number of iframe elements on the page reflects the number of search results, we can simply count them by accessing the fb.frames.length property.

By manipulating Facebook’s graph search, it’s possible to craft search queries that reflect personal information about the user.

For example, by searching: “pages I like named `Imperva`” we force Facebook to return one result if the user liked the Imperva page or zero results if not:

Similar queries can be composed to extract data about the user’s friends. For example, by searching “my friends who like Imperva” I can check if the current user has any friends who like the Imperva Facebook page.

Other interesting examples of the kind of data it was possible to extract:

  • Check if the current Facebook user has friends from Israel: https://www.facebook.com/search/me/friends/108099562543414/home-residents/intersect
  • Check if the user has friends named “Ron”: https://www.facebook.com/search/str/ron/users-named/me/friends/intersect
  • Check if the user has taken photos in certain locations/countries: https://www.facebook.com/search/me/photos/108099562543414/photos-in/intersect
  • Check if the current user has Islamic friends: https://www.facebook.com/search/me/friends/109523995740640/users-religious-view/intersect
  • Check if the current user has Islamic friends who live in the UK: https://www.facebook.com/search/me/friends/109523995740640/users-religious-view/106078429431815/residents/present/intersect
  • Check if the current user wrote a post that contains a specific text: https://www.facebook.com/search/posts/?filters_rp_author=%7B%22name%22%3A%22author_me%22%2C%22args%22%3A%22%22%7D&q=cute%20puppies
  • Check if the current user’s friends wrote a post that contains a specific text: https://www.facebook.com/search/posts/?filters_rp_author=%7B%22name%22%3A%22author_friends%22%2C%22args%22%3A%22%22%7D&q=cute%20puppies

This process can be repeated without the need for new popups or tabs to be opened, since the attacker can control the location property of the Facebook window by running the following code:
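The snippet is not reproduced above, but a minimal sketch of such attacker-side JavaScript might look like this. The search-URL pattern and the fixed load delay are illustrative assumptions, not Facebook’s actual endpoint or timing:

```javascript
// Illustrative attacker-side sketch, not the original PoC. The attacker
// opens one Facebook tab, then reuses it for many queries by rewriting its
// location. frames.length is readable cross-origin, and each search result
// renders one iframe, so a non-zero count leaks a yes/no answer.
function searchUrl(page) {
  // Hypothetical "pages I like named <page>" search URL
  return "https://www.facebook.com/search/str/" +
         encodeURIComponent(page) + "/pages-named/pages-liked/me/intersect";
}

function checkLiked(fbWindow, page, onResult) {
  fbWindow.location = searchUrl(page);   // navigate the already-open tab
  setTimeout(function () {
    // one iframe per search result: > 0 means the user likes the page
    onResult(page, fbWindow.frames.length > 0);
  }, 3000);                              // crude wait for the page to load
}

// Usage, triggered from a user click so the popup is not blocked:
// var fb = window.open("about:blank", "fb");
// checkLiked(fb, "Imperva", function (page, liked) { console.log(page, liked); });
```

Because the attacker only reads the frame count, no same-origin restriction is violated in the browser’s eyes, which is what made this leak subtle.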

This is especially dangerous for mobile users, since the open tab can easily get lost in the background, allowing the attacker to extract the results for multiple queries, while the user is watching a video or reading an article on the attacker’s site.

As a researcher, it was a privilege to have contributed to protecting the privacy of the Facebook user community, as we continuously do for our own Imperva community.

The post Patched Facebook Vulnerability Could Have Exposed Private Information About You and Your Friends appeared first on Blog.

Which Threats had the Most Impact During the First Half of 2018?

One of the best ways for organizations to shore up their data security efforts and work toward more proactive protection is by examining trends within the threat environment.

Taking a look at the strategies for attack, infiltration and infection currently being utilized by hackers can point toward the types of security issues that will continue in the future and enable enterprises to be more prepared with the right data and asset safeguarding measures.

Each year brings both continuing and emerging threats which can complicate security efforts. Awareness of the most impactful threats – including those that might have been popular in the past, as well as the new approaches spreading among cybercriminals – is crucial in the data security landscape.

Recently, Trend Micro researchers examined the data protection and cyberthreat issues prevalent during the first half of 2018 and included these findings in the 2018 Midyear Security Roundup: Unseen Threats, Imminent Losses report.

Let’s take a closer look at this research, as well as top identified threats that impacted businesses during the first six months of this year.

Widespread vulnerabilities and software patching

Back in 2014, the world was introduced to Heartbleed. At the time, it was one of the largest and most extensive software vulnerabilities, impacting platforms and websites leveraging the popular OpenSSL cryptographic software library. The bug made global news because of the vast number of websites it affected, as well as the fact that it enabled malicious actors to access, read and potentially leak data stored in systems’ memory.

Since then, a few additional vulnerabilities have been identified, including two at the beginning of 2018. Design flaws within microprocessing systems – since dubbed Meltdown and Spectre – were identified by researchers. Unfortunately, though, these weren’t the only high-profile vulnerabilities to make headlines this year.

As Trend Micro reported in May, eight other vulnerabilities were uncovered following Meltdown and Spectre, which also impacted Intel processors, including four that were considered “high” severity threats. Because these processors are used by a considerable number of devices within businesses and consumer environments across the globe, the emerging vulnerabilities were significantly worrisome for security admins and individual users alike.

Vulnerabilities that affect such large numbers of devices and users can be a significant challenge for enterprise security postures. Heartbleed is a case in point: despite the fact that a patch was released several years earlier, the Register reported that an estimated 200,000 systems were still vulnerable to the bug in early 2017.

Installing software updates in a timely manner is a top facet of patching best practices.

Spectre, Meltdown and the series of other identified vulnerabilities showcase the key importance of proper patching. Even Intel worked to drive this point home in a released statement encouraging users to maintain a beneficial patching strategy.

“We believe strongly in the value of coordinated disclosure and will share additional details on any potential issues as we finalize mitigations,” Intel noted, according to TechSpot. “As a best practice, we continue to encourage everyone to keep their systems up-to-date.”

The mere presence of an identified vulnerability can create security weaknesses, but an unpatched system can boost the chances of an attack or breach incident even further. It’s imperative that, in light of these widespread vulnerabilities, enterprises ensure their patching processes are comprehensive and proactive.

Cryptocurrency mining steals valuable resources

Researchers also noted that while cryptocurrency mining activity became more prevalent in 2017, this trend continued into the first half of 2018. Cryptocurrency mining programs can be more of an issue than many users might realize, as such a malicious initiative can rob enterprise infrastructures of key computing resources required to maintain top performance of their critical systems and applications, not to mention result in increased utility costs.

During the first six months of 2018, researchers recorded a more than 140 percent increase in cryptocurrency mining activity through Trend Micro’s Smart Protection Network Infrastructure. What’s more, 47 new miner malware families were identified during Q1 and Q2, demonstrating that cryptocurrency mining will continue to be a top initiative for hackers.

“Unwanted cryptocurrency miners on a network can slow down performance, gradually wear down hardware, and consume power – problems that are amplified in enterprise environments,” Trend Micro researchers stated in the Unseen Threats, Imminent Losses report. “IT admins have to keep an eye out for unusual network activity considering the stealthy but significant impact cryptocurrency mining can have on a system.”

Ransomware: No end in sight

For years, ransomware infections have been a formidable threat to organizations within every industry, and the first half of 2018 saw no change in this trend. Researchers again identified an increase in ransomware infection activity – 3 percent. While this may seem small, the current rate at which ransomware attacks take place makes this rise significant.

At the same time, Trend Micro discovered a 26 percent decrease in new ransomware families. This means that while hackers are continuing to leverage this attack style to extort money from victims, they are utilizing existing, standby ransomware samples, creating fewer opportunities for zero-day ransomware threats.

Data breaches remain a constant issue for businesses of all shapes and sizes.

Mega breaches: An increasingly frequent issue

As the sophistication and potential severity of hacker activity continue to rise, so too do the consequences of successful attacks.

According to data from the Privacy Rights Clearinghouse, there was a 16 percent increase in data breaches reported in the U.S. during the first half of 2018, including 259 incidents overall. Fifteen of these events were considered “mega breaches,” or those that exposed 1 million records or more over the course of the breach and subsequent fallout.

Such incidents surpass traditional breaches in widespread effects on the victim company, its users and customers and the industry sector at large. Most of these mega breaches (71 percent) took place within the healthcare industry, and when one considers the significant amount of sensitive data healthcare institutions deal with, such threat environment conditions aren’t that surprising.

It’s also important to consider not only the traditional impact of regular and mega breaches – including losses related to company reputation and image, revenue, customer acquisition and retention and more – but the compliance costs that can emerge as well. This is an especially imperative consideration in the age of the EU’s General Data Protection Regulation, which became enforceable in May.

“This regulation … sets a high bar for data security and privacy protection,” Trend Micro’s report stated. “It imposes considerable fines for noncompliant organizations … Moreover, it has quite a long reach since any organization holding EU citizens’ data is affected.”

Check out Trend Micro’s GDPR Resource Center to learn more about maintaining compliance with this standard.

Read Trend Micro’s Unseen Threats, Imminent Losses report for more information about the top threats identified during the first half of this year.

The post Which Threats had the Most Impact During the First Half of 2018? appeared first on .

44% of Security Professionals Spend More than 20 Hours a Week Responding to Alerts

As the global cybersecurity climate continues to heat up, so too do the subsequent levels of alert fatigue IT security professionals have to deal with.

A recent survey by Imperva reveals that nine percent of UK security teams battle with over five million alerts each week. Five million, just let that sink in for a minute.

When we spoke to 185 security professionals at Infosecurity Europe, nine percent said they have to deal with over a million security alerts each day, leaving 22% feeling “stressed and frustrated.”

Fighting false positives

Our survey revealed that 63% of organizations often struggle to pinpoint which security incidents are critical, while 66% admitted they have ignored an alert due to a previous false-positive result.

Today’s security teams are on the receiving end of an avalanche of alerts; while many of these alerts represent false positives, a large number flag critical events which, if ignored, could put an organization at serious risk. With IT security teams already spread thin, these alerts pile on additional pressure, which can become overwhelming.

Alert fatigue

The study also asked how many hours respondents spend every day dealing with security incidents, revealing that only 25% spend less than an hour, 31% spend between one and four hours, and 44% admitted to spending over four hours every day.

Additionally, when respondents were asked what happens when the Security Operations Centre (SOC) has too many alerts for analysts to process, a worrying nine percent said they turn off alert notifications altogether. 23% of respondents said they ignore certain categories of alerts, 58% said they tune their policies to reduce alert volumes, and a lucky 10% said they hire more SOC engineers.

Not all businesses have the luxury to hire more staff when alert volume becomes too high, so Imperva has developed a solution which can help address this burden. Attack Analytics uses the power of artificial intelligence to automatically group, consolidate and analyze thousands of web application firewall (WAF) security alerts across different environments to identify the most critical security events. The solution combats security alert fatigue and allows security teams to easily identify the attacks that pose the highest risk.
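As a rough illustration of the grouping idea described above (field names and the clustering key are assumptions, not Imperva’s actual schema), consolidation can be as simple as bucketing raw alerts by attack type and source, then ranking the buckets by volume so analysts triage the largest first:

```javascript
// Illustrative sketch of alert consolidation: cluster raw WAF alerts by a
// naive key (attack type + source IP) and sort clusters by size.
function consolidate(alerts) {
  const clusters = new Map();
  for (const a of alerts) {
    const key = a.attackType + "|" + a.sourceIp;   // naive clustering key
    if (!clusters.has(key)) clusters.set(key, { ...a, count: 0 });
    clusters.get(key).count += 1;
  }
  // Highest-volume clusters first
  return [...clusters.values()].sort((x, y) => y.count - x.count);
}

const raw = [
  { attackType: "SQLi", sourceIp: "203.0.113.9", target: "/login" },
  { attackType: "SQLi", sourceIp: "203.0.113.9", target: "/search" },
  { attackType: "XSS",  sourceIp: "198.51.100.4", target: "/profile" },
];
console.log(consolidate(raw).length); // prints 2 (3 raw alerts -> 2 clusters)
```

Real products use far richer signals (timing, geography, payload similarity), but the payoff is the same: thousands of alerts collapse into a short, ranked list.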

The post 44% of Security Professionals Spend More than 20 Hours a Week Responding to Alerts appeared first on Blog.

Imperva Joins Global Cybersecurity Tech Accord

Imperva is dedicated to the global fight to keep people’s data and applications safe from cybercriminals. What this means for our Imperva Threat Research team is that we spend a lot of time researching new cyber attacks, creating mitigations and writing powerful software. We believe that nothing grows in a vacuum, and as such understand the importance of collaboration as a member of the global cybersecurity ecosystem.

To this end, when we heard about the Cybersecurity Tech Accord, we knew it provided a unique opportunity for us not only to continue protecting our customers but to help make “cyberspace” safer for everyone. We’ve committed to working hand-in-hand with the accord’s other global signatories to improve the security, stability, and resilience of cyberspace.

About the Cybersecurity Tech Accord

The Cybersecurity Tech Accord is a public commitment among 61 global companies to protect and empower the global online community and to improve the security, stability, and resilience of cyberspace.

Our Tech Accord commitment:

  • We will protect all of our users and customers everywhere. For us at Imperva, that’s part of our DNA. That means prioritizing security, integrity, and reliability of our software, to decrease the likelihood, frequency, exploitability, and severity of vulnerabilities.
  • We will oppose cyberattacks on innocent citizens and enterprises from anywhere. That means that we will not help governments launch cyber attacks on innocent citizens and enterprises from anywhere.

    It also means that we will protect against tampering with and exploitation of technology products and services during their development, design, distribution, and use.
  • We will help empower users, customers and developers to strengthen cybersecurity protection. That means we will provide our users, customers and the wider developer ecosystem with information and tools that enable them to understand current and future threats and protect themselves against them.

    It also means that we will support civil society, governments, and international organizations in their efforts to advance security in cyberspace and to build cybersecurity capacity in developed and emerging economies alike.
  • We will partner with each other and with like-minded groups to enhance cybersecurity. We feel this is the heart of the accord: a commitment to first fight cyber-crime and cyber-terrorism, and only then be business rivals.

    It means we will work with each other and will establish formal and informal partnerships with industry, civil society, and security researchers, across proprietary and open source technologies, to improve technical collaboration, coordinated vulnerability disclosure, and threat sharing, as well as to minimize the levels of malicious code being introduced into cyberspace.

    It also means we will encourage global information sharing and civilian efforts to identify, prevent, detect, respond to, and recover from cyber attacks and ensure flexible responses to the security of the wider global technology ecosystem.

We are excited about being a part of the Cybersecurity Tech Accord and look forward to collaborating with fellow members.
One of our first collaborations will be a webinar for the Global Forum on Cyber Expertise (GFCE).

About the Global Forum on Cyber Expertise (GFCE)

The Global Forum on Cyber Expertise (GFCE) is a global platform for countries, international organizations and private companies to exchange best practices and expertise on cyber capacity building. The aim is to identify successful policies, practices, and ideas and multiply them on a global level. Together with partners from NGOs, the tech community, and academia, GFCE members develop practical initiatives to build cyber capacity.

Our webinar with the GFCE will be on application security. As you can imagine, this is a topic we are very passionate about – so we hope you’ll join us.

The post Imperva Joins Global Cybersecurity Tech Accord appeared first on Blog.

The Top 3 Reasons to Integrate DLP with a Cloud Access Security Broker (CASB)

Companies of all sizes are adopting cloud-based services, such as Microsoft Office 365, as a way to give their end-users greater flexibility and easier access to core business applications.  This requires corporate IT departments to reexamine their current data security posture, including Data Loss Prevention policies to better monitor and control sensitive data that are being created in the cloud, traversing from endpoint devices to cloud applications, and vice versa.

For those of you who want to extend your DLP policies to the cloud and create a seamless, unified data protection experience, check out the latest integration between McAfee DLP and McAfee Skyhigh Security Cloud DLP (CASB).   Here’s why:

Your upgrade is painless

With the latest integration of McAfee Endpoint DLP and Skyhigh Security Cloud, existing DLP customers can easily extend current enterprise DLP policies to the cloud via the McAfee ePO console.  Connecting the two solutions can be as easy as one click and as fast as under one minute.

Your DLP detection is consistent

Consistent data protection policies will be created to protect the data, whether it is residing on the endpoint, being shared via the network, or traversing to cloud applications.  This is done via the McAfee ePO console by sharing the on-prem DLP classification tags which help define cloud DLP policies.  These tags are available out-of-the-box.
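As a sketch of how shared classification tags can drive cloud policy (the tag names and actions below are hypothetical, not McAfee’s actual schema), think of each tag mapping to an enforcement action, with the most restrictive matching action winning:

```javascript
// Illustrative only: reuse on-prem classification tags to decide what the
// cloud DLP layer does with a file. Tags and actions are assumptions.
const cloudPolicy = {
  PCI:          "block-upload",
  Confidential: "encrypt",
  Public:       "allow",
};

function cloudAction(fileTags) {
  // Most restrictive matching action wins; order encodes severity
  const order = ["block-upload", "encrypt", "allow"];
  const actions = fileTags.map((t) => cloudPolicy[t]).filter(Boolean);
  return order.find((a) => actions.includes(a)) || "allow";
}

console.log(cloudAction(["Confidential", "Public"])); // prints encrypt
```

The point of sharing tags is exactly this: the cloud side never re-invents classification, it just enforces against tags the endpoint already applied.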

Your incident management and reporting are unified

With the McAfee ePO console, you have a single pane of glass management experience.  DLP violations can be viewed in McAfee ePO whether the incident is from an on-prem device or a cloud application.

The integration brings additional benefits, including real-time activity monitoring and threat protection against Shadow IT, the ability to identify anomalous behavior using cloud data, integration with McAfee® Global Threat Intelligence to inspect cloud data, and out-of-the-box policy templates based on business requirements, compliance regulations, industry, cloud service, and third-party benchmarks.

Figure 1. Simple architecture of McAfee DLP and McAfee Skyhigh Security Cloud Integration

With more data being created in and sent to the cloud every day, it is more important than ever to have a set of consistent DLP policies that protects data from any leakage vectors – whether it’s corporate endpoints, unmanaged devices, in the network or in cloud applications.  For a view of the integration in action, check out the video below:

The post The Top 3 Reasons to Integrate DLP with a Cloud Access Security Broker (CASB) appeared first on McAfee Blogs.

Microsoft and Imperva Collaboration Bolsters Data Compliance and Security Capabilities

This article explains how Imperva SecureSphere V13.2 has leveraged the latest Microsoft EventHub enhancements to help customers maintain compliance and security controls as regulated or sensitive data is migrated to Azure SQL database instances.

Database as a Service Benefits

Platform as a Service (PaaS) database offerings such as Azure SQL are rapidly becoming a popular option for organizations deploying databases in the cloud.

One of the benefits of Azure SQL, which is essentially a Relational Database as a Service (RDaaS), is that all of the database infrastructure administrative tasks and maintenance are taken care of by Microsoft – and this is proving to be a very compelling value proposition to many Imperva customers.

Security is a Shared Service

What you should remember with any data migration to a cloud service is that while hardware and software platform maintenance is no longer your burden, you still retain responsibility for security and regulatory compliance.  Cloud vendors generally implement their services under a Shared Security Model.  Microsoft explains this in a whitepaper you can read here.

To paraphrase in the extreme, Microsoft takes responsibility for the security of the cloud, while customers have responsibility for security in the cloud.

This means Microsoft provides the services and tools (such as firewalls) to secure the infrastructure (such as networking and compute machines), while you are responsible for application and database security.

Though this discussion is about how it works with Azure SQL, the table below from the Microsoft paper referenced above shows the shared responsibility progression across all of their cloud offerings.

Figure 1:  Shared responsibility model from the Microsoft Whitepaper Shared Responsibilities for Cloud Computing

Brief Description of How Continuous Azure SQL Monitoring Works

SecureSphere applies multiple services in the oversight of data hosted by Azure SQL.  The services include but are not limited to the following:

  • Database vulnerability assessment
  • Sensitive data discovery and classification
  • User activity monitoring and audit data consolidation
  • Audit data analytics
  • Reporting

The vulnerability assessment and data discovery are done by scanning engines that use service-account access to the database interfaces.  The activity monitoring is done by a customizable policy engine, pre-populated with rules for common compliance and security requirements such as separation of duties, and fully customizable for company- or industry-specific requirements.

With Azure SQL, SecureSphere monitoring and audit activity leverages the Microsoft EventHub service.  Recent enhancements to EventHub, on which Microsoft and Imperva collaborated, provide a streaming interface to database log records that Imperva SecureSphere ingests, analyzes with its policy engine (and other advanced user behavior analytics), and then takes appropriate action to prioritize, flag, notify, or alert security analysts or database administrators about the issues.
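Conceptually, the policy-engine step in that flow reduces to evaluating each streamed audit record against a set of rules. A minimal sketch, with record fields and rules that are assumptions for illustration, not SecureSphere’s actual schema:

```javascript
// Illustrative only: a tiny policy engine over streamed database audit
// records, in the spirit of the flow above (Azure SQL -> EventHub ->
// analysis -> alert).
const policies = [
  {
    name: "separation-of-duties",
    // DBAs administer the database but should not read application data
    violates: (r) => r.role === "dba" && r.operation === "SELECT" &&
                     r.object.startsWith("app."),
  },
  {
    name: "after-hours-access",
    violates: (r) => r.hour < 6 || r.hour > 20,
  },
];

function analyze(record) {
  // Return the names of all policies the record violates
  return policies.filter((p) => p.violates(record)).map((p) => p.name);
}

// Example streamed log record (shape is hypothetical)
const record = { role: "dba", operation: "SELECT", object: "app.customers", hour: 23 };
console.log(analyze(record)); // both rules fire for this record
```

Production engines add behavioral baselines and prioritization on top, but the streamed-record-in, policy-verdict-out shape is the core of the integration.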

Figure 2:  Database monitoring event flow for a critical security alert

Benefits of Imperva SecureSphere for Azure SQL Customers

A key benefit that a solution such as SecureSphere Database Activity Monitoring (DAM) provides is integrating the oversight of Azure SQL into a broad oversight lifecycle covering all enterprise databases.  With SecureSphere, here are some things you can do to ensure the security of your data in the cloud:

  • Secure hybrid enterprise database environments: While many organizations now pursue a “cloud first” policy of locating new applications in the cloud, few are in a position to move all existing databases out of the data center, so they usually maintain a hybrid database estate – which SecureSphere easily supports.
  • Continuously monitor cloud database services: You can migrate data to the cloud without losing visibility and control. SecureSphere covers dozens of on-premises relational database types, mainframe databases, and big data platforms.  It supports Azure SQL and other RDaaS too – enabling you to always know who is accessing your data and what they are doing with it.
  • Standardize and automate security, risk management, and compliance practices: SecureSphere implements a common policy for oversight and security across all on-premises and cloud databases.  If SecureSphere detects that a serious policy violation has occurred, such as unauthorized user activity,  it can immediately alert you.  All database log records are consolidated and made available to a central management console to streamline audit discovery and produce detailed reports for regulations such as SOX, PCI DSS and more.
  • Continuously assess database vulnerabilities: SecureSphere Discovery and Assessment streamlines vulnerability assessment at the data layer. It provides a comprehensive list of over 1500 tests and assessment policies for scanning platform, software, and configuration vulnerabilities. The vulnerability assessment process, which can be fully customized, uses industry best practices such as DISA STIG and CIS benchmarks.

It’s critically important that organizations extend traditional database compliance and security controls as they migrate data to new database architectures such as Azure SQL. Imperva SecureSphere V13.2 provides a platform to incorporate oversight of Azure SQL instances into broad enterprise compliance and security processes that span both cloud and on-premises data assets.

The post Microsoft and Imperva Collaboration Bolsters Data Compliance and Security Capabilities appeared first on Blog.

Explainer Series: RDaaS Security and Managing Compliance Through Database Audit and Monitoring Controls

As organizations move to cloud database platforms they shouldn’t forget that data security and compliance requirements remain an obligation. This article explains how you can apply database audit and monitoring controls using Imperva SecureSphere V13.2 when migrating to database as a service cloud offering.

Introduction to RDaaS

A Relational Database as a Service (RDaaS) provides the equipment, software, and infrastructure needed for businesses to run their database in a vendor’s cloud, rather than putting something together in-house. Examples of RDaaS include AWS Relational Database Service (RDS) and Microsoft Azure SQL.

Benefits of RDaaS adoption

The advantages of RDaaS adoption can be fairly substantial. Here are just a few of the benefits:

  • Allows you to preserve capital rather than spending it on equipment or software licenses, converting IT costs to an operating expense
  • Requires no additional IT staff to maintain the database system
  • Resiliency and dependability are guaranteed by the cloud provider

Who is responsible for cloud-based DB security?

From a high-altitude viewpoint, cloud security is based on a model of “shared responsibility” in which the concern for security maps to the degree of control any given actor has over the architecture stack.  Amazon’s policy, for example, states that AWS has “responsibility for the security of the cloud,” while customers have “responsibility for security in the cloud.”

What does that mean for you?  It means cloud vendors provide the tools and services to secure the infrastructure (such as networking and compute machines), while you are responsible for things like application or database security. For example, cloud vendors help to restrict access to the compute instances on which a database is deployed (by using security groups/firewalls and other methods); but they don’t restrict who among your users has access to what data.

The onus is on you to establish security measures that allow only authorized users to access your cloud database, just as with a database in your own “on-premises” data center, and to control what data they can access. Securing your data and ensuring compliance in on-premises data centers is typically done through database activity monitoring, and fortunately similar measures can be deployed in the public cloud as well.

How Imperva SecureSphere ensures compliance and security in the cloud

The benefit that a solution such as Imperva SecureSphere Database Activity Monitoring (DAM) provides is integrating the oversight of an RDaaS into a standardized methodology across all enterprise databases.  With SecureSphere, here are some things you can do to ensure the security of your data in the cloud:

Monitor cloud database services

Migrate data to the cloud without losing visibility and control. SecureSphere is a proven, highly scalable system that covers dozens of on-premises relational database types, mainframe databases, and big data platforms.  It has been extended to support Amazon RDS and Azure SQL RDaaS databases too. SecureSphere enables you to always know who is accessing your data and what they are doing with it.

Unify monitoring policy

Implement a common security and compliance policy for consistent oversight and security across all on-premises and cloud databases. SecureSphere uses the policy to continuously assess threats and observe database user activity – and detects when the policy is violated – alerting you of critical events such as risky user behavior or unauthorized database access.
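SecureSphere’s policy engine is proprietary, but the general shape of a unified monitoring policy, a single rule set evaluated against audit events from every database, can be sketched. The rule names and event fields below are invented for illustration:

```python
# Minimal sketch of a unified monitoring policy: the same rules are
# evaluated against audit events from on-premises and cloud databases.
# Rule names and event fields are illustrative, not SecureSphere's API.

POLICY = [
    {"name": "off-hours access", "check": lambda e: not 8 <= e["hour"] < 18},
    {"name": "unauthorized table", "check": lambda e: e["table"] in {"salaries", "ssn"} and e["user"] not in {"hr_app"}},
]

def evaluate(event):
    """Return the names of all policy rules the event violates."""
    return [rule["name"] for rule in POLICY if rule["check"](event)]

events = [
    {"user": "hr_app", "table": "salaries", "hour": 10, "source": "aws-rds"},
    {"user": "jsmith", "table": "salaries", "hour": 2,  "source": "on-prem"},
]

# hr_app at 10:00 violates nothing; jsmith at 02:00 trips both rules.
alerts = {e["user"]: evaluate(e) for e in events}
```

The point of the single `POLICY` list is the one described above: the same rules apply whether the event came from an on-premises database or an RDaaS instance.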

Automate compliance auditing processes

Demonstrate proof of compliance and simplify audits by consolidating audit log collection and reporting across all monitored assets. SecureSphere makes all the log data available to a central management console to streamline audit discovery and produce detailed reports for regulations such as SOX, PCI DSS and more.

Assess vulnerabilities and detect exposed databases

SecureSphere Discovery and Assessment streamlines vulnerability assessment at the data layer. It provides a comprehensive list of over 1,500 tests and assessment policies for scanning platform, software, and configuration vulnerabilities. Assessment policies are available for Amazon RDS Oracle and PostgreSQL RDaaS as well as Microsoft Azure SQL, with more to come. The vulnerability assessment process, which can be fully customized, uses industry best practices such as DISA STIG and CIS benchmarks.
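As a rough illustration of what a benchmark-style assessment does, here is a minimal configuration check. The parameter names and expected values are invented examples, not actual DISA STIG or CIS content:

```python
# Illustrative configuration checks in the spirit of a CIS benchmark scan.
# Parameter names and expected values below are hypothetical examples.

BENCHMARK = {
    "remote_login_passwordfile": ("EXCLUSIVE", "restrict password-file logins"),
    "audit_trail":               ("DB",        "enable database auditing"),
    "sql92_security":            ("TRUE",      "require SELECT privilege for UPDATE"),
}

def assess(db_params):
    """Return (parameter, finding) pairs for every failed check."""
    return [(param, reason)
            for param, (expected, reason) in BENCHMARK.items()
            if db_params.get(param) != expected]

scanned = {"remote_login_passwordfile": "EXCLUSIVE",
           "audit_trail": "NONE",
           "sql92_security": "TRUE"}

findings = assess(scanned)  # one finding: auditing is disabled
```

A real assessment engine runs hundreds of such checks per platform and version; the data structure above just shows the compare-against-baseline pattern.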

Support Hybrid Clouds

While many organizations now pursue a “cloud first” policy of locating new applications in the cloud, few are in a position to move all existing databases out of the data center, so they usually must maintain a hybrid database estate – which SecureSphere gracefully supports.
For some customers, it may be worth deploying SecureSphere on the RDaaS vendor’s infrastructure when monitoring large databases, to optimize for cost and performance. SecureSphere is available as vendor-appropriate virtual instances for both AWS and Azure, deployable individually or in HA configurations.

There is a critical need for visibility across an organization’s entire application and data infrastructure, no matter where it is located. Imperva SecureSphere provides a platform to incorporate oversight of RDaaS instances into a broad enterprise compliance and security lifecycle process.
Learn more about how Imperva solutions can help you ensure the safety of your database and enterprise-wide data.

The post Explainer Series: RDaaS Security and Managing Compliance Through Database Audit and Monitoring Controls appeared first on Blog.

Creating Ripples: The Impact and Repercussions of GDPR, So Far

“GDPR is coming, GDPR is coming!” For months this was all we heard – everyone was discussing GDPR’s impending arrival on May 25th, 2018, and what they needed to do to prepare for the new privacy regulation. GDPR – the General Data Protection Regulation – was adopted on April 14th, 2016, as a replacement for the EU’s former legislation, the Data Protection Directive. At its core, GDPR is designed to give EU citizens more control over their personal data. But in order for that control to be placed back in consumers’ hands, organizations have to change the way they do business. In fact, just five months after the implementation date, we’ve already seen GDPR leave an impact on companies. Let’s take a look at the ramifications that have already come to light because of GDPR, and how the effects of the legislation may continue to unfold in the future.

Even though the EU gave companies two years to ensure compliance, many waited until the last minute to act. So far, no one has been slapped with the massive fines, but complaints are already underway. In fact, complaints have been filed against Google, Facebook, and Facebook’s subsidiaries, Instagram and WhatsApp. Plus, Max Schrems’ None of Your Business (NOYB) and the French association La Quadrature du Net have been busy filing complaints all around Europe. “Data Protection officials have warned us that they will be aggressively enforcing the GDPR, and they watch the news reports. European Economic Area (EEA) residents are keenly aware of the Regulation and its requirements, and are actively filing complaints,” said Flora Garcia, McAfee’s lead privacy and security attorney, who managed our GDPR Readiness project.

However, the ramifications are not just monetary, as the regulation has already affected some organizations’ user bases, as well as customer trust. Take Facebook, for example: the social network actually attributes the loss of 1 million monthly active users to GDPR, as reported in its second-quarter earnings. Then there’s British Airways, which claims that in order to provide online customer service and remain GDPR compliant, its customers must post personal information on social media. Even newspapers’ readership has been cut down by the legislation, as publications such as the Los Angeles Times and Chicago Tribune stopped allowing European readers access to their sites in order to avoid risk. “This is the new normal, and all companies need to be aware of their GDPR obligations. Companies outside of the EEA who handle EEA data need to know their obligations just as well as the European companies,” Garcia says.

GDPR has had tactical repercussions too; for instance, it has changed how the IT sector communicates about storing customer data. A consumer’s ‘right to be forgotten’ means organizations have to clearly explain how a customer’s data has been removed from internal systems when they select this option, while also ensuring a secure backup copy remains. GDPR has also completely changed the way people view encrypting and/or anonymizing personal data.

What’s more, according to Don Elledge, guest author for Forbes, GDPR is just the tip of the iceberg when it comes to regulatory change. He states, “In 2017, at least 42 U.S. states introduced 240 bills and resolutions related to cybersecurity, more than double the number the year before.” This is largely due to the visibility of big data breaches (Equifax, Uber, etc.), which has made data protection front-page news, awakening regulators as a result. And with all the Facebook news, the Exactis breach, and the plethora of data leaks we’ve seen so far this year, 2018 is trending in the same direction. In fact, the California Consumer Privacy Act of 2018, which will go into effect January 1st, 2020, is already being called the next GDPR. Additionally, Brazil signed a Data Protection Bill in mid-August, which is inspired by GDPR and is expected to take effect in early 2020. The principles are similar, and potential fines could near 12.9 million USD. And both China and India are currently working on data protection legislation of their own as well.

So, with GDPR already creating ripples of change and new, similar legislation coming down the pipeline, it’s important now more than ever that companies and consumers alike understand how a piece of data privacy legislation affects them. Beyond that, companies must plan accordingly so that their business can thrive while remaining compliant.

To learn more about GDPR and data protection, be sure to follow us at @McAfee and @McAfee_Business, and check out some of our helpful resources on GDPR.

 

The information provided on this GDPR page is our informed interpretation of the EU General Data Protection Regulation, and is for information purposes only; it does not constitute legal advice or advice on how to achieve operational privacy and security. It is not incorporated into any contract and does not commit, promise, or create any legal obligation to deliver any code, result, material, or functionality. Furthermore, the information provided herein is subject to change without notice, and is provided “AS IS” without guarantee or warranty as to the accuracy or applicability of the information to any specific situation or circumstance. If you require legal advice on the requirements of the General Data Protection Regulation, or any other law, or advice on the extent to which McAfee technologies can assist you to achieve compliance with the Regulation or any other law, you are advised to consult a suitably qualified legal professional. If you require advice on the nature of the technical and organizational measures that are required to deliver operational privacy and security in your organization, you should consult a suitably qualified privacy professional. No liability is accepted to any party for any harms or losses suffered in reliance on the contents of this publication.

 

The post Creating Ripples: The Impact and Repercussions of GDPR, So Far appeared first on McAfee Blogs.

Hyperbole in Breach Reporting

While reading the news this morning about yet another successful data breach, I couldn't help but wonder if the hyperbole used in reporting about data breaches is stifling our ability to educate key stakeholders on what they really need to know.

Today's example is about a firm that many rely on for security strategy, planning, and execution. The article I read stated that they were "targeted by a sophisticated hack" but later explains that the attacker compromised a privileged account that provided unrestricted "access to all areas". And, according to sources, the account only required a basic password with no two-step or multi-factor authentication. That doesn't sound too sophisticated, does it? Maybe they brute-forced it, or maybe they just guessed the password (or found it written down in an office?)

It reminded me of an attack on a security vendor back in 2011. As I recall, there was a lot of talk of the sophistication and complexity of the attack. It was called an Advanced Persistent Threat (and maybe some aspects of it were advanced). But, when the facts came out, an employee simply opened an email attachment that introduced malware into the environment - again, not overly sophisticated in terms of what we think a hack to be.

The quantity, availability, and effectiveness of attack techniques are enough to make anyone uncomfortable with their security posture. I previously wrote about a German company that, in a breach response, wrote that it is "virtually impossible to provide viable protection against organized, highly professional hacking attacks." CISOs are being told that they should expect to be breached. The only questions are about when and how to respond. It makes you feel like there's no hope; like there's no point in trying.

However, if you look at the two examples above that were described as highly sophisticated, they may have been avoided with simple techniques such as employee education, malware detection, and multi-factor authentication. I don't mean to over-simplify. I'm not saying it's all easy or that these companies are at-fault or negligent. I'm just calling for less hyperbole in the reporting. Call out the techniques that help companies avoid similar attacks. Don't describe an attack as overly sophisticated if it's not. It makes people feel even more helpless when, perhaps, there are some simple steps that can be taken to reduce the attack surface.

I'd also advocate for more transparency from those who are attacked. Companies shouldn't feel like they have to make things sound more complicated or sophisticated than they are. There's now a growing history of reputable companies (including in the security industry) who have been breached. If you're breached, you're in good company. Let's talk in simple terms about the attacks that happen in the real world. An "open kimono" approach will be more effective at educating others in prevention. And again, less hyperbole - we don't need to overplay to emotion here. Everyone is scared enough. We know the harsh reality of what we (as security professionals) are facing. So, let's strive to better understand the real attack surface and how to prioritize our efforts to reduce the likelihood of a breach.

Encryption would NOT have saved Equifax

I read a few articles this week suggesting that the big question for Equifax is whether or not their data was encrypted. The State of Massachusetts, speaking about the lawsuit it filed, said that Equifax "didn't put in safeguards like encryption that would have protected the data." Unfortunately, encryption, as it's most often used in these scenarios, would not have actually prevented the exposure of this data. This breach will have an enormous impact, so we should be careful to get the facts right and provide as much education as possible to law makers and really to anyone else affected.

We know that the attack took advantage of a flaw in Apache Struts (that should have been patched). Struts is a framework for building applications. It lives at the application tier. The data, obviously, resides at the data tier. Once the application was compromised, it really doesn't matter if the data was encrypted because the application is allowed to access (and therefore to decrypt) the data.

I won't get into all the various encryption techniques that are possible but there are two common types of data encryption for these types of applications. There's encryption of data in motion so that nobody can eavesdrop on the conversation as data moves between tiers or travels to the end users. And there's encryption of data at rest that protects data as it's stored on disk so that nobody can pick up the physical disk (or the data file, depending on how the encryption is applied) and access the data. Once the application is authenticated against the database and runs a query against the data, it is able to access, view, and act upon the data even if the data was encrypted while at rest.
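A toy illustration of that last point, using an in-memory SQLite database to stand in for the data tier: whatever at-rest encryption might sit underneath, an authenticated application's queries come back as plaintext.

```python
import sqlite3

# The data tier: imagine the file backing this database is encrypted at
# rest. That encryption is invisible to any connected, authorized client.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (name TEXT, ssn TEXT)")
db.execute("INSERT INTO customers VALUES ('Ada', '078-05-1120')")

# The application tier: a compromised app runs the same queries a
# legitimate one does, and the database decrypts and answers as usual.
rows = db.execute("SELECT name, ssn FROM customers").fetchall()
# rows == [('Ada', '078-05-1120')] regardless of any at-rest encryption
```

SQLite here is only a stand-in; the same applies to Struts talking to any relational backend over an authenticated connection.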

Note that there is a commonly-applied technique that applies at-rest encryption at the application tier. I don't want to confuse the conversation with too much detail, but it usually involves inserting some code into the application to encrypt/decrypt. I suspect that if the application is compromised then app-tier encryption would have been equally unhelpful.

The bottom line here is that information security requires a broad, layered defense strategy. There are numerous types of attacks. A strong security program addresses as many potential attack vectors as possible within reason. (My use of "within reason" is a whole other conversation. Security strategies should evaluate risk in terms of likelihood of an attack and the damage that could be caused.) I already wrote about a layered approach to data protection within the database tier. But that same approach of layering security applies to application security (and information security in general). You have to govern the access controls, ensure strong enough authentication, understand user context, identify anomalous behavior, encrypt data, and, of course, patch your software and maintain your infrastructure. This isn't a scientific analysis. I'm just saying that encryption isn't a panacea and probably wouldn't have helped at all in this case.

Equifax says that their "security organization was aware of this vulnerability at that time, and took efforts to identify and to patch any vulnerable systems in the company's IT infrastructure." Clearly, humans need to rely on technology to help identify what systems exist in the environment, what software is installed, which versions, etc. I have no idea what tools Equifax might have used to scan their environment. Maybe the tool failed to find this install. But their use of "at that time" bothers me too. We can't rely on point-in-time assessments. We need continuous evaluations on a never ending cycle. We need better intelligence around our IT infrastructures. And as more workloads move to cloud, we need a unified approach to IT configuration compliance that works across company data centers and multi-cloud environments.

100% protection may be impossible. The best we can do is weigh the risks and apply as much security as possible to mitigate those risks. We should also all be moving to a continuous compliance model where we are actively assessing and reassessing security in real time. And again... layer, layer, layer.

Layered Database Security in the age of Data Breaches

We live in a time of daily breach notifications. One recently affected organization in Germany put out a statement which said: "The incident is not attributable to security deficiencies." and "Human error can also be ruled out." They went on to say that it is "virtually impossible to provide viable protection against organized, highly professional hacking attacks." It's a tough climate we find ourselves in. It just feels too hard or impossible at times. And there's some truth to that. There are way too many potential attack vectors for comfort.

Many breaches occur in ways that make it difficult to pinpoint exactly what might have prevented it. Or, the companies involved hide details about what actually happened or how. In some cases, they lie. They might claim there was some Advanced Persistent Threat on the network when in reality, it was a simple phishing attack where credentials were simply handed over.

In one recent case, a third party vendor apparently uploaded a database file to an unsecured Amazon AWS server. A media outlet covering the story called out that it was not hacking because the data was made so easily available. Numerous checkpoints come to mind that each could have prevented or lessened the damage in this scenario. I’d like to paint a picture of the numerous layers of defense that should be in place to help prevent this type of exposure.

Layer 1: Removing Production Data
The data should have been long removed from the database.
Assuming this is a non-production database (and I sure hope it is), it should have been fully masked before it was even saved as a file. Masking data means completely removing the original sensitive data and replacing it with fake data that looks and acts real. This enables safe use of the database for app development, QA, and testing. Data can be masked as it’s exported from the production database (most secure) or in a secure staging environment after the initial export. Had this step been done, the database could safely be placed on an insecure AWS server with limited security concerns because there’s no real data. An attacker could perhaps use the DB schema or other details to better formulate an attack on the production data, so I’m not recommending posting masked databases publicly, but the risk of data loss is severely limited once the data is masked.
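A toy version of that export-time masking step, with made-up names and formats (real masking products handle formats, locales, and referential integrity far more rigorously):

```python
import hashlib
import random

# Minimal masking sketch: replace each sensitive value with fake data of
# the same shape. The fake names and SSN format below are invented.

FAKE_NAMES = ["Alex Doe", "Sam Roe", "Kim Poe"]

def mask_name(name):
    # Deterministic: the same input always maps to the same fake value,
    # which keeps joins across masked tables consistent.
    digest = int(hashlib.sha256(name.encode()).hexdigest(), 16)
    return FAKE_NAMES[digest % len(FAKE_NAMES)]

def mask_ssn(_ssn):
    # The 900 range is not issued as a real SSN, so collisions with
    # genuine numbers are avoided.
    return "900-%02d-%04d" % (random.randint(0, 99), random.randint(0, 9999))

row = {"name": "Grace Hopper", "ssn": "219-09-9999"}
masked = {"name": mask_name(row["name"]), "ssn": mask_ssn(row["ssn"])}
# The original values are gone; the masked row still "looks real".
```

Run at export time, this is what makes the resulting file safe to hand to development, QA, or a third-party vendor.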

Layer 2: Secure Cloud Server Configuration
The researcher should never have been able to get to the file.
A security researcher poking around the web should never have been able to access this database file. Proper server configuration and access controls should prevent unauthorized access to any files (including databases). In addition to documenting proper security configuration, certain Cloud Security Access Brokers can be used to continuously monitor AWS instances to ensure that server configurations match the corporate guidelines. Any instances of configuration drift can be auto-remediated with these solutions to ensure that humans don’t accidentally misconfigure servers or miss security settings in the course of daily administration.
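The drift check itself can be sketched generically; the setting names and baseline below are invented for illustration and match no particular vendor's schema:

```python
# Sketch of the configuration-drift check a CASB might run on a cloud
# server: compare live settings to a corporate baseline and remediate.
# Setting names here are illustrative, not any vendor's actual schema.

BASELINE = {"public_read": False, "encryption": "AES256", "logging": True}

def remediate(live_config):
    """Return the settings that drifted, and reset them to the baseline."""
    drift = {k: v for k, v in live_config.items() if BASELINE.get(k) != v}
    live_config.update({k: BASELINE[k] for k in drift})
    return drift

server = {"public_read": True, "encryption": "AES256", "logging": False}
drifted = remediate(server)
# drifted records what was wrong; server now matches the baseline again.
```

Running such a check continuously, rather than at audit time, is what catches the accidental misconfigurations described above before a researcher (or attacker) does.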

Layer 3: Apply Database Encryption
Even with access to the database file, the researcher should not have been able to access the data.
At-rest data encryption that is built into the database protects sensitive data against this type of scenario. Even if someone has the database file, if it were encrypted, the file would essentially be useless. An attacker would have to implement an advanced crypto attack which would take enormous resources and time to conduct and is, for all intents and purposes, impractical. Encryption is a no-brainer. Some organizations use disk-layer encryption, which is OK in the event of lost or stolen disk. However, if a database file is moved to an unencrypted volume, it is no longer protected. In-database encryption improves security because the security stays with the file regardless of where it’s moved or exported. The data remains encrypted and inaccessible without the proper encryption keys regardless of where the database file is moved.

Layer 4: Apply Database Administrative Controls
Even with administrative permissions to the database, the researcher should not have been able to access the sensitive data.
I’m not aware of similar capabilities outside of Oracle database, but Oracle Database Vault would have also prevented this breach by implementing access controls within the database. Database Vault effectively segregates roles (enforces Separation of Duties) so that even an attacker with DBA permissions and access to the database file and encryption keys cannot run queries against the sensitive application data within the database because their role does not allow it. This role-based access, enforced within the database, is an extremely effective control to avoid accidental access that may occur throughout the course of daily database administration.

Layer 5: Protect Data Within the Database
Even with full authorization to application data, highly sensitive fields should be protected within the database.
Assuming all of the other layers break down and you have full access to the unencrypted database file and credentials that are authorized to access the sensitive application data, certain highly sensitive fields should be protected via application-tier encryption. Social Security Numbers and Passwords, for example, shouldn’t be stored in plain text. By applying protection for these fields at the app layer, even fully authorized users wouldn’t have access. We all know that passwords should be hashed so that the password field is only useful to the individual user who enters their correct password. But other fields, like SSN, can be encrypted at the app layer to protect against accidental exposure (human error), intentional insider attack, or exposed credentials (perhaps via phishing attack).
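The password-hashing half of that advice can be sketched with Python's standard library. The PBKDF2 parameters here are illustrative; reversible protection for fields like SSN would need a separately managed encryption key and is not shown.

```python
import hashlib
import hmac
import os

# App-tier password protection: salted, iterated hashing means the stored
# value is useful only to a user who supplies the matching password.

def hash_password(password, salt=None, rounds=100_000):
    """Return (salt, digest); the plaintext password is never stored."""
    salt = salt if salt is not None else os.urandom(16)
    return salt, hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)

def verify_password(password, salt, stored, rounds=100_000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return hmac.compare_digest(candidate, stored)  # constant-time compare

salt, stored = hash_password("correct horse battery staple")
ok = verify_password("correct horse battery staple", salt, stored)   # True
bad = verify_password("guess", salt, stored)                         # False
```

Even a fully authorized user who dumps this table gets salts and digests, not credentials, which is exactly the failure mode this layer is meant to contain.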

Maybe the vendor didn’t follow the proper protocols instituted by the organization. Maybe they made a human error; we all make mistakes. But, that’s why a layered approach to database security is critical on any database instances where sensitive production data resides. Security protocols shouldn’t require humans to make the right decisions. They should apply security best practices by default and without option.

Assuming this was a non-production database, any sensitive data should have been fully masked/replaced before it was even made available. And, if it was a production DB, database encryption and access control protections that stay with the database during export or if the database file is moved away from an encrypted volume should have been applied. The data should have been protected before the vendor's analyst ever got his/her hands on it. Oracle Database Vault would have prevented even a DBA-type user from being able to access the sensitive user data that was exposed here. These are not new technologies; they’ve been around for many years with plentiful documentation and industry awareness.

Unfortunately, a few of the early comments I read on this particular event were declarations or warnings about how this proves that cloud is less secure than on-premises deployments. I don’t agree. Many cloud services are configured with security by default and offer far more protection than company-owned data centers. Companies should seek cloud services that enable security by default and that offer layered security controls; more security than their own data centers. It’s more than selecting the right Cloud Service Provider. You also need to choose the right service; one that matches the specific needs (including security needs) of your current project. The top CSPs offer multiple IaaS and/or PaaS options that may meet the basic project requirements. While cloud computing grew popular because it’s easy and low cost, ease-of-use and cost are not always the most important factors when choosing the right cloud service. When sensitive data is involved, security needs to be weighed heavily when making service decisions.

I'll leave you with this. Today's computing landscape is extremely complex and constantly changing. But security controls are evolving to address what has been called the extended enterprise (which includes cloud computing and user mobility among other characteristics). Don't leave security in the hands of humans. And apply security in layers to cover as many potential attack vectors as possible. Enable security by default and apply automated checks to ensure that security configuration guidelines are being followed.

Note: Some of the content above is based on my understanding of Oracle security products (encryption, masking, CASB, etc.) Specific techniques or advantages mentioned may not apply to other vendors’ similar solutions.

Will you pay $300 and allow scamsters remote control of your computer? Child's play for this BPO

Microsoft customers in Arizona were scammed by a BPO set up by fraudsters whose executives represented themselves as Microsoft employees and managed to convince them that, for a $300 charge, they would enhance the performance of their desktop computers.

Once a customer signed up, a BPO technician logged on using remote-access software that provided full control over the desktop and proceeded to delete trash and cache files, sometimes scanning for personal information along the way. The unsuspecting customer ended up with a marginal improvement in performance. After one year of operation, the Indian police nabbed the three men behind the operation and eleven of their employees.

There were several aspects to this case, “Pune BPO which cheated Microsoft clients in the US busted,” that I found interesting:

1)    The ease with which customers were convinced to part with money and allow an unknown third party to take remote control of their computers. With remote control, one can also install malicious files that act as a remote backdoor or spyware, leaving the machine vulnerable.
2)    The criminals had in their possession a list of 1 million Microsoft customers with updated contact information.
3)    The good fortune that the Indian government is unsympathetic to cybercrime both within and outside its shores, which resulted in the arrests. In certain other countries, crimes like these continue unhindered.

Cybercitizens should ensure that they do not surrender remote access to their computers or install software unless they come from trusted sources.


Deep Data Governance

One of the first things to catch my eye this week at RSA was a press release by STEALTHbits on their latest Data Governance release. They're a long time player in DG and as a former employee, I know them fairly well. And where they're taking DG is pretty interesting.

The company has recently merged its enterprise Data (files/folders) Access Governance technology with its DLP-like ability to locate sensitive information. The combined solution enables you to locate servers, identify file shares, assess share and folder permissions, lock down access, review file content to identify sensitive information, monitor activity to look for suspicious activity, and provide an audit trail of access to high-risk content.

The STEALTHbits solution is pragmatic because you can tune where it looks, how deep it crawls, where you want content scanning, where you want monitoring, etc. I believe the solution is unique in the market and a number of IAM vendors agree having chosen STEALTHbits as a partner of choice for gathering Data Governance information into their Enterprise Access Governance solutions.

Learn more at the STEALTHbits website.

IAM for the Third Platform

As more people are using the phrase "third platform", I'll assume it needs no introduction or explanation. The mobile workforce has been mobile for a few years now. And most organizations have moved critical services to cloud-based offerings. It's not a prediction, it's here.

The two big components of the third platform are mobile and cloud. I'll talk about both.

Mobile

A few months back, I posed the question "Is MAM Identity and Access Management's next big thing?" and since I did, it's become clear to me that the answer is a resounding YES!

Today, I came across a blog entry explaining why Android devices are a security nightmare for companies. The pain is easy to see. OS Updates and Security Patches are slow to arrive and user behavior is, well... questionable. So organizations should be concerned about how their data and applications are being accessed across this sea of devices and applications. As we know, locking down the data is not an option. In the extended enterprise, people need access to data from wherever they are on whatever device they're using. So, the challenge is to control the flow of information and restrict it to proper use.

So, here's a question: is MDM the right approach to controlling access for mobile users? Do you really want to stand up a new technology silo that manages end-user devices? Is that even practical? I think certain technologies live a short life because they quickly get passed over by something new and better (think electric typewriters). MDM is one of those. Although it's still fairly new and good at what it does, I would make the claim that MDM is antiquated technology. In a BYOD world, people don't want to turn control of their devices over to their employers. The age of enterprises controlling devices went out the window with Blackberry's market share.

Containerization is where it's at. With App Containerization, organizations create a secure virtual workspace on mobile devices that enables corporate-approved apps to access, use, edit, and share corporate data while protecting that data from escape to unapproved apps, personal email, OS malware, and other on-device leakage points. For enterprise use-case scenarios, this just makes more sense than MDM. And many of the top MDM vendors have validated the approach by announcing MAM offerings. Still, these solutions maintain a technology silo specific to remote access which doesn't make much sense to me.

As an alternate approach, let's build MAM capabilities directly into the existing Access Management platform. Access Management for the third platform must accommodate for mobile device use-cases. There's no reason to have to manage mobile device access differently than desktop access. It's the same applications, the same data, and the same business policies. User provisioning workflows should accommodate for provisioning mobile apps and data rights just like they've been extended to provision Privileged Account rights. You don't want or need separate silos.

Cloud

The same can be said, for cloud-hosted apps. Cloud apps are simply part of the extended enterprise and should also be managed via the enterprise Access Management platform.

There's been a lot of buzz in the IAM industry about managing access (and providing SSO) to cloud services. A number of niche vendors have even popped up that provide exactly that as their primary value proposition. But the core technologies behind these stand-alone solutions are nothing new. In most cases, it's basic federation. In others, it's ESSO-style form-fill. There's no magic to delivering SSO to SaaS apps. In fact, it's typically easier than SSO to enterprise apps because SaaS infrastructures are newer and support newer standards and protocols (SAML, REST, etc.).
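To make "basic federation" concrete, here's a sketch of the SP-initiated SAML 2.0 flow: the service provider builds an AuthnRequest and redirects the browser to the enterprise IdP using the HTTP-Redirect binding (DEFLATE, then base64, then URL-encode). The entity IDs and URLs are hypothetical, and a real deployment would sign requests and validate signed responses:

```python
# Sketch of SP-initiated SAML federation using the HTTP-Redirect binding.
# Entity IDs and URLs are made up; production deployments sign these messages.
import base64
import datetime
import urllib.parse
import uuid
import zlib

def build_redirect_url(idp_sso_url: str, sp_entity_id: str, acs_url: str) -> str:
    issue_instant = datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ")
    authn_request = (
        f'<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
        f'ID="_{uuid.uuid4().hex}" Version="2.0" IssueInstant="{issue_instant}" '
        f'AssertionConsumerServiceURL="{acs_url}">'
        f'<saml:Issuer xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">'
        f'{sp_entity_id}</saml:Issuer></samlp:AuthnRequest>'
    )
    # HTTP-Redirect binding: raw DEFLATE (strip zlib header/checksum), base64, URL-encode
    deflated = zlib.compress(authn_request.encode())[2:-4]
    saml_request = base64.b64encode(deflated).decode()
    return idp_sso_url + "?" + urllib.parse.urlencode({"SAMLRequest": saml_request})

url = build_redirect_url("https://idp.example.com/sso",
                         "https://app.example.com/metadata",
                         "https://app.example.com/acs")
print(url)
```

The IdP authenticates the user (or recognizes an existing session) and posts a signed assertion back to the ACS URL; that round trip is the whole SSO "magic."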

My Point

I guess if I had to boil this down, I'm really just trying to dispel the myths about mobile and cloud solutions. When you get past the marketing jargon, we're still talking about Access Management and Identity Governance. Some of the new technologies are pretty cool (containerization solves some interesting, complex problems related to BYOD). But in the end, I'd want to manage enterprise access in one place with one platform. One Identity, One Platform. I wouldn't stand up an IDaaS solution just to have SSO to cloud apps. And I wouldn't want to introduce an MDM vendor to control access from mobile devices.

The third platform simply extends the enterprise beyond the firewall. The concept isn't new and the technologies are mostly the same. As more services adopt common protocols, it gets even easier to support increasingly complex use cases. An API Gateway, for example, allows a mobile app to access legacy mainframe data over REST protocols. And modern Web Access Management (WAM) solutions perform device fingerprinting to increase assurance and reduce risk while delivering an SSO experience. Mobile Security SDKs enable organizations to build their own apps with native security that's integrated with the enterprise WAM solution (this is especially valuable for consumer-facing apps).

And all of this should be delivered on a single platform for Enterprise Access Management. That's third-platform IAM.
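The API Gateway example above boils down to translating a REST call into a legacy back-end transaction. A toy sketch of that pattern, with the "mainframe" simulated by an in-memory lookup (endpoint shape and record data are invented for illustration; a real gateway would also authenticate the caller and apply device-fingerprint risk checks):

```python
# Toy API-gateway sketch: a REST endpoint fronting a simulated legacy back end.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

LEGACY_RECORDS = {"1001": {"name": "ACME Corp", "balance": "1,250.00"}}  # stand-in data

def legacy_lookup(account_id: str):
    """Stand-in for a mainframe transaction (e.g., a CICS call)."""
    return LEGACY_RECORDS.get(account_id)

class GatewayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Map the REST path /accounts/<id> onto the legacy lookup
        parts = self.path.strip("/").split("/")
        record = (legacy_lookup(parts[1])
                  if len(parts) == 2 and parts[0] == "accounts" else None)
        status, body = (200, record) if record else (404, {"error": "not found"})
        payload = json.dumps(body).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

# To serve: HTTPServer(("localhost", 8080), GatewayHandler).serve_forever()
```

Mobile apps then speak plain HTTPS/JSON while the gateway owns the protocol translation and the policy enforcement point.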

Virtual Directory as Database Security

I've written plenty of posts about the various use-cases for virtual directory technology over the years. But, I came across another today that I thought was pretty interesting.

Think about enterprise security from the viewpoint of the CISO. There are numerous layers of overlapping security technologies that work together to reduce risk to a point that's comfortable. Network security, endpoint security, identity management, encryption, DLP, SIEM, etc. But even when these solutions are implemented according to plan, I still see two common gaps that need to be taken more seriously.

One is control over unstructured data (file systems, SharePoint, etc.). The other is back-door access to application databases. There is a ton of sensitive information exposed through those two avenues that isn't protected by the likes of SIEM solutions or IAM suites. Even DLP solutions tend to focus on perimeter defense rather than who has access. STEALTHbits has solutions to fill the gaps for unstructured data and for Microsoft SQL Server, so I spend a fair amount of time talking to CISOs and their teams about these issues.

While reading through some IAM industry materials today, I found an interesting write-up on how Oracle is using its virtual directory technology to solve the problem for Oracle database customers. Oracle's IAM suite leverages Oracle Virtual Directory (OVD) as an integration point with an Oracle database feature called Enterprise User Security (EUS). EUS enables database access management through an enterprise LDAP directory (as opposed to managing a spaghetti mapping of users to database accounts and the associated permissions).

By placing OVD in front of EUS, you get instant LDAP-style management (and IAM integration) without a long, complicated migration process. It's a pretty compelling use case. And if you can't control direct database permissions, your application-side access controls matter a lot less. Essentially, you've locked the front door but left the back window wide open. Something to think about.
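Conceptually, the EUS model maps directory identities to shared database schemas and global roles rather than maintaining per-user database accounts. A simplified LDIF-style sketch of that mapping (attribute and class names here are illustrative inventions, not Oracle's actual EUS schema):

```ldif
# Illustrative only -- attribute and object class names are hypothetical,
# not Oracle's actual EUS directory schema.
dn: cn=hr_app_users,ou=groups,dc=example,dc=com
objectClass: groupOfNames
cn: hr_app_users
member: uid=jdoe,ou=people,dc=example,dc=com

# Enterprise role mapping: members of hr_app_users connect to the HR
# database through a shared schema and receive the HR_READ global role.
dn: cn=hr_read_mapping,ou=db_mappings,dc=example,dc=com
cn: hr_read_mapping
mappedGroup: cn=hr_app_users,ou=groups,dc=example,dc=com
targetDatabase: HRPROD
sharedSchema: APP_GLOBAL
globalRole: HR_READ
```

The payoff is that joiner/mover/leaver events handled in the directory (or by the IAM suite in front of it) immediately govern database access, with no per-database account cleanup.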

Game-Changing Sensitive Data Discovery

I've tried not to let my blog become a place where I push products made by my employer. It just doesn't feel right and I'd probably lose some portion of my audience. But I'm making an exception today because I think we have something really compelling to offer. Would you believe me if I said we have game-changing DLP data discovery?

How about a data discovery solution that costs nothing to install? No infrastructure and no licensing. How about a solution that you can point at specific locations with specific search criteria, and get results back in minutes? How about a solution that profiles file shares according to risk, so you can target your scans according to need? And if you find sensitive content, you can choose to unlock the details using credits, which are bundle-priced.

Game changing. Not because it's the first or only solution that can find sensitive data (credit card info, national ID numbers, health information, financial docs, etc.) but because it's so accessible. Because you can find those answers minutes after downloading. And you can get a sense of your problem before you pay a dime. There are even free credits to let you test the waters for a while.
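Pattern-based discovery of this kind comes down to scanning content against classifiers. A toy sketch of one classic classifier, a regex pre-filter plus a Luhn checksum for card-number-like strings (a generic illustration of the technique, not StealthSEEK's actual detection engine):

```python
# Toy sensitive-data classifier: regex pre-filter for card-like digit runs,
# then a Luhn checksum to cut false positives. Generic illustration only.
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(digits: str) -> bool:
    """Standard Luhn checksum used by payment card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str):
    hits = []
    for match in CARD_PATTERN.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits

print(find_card_numbers("order ref 4111 1111 1111 1111, ticket 1234567890123"))
# → ['4111111111111111']  (the ticket number fails the Luhn check)
```

Production scanners layer many such classifiers, add proximity and context rules, and parse binary formats, but the regex-plus-validator core is the same.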

But don't take our word for it. Here are a few of my favorite quotes from early adopters: 
“You seem to have some pretty smart people there, because this stuff really works like magic!”

"StealthSEEK is a million times better than [competitor]."

"We're scanning a million files per day with no noticeable performance impacts."

"I love this thing."

StealthSEEK has already found numerous examples of system credentials, health information, financial docs, and other sensitive information that organizations didn't know they had.

If I've piqued your interest, give StealthSEEK a chance to find sensitive data in your environment. I'd love to hear what you think. If you can give me an interesting use-case, I can probably smuggle you a few extra free credits. Let me know.



Data Protection ROI

I came across a couple of interesting articles today related to ROI around data protection. I recently wrote a whitepaper for STEALTHbits on the Cost Justification of Data Access Governance. It's often top of mind for security practitioners who know they need help but have trouble justifying the acquisition and implementation costs of related solutions. Here are today's links:

KuppingerCole -
The value of information – the reason for information security

Verizon Business Security -
Ask the Data: Do “hacktivists” do it differently?

Visit the STEALTHbits site for information on Access Governance related to unstructured data and to track down the paper on cost justification.