Category Archives: Data Security

Businesses facing post-breach financial fallout by losing customer trust

44% of Americans, 38% of Brits, 33% of Australians, and 37% of Canadians have been the victim of a data breach, according to newly released research conducted by PCI Pal. The findings suggest that a combination of recent high-profile data breaches in each region, the development of assorted laws and regulations to protect consumer data privacy (e.g. the California Consumer Privacy Act, Europe’s General Data Protection Regulation, Canada’s Personal Information Protection and Electronic Documents Act, … More

The post Businesses facing post-breach financial fallout by losing customer trust appeared first on Help Net Security.

New Breach Exposes an Entire Nation: Living and the Dead

A misconfigured database has exposed the personal data of nearly every Ecuadorian citizen, including 6.7 million children.

The database was discovered by vpnMentor and was traced back to the Ecuadorian company Novaestrat. It contained 20.8 million records, well over the country’s current population of 16 million. The data included official government ID numbers, phone numbers, family records, birthdates, death dates (where applicable), marriage dates, education histories, and work records.

“One of the most concerning parts about this data breach is that it includes detailed information about people’s family members,” stated a blog from vpnMentor announcing the discovery of the leak. “Most concerningly, the leaked data seems to include national identification numbers and unique taxpayer numbers. This puts people at risk of identity theft and financial fraud.”

The leaked data also included financial information for individuals and businesses including bank account status, account balance, credit type, job details, car models, and car license plates.

“The information in both indexes would be as valuable as gold in the hands of criminal gangs,” wrote ZDNet reporter Catalin Cimpanu. “Crooks would be able to target the country’s most wealthy citizens (based on their financial records) and steal expensive cars (having access to car owners’ home addresses and license plate numbers).” 

The exposed database was on a server running Elasticsearch, a search and analytics engine that enables users to query large amounts of data. Elasticsearch has been involved in several high-profile data leaks, mostly due to configuration mistakes. Other recent Elasticsearch leaks include a Canadian data mining firm’s records for 57 million US citizens, a medical database storing the data of 85 percent of Panamanian citizens, and a provincial Chinese government database that contained 90 million personal and business records.
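
The common thread in these incidents is an Elasticsearch node reachable from the internet with no authentication in front of it. As a rough, hypothetical illustration of how trivially such exposure can be confirmed (this is not the researchers’ methodology, and the host address below is a documentation placeholder), here is a short Python sketch using the requests library:

    import requests

    HOST = "http://203.0.113.10:9200"  # hypothetical node; 9200 is Elasticsearch's default REST port

    def check_exposure(host: str) -> None:
        # An unsecured node answers its root endpoint and lists its
        # indices without ever asking for credentials.
        try:
            root = requests.get(host, timeout=5)
            if root.status_code == 200:
                print("Answered without auth; cluster:", root.json().get("cluster_name"))
                for idx in requests.get(f"{host}/_cat/indices?format=json", timeout=5).json():
                    print("exposed index:", idx["index"], "docs:", idx["docs.count"])
            elif root.status_code == 401:
                print("Good: node requires authentication.")
        except requests.RequestException as err:
            print("Node unreachable:", err)

    check_exposure(HOST)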

The post New Breach Exposes an Entire Nation: Living and the Dead appeared first on Adam Levin.

What Does GDPR Mean for Your Organization?

GDPR, or the General Data Protection Regulation, is a law that the European Union has enforced since May 25, 2018. Its goal is to update the Data Protection Directive of 1995, which was enacted before the widespread use of the internet drastically changed the way data is collected, transmitted, and used.

Another key component of the GDPR is updated rules for protecting sensitive personal information. It emphasizes the need to protect any and all collected data.

At its core, the regulation aims to simplify, update, and unify the protection of personal data.

Why Does GDPR Matter to You?

The main changes from the GDPR mean that companies can no longer be lax about personal data security. In the past, they could get away with simple tick-box exercises to achieve compliance. This is no longer the case.

Here are the top points to consider regarding the General Data Protection Regulation.

  1. A company does not have to be based in the EU to be covered by the GDPR. As long as it collects and uses personal data from EU citizens, it must adhere to the regulation.
  2. The fines for violating the GDPR are huge. Serious infringements, such as processing customer data without valid consent, can net the violating company a fine of 4% of its annual global turnover or €20 million, whichever is greater.
  3. The definition of personal data has become wider and now includes items such as IP addresses and mobile device identifiers.
  4. Individuals now have more rights over the use of their personal data. Companies can no longer bury consent in long-winded terms and conditions; requests for consent must be clear, and consent must be explicit.
  5. The GDPR makes technical and organizational measures for protecting personal data mandatory. Companies now need to hash and encrypt personal data in order to protect it (a minimal sketch follows this list).
  6. Registries of data processing activities are now mandatory as well. Organizations need to keep a written (electronic) record of all the activities they perform on personal data, capturing the lifecycle of data processing.
  7. Data protection impact assessments will now be required for high-risk processing, such as data profiling.
  8. Reporting data breaches is now mandatory. Organizations have a maximum of 72 hours to report a breach that places personal data at risk; if the breach poses a high risk to individuals, those individuals must also be notified without undue delay.
  9. If an organization processes a large amount of data, it will be required to appoint a Data Protection Officer, who monitors compliance with the regulation and reports directly to the highest level of management.
  10. Above all, the GDPR requires data protection by design and by default.
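
To make point 5 concrete, here is a minimal Python sketch of pseudonymizing one field of a customer record before storage. The field names and salt handling are illustrative assumptions, not a compliance recipe; a real deployment would pair this with encryption at rest and documented key management:

    import hashlib
    import os

    def pseudonymize(value: str, salt: bytes) -> str:
        # A salted hash replaces the direct identifier, so the stored
        # record no longer names the person, while equal inputs still
        # map to the same token for matching and deduplication.
        return hashlib.sha256(salt + value.encode("utf-8")).hexdigest()

    salt = os.urandom(16)  # in practice, stored and managed as a secret

    record = {"email": "jane@example.com", "plan": "premium"}
    stored = {"email_token": pseudonymize(record["email"], salt), "plan": record["plan"]}
    print(stored)  # no raw email appears in the stored record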

There is no doubt that the legal and technical changes the GDPR requires at an organizational level are significant. Achieving compliance takes more than information security or legal teams alone; it takes a cross-functional GDPR task force that understands the changes and their effects on the organization’s operations, working together to meet the requirements set forth by the regulation.

Also Read,

GDPR: Non-Compliance Is Not An Option

GDPR Compliance And What You Should Know

How Will The GDPR Survive In The Jungle of Big Data?

The post What Does GDPR Mean for Your Organization? appeared first on .

Four in five businesses need ways to better secure data without slowing innovation

While data loss protection is critical to Zero Trust (ZT), fewer than one in five organizations report their data loss prevention solutions provide transformational benefits and more than 80 percent say they need a better way to secure data without slowing down innovation, according to Code42. ZT architectures are based on the principle of “trust no one, verify everything,” abolishing the idea of a trusted network within a data security perimeter and requiring companies to … More

The post Four in five businesses need ways to better secure data without slowing innovation appeared first on Help Net Security.

Prediction: 2020 election is set to be hacked, if we don’t act fast

Since 1993, hackers have traveled to Las Vegas from around the world to demonstrate their skills at DefCon’s annual convention, and every year new horrors of cyber-insecurity are revealed as they wield their craft. Last year, for example, an eleven-year-old boy changed the election results on a replica of the Florida state election website in under ten minutes.

This year was no exception. Participants revealed all sorts of clever attacks and pathetic vulnerabilities. One hack allowed a convention attendee to commandeer an iPhone with a non-Apple-issued charging cable that is visually identical to the Apple version. Another group figured out how to use a Netflix account to steal banking information. But for our purposes, let’s focus on election security, because without it democracy is imperiled. And if you think about it, what are the odds of something like DefCon being permitted in the People’s Republic of China?

Speaking of China (or Russia or North Korea or Iran or…) will the 2020 election be hacked?

In a word: Yes.

In 2016, Russia targeted election systems in all 50 states.

A CNN article about DefCon’s now-annual Voting Village described the overall problem: many election officials and key players in the election business are not sufficiently worried to anticipate, recognize, and meet the challenges ahead.

While many organizations welcome the hijinks of DefCon participants — including the Pentagon — the voting machine manufacturers don’t generally seem eager to have hackers of any stripe show them where they are vulnerable… and that should worry you.

DefCon participants are instructed to break things, and they do just that. This year, Senator Ron Wyden (D-Ore.) toured DefCon’s Voting Village, and he left with these words: “We need paper ballots, guys.”

Was the Senator right? Paper is the easiest solution, but not the only one. Because election machines have so far proven eminently breakable, we still need audited paper trails.

Paper trails are mission critical

After railing against previous findings of DefCon participants, Election Systems & Software (ES&S) CEO Tom Burt reversed his position in a Roll Call op-ed that called for paper records and mandatory machine testing in order to secure e-voting systems. It’s a welcome move as far as cybersecurity experts are concerned.

After a midterm election featuring irregularities in Georgia and North Carolina, other smaller hacks, and warnings from the likes of Special Counsel Robert Mueller, there has been no meaningful action nationwide on election security, while the specter of serious interference remains. Senate Majority Leader Mitch McConnell (R-Ky.) has steadfastly refused to allow even bipartisan election security legislation to come to the floor for a vote, much less a debate, and for that reason he and the Republican Party are blameworthy for placing politics above protecting our most cherished democratic right.

While the news cycle overheats covering every tweet or sound bite uttered by President Trump, critical issues like cybersecurity go unaddressed. This matters, given recent DefCon news of election machines connected to the internet when they shouldn’t have been, and the persistent threat of state-sponsored attacks on our democracy.

Think DARPA’s $10 million unhackable election machine proves all is well? Not quite. Bugs during the setup of the DARPA wonder machine meant that DefCon’s participants didn’t have enough time to properly break the thing. In the absence of definitive proof to the contrary, we have to assume it can be hacked.

What Now?

Instead of discussing the nation’s Voter ID laws, we need to focus on securing the vote.

It is a well-established fact that Russia attempted to interfere in the 2016 election in all 50 states, and Israel — an ally of the president — recently disclosed that the Russian government identified President Trump as the candidate most likely to benefit Russia and used cyberbots to help him win. The fact that President Trump won the election on the strength of just 80,000 votes spread across three key swing states shows how important it is to address the issue. We’re not talking about a blunderbuss approach to hacking the election here; targeted, plausible outcomes can be constructed. It’s been known to happen before.

Some experts think it may soon be too late to secure 2020 against the threat of state-sponsored hacks. I do not. But the time for delaying to score political points has passed; now is the time for action.

The post Prediction: 2020 election is set to be hacked, if we don’t act fast appeared first on Adam Levin.

Importance of Security Analytics

Security analytics is the process of collecting and aggregating data and using analytical tools to monitor for and identify threats. Depending on the tools used, the process can incorporate diverse data sets and pattern-detection algorithms. Security analytics can collect data from several points, such as:

  • Cloud sources.
  • Endpoint devices.
  • Network traffic.
  • Non-IT contextual data.
  • Business applications and software.
  • External threat intelligence.
  • Access management data.

Recent developments have also made adaptive learning techniques available, which fine-tune detection models based on experience and anomaly detection. These techniques can accumulate and analyze data in real time from:

  • Geographical location.
  • Asset metadata.
  • IP context.
  • Threat intelligence.

The data collected by the tools can then be used for immediate detection of threats or for future analysis to identify patterns and create better protocols or defenses.
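
As a toy illustration of that collect-aggregate-detect loop, the Python sketch below scans hypothetical authentication events (the log format is an assumption for the example) and flags any source IP with a burst of failed logins. Production platforms correlate far richer data and use learned models rather than a fixed rule:

    from collections import defaultdict

    # Hypothetical, simplified log records: (timestamp_seconds, source_ip, outcome)
    events = [
        (1000, "198.51.100.7", "fail"),
        (1005, "198.51.100.7", "fail"),
        (1008, "198.51.100.7", "fail"),
        (1011, "198.51.100.7", "fail"),
        (1020, "203.0.113.4", "success"),
    ]

    WINDOW = 60     # seconds
    THRESHOLD = 3   # failed attempts within the window

    def flag_bruteforce(events):
        # Aggregate failure timestamps per source, then apply a simple
        # sliding-window rule; an anomaly model would replace this rule.
        fails = defaultdict(list)
        for ts, ip, outcome in events:
            if outcome == "fail":
                fails[ip].append(ts)
        alerts = []
        for ip, stamps in sorted(fails.items()):
            stamps.sort()
            if any(sum(start <= t < start + WINDOW for t in stamps) >= THRESHOLD
                   for start in stamps):
                alerts.append(ip)
        return alerts

    print(flag_bruteforce(events))  # ['198.51.100.7']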

Security Analytics Benefits

Organizations get several key benefits when they use security analytics:

Proactive Security

Security analytics can analyze data from several different sources to identify threats and security incidents, correlating logged data with other sources to pinpoint the connections between them.

Regulatory Compliance

One of the most important aspects of security analytics is compliance. Depending on the industry, organizations that manage sensitive data are required by law to comply with security regulations. By maintaining proper analytics for threat detection, organizations can demonstrate their compliance with these regulations.

Improved Forensics

Analytics play a vital role in forensic investigations of security threats and breaches. Because security analytics collates data from different sources, personnel can use it to reconstruct what happened and repair the damage caused by a breach. It also helps in creating proactive policies to prevent similar attacks or breaches.

Use Cases of Security Analytics

There are several use cases for security analytics. These include detecting threats, improving data visibility, monitoring network traffic, and analyzing user behavior. Here are more use cases of security analytics:

  • Detect suspicious patterns from user behavior analysis.
  • Monitor employee activity.
  • Detect data exfiltration by hackers.
  • Analyze network traffic to identify potential threats.
  • Detect insider threats.
  • Identify improper account use.
  • Hunt for threats.
  • Find compromised accounts.
  • Demonstrate compliance whenever there is an audit.

Above all, the main goal of any security analytics program is to turn raw data into actionable insights that pinpoint potential threats and enable an immediate response. This adds a critical layer of security to the growing volume of data generated by users, software, applications, and networks.

Also Read,

New Hybrid Computing, Same Security Concerns

What is Network Security and its Types

Microfocus, Endace: Strong Network Analytics System To Be Developed

The post Importance of Security Analytics appeared first on .

Interacting with governments in the digital age: What do citizens think?

Most U.S. citizens acknowledge and accept that state and local government agencies share their personal data, even when it comes to personal information such as criminal records and income data, according to a new survey conducted by YouGov and sponsored by Unisys. However, the survey found they remain concerned about the security of the data. The survey of nearly 2,000 (1,986) U.S. citizens living in eight states found that more than three-quarters (77%) accept that … More

The post Interacting with governments in the digital age: What do citizens think? appeared first on Help Net Security.

How I Learned to Stop Worrying and Love Vendor Risk

Insider risk, supply chain vulnerability, and vendor risk all boil down to the same thing: the more people who have access to your data, the more vulnerable it is to being leaked or breached.

This summer brought an interesting twist to that straightforward situation: can data leaked by an employee or a contractor be a good thing?

In July, a Belgian contractor who had been hired to transcribe Google Home recordings shared several of them with the news outlet VRT. The leak revealed that customers were being recorded without their consent, oftentimes after unintentionally triggering their devices. Google’s response was immediate. They went after the contractor. (Never mind that Google had been doing something it denied doing. The leaked recordings were for research!)

“Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again,” the company said in a press release.

Translation: We’re not sorry we got caught doing whatever we want, but we are sorry we hired the wrong vendor and will try not to do that again.

An Apple contractor shared a similar story with the Guardian a short time later. Recordings taken from the company’s voice assistant Siri were also being transcribed by third-party contractors. This time the news was worse: the Apple Watch was consistently recording users without any explicit prompting. Weeks later, a contractor for Microsoft went to Vice with what at this point had become a familiar story, this time in connection with both Skype and Cortana.

Whistleblower or Data Leak?

The typical narrative is that someone with inside knowledge of a company or its technology is able to exploit it to some sort of ill purpose. The accused hacker behind the recent Capital One data breach had previously worked for Amazon Web Services and was able to exploit her knowledge of a common firewall misconfiguration to steal customer data: more than 100 million records. Anthem and Boeing similarly suffered large-scale breaches perpetrated by insiders.

What makes the rash of recent data leaks noteworthy is that external contractors had access to data they didn’t think they should have, and they did something about it. The exception here: the leaked data was passed along to press outlets for the express purpose of protecting users. And it worked, at least in the short term. Apple and Google suspended their use of human transcribers, and Microsoft has made its privacy policy more explicit.

HR or IT?

What’s interesting here (other than the revelation that just about every major IoT speech-recognition product on the market has been spying on us without telling us) is what it reveals about insider risk.

It seems increasingly apparent that risk has as much to do with a company’s HR department as it does with its cybersecurity policy. A single disgruntled employee with an axe to grind is a familiar scenario, and one that can be mitigated through careful data management, but widespread unhappiness with a company’s ethical practices is significantly more difficult to manage. It brings to mind Google’s now-defunct company motto: Don’t be evil. Or rather: be nicer, and make yourself less of a target.

Google has had to contend with internal protests over everything from its involvement with Chinese censorship to its work with U.S. border and immigration agencies. Both Amazon and Microsoft experienced similar unrest among employees over their contracts with ICE. While none of these has yet led to a large-scale data breach, the fact that potentially thousands of employees and contractors have both access to sensitive information and a motive to leak it is a matter of serious concern.

The new law of the cyber jungle: Widespread disapproval exponentially increases one’s attackable surface.

While employee whistleblowers are nothing new (just ask Enron or Big Tobacco), this is semi-terra incognita in our era of massive data breaches. We’re used to thinking of any kind of data breach or data leak as a bad thing, and it usually is. But there is a grey area when companies are not playing by the rules in an environment where people are highly motivated to call them out for bad behavior.

What’s the Takeaway?

From a strictly technical perspective, even a well-intentioned data leak has the unfortunate side effect of showing where in the supply chain companies are most vulnerable. If hackers weren’t aware that organizations were entrusting intimate customer data to external contractors, they most certainly know it now.

The post How I Learned to Stop Worrying and Love Vendor Risk appeared first on Adam Levin.

Don’t Trade Convenience for Security: Protect the Provenance of your Work

I recently volunteered as an AV tech at a science communication conference in Portland, OR. There, I handled the computers of a large number of presenters, all scientists and communicators who were passionate about their topic and occasionally laissez-faire about their system security. As exacting as they were with the science, I found many didn’t […]… Read More

The post Don’t Trade Convenience for Security: Protect the Provenance of your Work appeared first on The State of Security.

More than 50% of Canadians Affected by Data Breaches

19 million Canadians are estimated to have been affected by data breaches between late 2018 and 2019, slightly more than half the population of the country. 

The news was released by the Office of the Privacy Commissioner of Canada following new mandatory breach reporting requirements under the Personal Information Protection and Electronic Documents Act (PIPEDA). Data breach reports have nearly sextupled since those requirements went into effect, with 446 incidents reported between November 2018 and June 2019.

One notable exception to the PIPEDA reporting requirements is Canadian political parties, which are not required to report data breaches despite often compiling large amounts of data on voters.

Hacking or “internal bad actors” account for the majority of the data breaches reported, with unintentional data leaks and the loss or theft of equipment comprising the bulk of the remainder.

Read more here.

The post More than 50% of Canadians Affected by Data Breaches appeared first on Adam Levin.

Expect More Spam Calls and SIM-Card Scams: 400 Million Phone Numbers Exposed

As much as I love this one friend of mine, nothing is private when we’re together. You probably have a friend like this. The relationship is really great, so you stay friends despite it all, but this particular friend simply cannot know something about you without sharing it with others, no matter how hard you try to get them to understand that it’s totally uncool.

Facebook Is an Open Book

They did it again this week with news that 419 million records, including phone numbers and user IDs, were scraped from Facebook and stored in a database that was just sitting online, accessible to anyone who might like to peruse it. More than 130 million of those compromised by the discovery were American users. Another 18 million were UK users. A whopping 50 million hailed from Vietnam.

Facebook later claimed that only about half that number, some 220 million records, were affected.

The information is at least a year old, dating from before Facebook stopped allowing developers access to user phone numbers. So, we can call this a Facebook privacy facepalm legacy attack. It’s a sad state of Facebook privacy news fatigue that the urge is so strong to create privacy-fail sub-categories—but there you have it. Introducing the legacy fail.

Why It Matters

Some of the information out there was granular enough to enable a variety of scams, but the most serious is the SIM-swap scam, in which a criminal, armed with enough information about you (most crucially your phone number), arranges to have your number moved to a phone in the criminal’s possession.

Once the number has been transferred, the criminal has control of any account that identifies you by your phone number (including accounts at many financial institutions) as well as any account protected by SMS-based two-factor authentication. It is believed this was the method recently used to hack Jack Dorsey’s Twitter account.

What You Can Do

Assume that you are a target and tighten your protections. Your phone provider will have tips on best practices to avoid SIM-swap attacks, and common sense can be your guide with any unexpected phone call. Then practice the Three Ms:

Minimize your exposure. Don’t authenticate yourself to anyone unless you are in control of the interaction, don’t over-share on social media, be a good steward of your passwords, safeguard any documents that can be used to hijack your identity, and freeze your credit.

Monitor your accounts. Check your credit report religiously, keep track of your credit score, and review major accounts daily if possible. (You can check two of your credit scores for free every month on Credit.com.)

Manage the damage. Make sure you get on top of any incursion into your identity quickly, and/or enroll in a program where professionals help you navigate and resolve identity compromises. Such programs are oftentimes available for free, or at minimal cost, through insurance companies, financial services institutions, and employers.

The post Expect More Spam Calls and SIM-Card Scams: 400 Million Phone Numbers Exposed appeared first on Adam Levin.

Defining the Principle of Least Privilege (POLP)

The Principle of Least Privilege, or POLP, is the idea that any user, program, or process should be granted only the bare minimum of privileges needed to perform its function. For example, a new user created for the purpose of pulling records from a database does not need administrative privileges, while a programmer who updates lines of legacy code does not need access to financial records. POLP is also known as the Principle of Least Authority (POLA) and the Principle of Minimal Privilege (POMP).

Following POLP is considered best practice for information security.

How It Works

POLP works by granting just enough access to perform a specific task. Within an IT environment, this reduces the risk of a malicious attack reaching critical systems or sensitive data through a compromised low-level account, device, or application. Implementing the Principle of Least Privilege contains a compromise to its area of origin, stopping it from spreading.

Examples of Principle of Least Privilege (POLP)

The Principle of Least Privilege is applicable on every level of a system, including end users, devices, processes, networks, applications, systems, and all other facets of the IT environment. Here are examples of how POLP can work in practice.

User Accounts With POLP

An employee tasked with entering information into a database requires access to that specific database only. If malware infects the employee’s device, the infection is limited to that one database, because the employee has no access to other databases or systems.

MySQL Account With POLP

A MySQL deployment can apply POLP by employing several different accounts, each for a unique task. An online form that allows users to sort data should use an account with sorting privileges only. That way, if an attacker gains access, they hold just that one privilege. If the account also had the ability to delete records, for example, the attacker would be able to wipe out the entire database.
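
A minimal sketch of that setup, provisioning the form’s read-only account through Python’s mysql-connector package (the connection details, database, and account names are hypothetical):

    import mysql.connector

    # Hypothetical admin connection, used only to provision the account.
    conn = mysql.connector.connect(host="localhost", user="admin", password="...")
    cur = conn.cursor()

    # The web form's account gets SELECT on one table and nothing else.
    cur.execute("CREATE USER IF NOT EXISTS 'form_reader'@'%' IDENTIFIED BY 'S3parate!Pw'")
    cur.execute("GRANT SELECT ON catalog.products TO 'form_reader'@'%'")

    # Because no UPDATE, DELETE, or DROP is granted, a hijacked form
    # session can read the one table it serves but cannot wipe data.
    conn.commit()
    cur.close()
    conn.close()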

“Just in Time” Least Privilege

A user who rarely needs root privileges should be granted them only while working on a specific task; otherwise, those privileges should be pulled. Disposable, expiring credentials are a great way to implement POLP and increase security.
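
One way to picture such disposable credentials is a grant that carries its own expiry, as in this illustrative Python sketch (the class and its fields are assumptions for the example, not any particular product’s API):

    import time

    class TemporaryGrant:
        """An elevated privilege that lapses on its own."""

        def __init__(self, user: str, privilege: str, ttl_seconds: int):
            self.user = user
            self.privilege = privilege
            self.expires_at = time.time() + ttl_seconds

        def is_active(self) -> bool:
            # Callers re-check before every privileged action, so an
            # expired grant is useless even if nobody revokes it.
            return time.time() < self.expires_at

    grant = TemporaryGrant("maria", "root", ttl_seconds=900)  # 15-minute window
    if grant.is_active():
        print(f"{grant.user} may act as {grant.privilege} for now")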

POLP Benefits

POLP exists to enhance security, and it carries many benefits:

  • Enhanced Security – Edward Snowden was able to access and take millions of NSA files because he had administrator privileges, even though his task was simply to create backups. Ever since, the NSA has implemented POLP.
  • Limit Malware Attacks – If a system or device is infected by malware, POLP contains it to the point of original infection and prevents it from spreading throughout the network.
  • Improve Audits – The scope of an audit is dramatically reduced when POLP is in effect. On top of that, several regulations actually require companies to abide by this principle.
  • Improved Stability – The Principle of Least Privilege increases system stability by limiting the effects of changes.

POLP Best Practices

  1. Do a privilege audit – Check all current accounts, programs, and processes to confirm they have only the privileges they need (see the sketch after this list).
  2. Create accounts with least privilege – By default, new accounts should be created with the least possible privileges, with higher privileges granted later as needed.
  3. Separate privileges – Keep administrative accounts separate from standard ones, and higher-level system functions separate from low-level ones.
  4. Use “just in time” privilege – Where possible, restrict elevated privileges to moments of need only.
  5. Trace individual actions – Automatic auditing can simplify tracking and the mitigation of damage.
  6. Regularize – Privilege audits should be done regularly to prevent old user accounts and processes from accumulating privileges they do not need.
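
For point 1, a privilege audit can start as a simple comparison between what each account holds and what its role requires. A schematic Python sketch with made-up accounts and privileges:

    # Hypothetical inventory: account -> privileges actually granted.
    granted = {
        "backup_svc": {"read_files", "admin"},     # "admin" is suspicious here
        "dba_maria": {"read_db", "write_db"},
    }

    # What each account's role actually requires.
    required = {
        "backup_svc": {"read_files"},
        "dba_maria": {"read_db", "write_db"},
    }

    for account, privs in granted.items():
        excess = privs - required.get(account, set())
        if excess:
            # Each finding is a candidate for revocation or for a
            # documented, time-limited exception.
            print(f"{account}: excess privileges {sorted(excess)}")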

Also Read,

API Security, Developers And Users Responsibility

5 Informative Security Podcasts to Listen To

Cybersecurity In Mid-2019: Nothing To See Here, Same Problems

The post Defining the Principle of Least Privilege (POLP) appeared first on .

Voice Deepfake Scams CEO out of $243,000

The CEO of a UK-based energy firm lost the equivalent of $243,000 after falling for a phone scam that used artificial intelligence, specifically a deepfaked voice.

The Wall Street Journal reported that the CEO of an unnamed UK energy company received a phone call from what sounded like his boss, the CEO of a German parent company, telling him to wire €220,000 (roughly $243,000) to a bank account in Hungary. The target of the scam was convinced that he was speaking with his boss due to a “subtle German accent” and specific “melody” to the man’s voice and wired the money as requested. 

According to a representative of Euler Hermes Group SA, the firm’s insurance company, the CEO was targeted by a new kind of scam that used AI-enhanced technology to create an audio deepfake of his employer’s voice. While the technology to generate convincing voice recordings has been available for a few years, it remains relatively uncommon in the commission of fraud.

Security experts worry the exploit could spark a new trend. 

“[W]e’re seeing more and more artificial intelligence-based identity fraud than ever before,” said David Thomas, CEO of identity verification company Evident in an article on Threatpost. “Individuals and businesses are just now beginning to understand how important identity verification is. Especially in the new era of deep fakes, it’s no longer just enough to trust a phone call or a video file.”

Read the Wall Street Journal article here (subscription required).

The post Voice Deepfake Scams CEO out of $243,000 appeared first on Adam Levin.

If You Have to Ask How Much a Data Breach Costs, You Can’t Afford One

According to IBM Security’s 2019 Cost of a Data Breach Report, the average time to identify and contain a breach was a whopping 279 days, and it took even longer to discover and deal with a malicious attack. The average cost of an incident was $3.9 million, and the average cost per record, $150.

A malicious hacker can do serious damage to an organization. Breaches are not a cheap date. Capital One estimated the first-year cost of its recent breach at $100-150 million. Add to that figure the aggregate cost to as many as 30 other companies that suspected hacker Paige Thompson may have hit, and it should be abundantly clear that the damage that can be racked up by just one sociopath is astounding. Equifax was recently ordered to pay $700 million in damages for its megabreach, a figure many derided as a wrist slap.

By now, it shouldn’t be news that the probability of a breach or data compromise hitting your company, or one you do business with, is right up there with two more familiar likelihoods; namely, death and taxes. Likewise, the particular cause of a data breach or compromise is about as predictable as our individual approaches to death and taxes.

You need look no further than very recent news to illustrate the point.

South Korea-based Suprema sells a security tool used by organizations worldwide, including law enforcement, to control access in high-security environments. It’s called Biostar 2, and it failed, leaking fingerprints, photographs, facial recognition data, names, addresses, passwords, and employment history records. Reports say 23 gigabytes of data containing 30 million records were in the wind, including data used by London’s Metropolitan Police, Power World Gyms, Global Village, and Adecco Staffing. The cause: human error. The cost here is twofold. Fingerprints in the wind stay in the wind; they can’t be changed. There is no way to put a price on that, but at $150 per record, we might spitball and put it around $4.5 billion.

In other news, an FDNY employee flouted department data security policy by downloading data onto a personal, unencrypted hard drive that subsequently went missing. The drive contained sensitive personal information and protected health information associated with more than 10,000 people treated or taken to the hospital by the department’s EMS. It was reported that nearly 3,000 Social Security numbers were also possibly exposed. This leak “only” comes in at a potential cost of around $1.5 million using the $150-per-record estimate in the 2019 Cost of a Data Breach Report published by IBM Security. The cost of this unnecessary diversion is of course unknowable.
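
The per-record arithmetic behind both ballpark figures is simple enough to check in a few lines of Python, using the report’s $150 average (illustrative only, since real incident costs vary widely with detection time and data type):

    COST_PER_RECORD = 150  # 2019 Cost of a Data Breach Report, IBM Security

    for incident, records in [("Suprema/Biostar 2", 30_000_000), ("FDNY drive", 10_000)]:
        print(f"{incident}: ${records * COST_PER_RECORD:,}")
    # Suprema/Biostar 2: $4,500,000,000
    # FDNY drive: $1,500,000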

Another all-too-familiar way companies get got is by proxy. Choice Hotels recently reported the compromise of 700,000 guest records, which were exposed when a vendor copied their data. The mismanaged data was subsequently discovered by a hacker and held for ransom, a demand the hotel reportedly ignored; since the data had only been copied from a server still controlled by the company, there was nothing to ransom. Ironically, the data had been on the vendor’s server to test a “security offering.” (That said, ransomware continues to be a very real threat, and it relies for the most part on employee error.)

Honda had a compromised database with more than 134 million records, and the Electronic Entertainment Expo, or E3 as it is popularly known, leaked press badge information that included names, phone numbers, and home addresses of attendees. And do you know what these entities, as well as all of the aforementioned organizations, did not do? They didn’t do cyber right.

We all need to listen to the wisdom of The Office’s Dwight Schrute, who said, “Whenever I’m about to do something, I think, ‘Would an idiot do that?’ And if they would, I do not do that thing.” True, that’s easier said than done, and Schrute is fictionalized proof of that. Human error is not the only threat to a company, but it is the most persistent one. Many of the hit parade of hacks were avoidable, but without an organizational culture predicated on staying safe, it’s hard to make much progress in the war against stupid mistakes.

Data breaches and compromises are expensive, result in an enormous amount of collateral damage to everyday life, and are more common than relationship bickering. As with love spats and their aftermaths, there is always room for improvement. While it is folly to believe that any company can be made 100% hack- or leak-proof, companies can become harder-to-hit targets. Security can be baked into all processes, from onboarding to new product launches to the storing of key data. Breaches are more avoidable than one might be led to believe, but avoiding them requires a sea change in attitude and, more importantly, a complete change in the way everything digital is done, with security always foremost in any given process.

The post If You Have to Ask How Much a Data Breach Costs, You Can’t Afford One appeared first on Adam Levin.

Why Is a Data Classification Policy Absolutely Important?

Today, data is a valuable commodity. Without it, company executives cannot make well-informed decisions, marketers won’t understand their market’s behavior, and people will have a hard time finding each other on social media platforms. But not all data are equal, which is why companies must have a data classification policy in place to safeguard their important and sensitive data.

What Is a Data Classification Policy?

A data classification policy is an organizational framework that guides employees on how to treat data. During the creation of a data classification policy, data categories are defined to help the company distinguish which data are considered confidential and which are considered public.

A data classification policy applies to all kinds of data acquired by the company. Both digital and written data must be inspected with equal care and classified appropriately according to the policy.

Data Classification Policy and Cyber Security

When it comes to cybersecurity and risk management against unexpected data breaches, data classification policies play an important role.

Data classification policies help rank-and-file employees, as well as C-level management, identify which sets of data must be treated with the utmost care. A well-crafted data classification policy would treat records of corporate decisions as strictly confidential, and such highly sensitive information must be secured with the strongest available encryption.

Data policies also spell out which data are considered public, personal, confidential, and sensitive. Each classification is given a different level of security under the policy, and each data set is assigned to key personnel for compilation, collection, and storage.
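
In practice, those categories often end up encoded as a small lookup that both people and systems consult. A schematic Python sketch (the labels match the article; the handling rules are illustrative assumptions):

    # Classification levels mapped to illustrative handling rules.
    HANDLING = {
        "public":       {"encrypt_at_rest": False, "access": "anyone"},
        "personal":     {"encrypt_at_rest": True,  "access": "need-to-know"},
        "confidential": {"encrypt_at_rest": True,  "access": "named roles"},
        "sensitive":    {"encrypt_at_rest": True,  "access": "named individuals, logged"},
    }

    def handling_for(label: str) -> dict:
        # Unlabeled data defaults to the most restrictive treatment,
        # the safer failure mode for a classification policy.
        return HANDLING.get(label, HANDLING["sensitive"])

    print(handling_for("personal"))
    print(handling_for("unknown"))  # falls back to the "sensitive" rules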

Because of the nature of the policy, data classification plays a supporting role in a company’s cybersecurity program, making it harder for corporate spies to retrieve valuable company data. The data classification policy must also specify where the data should be stored and who has the authority to retrieve it.

Data Classification Services

Information security firms know how risky data theft is for companies, especially for Fortune 500 companies that have a large volume of sensitive data. That’s why many information security companies offer data classification services to help companies reduce their overall vulnerability.

Data security experts provide data classification services that include tools, training, and collaboration with clients in the creation of a data classification program. Many data classification services build the data classification policy from the ground up and help with the implementation of the policy. They also conduct security checks to help ensure that the level of security does not fall.

Conclusion

With companies receiving a large volume of data every day, it’s difficult for employees and managers to stop and think about how each piece of data must be classified and handled. Without a clear and well-structured policy in place, employees are left to decide on their own how data are stored and managed.

If you believe in the importance of data security, then a well-structured data classification policy, together with data classification services from data security experts, will give your company the data protection it needs to prevent heavy damage in case of a data breach.

Also Read,

Defining Data Classification

Common Sense Ways Of Handling Data, Digital Or Not

Key Factors for Data – Centric Data Protection

The post Why Is a Data Classification Policy Absolutely Important? appeared first on .

Google Discovers Massive iPhone Hack

Researchers at Google announced the discovery of a hacking campaign that used compromised websites to deliver malware to iPhones.

Project Zero, Google’s security research team, discovered fourteen previously unknown vulnerabilities, so-called zero-days, that were exploited to compromise iPhones. Further research revealed a small collection of hacked websites capable of delivering malware to iPhone users who visited those sites.

“There was no target discrimination; simply visiting the hacked site was enough for the exploit server to attack your device, and if it was successful, install a monitoring implant. We estimate that these sites receive thousands of visitors per week,” wrote Project Zero member Ian Beer in a blog post announcing the findings.

The data accessible on the compromised phones included users’ locations, passwords, chat histories, and contact lists, and the implant had full access to their Gmail accounts.

“Given the breadth of information stolen, the attackers may nevertheless be able to maintain persistent access to various accounts and services… even after they lose access to the device,” said Beer.

The hacking campaign was active for at least two years before it was discovered by Project Zero. The research team informed Apple of their findings, and the targeted vulnerabilities were patched in an update in February 2019. 

The post Google Discovers Massive iPhone Hack appeared first on Adam Levin.

Data Residency: A Concept Not Found In The GDPR

Are you facing customers telling you that their data must be stored in a particular location?

Be reassured: as a processor of data, we often encounter discussions about where data resides, and we often face people who are certain that their data must be stored in a given country. But the truth is, most people don’t have the right answer to this legal question.

To understand the obligations and requirements surrounding data storage, you first need to understand the difference in concepts between “data residency” and “data localization.”

What Are Data Residency and Data Localization?

Data residency is when an organization specifies that their data must be stored in a geographical location of their choice, usually for regulatory, tax or policy reasons. By contrast, data localization is when a law requires that data created within a certain territory stays within that territory.

People arguing that data must be stored in a certain location are usually pursuing at least one of the following three objectives:

  1. To allow data protection authorities to exert more control over data retention and thereby have greater control over compliance.
  2. In the EU, it is seen as a means to encourage data controllers to store and process data within the EU, or within those countries deemed to have the same level of data protection as the EU, as opposed to moving data to territories considered to have less than “adequate” data protection regimes. The EU has issued only 13 adequacy decisions: for Andorra, Argentina, Canada (commercial organizations), the Faroe Islands, Guernsey, Israel, the Isle of Man, Japan, Jersey, New Zealand, Switzerland, the US (Privacy Shield only), and Uruguay.
  3. Finally, it is seen by some as a tool to strengthen the market position of local data center providers by forcing data to be stored in-country.

However, it is important to note that accessing personal data is considered a “transfer” under data protection law—so even if data is stored in Germany (for example), if a company has engineers in India access the data for customer service or support purposes, it has now “moved” out of Germany. Therefore, you can’t claim “residency” in Germany if there is access by a support function outside the country. Additionally, payment processing functions also sometimes occur in other countries, so make sure to consider them as well. This is an important point that is often missed or misunderstood.

Having understood the concept of data residency and data localization, the next question is, are there data residency or localization requirements under GDPR?

In short: no. The GDPR neither introduces nor includes any data residency or localization obligations. There were also no data residency or localization obligations under the GDPR’s predecessor, the Data Protection Directive (95/46/EC). In fact, both the Directive and the GDPR establish methods for transferring data outside the EU.

Having said that, it is important to note that local law may impose certain requirements on the location of the data storage (e.g., Russia’s data localization law, German localization law for health and telecom data, etc.).

So, if there is no data residency or localization requirement under GDPR, can we transfer the data to other locations?

The GDPR substantially repeats the requirements of the Data Protection Directive, which state that you need a legal transfer mechanism if you move data outside of the EU into a jurisdiction without adequate safeguards (see map here). The legal transfer mechanisms are:

  • Adequacy — A decision by the EU Commission that a country has an adequate level of protection;
  • Binding Corporate Rules — Binding internal rules of a company, approved by data protection authorities;
  • Standard Contractual Clauses / Model Clauses — Standardized contract templates issued by the EU Commission and executed between the parties to the transfer;
  • Privacy Shield — For US companies only; the self-certification program that replaced Safe Harbor.

I have heard that Privacy Shield and the Standard Contractual Clauses are under serious scrutiny. What is this all about?

Following the European Court of Justice’s decision that the EU-US Safe Harbor arrangement did not provide adequate protection for the personal data of EU data subjects, the EU and US entered into a new arrangement to enable the transfer of data (the Privacy Shield). However, a number of non-governmental organizations and privacy advocates have started legal action seeking decisions that the Privacy Shield and the EU Standard Contractual Clauses do not provide sufficient protection of data subjects’ personal data.

It remains to be seen how the European Court of Justice will decide these cases; it is expected to rule on these matters by the end of 2019.

I have heard that the Standard Contractual Clauses/Model Clauses might be updated. What is that all about?

In order to protect data transferred outside of the European Union, the EU issued three Standard Contractual Clause templates (two for controller-to-controller transfers and one for controller-to-processor transfers). These have not been updated since they were first introduced in 2001, 2004, and 2010, respectively. However, the European Union’s consumer commissioner, under whom privacy falls, has indicated that the EU is working on an updated version of the Standard Contractual Clauses. It remains to be seen how the Clauses will be modernized and whether the shortcomings, concerns, and gripes surrounding the existing Standard Contractual Clauses will be addressed to the satisfaction of all parties.

One thing is for certain, however—the data protection space will only get more attention from here on out, and those of us working in this space will have to become more accustomed to complexities such as those surrounding Data Residency.

 

This blog is for information purposes only and does not constitute legal advice, contractual commitment or advice on how to meet the requirements of any applicable law or achieve operational privacy and security. It is provided “AS IS” without guarantee or warranty as to the accuracy or applicability of the information to any specific situation or circumstance. If you require legal advice on the requirements of applicable privacy laws, or any other law, or advice on the extent to which McAfee technologies can assist you to achieve compliance with privacy laws or any other law, you are advised to consult a suitably qualified legal professional. If you require advice on the nature of the technical and organizational measures that are required to deliver operational privacy and security in your organization, you should consult a suitably qualified privacy professional. No liability is accepted to any party for any harms or losses suffered in reliance on the contents of this publication.

 

The post Data Residency: A Concept Not Found In The GDPR appeared first on McAfee Blogs.

Data Privacy and Security Risks in Healthcare

Healthcare is a business much like all the verticals I work with; however, it has a whole different set of concerns beyond those of traditional businesses. The compounding threats of malware, data thieves, supply chain issues, and the limited understanding of security within healthcare introduce astronomical risk. Walking through a hospital a few weeks ago, I was quickly reminded of how many different devices are used in healthcare: CT scanners, traditional laptops, desktops, and various other devices that could be classified as IoT.

Sitting in the hospital, I watched people reporting for treatment being required to sign and date various forms electronically. Then, on a fixed-function device, patients were asked to provide a palm scan for additional biometric confirmation. Credit card information, patient history, and all sorts of other data were also exchanged. In my opinion, patients should be asking: once the sign-in process is complete, where is the patient data stored, and who has access to it? Is it locked away, encrypted, or sent to the “cloud,” where it’s stored and retrieved as necessary? If it’s stored in the cloud, who has access to that? I do recall seeing a form asking that I consent to releasing records electronically, but that brings up a whole new line of questions. I could go on and on …

Are these challenges unique to healthcare? I would contend that at some level, no, they’re not. Every vertical I work with faces compounding pressures from an ever-increasing attack surface. More devices mean more potential vulnerabilities and risk. Think about your home: you no doubt have internet access through a device you don’t control, a router, and many other devices attached to that network. Each device generally has a unique operating system with its own set of capabilities and its own set of complexities. Heck, my refrigerator has an IP address associated with it these days! In healthcare, the risks are the same, but on a bigger scale. There are lives at stake, and the various staff members—from doctors, to nurses, to administrators—are there to focus on the patient and the experience. They don’t have the time or necessarily the education to understand the threat landscape; they simply need the devices and systems in the hospital network to “just work.”

Many times, I see doctors in hospital networks and clinics get fed up with having to enter and change passwords; as a result, they’ll bring in their personal laptops to bypass what IT security has put in place. Rogue devices have always been an issue, and since those devices access patient records without tight security controls, they are a conduit for data loss. Furthermore, that data is being accessed from outside the network using cloud services. Teleradiology is a great example of how many different access points there are for patient data—from the referring doctor, to the radiologist, to the hospital, and more.

Figure 1: Remote Teleradiology Architecture

With healthcare, as in most industries, the exposure risk is potentially great. The solution, as always, will come from identifying the most important thing that needs to be protected and figuring out the best way to safeguard it. In this case, that is patient data, but the data is no longer just sitting locked in a file cabinet in the back of the office. The data is everywhere: on laptops, mobile devices, and servers, and now more than ever in cloud services such as IaaS, PaaS, and SaaS. Fragmented data drives great uncertainty as to where the data is and who has access to it.

The security industry as a whole needs to step up. There is a need for a unified approach to healthcare data: no matter where the data sits, there needs to be some level of technical control over it based on who needs access to it. Furthermore, as that data traverses between traditional data centers and the cloud, we need to be able to track where it is and whether it has the right permissions assigned to it.

The market has sped up, and new trends in technology are challenging organizations every day. In order to help you keep up, McAfee for Healthcare (and other verticals) is focusing on the following areas:

  • Device – OS platforms—including mobile devices, Chromebooks, and IoT—are increasingly locked down, but the steadily increasing number of devices provides other avenues for attack and data loss.
  • Network – Networks are becoming more opaque. HTTP is rarely used anymore in favor of HTTPS, so the need for a CASB safety net is essential in order to see the data stored with services such as Box or OneDrive.
  • Cloud – With workloads increasingly moving to the cloud, the traditional datacenter has been largely replaced by IaaS and PaaS environments. Lines of business are moving to the cloud with little oversight from the security teams.
  • Talent – Security expertise is extremely difficult to find. The talent shortage is real, particularly when it comes to cloud and cloud security. There is also a major shortage in quality security professionals capable of threat hunting and incident response.

McAfee has a three-pronged approach to addressing and mitigating these concerns:

  • Platform Approach – Unified management and orchestration with a consistent user experience and differentiated insights, delivered in the cloud.
    • To enhance the platform, there is a large focus on Platform Driven Managed Services—focused on selling outcomes, not just technology.
  • Minimized Device Footprint – Powerful yet minimally invasive protection, detection and response spanning full-stack tech, native engine management and ‘as a service’ browser isolation. This is becoming increasingly important as the typical healthcare environment has an increasing variety of endpoints but continues to be limited in resources such as RAM and CPU.
  • Unified Cloud Security – Spanning data centers, integrated web gateway/SaaS, DLP and CASB. The unification of these technologies provides a safety net for data moving to the cloud, as well as the ability to enforce controls as data moves from on-premise to cloud services. Furthermore, the unification of DLP and CASB offers a “1 Policy” for both models, making administration simpler and more consistent. Consistent policy definition and enforcement is ideal for healthcare, where patient data privacy is essential.

In summary, security in healthcare is a complex undertaking. A vast attack surface, the transformation to cloud services, the need for data privacy, and the talent shortage compound the overall problem of security in healthcare. At McAfee, we plan to address these issues through innovative technologies that offer a consistent way to define policy by leveraging a superior platform. We’re also utilizing sophisticated machine learning to simplify the detection of and response to bad actors and malware. These technologies are ideal for healthcare and will offer any healthcare organization long-term stability across the spectrum of security requirements.

The post Data Privacy and Security Risks in Healthcare appeared first on McAfee Blogs.

The GDPR – One Year Later

A couple of weeks ago, one famous lawyer blogged about an issue frequently discussed these days: the GDPR, one year later.

“The sky has not fallen. The Internet has not stopped working. The multi-million-euro fines have not happened (yet). It was always going to be this way. A year has gone by since the General Data Protection Regulation (Regulation (EU) 2016/679) (‘GDPR’) became effective and the digital economy is still going and growing. The effect of the GDPR has been noticeable, but in a subtle sort of way. However, it would be hugely mistaken to think that the GDPR was just a fad or a failed attempt at helping privacy and data protection survive the 21st century. The true effect of the GDPR has yet to be felt as the work to overcome its regulatory challenges has barely begun.”[1]

It’s true that since that publication, the CNIL issued a €50 million fine against Google,[2] mainly for lacking a clear and transparent privacy notice. But even that amount seems negligible next to the €1.5 billion antitrust fine the European Union had hit Google with just three months earlier.

So, would we say that despite the sleepless nights making sure our companies were ready to comply with privacy, privacy pros are a bit disappointed by the journey? Or what should be our reaction, as privacy pros, when people around us ask, “Is your GDPR project over now?”

Well, guess what? Just like we said last year, it’s a journey, and we are just at the start of the voyage. But in a world where cloud has become the dominant way to access IT services and products, it is useful to highlight a project to which the GDPR gave birth: the EU Cloud Code of Conduct.[3]

Of course, cloud existed prior to the GDPR, and many regulators around the world had given guidance well before the GDPR on how to tackle the sensitivity and the risks arising from outsourcing IT services to the cloud.[4] But before the GDPR, most cloud services providers (CSPs) were inclined to force their customers (the data controllers) to “represent and warrant” that they would act in compliance with all local data laws, and that they had all necessary consents from data subjects to pass data to the CSP, as processor, pursuant to the services. This scenario, although not sensible under EU data protection law, was often successful, as the burden of non-compliance used to lie solely with the customer as controller.

The GDPR changed that in Recital 81, making processors responsible for the role they also play in protecting personal data. Processors are no longer outside the ambit of the law since “the controller should use only processors providing sufficient guarantees, in particular in terms of expert knowledge, reliability and resources, to implement technical and organizational measures which will meet the requirements of this Regulation, including for the security of processing.

The adherence of the processor to an approved code of conduct or an approved certification mechanism may be used as an element to demonstrate compliance with the obligations of the controller.”[5]

With the GDPR, processors must implement appropriate technical and organizational security measures to protect personal data against accidental or unlawful destruction or loss, alteration, unauthorized disclosure, or access.

And adherence to an approved code of conduct may provide evidence that the processor has met these obligations, which brings us back to the Cloud Code of Conduct. One year after the GDPR took effect, the EU Cloud Code of Conduct General Assembly reached a major milestone, releasing the latest version of the Code, which has been submitted to the supervisory authorities.

The Code describes a set of requirements that enable CSPs to demonstrate their capability to comply with the GDPR and with international standards such as ISO 27001 and 27018. It also shows that the GDPR has marked a strong shift in the contractual environment.

In this new contractual arena, a couple of things are worth emphasizing:

  • The intention of the EU Cloud Code of Conduct is to make it easier for cloud customers (particularly small and medium enterprises and public entities) to determine whether certain cloud services are appropriate for their designated purpose. It covers the full spectrum of cloud services (SaaS, PaaS, and IaaS), and has an independent governance structure to deal with compliance as well as an independent monitoring body, which is a requirement of GDPR.
  • Compliance with the Code does not in any way replace the binding agreement to be executed between CSPs and customers, nor does it replace the customer’s right to request audits. It introduces customer-facing versions of policies and procedures that let customers know how the CSP works to comply with GDPR duties and obligations, including policies and processes around data retention, audit, sub-processing, and security.

The Code proposes interesting tools to enable CSPs to comply with the requirements of the GDPR. For instance, on audit rights, it states that:

“…the CSP may e.g. choose to implement a staggered approach or self-service mechanism or a combination thereof to provide evidence of compliance, in order to ensure that the Customer Audits are scalable towards all of its Customers whilst not jeopardizing Customer Personal Data processing with regards to security, reliability, trustworthiness, and availability.”[6]

Another issue often arises when negotiating cloud agreements: engaging a sub-processor is permissible under the requirements of the Code, but it requires, as under the GDPR, the customer’s prior specific or general written authorization. A general authorization in the cloud services agreement is possible, subject to prior notice to the customer. More specifically, the CSP needs to put in place a mechanism whereby the customer is notified of any addition or replacement of a sub-processor before that sub-processor starts to process personal customer data.

The issues highlighted above demonstrate the shift in the contractual environment of cloud services.

Where major multinational CSPs used to offer a minimum set of contractual obligations coupled with minimum legal warranties, it is interesting to note how drastically the GDPR has changed the situation. Nowadays, the most important cloud players are happy to demonstrate their ability to make contractual commitments. The more influential you are as a cloud player, the more able you are to comply with the stringent requirements of the GDPR.

 

[1] Eduardo Ustaran – The Work Ahead. https://www.linkedin.com/pulse/gdpr-work-ahead-eduardo-ustaran/

[2] https://www.cnil.fr/en/cnils-restricted-committee-imposes-financial-penalty-50-million-euros-against-google-llc

[3] https://eucoc.cloud/en/detail/news/press-release-ready-for-submission-eu-cloud-code-of-conduct-finalized/

[4] https://acpr.banque-france.fr/node/30049

[5] Article 40 of the GDPR

[6] Article 5.6 of the Code

The post The GDPR – One Year Later appeared first on McAfee Blogs.