Category Archives: Data Security

Data Privacy and Security Risks in Healthcare

Healthcare is a business much like all verticals I work with; however, it has a whole different set of concerns beyond those of traditional businesses. The compounding threats of malware, data thieves, supply chain issues, and the limited understanding of security within healthcare introduce astronomical risk. Walking through a hospital a few weeks ago, I was quickly reminded of how many different devices are used in healthcare—CT scanners, traditional laptops, desktops, and various other devices that could be classified as IoT.

Sitting in the hospital, I witnessed people reporting for treatment being required to sign and date various forms electronically. Then, on a fixed-function device, patients were asked to provide a palm scan for additional biometric confirmation. Credit card information, patient history, and all sorts of other data was also exchanged. In my opinion, patients should be asking, “Once the sign-in process is complete, where is the patient data stored, and who has access to it? Is it locked away, encrypted, or sent to the ‘cloud’ where it’s stored and retrieved as necessary? If it’s stored in the cloud, who has access to that?” I do recall seeing a form asking that I consent to releasing records electronically, but that brings up a whole new line of questions. I could go on and on …

Are these challenges unique to healthcare? I would contend that at some level, no, they’re not. Every vertical I work with has compounding pressures based on the ever-increasing attack surface area. More devices mean more potential vulnerabilities and risk. Think about your home: You no doubt have internet access through a device you don’t control, a router, and many other devices attached to that network. Each device generally has a unique operating system with its own set of capabilities and with its own set of complexities. Heck, my refrigerator has an IP address associated with it these days! In healthcare, the risks are the same, but on a bigger scale. There are lives at stake, and the various staff members—from doctors, to nurses, to administrators—are there to hopefully focus on the patient and the experience. They don’t have the time or necessarily the education to understand the threat landscape—they simply need the devices and systems in the hospital network to “just work.”

Many times, I see doctors in hospital networks and clinics get fed up with having to enter and change passwords. As a result, they’ll bring in their personal laptops to bypass what IT security has put in place. Rogue devices have always been an issue, and since those devices are accessing patient records without tight security controls, they are a conduit for data loss. Furthermore, that data is being accessed from outside the network using cloud services. Teleradiology is a great example of how many different access points there are for patient data—from the referring doctor, to the radiologist, to the hospital, and more.

Figure 1:  Remote Teleradiology Architecture

With healthcare, as in most industries, the exposure risk is potentially great. The solution, as always, will come from identifying the most important thing that needs to be protected, and figuring out the best way to safeguard it. In this case, it is patient data, but that data is not just sitting locked up in a file cabinet in the back of the office anymore. The data is everywhere—it’s on laptops, mobile devices, servers, and now more than ever in cloud services such as IaaS, PaaS and SaaS. Fragmented data drives great uncertainty as to where the data is and who has access to it.

The security industry as a whole needs to step up. There is a need for a unified approach to healthcare data. No matter where it sits, there needs to be some level of technical control over it based on who needs access to it. Furthermore, as that data is traversing between traditional data centers and the cloud, we need to be able to track where it is and whether or not it has the right permissions assigned to it.

The market has sped up, and new trends in technology are challenging organizations every day. In order to help you keep up, McAfee for Healthcare (and other verticals) is focusing on the following areas:

  • Device – OS platforms—including mobile devices, Chromebooks and IoT—are increasingly locked down, but the steadily increasing number of devices provides other avenues for attack and data loss.
  • Network – Networks are becoming more opaque: HTTP has largely given way to HTTPS, so a CASB safety net is essential in order to see the data stored with services such as Box or OneDrive.
  • Cloud – With workloads increasingly moving to the cloud, the traditional datacenter has been largely replaced by IaaS and PaaS environments. Lines of business are moving to the cloud with little oversight from the security teams.
  • Talent – Security expertise is extremely difficult to find. The talent shortage is real, particularly when it comes to cloud and cloud security. There is also a major shortage in quality security professionals capable of threat hunting and incident response.

McAfee has a three-pronged approach to addressing and mitigating these concerns:

  • Platform Approach – Unified management and orchestration with a consistent user experience and differentiated insights, delivered in the cloud.
    • To enhance the platform, there is a large focus on Platform Driven Managed Services—focused on selling outcomes, not just technology.
  • Minimized Device Footprint – Powerful yet minimally invasive protection, detection and response spanning full-stack tech, native engine management and ‘as a service’ browser isolation. This is becoming increasingly important as the typical healthcare environment has an increasing variety of endpoints but continues to be limited in resources such as RAM and CPU.
  • Unified Cloud Security – Spanning data centers, integrated web gateway/SaaS, DLP and CASB. The unification of these technologies provides a safety net for data moving to the cloud, as well as the ability to enforce controls as data moves from on-prem to cloud services. Furthermore, the unification of DLP and CASB offers a “1 Policy” for both models, making administration simpler and more consistent. Consistent policy definition and enforcement is ideal for healthcare, where patient data privacy is essential.

In summary, security in healthcare is a complex undertaking. A vast attack surface area, the transformation to cloud services, the need for data privacy and the talent shortage compound the overall problem of security in healthcare. At McAfee, we plan to address these issues through innovative technologies that offer a consistent way to define policy by leveraging a superior platform. We’re also utilizing sophisticated machine learning to simplify the detection of and response to bad actors and malware. These technologies are ideal for healthcare and will offer any healthcare organization long-term stability across the spectrum of security requirements.

The post Data Privacy and Security Risks in Healthcare appeared first on McAfee Blogs.

Sprint Data Breach Due To Bug Revealed

U.S. telecom giant Sprint has recently revealed that a number of Sprint customer accounts were taken over by unauthorized users exploiting a loophole in the “add a line” feature on Samsung’s website. The company disclosed the incident per its June 22 internal report, and the following information belonging to affected users is now in the hands of unknown parties:

  • Full name
  • Billing address
  • Subscriber ID
  • Account creation date
  • Account number
  • Phone number
  • Device ID
  • Device Type
  • Monthly recurring charges
  • Upgrade eligibility
  • Add-on services

Even with a huge laundry list of information stolen, Sprint remains calm, as the telecom giant claims that the information lost in the breach was not substantial enough for identity theft to thrive. For its part, Sprint issued a forced reset of its customers’ PINs in order to lessen the chance of further security breaches. The forced PIN change was initiated on June 25, three full days after the discovery of the incident.

“Sprint has taken appropriate action to secure your account from unauthorized access and has not identified any fraudulent activity associated with your account at this time. Sprint re-secured your account on June 25, 2019. We apologize for the inconvenience that this may cause you. Please be assured that the privacy of your personal information is important to us. Please contact Sprint at 1-888-211-4727 if you have any questions or concerns regarding this matter,” explained Sprint in its official press release.

The company urges all affected customers to visit a website operated by the U.S. Federal Trade Commission. Sprint claims that the preventive and security measures provided by the FTC will be very helpful for customers who continue to worry about the breach. As of this writing, Sprint has not disclosed details of what actually happened with the “add a line” feature, or how it allowed Sprint customers to be hacked through the website.

For its part, Samsung claims that it keeps its systems and website secure, and that no Samsung customer information from its systems was leaked to the outside world. “We recently detected fraudulent attempts to access Sprint user account information via, using Sprint login credentials that were not obtained from Samsung. We deployed measures to prevent further attempts of this kind on and no Samsung user account information was accessed as part of these attempts,” said a Samsung spokesperson.


As cyber attacks increase, the cloud-based database security market grows

The cloud-based database security market is expected to register a CAGR of 19.5% over the forecast period 2019-2024, according to ResearchAndMarkets. With the increasing adoption of Big Data platforms, and with relational databases becoming a prime target for data thieves, demand for cloud-based database security is expected to gain traction. Key highlights: increasing volumes of data are being generated by information-intensive applications, such as the storage and mining of huge commercial datasets. These applications … More

The post As cyber attacks increase, the cloud-based database security market grows appeared first on Help Net Security.

Why PCI DSS Compliance Is Important for Smartcards

More and more people conduct their everyday financial transactions using smartcards; that is the reality on the ground. People use less cash, and the demand for debit/credit cards keeps growing globally, yet the rollout of EMV cards to replace magnetic stripe cards is not yet complete. Hence the PCI DSS goals and requirements were established to guide the financial sector.

The six goals with their corresponding requirements are enumerated below:

1. Build and maintain secure networks and systems:

Install and maintain a firewall to protect cardholder data

This is the responsibility of system administrators and their IT staff. The smartcard itself is just a frontend; the “magic” of using a piece of plastic happens on the backend, in the servers that support the electronic transactions. Both the merchant and the bank are connected by this network, which is expected to run 24/7, because e-commerce does not stop when office hours do.

Do not use vendor-supplied defaults for system passwords and other security parameters

Trouble comes with the “default.” There is a term in the IT support industry, the “tyranny of the default,” describing end users who depend entirely on default values. Default passwords are documented all over the web; never use them on a production system.
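A deployment checklist can enforce this requirement mechanically. The sketch below is illustrative only: the credential table and function name are hypothetical, not drawn from any real scanner; real default lists come from vendor manuals and audit tools.

```python
# Hypothetical pre-deployment check: refuse to ship a system that still
# uses a vendor-default credential pair. The table below is illustrative.
KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "toor"),
}

def uses_vendor_default(username: str, password: str) -> bool:
    """Return True if the credential pair matches a known vendor default."""
    return (username, password) in KNOWN_DEFAULTS

assert uses_vendor_default("admin", "admin")          # must be changed first
assert not uses_vendor_default("admin", "x7#Kq9!rT")  # unique credential passes
```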

2. Protect cardholder data

Protect stored cardholder data

Physical security is still one of the strongest controls to implement. Right behind it comes protection of the stored data itself, which is read and written by machines such as ATMs and POS terminals. It is the full responsibility of banks and merchants to ensure that their terminals fully comply with current security standards.
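PCI DSS requires that a stored primary account number (PAN) be rendered unreadable, and that displays show at most the first six and last four digits. A minimal stdlib sketch of that masking rule (the function name is hypothetical, and masking alone does not replace the standard’s storage requirements such as strong cryptography):

```python
def mask_pan(pan: str) -> str:
    """Mask a primary account number (PAN), keeping at most the first six
    and last four digits -- the maximum PCI DSS permits to be displayed."""
    digits = pan.replace(" ", "").replace("-", "")
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

print(mask_pan("4111 1111 1111 1111"))  # 411111******1111
```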

Encrypt when transmitting cardholder data over an open public network

This is common practice across the industry: no one will trust a merchant with a non-encrypted POS, and no one will transact with a bank that lacks a reasonable implementation of the encryption standards practiced around the world for securing customer data.
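In practice, “encrypt in transit” usually means TLS with certificate and hostname verification, with legacy protocol versions refused. A minimal client-side Python sketch (the endpoint name is a placeholder, not a real host):

```python
import ssl

# Build a TLS context that verifies the server certificate and hostname
# and rejects legacy protocol versions.
context = ssl.create_default_context()            # CERT_REQUIRED by default
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSLv3 / TLS 1.0 / 1.1

assert context.check_hostname is True
assert context.verify_mode == ssl.VerifyMode.CERT_REQUIRED

# Usage (not executed here):
# import socket
# with socket.create_connection(("payments.example.com", 443)) as sock:
#     with context.wrap_socket(sock, server_hostname="payments.example.com") as tls:
#         tls.sendall(b"cardholder data travels only inside this channel")
```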

3. Maintenance of vulnerability management program

Protect all systems against malware and update anti-virus software regularly

Malware infection vulnerability is the very reason why POS and ATM machines usually run a variant of the Unix or Linux operating systems. Given the sheer amount of malware targeting the Windows platform, it is not recommended for merchandising and banking purposes.

Develop and maintain highly secure systems and applications

Many banks maintain their old but still dependable Unix systems; some banks even use decades-old mainframe systems for the same reason: security.

4. Introducing powerful access control methods

Restrict access to cardholder data to the extent necessary for business

Also known as user account control, this means that only those bank employees and merchant staff tasked with handling customer data should have access to customer information.

Identify and authenticate access to system components

Aside from time-tested vaults, banks using their Unix/Linux systems have elaborate components that work together in a secure fashion.

Restrict physical access to cardholder data

Same as requirement 7; however, securing the data on the card itself is the full responsibility of the cardholder. Misuse of the card does not make the bank responsible for fraudulent transactions.

5. Regular monitoring and testing of the network

  • Track and monitor all access to network resources and cardholder data
  • Test security systems and processes regularly

6. Development of information security policy

  • Develop a policy to support information security for all personnel


How well are healthcare organizations protecting patient information?

Healthcare organizations have high levels of confidence in their cybersecurity preparedness despite most of them using only basic user authentication methods in the face of an increasing number of patient identity theft and fraud instances in the marketplace, according to LexisNexis Risk Solutions. Key survey findings: 58% believe that the cybersecurity of their patient portal is above average or superior when compared to other patient portals; 65% report that their … More

The post How well are healthcare organizations protecting patient information? appeared first on Help Net Security.

Google Employees Are Eavesdropping on Customers

Google employees and subcontractors are listening to recordings gleaned from Google Home smart speakers and the Google Assistant smartphone app.

A report from Belgian news outlet VRT NWS showed that Google regularly uses staff and subcontractors to transcribe audio recordings taken from its network of home devices for the stated purpose of improving its speech recognition technology. A whistleblower employed as a subcontractor for Google shared over a thousand recordings with VRT NWS, many of which were recorded unintentionally and without the user’s consent.  

While the technology and devices are meant to be restricted to requests starting with the phrase “OK Google,” VRT NWS found that over 150 of the recordings were either made accidentally or where the command “was clearly not given.” Content of the recordings included conversations between parents and children, financial information, potential domestic violence, and medical-related questions. 

“[T]his work is of crucial importance to develop technologies sustaining products such as the Google assistant,” said a spokesman for the company, who added that roughly “0.2 percent of all audio fragments” were being analyzed by employees.

Google claims the recordings are stripped of any personally identifiable information; user names, for instance, are replaced with serial numbers. This ultimately does little to protect user privacy, since re-identification may be possible.

“[I]t doesn’t take a rocket scientist to recover someone’s identity; you simply have to listen carefully to what is being said… these employees have to look up every word, address, personal name or company name on Google or on Facebook. In that way, they often soon discover the identity of the person speaking,” said the VRT NWS report.

Read the VRT NWS story here



The post Google Employees Are Eavesdropping on Customers appeared first on Adam Levin.

Google’s Leaked Recordings Violate Data Security Policies

A report from the Belgium-based broadcaster VRT NWS revealed that Google employees routinely listen to audio files recorded by the Google Home smart speaker and the Google Assistant smartphone app.

As ZDNet explains, the report describes how employees listen to snippets of recordings made when users activate their devices with the usual “OK Google” commands.

After receiving copies of several recordings, VRT NWS approached the users involved, asking them to confirm that the voices were their own, or their children’s, speaking to the digital assistant.

Google responded to the report by posting a blog titled “More information about our processes to safeguard speech data”.

Google acknowledged that it uses language reviewers from around the world who “understood the nuances and accents of a particular language,” and who review and transcribe a small set of queries to better understand those languages. The terms and conditions indicate that users’ conversations are recorded.

The Google blog notes that capturing these interactions is an important part of building speech technology for products like Google Assistant. According to the company, various security measures are implemented to protect the privacy of users during the review process.

However, according to Google, the leak of this audio data violates its data security policies.

David Monsees, Google’s product manager for Search, said in the blog post: “We just learned that one of these language reviewers has violated our data security policies by leaking confidential Dutch audio data. Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again.”

According to Google, it applies a wide range of safeguards to protect user privacy throughout the entire review process. The blog further adds, “Language experts only review around 0.2% of all audio snippets. Audio snippets are not associated with user accounts as part of the review process, and reviewers are directed not to transcribe background conversations or other noises, and only to transcribe snippets that are directed to Google.”

The company states that Google Assistant only sends audio data to Google after device activation. It also acknowledged that devices running Google Assistant can sometimes experience a “false accept,” meaning that noise or words in the background are misinterpreted by the software as the activation keyword.

Although Google stated that audio is recorded only after the command is heard, VRT NWS found that, of the more than a thousand samples it reviewed, 153 should never have been recorded because the “OK Google” command was not clearly given.

In February, Google detailed that its Nest Guard, the centerpiece of the Nest Secure home alarm system, would soon receive Google Assistant functionality — meaning the device needed to have both a speaker and microphone.

Users were not made aware that the Nest Guard had a microphone at all, however.
Google responded that failing to tell users about the Nest Guard microphone was nothing more than a mistake.

Earlier this year, Amazon, much like Google, was found to use a team of people to review recordings from Alexa-powered speakers in order to improve the accuracy of its voice assistant.

The recordings sent to this human team do not include a full name, but are linked to the account name, the device serial number, and the user name associated with the clip.

Some team members are tasked with transcribing commands and analyzing whether or not Alexa answered correctly. Others were asked to transcribe background noises and conversations that the device picked up by mistake.


Compromise by Proxy? Why You Should Be Losing Sleep Tonight

If you’ve heard of the medical bill collector American Medical Collections Agency (AMCA), it’s probably not because you saw an ad on TV. Most likely you heard about its supernova-level mismanagement of cybersecurity, or you read that, as a consequence, the company filed for Chapter 11 bankruptcy protection.

The AMCA breach affected as many as 20 million consumers. The situation at this third- and sometimes fourth-party debt collection agency was ongoing. It affected at least five different labs: Quest Diagnostics, LabCorp, BioReference Laboratories, Carecentrix, and Sunrise Laboratories. The companies used AMCA as their customer bill payment portal.

During the eight months the vulnerability was unaddressed by AMCA, hackers had access to the company’s online payment page, and with that a cornucopia of sensitive personally identifiable information that included financial data, Social Security numbers, and, in one case, medical information.

Lamentable, Avoidable, Illegal and Expensive

This epic cybersecurity fail was avoidable. The AMCA breach was not only a failure to protect the millions of consumers whose data was exposed. It may be the result of AMCA’s failure to comply with HIPAA legislation.

We need to get a little granular here. As a third-party vendor to a HIPAA covered entity, AMCA would almost certainly be subject to the requirements of the HIPAA Privacy, Security, Enforcement, and Breach Notification Rules. According to the U.S. Department of Health and Human Services representative I contacted by email, a medical bill collector is a business associate if it receives, creates, maintains, or transmits protected health information on behalf of the covered entity for a covered function, such as seeking to obtain payment for a medical bill. Among many requirements, such business associates are directly liable for “failure to provide breach notification to a covered entity or another business associate” and “failure to take reasonable steps to address a material breach or violation of the subcontractor’s business associate agreement.”

It is unclear whether AMCA failed to take reasonable steps to address and report the breach. The AMCA spokesperson declined to comment on this story, instead sending a link to the company’s website. That said, breaches come in all shapes and sizes. Some are more avoidable than others. And breach response varies even more, with ever more divergent degrees of competency.

There are many enterprise-level solutions out there to minimize the risk of such catastrophic cybersecurity events, but they aren’t available to a company that doesn’t know what it doesn’t know. In this regard, knowledge of cyber risks and cyber defense are fungible assets.

The bottom line tells the tale best. AMCA needed to file for bankruptcy protection. While I am not in a position to say exactly why this was the case, last year’s average per-record cost, according to IBM’s “2018 Cost of a Data Breach Study,” was $157, with the average total cost to a company coming in at $4.24 million.

In other words, getting cyber wrong can represent an extinction-level event for many organizations.
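Plugging the figures quoted above into a naive linear estimate shows why a breach of AMCA’s scale can be existential. Cost-of-breach studies model mega-breaches separately, so treat this only as an order-of-magnitude illustration:

```python
# Naive linear estimate of breach cost using the per-record figure cited
# above. Real costs do not scale linearly for mega-breaches.
PER_RECORD_COST = 157            # USD per record, per the cited 2018 study
records_exposed = 20_000_000     # upper estimate of consumers affected

naive_total = PER_RECORD_COST * records_exposed
print(f"${naive_total:,}")       # $3,140,000,000
```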

The Anatomy of Liability

The AMCA breach was discovered by Gemini Advisory analysts at the end of February 2019. A database described as “USA/DOB/SSN” had been posted for purchase on the dark web. On March 1, Gemini Advisory attempted to notify AMCA and received no response. Multiple phone messages were left regarding the breach. Still, there was no response. Gemini Advisory then notified law enforcement. AMCA did not disable its payment portal until April 8.

The AMCA breach is not an isolated incident for third-party vendors in the healthcare industry. According to a recent report cited by a letter from Sen. Mark Warner (D-Va.) to Quest Diagnostics, 20 percent of data breaches in the healthcare sector in 2018 were traced to third-party vendors. Additionally, about 56 percent of provider organizations have experienced a third-party breach.

It would follow here that the vetting process a company implements in selecting third party vendors would be fully evolved by now with industry standard approaches to cybersecurity and a host of other concerns and considerations. Sadly, many companies do not have specific policies regarding the cybersecurity requirements of subcontracted entities, much less an established path to approval that assures best cyber practices are understood and practiced throughout an organization’s data ecosystem.

When it comes to debt collection, there seems to be a more pervasive lack of standards. The debt collection industry’s lobbying organization–the Association of Credit and Collection Professionals, or ACA International–offers no services or outreach that resemble an information sharing and analysis center, or ISAC. According to the ACA representative I contacted, the ACA is not in the practice of collecting, analyzing or sharing cyber threat information. They mostly seem to lobby for an impediment-free legislative environment.

Meanwhile, the ISAC-free environment matters because hackers thrive in a low-information environment. The same or similar attack is much easier to perpetrate on multiple debt collection agencies if they have no idea there’s a threat out there. Knowing what to look for, and/or being prepared for the attack du jour is among the most powerful cyber tools. While ACA International does provide compliance guidelines as well as two opt-in data security and privacy programs in their ongoing educational seminars, it’s all passive. No one has to do anything. Cybersecurity is not a spectator sport. It is an ongoing activity that must evolve as urgently and persistently as the threats it addresses.

Vetting, Adulting: Take Your Pick

It’s time to grow up. With the lack of specific federal regulations on the cybersecurity practices of third-party vendors, the companies that subcontract with them have to self-police and develop effective vetting processes. When asked if they vetted third-party vendors–or the companies they in turn subcontract–Quest Diagnostics declined to provide me with an answer. The LabCorp response to my questions on this score was similarly unilluminating.

It should go without saying that data breaches and compromises caused by third-party subcontractors and business associates are not unique to the healthcare sector. U.S. Customs and Border Protection officials issued a statement on Monday that photos of travelers’ faces and license plates had been compromised due to a “malicious cyberattack.” The data breach originated from a subcontractor network.

The prevalence of data breaches that originate from third-parties has long been an open secret, and lawmakers are increasingly demanding answers. Sens. Robert Menendez (D-N.J.), Cory Booker (D-N.J) and Mark R. Warner (D-Va.) sent letters asking the testing labs what they did to vet the security measures of AMCA, and inquiring how the breach went unnoticed for so long. They also asked what cybersecurity measures they had at the time, and if all affected parties had been reported. Fair questions all.

If you need a more institutional take, Moody’s Investor Service designated the AMCA breach a credit negative for both Quest Diagnostics and LabCorp, and predicted the breach could result in “new regulations and requirements” regarding how U.S. companies evaluate their vendors before selecting them. We can hope.

The AMCA breach is merely the latest manifestation of the perils of hiring a third-party subcontractor insufficiently cyber-safe for this or that assignment. The lab testing companies may have had cybersecurity best practices in place, but they were only as secure as their least-protected third-party vendor. The frequency of data breaches is drastically rising, and companies that fail to operate within a cybersecurity framework when hiring third-party business associates may well find themselves on the bankruptcy-side of a catastrophic breach.

To manifest the wisdom of Yogi Berra, the only solution here is to have a solution. If you don’t have one, it’s time to find one, or practice your vetting skills on hiring a third party to help you get your cyber game where it needs to be to survive the third certainty in life: Breach happens. Survival is a skill.

The post Compromise by Proxy? Why You Should Be Losing Sleep Tonight appeared first on Adam Levin.

Prison Time for Former Equifax Executive

The former CIO of Equifax has been sentenced to prison for selling his stock in the company before news of its 2017 data breach was publicly announced.

Jun Ying, the former Chief Information Officer of Equifax U.S. Information Solutions, sold his shares in the company for over $950,000 ten days before the company admitted that its data had been accessed by hackers. He was sentenced to four months in prison and ordered to pay roughly $170,000 in fines and restitution.

“Ying thought of his own financial gain before the millions of people exposed in this data breach even knew they were victims,” said U.S. Attorney Byung J. Pak.

The Equifax data breach compromised the names, Social Security numbers, birthdates, and addresses of over 145 million Americans. Ying is the second employee of the company to be found guilty of insider trading related to the incident. 

According to reports, Ying decided to sell his shares after researching the impact of the 2015 data breach of rival company Experian on its stock prices.

Read the U.S. Department of Justice’s statement on the case here.

The post Prison Time for Former Equifax Executive appeared first on Adam Levin.

Senate Republicans Block Election Security Bill

A bill that would provide a billion dollars to states for election security was blocked by Senate Republicans.

The Election Security Act, proposed by presidential candidate Senator Amy Klobuchar (D-Minn.), would have required paper ballots for voting systems, and would have required President Trump to provide a strategy for protecting institutions from foreign cyberattacks.

“There is a presidential election before us and if a few counties in one swing state or an entire state get hacked into there’s no backup paper ballots and we can’t figure out what happened, the entire election will be called into question,” said Klobuchar.

Senator James Lankford (R-Okla.), who has worked with Klobuchar on previous election security efforts, voted to stop the bill, arguing that federal funding couldn’t be effectively implemented in time for the 2020 elections. 

“No matter how much money we threw at the states right now, they could not make that so by the 2020 presidential election,” Lankford said. 

Calls for legislation to secure elections have been renewed in the wake of the redacted release of the Mueller report, which detailed Russian interference in 2016. While several bills have passed the House of Representatives, many have been blocked in the Republican-controlled Senate, particularly by Majority Leader Mitch McConnell. 

The post Senate Republicans Block Election Security Bill appeared first on Adam Levin.

The GDPR – One Year Later

A couple of weeks ago, privacy lawyer Eduardo Ustaran blogged about an issue frequently discussed these days: the GDPR, one year later.

“The sky has not fallen. The Internet has not stopped working. The multi-million-euro fines have not happened (yet). It was always going to be this way. A year has gone by since the General Data Protection Regulation (Regulation (EU) 2016/679) (‘GDPR’) became effective and the digital economy is still going and growing. The effect of the GDPR has been noticeable, but in a subtle sort of way. However, it would be hugely mistaken to think that the GDPR was just a fad or a failed attempt at helping privacy and data protection survive the 21st century. The true effect of the GDPR has yet to be felt as the work to overcome its regulatory challenges has barely begun.”[1]

It’s true that since that publication, the CNIL issued a €50 million fine against Google,[2] mainly for lacking a clear and transparent privacy notice. But even that amount is negligible compared to the fact that just three months before, Google had been hit with a new antitrust fine from the European Union, totaling €1.5 billion.

So, would we say that despite the sleepless nights spent making sure our companies were ready to comply with the GDPR, privacy pros are a bit disappointed by the journey? Or what should be our reaction, as privacy pros, when people around us ask, “Is your GDPR project over now?”

Well, guess what? Just like we said last year, it’s a journey and we are just at the start of this voyage. But in a world where cloud has become the dominant way to access IT services and products, it might be useful to highlight a project to which the GDPR gave birth, the EU Cloud Code of Conduct.[3]

Of course, cloud existed prior to the GDPR and many regulators around the world had given guidance well before the GDPR on how to tackle the sensitivity and the risks arising from outsourcing IT services in the cloud.[4] But before the GDPR, most cloud services providers (CSPs) were inclined to attempt to force their customers (the data controllers) to “represent and warrant” that they would act in compliance with all local data laws, and that they had all necessary consents from data subjects to pass data to the CSP processors pursuant to the services. This scenario, although not sensible under EU data protection law, was often successful, as the burden of non-compliance used to lie solely with the customer as controller.

The GDPR changed that in Recital 81, making processors responsible for the role they also play in protecting personal data. Processors are no longer outside the ambit of the law since “the controller should use only processors providing sufficient guarantees, in particular in terms of expert knowledge, reliability and resources, to implement technical and organizational measures which will meet the requirements of this Regulation, including for the security of processing.

The adherence of the processor to an approved code of conduct or an approved certification mechanism may be used as an element to demonstrate compliance with the obligations of the controller.”[5]

With the GDPR, processors must implement appropriate technical and organizational security measures to protect personal data against accidental or unlawful destruction or loss, alteration, unauthorized disclosure, or access.

And adherence to an approved code of conduct may provide evidence that the processor has met these obligations, which brings us back to the Cloud Code of Conduct. One year after the GDPR took effect, the EU Cloud Code of Conduct General Assembly reached a major milestone, releasing the latest version of the Code, which has been submitted to the supervisory authorities.

The Code describes a set of requirements that enable CSPs to demonstrate their capability to comply with the GDPR and with international standards such as ISO 27001 and 27018. It also illustrates how markedly the GDPR has shifted the contractual environment.

In this new contractual arena, a couple of things are worth emphasizing:

  • The intention of the EU Cloud Code of Conduct is to make it easier for cloud customers (particularly small and medium enterprises and public entities) to determine whether certain cloud services are appropriate for their designated purpose. It covers the full spectrum of cloud services (SaaS, PaaS, and IaaS), and has an independent governance structure to deal with compliance as well as an independent monitoring body, which is a requirement of GDPR.
  • Compliance with the Code does not in any way replace the binding agreement to be executed between CSPs and customers, nor does it replace the customer’s right to request audits. It introduces customer-facing versions of policies and procedures that allow customers to know how the CSP works to comply with GDPR duties and obligations, including policies and processes around data retention, audit, sub-processing, and security.

The Code proposes interesting tools to enable CSPs to comply with the requirements of the GDPR. For instance, on audit rights, it states that:

“…the CSP may e.g. choose to implement a staggered approach or self-service mechanism or a combination thereof to provide evidence of compliance, in order to ensure that the Customer Audits are scalable towards all of its Customers whilst not jeopardizing Customer Personal Data processing with regards to security, reliability, trustworthiness, and availability.”[6]

Another issue that often arises when negotiating cloud agreements: engaging a sub-processor is permissible under the requirements of the Code, but it requires—similar to the GDPR—a prior specific or general written authorization of the customer. A general authorization in the cloud services agreement is possible subject to a prior notice to the customer. More specifically, the CSP needs to put in place a mechanism whereby the customer is notified of any changes concerning an addition or a replacement of a sub-processor before that sub-processor starts to process personal customer data.

The issues highlighted above demonstrate the shift in the contractual environment of cloud services.

Where major multinational CSPs used to offer a minimum set of contractual obligations coupled with minimal legal warranties, it is interesting to note how drastically the GDPR has changed the situation. Nowadays, the most important cloud players are happy to demonstrate their willingness to take on contractual commitments. The more influential you are as a cloud player, the better positioned you are to comply with the stringent requirements of the GDPR.


[1] Eduardo Ustaran – The Work Ahead.

[5] Article 40 of the GDPR.

[6] Article 5.6 of the Code.

The post The GDPR – One Year Later appeared first on McAfee Blogs.