EU Recalls Children’s Smartwatch Over Security Concerns

The European Commission has issued an EU-wide recall of the Safe-KID-One children’s smartwatch marketed by ENOX Group over concerns that the device leaves data such as location history, phone numbers and serial numbers vulnerable to hacking and alteration. The watch is equipped with GPS, a microphone and a speaker, and has a companion app that gives parents oversight of the child wearer. According to a February 1, 2019 alert posted on the EU’s recall and notification index for nonfood products, flaws in the product could permit malicious users to send commands to any Safe-KID-One watch (for example, making it call an arbitrary number), to communicate with the child wearing the device, and to locate the child through GPS. The European Commission concluded that, as a result, the device does not comply with the 2014 Radio Equipment Directive. This recall follows Germany’s November 2017 ban on smartwatches for children.

Draft CCPA Regulations Expected Fall 2019

As we previously reported, the California Consumer Privacy Act of 2018 (“CCPA”) delays the California Attorney General’s enforcement of the CCPA until six months after publication of the Attorney General’s implementing regulations, or July 1, 2020, whichever comes first. The California Department of Justice anticipates publishing a Notice of Proposed Regulatory Action concerning the CCPA in Fall 2019.

The regulations aim to (1) establish procedures to facilitate consumers’ rights under the CCPA and (2) provide guidance to businesses regarding how to comply. As required under the CCPA, the regulations will address:

  • the categories of personal information;
  • the definition of unique identifiers;
  • any exceptions necessary to comply with state or federal law, including, but not limited to, those relating to trade secrets and intellectual property rights;
  • rules and procedures for (1) the submission of a consumer request to opt out of the sale of personal information pursuant to Section 1798.120; (2) business compliance with a consumer’s opt-out request; and (3) the development and use of a recognizable and uniform opt-out logo or button by all businesses to promote consumer awareness of the opportunity to opt out of the sale of personal information;
  • adjusting the monetary threshold in Section 1798.140(c)(1)(A) in January of every odd-numbered year to reflect any increase in the Consumer Price Index;
  • the establishment of rules, procedures and any exceptions necessary to ensure that the notices and information that businesses are required to provide are relayed in a manner that may be easily understood by the average consumer, are accessible to consumers with disabilities, and are available in the language primarily used to interact with the consumer; and
  • the establishment of rules and procedures related to the verification of consumer requests.

Written comments may be submitted by email to privacyregulations@doj.ca.gov or by mail to the California Department of Justice, ATTN: Privacy Regulations Coordinator, 300 S. Spring St., Los Angeles, CA 90013. The deadline to submit written comments is March 8, 2019.

EDPB Releases Opinion on Interplay Between the Clinical Trial Regulation and the GDPR

On January 23, 2019, the European Data Protection Board (“EDPB”) released an opinion on the interplay between the European Clinical Trials Regulation (“CTR”) and the EU General Data Protection Regulation (“GDPR”) (the “Opinion”). The Opinion was requested by the European Commission Directorate-General for Health and Food Safety (“DG SANTE”).

The CTR is intended to harmonize the assessment and supervision of clinical trials throughout the EU, and introduces a Clinical Trials Information System along with rules on the protection of individuals and increased transparency requirements. Although the CTR entered into force in 2014, it is not expected to become applicable until 2020.

In the Opinion, the EDPB provides guidance on (1) the legal bases for processing personal data in the course of a clinical trial protocol and (2) secondary uses of clinical trial data outside the clinical trial protocol for scientific purposes.

Key takeaways from the Opinion include:

Processing Personal Data in the Course of a Clinical Trial Protocol

  • All processing operations related to a specific clinical trial protocol, from the commencement of the trial to the deletion of data at the end of the archiving period, are considered to be primary uses of clinical trial data.
  • For primary uses of clinical trial data, the EDPB considers processing activities as falling into one of two main categories — (1) processing operations related to the protection of health and setting standards of quality and safety for medicinal products by generating reliable and robust data (“Safety and Reliability Purposes”) and (2) processing operations related to research activities only (“Pure Research Activity Purposes”).
  • Regarding processing activities provided by the CTR and relevant national legislation related to Safety and Reliability Purposes, the EDPB concludes the appropriate legal basis for such processing is Article 6(1)(c) of the GDPR (processing necessary for compliance with a legal obligation to which the controller is subject) and, with respect to processing special categories of data, Article 9(2)(i) of the GDPR (processing necessary for reasons of public interest in the area of public health).
  • The EDPB notes that processing operations related to Pure Research Activity Purposes may fall under Article 6(1)(a) in conjunction with Article 9(2)(a) of the GDPR (explicit consent), Article 6(1)(e) (performance of a task carried out in the public interest) or Article 6(1)(f) in conjunction with Article 9(2)(i) or (j) (legitimate interests of the controller in conjunction with the processing being necessary for reasons of public interest in the area of public health, or necessary for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes).
  • With respect to consent for Pure Research Activity Purposes, the Opinion makes clear that informed consent under the CTR must not be confused with consent to process data under the GDPR, and that controllers who wish to rely on this basis must ensure they meet all GDPR requirements for consent, paying particular attention to the “freely given” element.
  • Withdrawal of consent applies in clinical trials as it would in the context of any other processing operation relying on consent. Withdrawal of consent in clinical trials will not, however, affect processing operations authorized on other grounds, including, in particular, processing for Safety and Reliability Purposes which are based on Article 6(1)(c) of the GDPR.

Secondary Uses of Clinical Trial Data Outside the Clinical Trial Protocol for Scientific Purposes

  • Regarding secondary uses of clinical trial data for scientific purposes, Article 28(2) of the CTR provides that, at the time a clinical trial subject gives informed consent to participate in a clinical trial, the sponsor may ask for consent to the use of the subject’s data outside the protocol of the clinical trial, exclusively for scientific purposes. Such consent is not considered the same as consent for processing personal data under the GDPR. If a sponsor or investigator would like to make further use of personal data for any scientific purpose other than those defined in the clinical trial protocol, doing so requires another specific legal basis for the secondary purpose. The chosen legal basis may be the same as, or different from, the legal basis of the primary use.
  • The EDPB further comments that the presumption of compatibility provided under Article 5(1)(b) of the GDPR should not be excluded from the secondary use of clinical trial data for scientific purposes. In other words, where the secondary use of the clinical trial data for scientific purposes is compatible with the original purpose, the controller may be able to further process such data without recourse to a new legal basis.

ICO Releases Discussion Paper on Regulatory Sandbox Beta Phase

On January 30, 2019, the UK Information Commissioner’s Office (“ICO”) released a discussion paper on the upcoming beta phase of its regulatory sandbox initiative (the “Discussion Paper”). The ICO launched a call for views on creating a regulatory sandbox in September 2018, and the feedback received has informed the development of the systems and processes needed to launch the beta phase.

According to the ICO, the purpose of the sandbox is to support the use of innovative products and services that are in the public interest, to assist in developing a shared understanding of what compliance in innovative areas looks like and to support the UK in being an innovative economy.

The Discussion Paper outlines the application process for entering the beta phase of the sandbox, how the ICO envisions the sandbox working in practice and the types of support it will offer organizations in the sandbox. It also presents various questions on the proposed approach, on which the ICO seeks feedback.

The ICO has launched an “intention to apply” survey to allow organizations to express interest in applying and to provide information about any product or service they plan to enter into the beta phase. Full details of the beta phase will be made available by the end of March 2019, with formal applications opening toward the end of April. The beta phase is expected to run from July 2019 to September 2020.

The Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth LLP previously submitted comments to the ICO on the regulatory sandbox, and supports and encourages its development and use in data protection. CIPL plans to hold a by-invitation-only roundtable on the sandbox in the coming weeks.

Hunton Andrews Kurth Publishes 2018 Retail Industry Year in Review

As reported on the Hunton Retail Law Resource blog, on January 17, 2019, Hunton Andrews Kurth’s retail industry team, composed of more than 200 lawyers across practices, released their annual Retail Industry Year in Review publication.

The 2018 Retail Industry Year in Review includes many topics of interest to retailers, including the use of artificial intelligence (AI), ITC investigations, product recall insurance, antitrust enforcement in the Trump Administration, the collection and storage of biometric data, the California Consumer Privacy Act, SEC and M&A activity in 2018, the #MeToo movement and the impact of cashierless stores.

Download a copy of the publication.

European Commission Issues GDPR Infographic

On January 25, 2019, the European Commission (the “Commission”) issued an infographic on compliance with, enforcement of and awareness of the EU General Data Protection Regulation (“GDPR”) since the GDPR took effect on May 25, 2018. The infographic revealed that:

  • 95,180 complaints have been lodged with EU national data protection authorities (“DPAs”) under the GDPR. Most complaints were related to the use of CCTV cameras and direct marketing activities (telemarketing and promotional e-mails).
  • 41,502 data breaches have been notified to the DPAs.
  • The DPAs have initiated 255 investigations in the context of EU cross-border processing activities, most of them following individual complaints.
  • Three fines have been issued under the GDPR so far. On September 12, 2018, the Austrian DPA imposed a €5,280 fine on a sport betting café for unlawful video surveillance. On November 21, 2018, the German DPA of Baden-Württemberg imposed a €20,000 fine on a social network operator for failing to protect users’ personal data. On January 21, 2019, the French DPA imposed a €50 million fine on Google for alleged GDPR violations of the transparency, notice and consent requirements.
  • 23 EU Member States have now adapted their national legislation to the GDPR. Five Member States (Bulgaria, Greece, Slovenia, Portugal and the Czech Republic) are still in the process of adapting their national legislation.

Ten Years Strong: A Decade of Privacy and Cybersecurity Insights

In January 2019, Hunton Andrews Kurth celebrates the 10-year anniversary of our award-winning Privacy and Information Security Law Blog. Over the past decade, we have worked hard to provide timely, cutting-edge updates on the ever-evolving global privacy and cybersecurity legal landscape. Ten Years Strong: A Decade of Privacy and Cybersecurity Insights is a compilation of our blog’s top ten most read posts over the decade, and addresses some of the most transformative changes in the privacy and cybersecurity field.

Read Ten Years Strong: A Decade of Privacy and Cybersecurity Insights.

Dutch DPA Publishes 2018 Report on Data Breach Statistics

On January 29, 2019, the Dutch Data Protection Authority (Autoriteit Persoonsgegevens, the “Dutch DPA”) published a report (in Dutch) on the personal data breach notifications it received in 2018 (the “Report”). The EU General Data Protection Regulation (the “GDPR”) requires data controllers to notify a personal data breach to the competent Data Protection Authority (“DPA”) within 72 hours after becoming aware of it. In the Netherlands, this breach notification requirement has been in place since January 1, 2016. The GDPR, however, imposed additional requirements, including the obligation to provide certain information in a breach notification, the obligation to notify affected individuals if the breach is likely to result in a high risk to their rights and freedoms, and the duty to document all personal data breaches.

Facts and Figures

In 2018, the number of data breach notifications the Dutch DPA received doubled, totaling 20,881 breach notifications. The most affected sectors were health and wellbeing (29% of the breaches notified), the financial sector (26%) and the public sector (17%). In 63% of the cases, the breach involved personal data sent to the wrong email address. The remaining 37% of the cases related to the loss of personal data (such as a lost laptop or USB stick), hacking, phishing or malware. In most cases, the affected personal data consisted of data subjects’ names and contact details, gender, health data and national identification numbers.

In the Report, the Dutch DPA indicates that companies did not notify all personal data breaches that were notifiable. For example, certain companies informed the individuals affected by a personal data breach but did not notify the competent DPA. The number of personal data breaches that should have been notified to the Dutch DPA in 2018 is therefore likely higher than the number actually reported, and the Dutch DPA indicated that it will specifically focus on this issue in 2019.

Dutch DPA Actions

The Dutch DPA took several measures in response to the breach notifications it received in 2018. The Report indicates that in many cases, the Dutch DPA (1) provided advice to companies (including about the security measures to be implemented), (2) requested additional information about the personal data breach being reported, (3) sent a letter to the company providing notification to explain the applicable requirements and (4) initiated discussions with those companies.

Since May 25, 2018, the Dutch DPA has taken action against reporting companies in 298 of the personal data breach cases reported. Some of these cases are still pending. In general, these actions led to a warning that put an end to the violation. In four cases, the Dutch DPA conducted an investigation in response to the notification.

European Data Protection Board Issues Privacy Shield Report

On January 22, 2019, the European Data Protection Board (“EDPB”) issued a report on the Second Annual Review of the EU-U.S. Privacy Shield (the “Report”). Although not binding on EU or U.S. authorities, the Report provides guidance to regulators in both jurisdictions regarding implementation of the Privacy Shield and highlights the EDPB’s ongoing concerns with regard to the Privacy Shield. We previously blogged about the European Commission’s report on the second annual review of the Privacy Shield, and the joint statement of the European Commission and Department of Commerce regarding the second annual review.

In the Report, the EDPB praised certain actions and efforts undertaken by U.S. authorities and the European Commission to implement the Privacy Shield, including the following:

  • Efforts by the Department of Commerce to adapt the initial certification process to minimize inconsistencies between the Department’s Privacy Shield List and representations made by certifying organizations (in their privacy notices) regarding their participation in the Privacy Shield;
  • Enforcement actions and other oversight measures taken by the Department of Commerce and Federal Trade Commission regarding compliance with the Privacy Shield; and
  • Issuance of guidance for EU individuals on exercising their rights under the Privacy Shield, and for U.S. businesses to clarify the requirements of the Privacy Shield (e.g., the Department of Commerce’s FAQs available on PrivacyShield.gov).

The Report identifies continuing concerns of the EDPB, including the following key areas:

  • According to the EDPB, “a majority of companies’ compliance with the substance of the Privacy Shield’s principles remain unchecked.” The EDPB indicated that the application of the Shield principles by certifying organizations has not yet been ascertained through oversight and enforcement action by U.S. authorities.
  • With respect to the onward transfer principle, the EDPB suggested that U.S. authorities more closely monitor the implementation of this principle by certified entities, suggesting, for example, that the Department of Commerce exercise “its right to ask organizations to produce the contracts they have put in place with third countries’ partners” to assess whether the contracts provide the required safeguards and whether further guidance or action by the U.S. authorities is needed in this regard.
  • The EDPB indicated that the re-certification process “needs to be further refined,” noting that the Privacy Shield list contains outdated listings, leading to confusion for data subjects.
  • The Report highlights the uncertainty surrounding the application of the Privacy Shield to HR data, noting that conflicting interpretations of the definition of HR data have led to uncertainty as to what protections are available.

In addition, the Report notes that the EDPB is still awaiting the appointment of a permanent independent Ombudsperson to oversee the Privacy Shield program in the U.S. Until such time as an appointment is made, the EDPB cannot determine whether the Ombudsperson “is vested with sufficient powers to remedy non-compliance” with the Privacy Shield.

CIPL Submits Comments to ICDPPC Declaration on Ethics and Data Protection in AI

On January 25, 2019, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth submitted formal comments to the International Conference of Data Protection and Privacy Commissioners (the “International Conference”) on its Declaration on Ethics and Data Protection in Artificial Intelligence (the “Declaration”). The Declaration was adopted by the International Conference on October 23, 2018, for public consultation.

As we previously reported, the Declaration endorses several guiding principles as core values to preserve human rights as artificial intelligence technology develops. CIPL welcomes and shares many of the views expressed by the International Conference with respect to these six guiding principles.

In its comments on the Declaration, CIPL recommends several specific modifications and clarifications to the guiding principles of fairness; continued attention and vigilance, and accountability; transparency and intelligibility; responsible design and development; individual empowerment; and reducing or mitigating unlawful biases and discrimination.

These comments are intended to assist the International Conference’s newly established permanent working group on Ethics and Data Protection in AI as it seeks to establish common governance principles for artificial intelligence at an international level.

To read CIPL’s recommendations on these principles, please view the full paper.

Hunton Briefing Reflects on GDPR Implementation and Future Challenges

On January 16, 2019, Hunton Andrews Kurth hosted a breakfast seminar in London, entitled “GDPR: Post Implementation Review.” Bridget Treacy, Aaron Simpson and James Henderson from Hunton Andrews Kurth and Bojana Bellamy from the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth discussed some of the challenges and successes companies encountered in implementing the EU General Data Protection Regulation (the “GDPR”), and also identified key data protection challenges that lie ahead. The Hunton team was joined by Neil Paterson, Group Data Protection Coordinator of TUI Group; Miles Briggs, Data Protection Officer of TUI UK & Ireland; and Vivienne Artz, Chief Privacy Officer at Refinitiv, who provided an in-house perspective on the GDPR.

The briefing provided an opportunity for the companies in attendance (the “Companies”) to reflect on their achievements so far and to benchmark their GDPR experiences ahead of Data Protection Day on January 28, 2019. A main takeaway was that building a business-friendly privacy environment is an ongoing process that must be viewed from a global perspective.

We have summarized below some of the key discussion points from the seminar.

GDPR Implementation Insights

  • Generally Satisfied with Compliance: While the Companies were reasonably satisfied with the bulk of their GDPR implementation work and are now engaged in fine-tuning their data protection compliance programs, the Companies recognized that a number of challenges remain.
  • Global Privacy Challenges: Data Protection Officers are seeking to move their companies toward sustainable privacy programs that ensure GDPR compliance, yet also address global privacy challenges beyond the GDPR. The Companies view GDPR compliance as important, but not an end in itself, particularly given recent legislative developments in other parts of the world, such as India and Brazil. The Companies recognize privacy as the new normal, and are working to build efficient programs that address privacy challenges at an international level.
  • Maintaining a Culture of Privacy Awareness: Maintaining and developing a culture of privacy awareness within their companies is a key concern for privacy leaders. Some business leaders viewed the GDPR as a task completed once the implementation date of May 25, 2018, had passed, rather than as an ongoing responsibility, and privacy leaders have been working hard to correct this view.
  • Territorial Scope: Many of the Companies have struggled to interpret the territorial scope of the GDPR. Insights from the European Data Protection Board’s Guidelines on Territorial Scope (3/2018), published in November 2018, have helped to clarify the position on topics such as the location of the protected data subjects, the use of non-EU based processors and the nature of a non-EU processor’s obligations.
  • Data Processing Agreements: Implementing Article 28 requirements continues to challenge the Companies, with a broad range of positions being adopted when negotiating data processing agreements. Negotiating liability caps and exclusions can be complex, due in part to the risk of reopening broader liability and other contractual issues. It will likely take some time for market practice to evolve.
  • Increased Training and Tech-enabled Compliance Tools: The Companies mentioned that, in the year ahead, conducting data protection training and awareness programs and rolling out tech-enabled compliance tools (e.g., for DPIAs and DSARs) will play a key part in enabling ongoing compliance with the GDPR.
  • GDPR and Future Privacy Challenges: The Companies stressed the difficulties encountered in interpreting and implementing GDPR obligations in the context of artificial intelligence, machine learning and the big data challenges of tomorrow. Companies will need to find innovative ways to accommodate big data while respecting data subject rights.

Regulatory Perspective

  • Increase in Complaints and Breach Reporting: As expected, data protection authorities (“DPAs”) have already been required to deal with a significant volume of complaints (according to one report, 42,230 throughout the EU) and reports of data breaches (some 500 per week in the UK in the first few weeks after the GDPR took effect). Breach notifications across EU Member States have reached levels that are barely sustainable for most EU regulators. This is a consequence of the low notification threshold set by the GDPR, and of organizations adopting a very conservative approach toward notification. The ICO has reminded organizations that not all data breaches need to be reported. Other DPAs take a different view, pointing to the need for more comprehensive guidance on this topic.
  • Inconsistency across Member States: There are already examples of inconsistent approaches by EU DPAs to the implementation of the GDPR framework. Perhaps the starkest example is the 21 separate DPIA frameworks adopted at a national level. Staffing levels also differ among DPAs, and differences in enforcement strategy are likely. It will take time for these differences to be reconciled, and in some areas they will remain. Just as companies require time to embed and fine-tune their implementation of the GDPR, regulators will also require time to adjust to the new regulatory environment.

Future Challenges

  • Moving Beyond Local Compliance to Global Privacy Accountability: Privacy frameworks are evolving and organizations face the challenge of moving their focus from local legal compliance to implementing a global operational privacy framework. The GDPR is now viewed as a template by countries seeking to craft new privacy laws. It offers a major step forward towards an operational privacy framework, but global privacy accountability will remain a challenge.
  • Local Challenges: Privacy leaders aspire to ensure that at every level of their organization, staff recognize the privacy issues raised by each decision, and assess the privacy risk for affected data subjects.
  • Future Challenges: Major legal challenges highlighted by participants included Brexit, the e-Privacy Regulation and the likelihood of legal challenges under the GDPR.

Hunton is hosting its next seminar in its London office on “Practical Insights on the Design and Implementation of Data Protection Impact Assessments,” on March 6, 2019.

Illinois Supreme Court Says Biometric-Data Protection Law Does Not Require Allegation of Actual Injury

The Illinois Supreme Court ruled today that an allegation of “actual injury or adverse effect” is not required to establish standing to sue under the Illinois Biometric Information Privacy Act, 740 ILCS 14 (“BIPA”). This post discusses the importance of the ruling to current and future BIPA litigation.

The Illinois Supreme Court rendered a decision on January 25, 2019, that gives the green light to certain plaintiffs seeking redress under BIPA. BIPA provides a private right of action to Illinois residents “aggrieved” by private entities that collect their biometric data (including retina scans, fingerprints and face geometry) without complying with the statute’s notice and consent requirements. Hundreds of cases have been filed under the law, including many putative class actions enticed by per-violation statutory damages of $1,000 or more.

In the opinion, the Illinois Supreme Court unanimously found that allegations of a technical violation alone can sustain an action, and that limiting BIPA claims to those individuals who can plead and prove an actual injury would depart from the plain and unambiguous meaning of the law. The case is styled Stacy Rosenbach v. Six Flags Entertainment Corp., No. 123186 (Ill.).

BIPA currently is the most watched statute in the U.S. concerning the collection and use of biometric data because it is the only such law that provides a private right of action. The court’s decision resolves a jurisdictional issue that had derailed some prior lawsuits. Today’s decision promises to ramp up an already steady stream of litigation both in and outside of Illinois.

Use of biometric technology by businesses for employee timekeeping, customer identification, and other applications is increasing. The importance of strict compliance with BIPA for companies operating in Illinois is now unavoidably clear.

Elizabeth Denham, UK Information Commissioner Receives Queen’s Honor

On December 29, 2018, the UK Information Commissioner’s Office announced that Elizabeth Denham, UK Information Commissioner, was awarded a CBE for her services to protecting information. Denham’s award was announced in the United Kingdom’s 2019 New Year’s Honours list. This honor reflects Denham’s achievements as the UK Information Commissioner and the enhanced leadership, visibility and impact that she has brought to the role and the Office.

The title of Commander of the Most Excellent Order of the British Empire is bestowed by the Queen for prominent national or regional roles and to those making distinguished or notable contributions in their own specific field of activity.

In speaking about the award, Denham commented: “My hope is that this honour will assist in drawing attention to the importance of data protection to citizens, particularly this year, which has seen major reforms in data protection law with the introduction of the General Data Protection Regulation – reforms to keep pace with changes in our digital age.”

Previously, Denham received the Queen Elizabeth II Diamond Jubilee Medal for her service as an Officer of the Legislature of British Columbia, Canada, and was named the most influential person in data-driven business in the 2018 edition of the DataIQ 100.

Belgian DPA Publishes Prior Consultation Form in the Context of a DPIA

The Belgian Data Protection Authority (the “Belgian DPA”) recently published on its website a form to be completed for prior consultation in the context of a data protection impact assessment (“DPIA”).

Under Article 36 of the EU General Data Protection Regulation (the “GDPR”), data controllers must consult the supervisory authority where a DPIA indicates that the processing would result in a high risk in the absence of measures taken by the controller to mitigate the risk. The form must be completed and sent back to the Belgian DPA by email or by mail.

The form, which may be used by companies to request a prior consultation or to cancel a submitted request for prior consultation, includes questions to be answered in case of cross-border activities. Answers to these questions will allow the Belgian DPA to determine whether it should be deemed a lead DPA with respect to a submitted prior consultation request, or to refer the case to the appropriate DPA.

The form also includes queries regarding details of the processing activity, as well as questions to help assess risks related to the processing activity and how the company will manage such risk. If the Belgian DPA concludes that an envisaged processing activity may infringe the GDPR, it will issue an opinion regarding such processing activity within eight weeks. This period may be extended by six additional weeks depending on the complexity of the case.

Together with the form, the Belgian DPA also published a guide to assist companies in determining whether or not they must conduct a DPIA and when they must consult the Belgian DPA following a DPIA.

In the guide, the Belgian DPA notes the three types of “high risk” processing activities that always require a DPIA under the GDPR, as well as the Article 29 Working Party’s list of nine factors to consider when determining whether a processing activity is “high risk.”

View the form (in French and Dutch) and the guide (in French and Dutch).

Reported Cyber Attacks on U.S. Electric Utilities and Government Agencies

Hundreds of contractors and subcontractors with connections to U.S. electric utilities and government agencies have been hacked, according to a recent report by the Wall Street Journal. The U.S. government has linked the hackers to a Russian state-sponsored group, sometimes called Dragonfly or Energetic Bear. The U.S. government alerted the public that the hacking campaign started in March 2016, if not earlier, although many of its victims were unaware of the incident until notified by the Federal Bureau of Investigation and Department of Homeland Security, the Wall Street Journal reports.

Instead of using sophisticated techniques to attack utility companies directly, the hackers largely “exploited trusted business relationships using impersonation and trickery” to access the networks of U.S. electric utilities, such as by planting malware on the sites of online publications frequently read by utility engineers and through clever spear-phishing emails. According to the article, Jonathan Homer, the Department of Homeland Security’s Chief of Industrial Control Systems Group, reported in a briefing to utilities last year that the hackers could have caused temporary power outages. While the exact number of utilities and vendors compromised is unknown, the article reports that industry experts believe the hackers likely still have access to some systems.

European Commission Adopts Japan Adequacy Decision

On January 23, 2019, the European Commission announced that it has adopted its adequacy decision on Japan (the “Adequacy Decision”). According to the announcement, Japan has adopted an equivalent decision and the adequacy arrangement is applicable with immediate effect.

Prior to the adoption of the Adequacy Decision, Japan implemented a series of additional safeguards designed to ensure that data transferred from the EU to Japan will be protected in line with European standards. These include:

  • A set of supplementary rules to bridge the difference between EU and Japanese standards on various issues, including sensitive data, the exercise of individual rights and onward transfer of EU data to third countries;
  • Safeguards concerning Japanese public authorities’ access to EU personal data for criminal law enforcement and national security purposes; and
  • A complaint-handling mechanism, administered and supervised by the Japanese Personal Information Protection Commission, to investigate and resolve complaints from Europeans regarding access to their data by Japanese public authorities.

In terms of next steps, an initial joint review will be carried out after two years to evaluate the functioning of the framework. The assessment will cover all aspects of the Adequacy Decision, including the application of the additional safeguards mentioned above. Representatives of the European Data Protection Board will participate in the portion of the review relating to access to data by Japanese public authorities for law enforcement and national security purposes. Following this initial review, periodic reviews of the framework will take place at least every four years.

Commenting on the Adequacy Decision, Věra Jourová, Commissioner for Justice, Consumers and Gender Equality, noted in the announcement that the “decision creates the world’s largest area of safe data flows” and that “this arrangement will serve as an example for future partnerships in this key area [of privacy] and help setting global standards.”

The European Commission has also released a factsheet and Q&As on the adequacy arrangement with Japan.

CNIL Fines Google €50 Million for Alleged GDPR Violations

On January 21, 2019, the French Data Protection Authority (the “CNIL”) imposed a fine of €50 million on Google LLC under the EU General Data Protection Regulation (the “GDPR”) for its alleged failure to (1) provide notice in an easily accessible form, using clear and plain language, when users configure their Android mobile device and create a Google account, and (2) obtain users’ valid consent to process their personal data for ad personalization purposes. The CNIL’s enforcement action was the result of collective actions filed by two not-for-profit associations. This fine against Google is the first fine imposed by the CNIL under the GDPR and the highest fine imposed by a supervisory authority within the EU under the GDPR to date.

Background

On May 25, 2018, the Austrian not-for-profit association None Of Your Business (“NOYB”) filed a collective action with the CNIL pursuant to Article 80 of the GDPR, arguing that mobile phone users using Google’s Android operating system are required to accept Google’s privacy policy and general terms of use of Google services in order to use their mobile phones. On May 28, 2018, the French not-for-profit association La Quadrature du Net (“LQDN”) also filed a collective action, arguing that Google did not have a valid legal basis to process users’ personal data for behavioral analysis and targeted advertising purposes.

On June 1, 2018, the CNIL shared these two complaints with other EU data protection supervisory authorities with a view toward designating a lead supervisory authority in accordance with Article 56 of the GDPR. On September 21, 2018, the CNIL nonetheless carried out an online inspection to assess whether the processing activities carried out by Google in the context of the Android operating system complied with the French Data Protection Act and the GDPR.

CNIL’s Jurisdiction over Google LLC’s Processing Activities

Google challenged the CNIL’s jurisdiction, arguing that its Irish affiliate, Google Ireland Limited, is Google LLC’s European headquarters and its main establishment for purposes of the GDPR’s one-stop-shop mechanism, and that the complaints should therefore have been handled by the Irish Data Protection Commissioner as Google’s lead supervisory authority.

According to the CNIL, the evidence provided by Google revealed that Google Ireland Limited was simply involved in various activities carried out by Google LLC in the EU and did not have decision-making powers over the personal data processing activities covered in the privacy policy presented to users when creating a Google account during the configuration of their Android mobile phones. Accordingly, the CNIL concluded that Google did not have a main establishment in the EU and that the one-stop-shop mechanism was therefore inapplicable. As a result, the CNIL was competent to evaluate the data processing activities carried out by Google LLC. The CNIL did not consult the European Data Protection Board regarding identification of a possible lead supervisory authority, and noted that the president of the Board similarly did not consider it necessary for the Board to be consulted.

Alleged GDPR Violations

  • In its ruling, the CNIL found that Google LLC had failed to (1) comply with the transparency and notice requirements of the GDPR and (2) obtain valid consent from users. With respect to the transparency obligations, the CNIL found that the disclosures provided by Google were not easily accessible to users and that information was spread across several documents. According to the CNIL, these documents included multiple buttons and links that users had to click to access additional information, sometimes requiring up to five or six actions to obtain the relevant information about the data processing. In addition, the CNIL found that the descriptions of the purposes (such as providing personalized services in terms of content and ads, ensuring the security of the services and products, and providing and developing services) and of the types of data processed for those purposes were too vague. In the CNIL’s view, those descriptions did not allow users to understand the extent of the data processing carried out by Google and its consequences. The CNIL also found that the privacy policy was not clear with respect to the legal basis for processing personal data for ad personalization purposes (i.e., users’ consent). Further, the CNIL found that, for a certain type of data, the information provided did not include a specific retention period or the criteria that would allow users to determine that period.
  • With respect to consent, the CNIL found that, in light of the above, users’ consent for ad personalization purposes was not sufficiently informed, since the information was diluted across several documents. The CNIL also found that users’ consent was not specific or unambiguous, as required by the GDPR. The CNIL noted that it was possible for users to modify some of the options associated with their Google account and to configure the display of personalized ads by ticking a box. In the CNIL’s view, however, consent was not unambiguous because the boxes in question were pre-checked by default. In this respect, the CNIL stated that unambiguous consent requires a clear affirmative action from users (e.g., checking a box that is not pre-checked). Further, the CNIL found that consent was not specific because, before creating an account, users were asked to consent to all the processing operations carried out by Google on the basis of consent, as further described in Google’s privacy policy. The CNIL stated that consent is specific only if it is given distinctly for each purpose.

CNIL’s Sanction

In setting its fine at €50 million, the CNIL considered the following:

  • The fact that the alleged violations relate to essential principles of the GDPR and are therefore particularly serious;
  • The fact that the alleged violations are still occurring and constitute continuous breaches of the GDPR;
  • The importance of the Android operating system in the French market; and
  • The extent of the data processing operations covered by the privacy policy presented to users when creating a Google account during the configuration of their Android mobile phone, considering the number of Google services involved and the variety of data processed via, or in relation to, the Android operating system.

The CNIL imposed its fine on Google LLC but addressed its decision to Google France SARL for enforcement purposes. Google LLC may appeal the decision to France’s highest administrative court (the Conseil d’Etat) within four months.

CIPL Submits Comments to EDPB’s Draft Guidelines on the Territorial Scope of the GDPR

On January 18, 2019, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth LLP submitted formal comments to the European Data Protection Board (the “EDPB”) on its draft guidelines on the territorial scope of the GDPR (the “Guidelines”). The Guidelines were adopted by the EDPB on November 16, 2018, for public consultation.

CIPL appreciates many of the clarifications and concrete examples provided by the EDPB in the Guidelines with respect to the extraterritorial reach and application of the GDPR. Such clarity is critical for the consistent application and interpretation of Article 3 by organizations and data protection authorities (“DPAs”). At the same time, however, CIPL identified several instances where the Guidelines appear to stretch the criteria triggering the application of the GDPR too far, and believes that such scenarios would benefit from further clarification or adjustment.

In its comments to the Guidelines, CIPL recommends several changes or clarifications the EDPB should incorporate in its final Guidelines.

Some key recommendations include:

  • Providing detailed examples and further clarity regarding several aspects of the establishment criterion under Article 3(1) of the GDPR, including instances where the establishment threshold would not be met;
  • Clarifying several examples in the Guidelines with respect to the offering of goods and services to individuals in the EU under Article 3(2)(a) of the GDPR and adding CIPL’s proposed additional examples;
  • Providing more detail around what types of activities fall and do not fall under the definition of “monitoring” under Article 3(2)(b) of the GDPR, particularly with respect to monitoring in the employment and security contexts;
  • Further clarifying the role, responsibilities and liability of the Article 27 representative; and
  • Explaining the relationship between Article 3 on the territorial scope of the GDPR and Chapter V of the GDPR on international data transfers.

CIPL also includes, in an annex to its comments, a chart designed to illustrate the GDPR’s territorial scope at a glance. The chart is intended to help organizations and DPAs quickly assess whether, and to what extent, an organization is subject to the GDPR. CIPL recommends that the EDPB include this illustration in the final version of the Guidelines.

To read the above recommendations in more detail, along with CIPL’s other recommendations on the territorial scope of the GDPR, view the full paper.

CIPL’s comments were developed based on input by the private sector participants in CIPL’s ongoing GDPR Implementation Project, which includes more than 92 individual private sector organizations. As part of this initiative, CIPL will continue to provide formal input about other GDPR topics prioritized by the EDPB.

Dutch DPA Investigates the Data Processing Agreements of 30 Organizations

On January 16, 2019, the Dutch Data Protection Authority, the Autoriteit Persoonsgegevens (the “Dutch DPA”), announced that it had requested that 30 private organizations provide information about the agreements they have with other entities that process personal data on their behalf. The Dutch DPA indicated that the targeted organizations are mainly in the energy, media and trade sectors.

Article 28 of the EU General Data Protection Regulation (the “GDPR”) requires data controllers to enter into data processing agreements with data processors. These agreements must specify how personal data should be processed and protected. In particular, a data processing agreement must stipulate the subject matter and duration of the processing, the nature and purpose of the processing, the type of personal data and categories of data subjects, the obligations and rights of the data controller, and how personal data should be protected by the data processor.

Since the GDPR came into force on May 25, 2018, the Dutch DPA regularly verifies whether organizations comply with its legal requirements. The Dutch DPA has previously investigated whether governmental organizations, hospitals, insurance brokers and banks had appointed a data protection officer, for example, and verified that large private organizations, as required under Article 30 of the GDPR, were keeping a record of their processing activities.

View the press release (in Dutch).

UK House of Commons Rejects Draft Brexit Withdrawal Agreement

On January 15, 2019, the UK House of Commons rejected the draft Brexit Withdrawal Agreement negotiated between the UK Prime Minister and the EU by a margin of 432-202. While the magnitude of the loss set in motion a process that could have resulted in an early general election, on January 16 a majority of British Members of Parliament rejected a vote of no confidence in Theresa May’s government.

While calls for a fresh referendum are gathering momentum, and the possibility of an exit from the EU without an agreed-upon plan continues to loom large, from a data protection perspective the UK Information Commissioner’s Office’s (“ICO”) recently published guidance for businesses regarding the consequences of a UK exit without a deal remains relevant. In this guidance, the ICO has recommended six steps for companies to take in the event of a hard Brexit, including:

  • Continue to apply GDPR standards and follow current ICO guidance;
  • Identify relevant data flows from the EU to the UK and ensure appropriate data transfer mechanisms are in place in respect of those transfers once the UK leaves the EU;
  • Identify relevant data flows from the UK to any country outside the UK, as these data transfers will require a separate data transfer mechanism in due course;
  • Review and assess the company’s operations across the EU, and assess how the UK’s exit from the EU will affect the data protection regimes that apply to the company;
  • Review privacy-related documents (e.g., notices) and internal documentation to identify any details that will need to be updated once the UK leaves the EU; and
  • Ensure that key individuals within the organization are aware of these key issues, involved in relevant planning activities, and kept up to date with the latest information and guidance.

In addition, the ICO has published guidance on the effects of leaving the EU without a Withdrawal Agreement, which provides detailed explanations in relation to how various aspects of the GDPR will apply in the UK in the event of a no-deal Brexit. Those areas include data transfer restrictions, the appointment of representatives, the one-stop-shop, the ICO’s participation in the European Data Protection Board, and various other matters. Finally, the ICO has published a general overview of the issues at stake in the form of frequently asked questions.

The ICO has indicated that it will provide more detailed guidance as the situation develops further. View the ICO’s guidance.

CCPA: Employers Should Consider Implications for Employee Benefit Plans

As we move closer to implementation of the California Consumer Privacy Act of 2018 (“CCPA”), companies should consider how the new law could affect their operations in multiple ways – including, for example, data collected through their employee benefit plans.

As we have previously reported, the CCPA applies broadly to any for-profit business that meets certain thresholds and that collects personal information regarding consumers. While use of the term “consumer” may suggest a particular type of relationship, the term is defined broadly to include any California resident – and as a result, in its current form the CCPA also will apply to information collected by covered businesses about their California employees. Whether the CCPA also applies to data collected about California residents under employee benefit plans of covered businesses will likely depend in part on the type of plan:

  • Health Plans. Following its amendment in September 2018, the CCPA includes an exemption for protected health information (“PHI”) collected by a covered entity or business associate subject to the HIPAA privacy rules. Because employer-sponsored health plans are HIPAA-covered entities, any PHI held by a self-insured plan and subject to HIPAA will be outside the reach of the CCPA. The exemption also applies to PHI held by business associates, such as third-party administrators for health plans. However, certain other health-related information that is held by an employer outside of the health plan – such as information related to disability benefits or sick leave – is not covered by this exemption.
  • Retirement and other ERISA Plans. The CCPA does not specifically address its application to benefit plans not covered by HIPAA. For plans that are subject to the Employee Retirement Income Security Act of 1974 (“ERISA”), such as 401(k) plans and other qualified retirement plans, it is possible that the CCPA could be preempted by ERISA – but unlike the health plan exemption, it is not clear from the statute.
    • In general, ERISA preempts state laws that govern a central matter of plan administration or that impermissibly interfere with nationally uniform plan administration. For example, in its 2016 decision in Gobeille v. Liberty Mutual Insurance Company, the U.S. Supreme Court held that ERISA preempted a Vermont law requiring various entities, including self-insured plans and third party administrators, to report payments relating to health care claims and other information regarding health care services.
    • The CCPA imposes new requirements regarding retention and deletion of personal information, and certain disclosures regarding use of personal information. Because reporting, disclosure and recordkeeping are key areas of regulation under ERISA, it is possible the law could be preempted on the basis that it impermissibly interferes with plan administration. In the absence of further guidance, however, it is not certain to what extent preemption would apply – and it is also possible that a court could find that ERISA preempts some aspects of the law but not others.
  • Non-ERISA Benefits and Employment Practices. Even if the CCPA is ultimately determined to be preempted in the context of ERISA plans, it will still apply to data collection by an employer in its capacity as an employer, as well as data related to benefits and policies not covered by ERISA. This includes information collected by an employer in connection with administering vacation, sick leave, paid time off or leaves of absence. Other benefits that are generally not subject to ERISA include health savings accounts, dependent care flexible spending accounts, many short-term disability plans and certain voluntary benefits.

The California State Legislature is expected to consider more changes to the CCPA in 2019 – so we may receive more guidance about the application of the law in the employment context. In the meantime, employers and benefit plan sponsors subject to the CCPA will want to consider how the new law could apply to their own benefit plans and the data of their plan participants and beneficiaries. Since many plans are administered by third party record-keepers, employers and plan sponsors may also want to reach out to their vendors to ask about any plans being put in place to comply with the CCPA.

Advocate General Finds Search Engine Operators May Limit the Scope of Right to Be Forgotten to the EU

On January 10, 2019, Advocate General Maciej Szpunar (“Advocate General”) of the Court of Justice of the European Union (“CJEU”) issued an Opinion in the case of Google v. CNIL, which is currently pending before the CJEU. In the Opinion, the Advocate General provided his views concerning the territorial scope of the right to be forgotten under the relevant EU Data Protection Directive in the case at hand.

Background

In its 2014 Costeja judgment, the CJEU held that individuals have a right to request, under certain conditions, that their personal data no longer be displayed by search engines in response to searches of the individual’s name. This is the “right to de-listing” or “right to de-referencing”, more commonly known as the “right to be forgotten.”

In May 2015, the French data protection authority (the “CNIL”) formally notified Google that, in responding to such a request, Google must delist the results on all of its search engine’s domain name extensions, meaning worldwide. Google refused to comply, limiting the removals to results generated from searches entered on domain names corresponding to EU Member States’ versions of Google’s search engine. After the time limit prescribed in the CNIL’s formal notice, Google further proposed a “geo-blocking” technique that would prevent an Internet user searching for the delisting requester’s name from accessing the link results at issue from an IP address located in the requester’s EU Member State of residence, regardless of the version of the search engine used. The CNIL regarded this as an inadequate proposal, and found that Google had failed to comply with the formal notice within the prescribed time limit. As a result, the CNIL imposed a fine of €100,000 on Google. Google appealed that decision before France’s Council of State (France’s highest administrative court). The Council of State decided to refer to the CJEU several questions relating to the territorial scope of the right to be forgotten.
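For readers unfamiliar with the mechanics, the sketch below illustrates how IP-based geo-blocking of delisted results can work in principle. It is a minimal illustration only: the function names, the toy geolocation table and the data structures are our own assumptions, not Google’s actual implementation.

```python
# Illustrative sketch of IP-based geo-blocking of delisted search results.
# All names and data here are hypothetical; this is not Google's implementation.

# Delisted queries: query -> (set of delisted URLs, requester's Member State).
DELISTED = {
    "jane doe": ({"https://example.com/old-article"}, "FR"),
}

# Toy geolocation table; a real system would query a GeoIP database.
GEO_TABLE = {"81.2.69.142": "FR", "8.8.8.8": "US"}


def lookup_country(ip_address):
    """Map an IP address to a country code (stubbed with a toy table)."""
    return GEO_TABLE.get(ip_address, "UNKNOWN")


def filter_results(query, results, client_ip):
    """Drop delisted links when the searcher's IP geolocates to the
    delisting requester's Member State, regardless of which domain
    (google.fr, google.com, ...) served the search."""
    entry = DELISTED.get(query.lower())
    if entry is None:
        return results
    delisted_urls, requester_state = entry
    if lookup_country(client_ip) == requester_state:
        return [url for url in results if url not in delisted_urls]
    return results


# A search from a French IP address omits the delisted link...
print(filter_results("Jane Doe",
                     ["https://example.com/old-article", "https://example.com/bio"],
                     "81.2.69.142"))
# ...while the same search from a U.S. IP address still returns it.
print(filter_results("Jane Doe",
                     ["https://example.com/old-article", "https://example.com/bio"],
                     "8.8.8.8"))
```

The key point of the dispute is visible in the sketch: the filter keys off the searcher’s location rather than the search engine’s domain, which is why the CNIL nonetheless considered it insufficient compared with worldwide delisting.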

The Opinion

The Advocate General first observed that the provisions of the EU Data Protection Directive do not expressly address the territorial scope issue. In his view, a distinction should be made based on the location of the search request, such that if a search is input outside of the EU, the results should not be impacted by the de-listing of the search results in the EU.

The Advocate General explained that the EU Treaties apply to EU Member States and that EU law should not apply beyond the territory of the EU Member States. The Advocate General recognized that EU law may have extraterritorial effect but such effect only applies in exceptional cases, such as in competition law or trademark law cases affecting the EU internal market.

Further, the Advocate General stressed that the right to be forgotten must be balanced against other fundamental rights, such as the legitimate public interest in accessing the information sought, and that the audience concerned is not worldwide but European. In his view, the CNIL’s approach entailed a risk that individuals in non-EU countries would be prevented from accessing information and, in turn, that non-EU countries could prevent individuals in the EU from accessing information. A “race to the bottom” could thus occur, to the detriment of freedom of expression at both the European and worldwide level.

Based on this reasoning, the Advocate General concluded that search engine operators are not required to carry out de-listing of specific links on all the domain names of their search engines. Instead, operators should be required to remove the links in question from results generated by searches performed within the EU. In this respect, the Advocate General underscored, operators should take every measure available to them to ensure full and effective de-listing within the EU, including for searches made from a device with an EU IP address, regardless of the domain name used by the Internet user performing the search.

Next Steps

The CJEU will now begin its deliberation in the Google v. CNIL case and the final judgment is expected in the coming months. The Advocate General’s Opinion is not binding on the CJEU, but is highly influential. After the CJEU has issued a final judgment, France’s Council of State will decide the case in accordance with the CJEU’s ruling.

Illinois BIPA Suit Dismissed for Lack of Article III Standing

As we previously reported in February 2017, an Illinois federal judge denied a motion to dismiss two complaints brought under the Illinois Biometric Information Privacy Act, 740 ILCS 14 (“BIPA”), by individuals who alleged that Google captured, without the plaintiffs’ consent, biometric data from facial scans of images uploaded onto Google Photos. The cases subsequently were consolidated, and on December 29, 2018, the Northern District of Illinois dismissed the case on standing grounds, finding that despite the existence of statutory standing under BIPA, neither plaintiff had claimed an injury that would support Article III standing.

In Spokeo, Inc. v. Robins, the Supreme Court held that Article III standing requires a concrete and particularized injury even in the context of a statutory violation. The court here likewise concluded that although the plaintiffs had statutory standing under BIPA, the procedural, statutory violation alone was insufficient to satisfy the Article III requirement.

In asking whether either plaintiff adequately alleged the requisite injury, the court considered Google’s collection and retention of the facial scans. With respect to retention, the court followed the Seventh Circuit’s ruling in Gubala v. Time Warner Cable, Inc. that the retention of individual information alone, although a violation of the Cable Communications Policy Act, did not confer Article III standing absent disclosure of the information or a sufficient risk of such disclosure.

Regarding collection, the court considered (1) Patel v. Facebook Inc., a similar case brought in the Northern District of California that was not dismissed, in which the plaintiff alleged that Facebook’s use of facial recognition for tagging photos violated BIPA’s notice and consent requirements; and (2) common law tort analogues. The Illinois court declined to follow the California court, reasoning that there was an insufficient showing that the Illinois legislature intended to create a cause of action arising from the violation of BIPA’s notice and consent requirements alone. It also found that the two common law tort analogues bearing the closest relationship to the alleged injury, intrusion upon seclusion and misappropriation, did not fit this case because the harms the plaintiffs alleged did not align with the harms those torts address: the templates Google created were based on faces, which are regularly exposed to the public, and were neither made publicly available nor used by Google for commercial purposes. The court accordingly dismissed the claim, holding that neither plaintiff had alleged an injury sufficient to support Article III standing.

A number of BIPA actions remain pending in federal and state courts. It remains to be seen whether other courts will agree with the Northern District of Illinois regarding the unavailability of BIPA claims based solely on procedural violations of the act.

Massachusetts Amends Data Breach Law; Imposes Additional Requirements

On January 10, 2019, Massachusetts Governor Charlie Baker signed legislation amending the state’s data breach law. The amendments take effect on April 11, 2019.

Key updates to Massachusetts’s Data Breach Notification Act include the following:

  • The required notice to the Massachusetts Attorney General and the Office of Consumer Affairs and Business Regulation will need to include additional information, including the types of personal information compromised, the person responsible for the breach (if known) and whether the entity maintains a written information security program. Under Massachusetts 201 CMR § 17.03, any entity that owns or licenses personal information about a Massachusetts resident is currently obligated to develop, implement and maintain a comprehensive written information security program that incorporates the prescriptive requirements contained in the regulation.
  • If individuals’ Social Security numbers are disclosed, or reasonably believed to have been disclosed, the company experiencing a breach must offer credit monitoring services at no cost for at least 18 months (42 months, if the company is a consumer reporting agency). Companies also must certify to the Massachusetts Attorney General and the Director of the Office of Consumer Affairs and Business Regulation that their credit monitoring services comply with state law.
  • The amended law explicitly prohibits a company from delaying notice to affected individuals on the basis that it has not determined the number of individuals affected. Rather, the entity must send out additional notices on a rolling basis, as necessary.
  • If the company experiencing a breach is owned by a separate entity, the individual notice letter must specify “the name of the parent or affiliated corporation.”
  • Companies are prohibited from asking individuals to waive their right to a private action as a condition for receiving credit monitoring services.

HHS Publishes Health Industry Cybersecurity Practices

The U.S. Department of Health and Human Services (“HHS”) recently announced the publication of “Health Industry Cybersecurity Practices: Managing Threats and Protecting Patients” (the “Cybersecurity Practices”). The Cybersecurity Practices were developed by the Healthcare & Public Health Sector Coordinating Councils Public Private Partnership, a group of more than 150 cybersecurity and healthcare experts from government and private industry.

The Cybersecurity Practices are currently composed of four volumes: (1) the Main Document, (2) a Technical Volume of cybersecurity practices for small healthcare organizations, (3) a Technical Volume of cybersecurity practices for medium and large healthcare organizations, and (4) a Resources and Templates Volume. The Cybersecurity Practices also will include a Cybersecurity Practices Assessments Toolkit, but that is still under development.

The Main Document provides an overview of prominent cyber attacks against healthcare organizations and statistics on their costs (in 2017, for example, cyber attacks cost small and medium-sized businesses an average of $2.2 million), and lists the five most common cybersecurity threats impacting the healthcare industry: (1) email phishing attacks, (2) ransomware attacks, (3) loss or theft of equipment or data, (4) insider, accidental or intentional data loss and (5) attacks against connected medical devices that may affect patient safety. The Main Document describes real-world scenarios exemplifying each threat, lists “Threat Quick Tips,” analyzes the vulnerabilities that lead to such threats, discusses their impact and provides practices for healthcare organizations (and their employees) to consider in countering them. The Main Document concludes by noting that it is essential for healthcare organizations and government to distribute “relevant, actionable information that mitigates the risk of cyber-attacks” and argues for a “culture change and an acceptance of the importance and necessity of cybersecurity as an integrated part of patient care.”

The two Technical Volumes list the following 10 cybersecurity practices for small, medium and large healthcare organizations:

  • email protection systems;
  • endpoint protection systems;
  • access management;
  • data protection and loss prevention;
  • asset management;
  • network management;
  • vulnerability management;
  • incident response;
  • medical device security; and
  • cybersecurity policies.

The Technical Volumes also list cybersecurity sub-practices and advice for healthcare organizations to follow, with the noted distinction that small healthcare organizations are focused on cost-effective solutions while medium and large organizations may have more “complicated ecosystems of IT assets.”

Finally, the Resources and Templates Volume maps the 10 cybersecurity practices and sub-practices to the NIST Cybersecurity Framework. It also provides templates, such as a Laptop, Portable Device and Remote Use Policy and Procedure; a Security Incident Response Plan; an Access Control Procedure; and a Privacy and Security Incident Report.

In announcing the Cybersecurity Practices, the HHS Acting Chief Information Security Officer stated that cybersecurity is “the responsibility of every organization working in healthcare and public health. In all of our efforts, we must recognize and leverage the value of partnerships among government and industry stakeholders to tackle the shared problems collaboratively.”

The Cybersecurity Practices follow other important cybersecurity documents published by HHS, including the checklist on cyberattacks and the ransomware fact sheet.

CIPL Co-Hosts Workshop on GDPR and Scientific Health Research

On October 22, 2018, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth LLP co-hosted a workshop in Brussels on “Can GDPR Work for Health Scientific Research?” (the “Workshop”) with the European Federation of Pharmaceutical Industries and Associations (“EFPIA”) and the Future of Privacy Forum (“FPF”) to address the challenges raised by the EU General Data Protection Regulation (“GDPR”) in conducting scientific health research.

Over 100 EFPIA members, policymakers, regulators and company representatives participated in the full-day workshop.

Cecilia Álvarez, European Data Protection Officer Lead at Pfizer, and Brendan Barnes, Director of Data Protection at EFPIA, set the scene for the day’s discussions by emphasizing the importance of health research in advancing science, patient welfare and the public good, as well as the opportunities for digital health activities in Europe. They also highlighted challenges to such advancement, including the consistent application of the GDPR in the medical research sector.

Two extensive sessions followed these opening remarks. The first session focused on clinical trials and featured several presentations by industry experts and a panel discussion, moderated by CIPL President Bojana Bellamy. The panel discussed, in particular, the complex regulatory framework surrounding clinical trials, the interaction between the GDPR and the Clinical Trial Regulation (“CTR”), the role of consent under both regulations and operational challenges.

The second session addressed the secondary use of data, which is indispensable to, and of significant value for, scientific research. The presentations and panel discussions underlined the current legal uncertainty surrounding the reuse of health data, including the lack of consistent interpretation as to which legal bases for processing are most appropriate for such reuse. The restrictive interpretation of the concept of scientific research in the Article 29 Working Party’s guidelines on consent under the GDPR, and the need to rely on a risk-based approach where data already benefit from extensive safeguards, were also subjects of much discussion.

The lack of consistency between EU Member States regarding possible legal bases for scientific research and diverging views between national data protection authorities, health authorities and ethical committees were key recurring themes throughout the Workshop. Participants called for the EDPB to clarify the situation as well as for further multi-stakeholder dialogue on the topic to ensure the legal certainty necessary to foster scientific health research in Europe.

If you would like to read more about the Workshop and the key findings from the discussions, please see the Workshop report.

Austrian DPA Issues Decision on Validity of Cookie Consent Solution

On November 30, 2018, the Austrian Data Protection Authority (“DPA”) published a decision in response to a complaint received from an individual regarding the cookie consent options offered on an Austrian newspaper’s website. As a factual matter, the Austrian newspaper offered three options to individuals who sought to access content on the site: (1) accept the use of cookies for analytics and advertising purposes and have full, complimentary website access; (2) refuse cookies and obtain access to only limited content on the website; or (3) pay a monthly subscription of €6 to obtain full access to the website without accepting the use of cookies and similar tracking technologies.
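
For illustration, the newspaper’s model reduces to a small piece of gating logic. In the Python sketch below, only the three options and the €6 monthly fee come from the decision as described above; all names are hypothetical:

```python
# Toy sketch of the three-option consent gate described above.
MONTHLY_FEE_EUR = 6

def grant_access(choice: str) -> dict:
    if choice == "accept_cookies":  # option 1: consent to analytics/advertising cookies
        return {"access": "full", "tracking": True, "fee_eur": 0}
    if choice == "refuse_cookies":  # option 2: no consent, limited content only
        return {"access": "limited", "tracking": False, "fee_eur": 0}
    if choice == "subscribe":       # option 3: paid access, no tracking
        return {"access": "full", "tracking": False, "fee_eur": MONTHLY_FEE_EUR}
    raise ValueError(f"unknown choice: {choice!r}")

print(grant_access("subscribe"))  # {'access': 'full', 'tracking': False, 'fee_eur': 6}
```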

The complainant argued that this approach by the newspaper did not comply with the requirements for valid consent under the EU General Data Protection Regulation (“GDPR”) since access to site content was conditional on an individual consenting to the processing of personal data that is not necessary for the newspaper to provide the service.

The DPA’s decision

The Austrian DPA dismissed the complaint, finding that the consent solution offered by the newspaper met the conditions for freely given, valid consent under the GDPR. In reaching this conclusion, the DPA referred to the Article 29 Working Party’s Guidelines on Consent, which state that consent is not freely given if there is a risk of deception, intimidation, coercion or significant negative consequences if the individual does not consent. The DPA reasoned that individuals in this case did not face significant negative consequences, since they could choose to subscribe to the site for a small fee or simply turn to another online newspaper as a source of information.

This decision under the GDPR by the Austrian DPA is favorable to newspapers and other media publishers who provide online content financed by advertising. The UK Information Commissioner’s Office (the “ICO”) recently took a different view in a case concerning a similar online subscription model from the Washington Post (see here for our blog on this case). In the Washington Post case, the ICO reasoned under the GDPR that individuals must be offered a complimentary alternative to accepting cookies and should be able to opt out of cookies at all subscription levels. The differing approaches taken by the Austrian DPA and the UK ICO demonstrate a lack of alignment on this issue, and it remains to be seen how other DPAs will decide it.

California DOJ to Hold Series of Public Forums on CCPA

The California Department of Justice will host six public forums on the California Consumer Privacy Act of 2018 (“CCPA”) to provide the general public an opportunity to participate in the CCPA rulemaking process. Individuals may attend or speak at the events or submit written comments by email to privacyregulations@doj.ca.gov or by mail to the California Department of Justice, ATTN: Privacy Regulations Coordinator, 300 S. Spring St., Los Angeles, CA 90013.

The forums will take place in January and February throughout the state of California. The first event will be held on January 8, 2019, at the Milton Marks Conference Center in San Francisco. View the full schedule.

CNIL Fines French Telecom Operator for Data Security Failure

On December 27, 2018, the French Data Protection Authority (the “CNIL”) announced that it imposed a fine of €250,000 on French telecom operator Bouygues Telecom for failing to protect the personal data of the customers of its mobile package B&YOU.

Background

On March 2, 2018, a third party informed the CNIL of a years-long security vulnerability on Bouygues Telecom’s website, bouyguestelecom.fr, that made it possible for anyone, including bad actors, to access documents containing customers’ personal data via several URL addresses sharing a similar structure. On March 6, 2018, Bouygues Telecom notified the CNIL of the data breach. The company explained that the incident was due to human error: the code requiring user authentication on the company’s website had been deactivated during a test phase and never reactivated once the tests were completed. The company quickly blocked improper access to the data.
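
The failure mode described (an authentication check disabled for testing and never re-enabled) can be illustrated with a short sketch. The post does not describe Bouygues Telecom’s actual stack, so everything below is hypothetical, written with Flask for brevity; the point is that while the flag stays off, anyone who guesses a similarly structured URL can retrieve another customer’s documents:

```python
# Hypothetical sketch of the reported failure mode.
from flask import Flask, abort, session

app = Flask(__name__)
app.secret_key = "replace-me"  # required for sessions; illustrative only

AUTH_CHECK_ENABLED = True  # reportedly left disabled after a test phase in the incident

@app.route("/documents/<int:customer_id>")
def customer_document(customer_id: int):
    if AUTH_CHECK_ENABLED:
        # Require both a logged-in session and ownership of the requested record;
        # a login check alone would still let customers enumerate other IDs.
        if session.get("customer_id") != customer_id:
            abort(403)
    return f"contract and invoices for customer {customer_id}"  # stand-in payload
```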

The CNIL’s Decision

The CNIL noted that the breach affected more than two million customers and involved personal data such as customers’ first and last names, dates of birth, e-mail addresses, postal addresses and mobile telephone numbers. The CNIL further noted that the breach lasted more than two years. The CNIL recognized that human error was at the root of the incident and that the company could not completely guard against such mistakes. It found, however, that for more than two years the company failed to implement appropriate security measures that would have enabled it to discover the breach, and concluded that the company failed to comply with its obligation to protect its customers’ personal data. As the GDPR was not applicable at the time of the data breach, the CNIL decided to impose a fine of €250,000 on Bouygues Telecom.

CNIL Publishes Guidance on Data Sharing with Business Partners or Data Brokers

On December 28, 2018, the French Data Protection Authority (the “CNIL”) published guidance on the conditions organizations must meet in order to lawfully share personal data with business partners or other third parties, such as data brokers, particularly in the context of the EU General Data Protection Regulation (“GDPR”). The CNIL guidance sets forth the following five conditions:

  • Prior consent: Organizations must seek the individual’s consent prior to sharing personal data with the organization’s partners.
  • Identification of the partners: The data collection form must provide notice of the particular partner(s) who may receive the personal data. According to the CNIL guidance, the organization that first collects the data may either (1) publish an exhaustive and regularly updated list of partners directly on the data collection form, or (2) insert a link to that list on the form, together with a link to the partners’ privacy policies.
  • Notification of changes to the list of partners: Individuals must be informed of any updates to the list of partners and, in particular, of the fact that their personal data may be shared with new partners. This information may be provided on two “levels”: (1) each marketing message sent by the organization that collects the data must provide an up-to-date list of partners (see above); and (2) each new partner receiving an individual’s data must inform the individual, in its first communication to the data subject, of such processing. (See last bullet point below.)
  • Limit to further sharing without consent: The partners may not share the personal data with their own partners without seeking the individual’s informed consent.
  • Notice to be provided by the partners at the time of the first communication to the individual: Partners who process the personal data to send their own marketing communications must inform the individuals concerned of the source of the data (by naming the organization that shared the data with them) and of how the individuals may exercise their data protection rights, in particular the right to object to the processing of their personal data for direct marketing purposes. The CNIL guidance states that individuals may exercise their right to object either directly with the partner or by contacting the organization that first collected the data; that organization is then required to pass the objection on to the partners who received the individual’s data.

Irish DPC Issues Preliminary Guidance on Data Transfers in the Event of a “No Deal” Brexit

On December 21, 2018, the Irish Data Protection Commission (the “DPC”) published preliminary guidance on data transfers to and from the UK in the event of a “no deal” Brexit (the “Guidance”). The Guidance is relevant for any Irish entities that transfer personal data to the UK, including Northern Ireland.

The Guidance notes that if the UK leaves the European Union at 00:00 CET on March 30, 2019, without a withdrawal agreement in place, the UK will be deemed a third country for the purposes of EU data transfers, and Irish-based organizations and bodies will need to implement legal safeguards in order to continue transferring personal data to the UK, including Northern Ireland.

The Guidance provides several examples of data transfers that may be affected and includes a list of next steps for organizations to consider in the run up to the withdrawal date. These measures include:

  • Mapping the personal data the organization currently transfers to the UK and Northern Ireland;
  • Determining whether such transfers will need to continue beyond March 30, 2019; and
  • Assessing the available transfer mechanisms to determine which will be most appropriate for the organization to continue transferring its data, and working to have that mechanism in place before the UK departs from the EU.

The Guidance concludes by noting that more information will be available from the DPC as the withdrawal date nears.

As we previously reported, the UK House of Commons rejected the draft Brexit withdrawal agreement on January 15, 2019, leaving a “no deal” Brexit a real possibility.

Cybersecurity Rules for Insurance Companies to Take Effect in South Carolina

New cybersecurity rules for insurance companies licensed in South Carolina are set to take effect in part on January 1, 2019. The new law is the first in the United States to be enacted based on the data security model law drafted by the National Association of Insurance Commissioners. The law requires licensed insurance companies to notify state insurance authorities of data breaches within 72 hours of confirming that nonpublic information in the company’s (or a service provider’s) system was “disrupted, misused, or accessed without authorization.” The breach reporting requirement is in addition to notification obligations imposed under South Carolina’s breach notification law and applies if the insurance company has a permanent location in the state or if the breach affects at least 250 South Carolina residents, among other criteria. The 72-hour notice requirement takes effect January 1, 2019.

Separately, effective July 1, 2019, the law requires insurance companies licensed in South Carolina to develop and implement a comprehensive, written cybersecurity program. Among other details, the program must be based on a company’s own risk assessments and must include encryption of information in transit, regular testing of systems, and cybersecurity awareness training for employees. The law will also require insurance companies to “exercise due diligence” in choosing third-party service providers and to ensure that service providers have appropriate information safeguards in place no later than July 1, 2020.

CNIL Fines Uber for Data Security Failure Related to 2016 Data Breach

On December 20, 2018, the French data protection authority (the “CNIL”) announced that it had levied a €400,000 fine on Uber France SAS, the French establishment of Uber B.V. and Uber Technologies Inc., for failing to implement basic security measures, the absence of which made the 2016 Uber data breach possible.

Background

On November 21, 2017, Uber Technologies Inc. published an article on its website revealing that two external individuals had accessed the personal data of 57 million Uber riders and drivers worldwide at the end of 2016.

On November 28, 2017, Uber B.V. sent a letter to the Chairman of the Article 29 Working Party (“Working Party”) to describe the circumstances of the data breach and express its willingness to cooperate with all competent data protection authorities.

On November 29, 2017, the Working Party established a taskforce to coordinate the numerous national investigations across the EU into Uber’s 2016 data breach. The taskforce is composed of representatives from the Dutch, Spanish, French, Belgian, Italian, UK and Slovakian data protection authorities (“DPAs”).

On December 22, 2017, the CNIL sent a questionnaire to Uber Technologies Inc. and Uber B.V. related to the circumstances of the data breach and the security measures implemented by these companies. Uber replied to the questionnaire, explaining that the data breach occurred in three steps: (1) two external individuals managed to gain access to credentials stored in plain text on the collaborative development platform “GitHub” used by Uber’s software engineers; (2) the hackers then used these credentials to connect to GitHub, and found an access key recorded in plain text in a source code file, enabling the hackers to remotely access a server on which Uber users’ data were stored; and (3) they downloaded personal data relating to 57 million users, including 1.4 million in France (1.2 million riders and 163,000 drivers).

The CNIL’s Decision

Against that background, the CNIL issued a decision discussing, inter alia, (1) the data controllership of Uber Technologies Inc. and Uber B.V.; (2) the applicability of French data protection law; (3) Uber’s failure to implement appropriate safeguards to prevent unauthorized third parties from accessing the data; and (4) the imposition of a sanction on Uber France SAS, the French establishment of Uber Technologies Inc. and Uber B.V.

Uber Technologies Inc. and Uber B.V. as joint data controllers: The CNIL rejected Uber’s arguments that its Dutch affiliate, Uber B.V., was the sole data controller and that Uber Technologies Inc. acted as a mere data processor of Uber B.V. when (1) issuing guidelines on the handling of personal data, (2) providing training for new employees of the Uber group, (3) executing agreements with third-party companies, and (4) handling the consequences of the data breach.

In particular, the CNIL considered that the last point, handling the data breach fallout, is not a mere technical or organizational question that can be dealt with by a data processor within the discretion a controller leaves to its processor. According to the CNIL, how a data breach is handled relates to the essential elements of the means of the data processing and can only be determined by the data controller. In the CNIL’s view, the fact that Uber Technologies Inc. (1) drafted data protection guidelines applied by all entities of the Uber group, (2) was responsible for training new employees of the group, and (3) executed agreements with third-party companies (including for the provision of tools necessary for the proper functioning of Uber services) also demonstrates that Uber Technologies Inc. plays a key role in determining the purposes and means of the data processing. As a result, the CNIL found that Uber Technologies Inc. is a joint data controller with Uber B.V.

Applicability of French data protection law: Uber has an establishment in France – Uber France SAS – that carries out marketing campaigns to promote Uber’s services and provides support to Uber riders and drivers in France. Referring to the decision of the European Court of Justice (“ECJ”) in Google v. Costeja, the CNIL considered the processing of Uber riders’ and drivers’ personal data to be carried out in the context of the activity of the French establishment of the data controllers, Uber B.V. and Uber Technologies Inc.

Failure to implement appropriate security measures: The CNIL concluded that the data breach could have been prevented had Uber implemented certain basic security measures, including the following (a brief illustrative sketch follows the list):

  • The company should have required its engineers to connect to the “GitHub” platform using strong authentication (e.g., a username and password plus a one-time code sent to the engineer’s mobile phone).
  • The company should not have stored credentials granting access to the server in plain text within source code on the “GitHub” platform.
  • The company should have implemented an IP filtering system restricting access to the “Amazon Web Services S3” servers containing the personal data of its users.
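
A brief Python sketch of the second and third measures, using hypothetical names and network ranges: secrets are read from the runtime environment rather than committed to source control, and access to the data store is checked against an IP allow-list. (The first measure, strong authentication, is typically delegated to the identity provider rather than hand-rolled.)

```python
# Illustrative sketch only; names and network ranges are hypothetical.
import ipaddress
import os

# No plain-text credentials in source code: read them from the environment
# (or a secrets manager) so an exposed repository leaks nothing usable.
SERVER_ACCESS_KEY = os.environ["SERVER_ACCESS_KEY"]  # KeyError if unset: fail closed

# IP filtering in front of the storage servers: only requests originating
# from approved corporate or VPN ranges are allowed through.
ALLOWED_NETWORKS = [ipaddress.ip_network("10.0.0.0/8"),
                    ipaddress.ip_network("192.0.2.0/24")]

def ip_allowed(client_ip: str) -> bool:
    """Return True if the client address falls within an approved network."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

print(ip_allowed("10.1.2.3"))      # True
print(ip_allowed("198.51.100.9"))  # False
```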

Uber France SAS as the addressee of the CNIL’s decision: The CNIL, citing the ECJ’s Wirtschaftsakademie Schleswig-Holstein GmbH decision of June 5, 2018, rejected Uber’s argument that the CNIL could impose a sanction only on a data controller (and not on a mere establishment of the data controller). In that decision, the ECJ found that, where a business established outside the EU has several establishments in different EU Member States, the supervisory authority of a Member State may exercise its powers under the EU Data Protection Directive with respect to an establishment in that Member State’s territory even if, as a result of the division of tasks within the group, (1) that establishment is responsible solely for the sale of advertising space and other marketing activities in the territory of the Member State concerned, and (2) exclusive responsibility for collecting and processing personal data belongs, for the entire territory of the EU, to an establishment located in a different Member State.

The CNIL therefore decided to impose a sanction on Uber France SAS. As the EU General Data Protection Regulation was not applicable at the time of the data breach, the CNIL imposed a fine of €400,000 on Uber France SAS. When setting the amount of the fine, the CNIL took into account the fact that the hackers gained access to the data, potentially allowing them to make further use of it. The CNIL stressed that, although no damage to affected individuals has been reported to date, Uber cannot invoke this as proof that no damage occurred.

This is the third fine imposed by an EU DPA on Uber in relation to its 2016 data breach. On November 6, 2018, the Dutch DPA fined Uber €600,000 for failure to notify the breach. On November 26, 2018, the ICO also fined Uber £385,000 for failure to implement appropriate security measures.

Department of Commerce Updates Privacy Shield FAQs to Clarify Applicability to UK Personal Data

On December 20, 2018, the Department of Commerce updated its frequently asked questions (“FAQs”) on the EU-U.S. and Swiss-U.S. Privacy Shield Frameworks (collectively, the “Privacy Shield”) to clarify the effect of the UK’s planned withdrawal from the EU on March 29, 2019. The FAQs provide information on the steps Privacy Shield participants must take to receive personal data from the UK in reliance on the Privacy Shield after Brexit.

The deadline for implementing the steps identified in the FAQs depends on whether the UK and EU are able to finalize an agreement for the UK’s withdrawal from the EU. To the extent the UK and EU reach an agreement regarding withdrawal, thereby implementing a Transition Period in which EU data protection law will continue to apply to the UK, Privacy Shield participants will have until December 31, 2020, to implement the relevant changes to their public-facing Privacy Shield commitments described in the FAQs and below. To the extent no such agreement is reached, participants must implement the changes by March 29, 2019.

According to the FAQs, a Privacy Shield participant that would like to continue to receive personal data from the UK following the relevant deadline (as described above) must update its public commitment to comply with the Privacy Shield to state affirmatively that the commitment extends to personal data received from the UK in reliance on the Privacy Shield. In addition, Privacy Shield participants that plan to receive Human Resources (“HR”) data from the UK in reliance on the Privacy Shield must also update their HR privacy policies. The FAQs further state that a participant making such public commitments will be required to cooperate and comply with the UK Information Commissioner’s Office with regard to any personal data received from the UK.

Commission Publishes Report on the Second Annual Review of the Functioning of the EU-U.S. Privacy Shield

On December 19, 2018, the European Commission (the “Commission”) issued a press release regarding the publication of the Commission’s second annual review of the functioning of the EU-U.S. Privacy Shield (the “Report”).

Background

On July 12, 2016, the Commission adopted an adequacy decision on the basis that the EU-U.S. Privacy Shield ensured an adequate level of protection for personal data transferred from the European Economic Area (“EEA”) to participating companies in the U.S. The Commission also concluded that the framework could be improved. On that basis, the Commission reviews the framework annually and issues recommendations.

Findings after This Second Year

This year’s Report concludes that the U.S. continues to ensure an adequate level of protection for personal data transferred from the EEA to U.S. companies under the EU-U.S. Privacy Shield. The U.S. authorities have taken measures to implement the Commission’s recommendations from last year, and several aspects of the functioning of the framework have improved. Some of these measures were adopted only recently, however, and further developments will need to be monitored.

The Report highlights the following developments and concerns:

  • New tools to ensure compliance with the Privacy Shield principles and to identify false claims of participation in the Privacy Shield framework: On the basis of last year’s recommendation, the Department of Commerce (“Department”) implemented new tools to proactively monitor certified companies’ compliance with the Privacy Shield Principles and to detect potential compliance issues. The Department also has proactively searched for false claims of participation in the Privacy Shield framework. To date, 56 companies have been referred to the Federal Trade Commission for non-compliance with the Privacy Shield Principles or false claims of participation. The third review of the EU-U.S. Privacy Shield will assess the effectiveness of these methods.
  • Privacy Shield enforcement measures: The FTC has committed to proactive monitoring of the certified companies’ compliance with the Privacy Shield principles. Accordingly, the FTC has issued administrative subpoenas to request information from a number of Privacy Shield participants. The Commission concluded that developments in this area should be closely monitored.
  • Cooperation between authorities: The Department of Commerce and the European Data Protection Authorities have cooperated to develop guidance on Privacy Shield principles. The Commission welcomes and encourages this cooperation, including, when appropriate, the participation of the Federal Trade Commission, as clarification of various concepts is still needed. (The notion of Human Resources data, for example, is understood differently by different authorities).
  • The appointment of a Privacy Shield ombudsman on a permanent basis: Despite last year’s recommendation, a permanent Privacy Shield ombudsman has yet to be appointed. The Commission reiterates its call and expects that the U.S. government will fill the position by February 28, 2019. If this is not done, the Commission will adopt the necessary measures in accordance with the GDPR.
  • Effectiveness of how the ombudsman deals with complaints: The ombudsman has not yet received any requests. The Commission intends to monitor how complaints will be handled and resolved.

The Commission’s Next Steps

The Commission will monitor these developments and expects to receive information regarding the concerns noted above in order to assess the effectiveness of the measures adopted. The Commission also intends to follow ongoing developments in the U.S. legal framework. In this respect, the Commission encourages the U.S. to adopt a comprehensive legal framework for privacy and data protection and to ratify the Council of Europe’s Convention 108.

A detailed analysis of each aspect of the Privacy Shield framework reviewed in this second year can be found in the Commission Staff Working Document from the Commission to the European Parliament and the Council On The Second Annual Review Of The Functioning Of The EU-U.S. Privacy Shield.

Agreement on Proposal for Cybersecurity Act

The European Commission (“Commission”), the European Parliament (“Parliament”) and the Council of the European Union reached an agreement earlier this month regarding changes to the Proposal for a Regulation on ENISA, the “EU Cybersecurity Agency”, repealing Regulation (EU) 526/2013, and on Information and Communication Technology Cybersecurity Certification (the “Cybersecurity Act”). The agreement empowers the EU Cybersecurity Agency (the European Union Agency for Network and Information Security, or “ENISA”) and introduces an EU-wide cybersecurity certification for services and devices.

Background

The Cybersecurity Act was introduced as part of a wide-ranging set of cybersecurity measures adopted by the Commission on September 13, 2017, and proposed as a priority of the Digital Single Market Strategy. The objective of these measures is to counter cyber-attacks and build strong cybersecurity in the EU.

More Powers for ENISA

The Cybersecurity Act reinforces ENISA’s central role in supporting Member States facing cybersecurity threats or attacks. It grants ENISA more powers and new tasks, including:

  • A permanent mandate: the initial mandate, due to end in 2020, is replaced by a permanent one, and more resources will be allocated to ENISA to accomplish its tasks.
  • A role in preparing the EU for crisis response to major cyber-attacks.
  • A duty to assist Member States in responding effectively to cyber-attacks, with greater cooperation and coordination at the EU level.

ENISA will also be recognized as an independent center of expertise that will promote cybersecurity awareness among citizens and businesses and assist the EU institutions and Member States in the development and implementation of policies.

Cybersecurity Certification Framework

The Cybersecurity Act also introduces an EU-wide cybersecurity certification framework to ensure that products and services sold in the EU comply with EU cybersecurity standards. This is a significant step forward, as it is the first internal market law to enhance the security of connected products, Internet of Things devices and critical infrastructure through a single certification scheme.

The hope is that consumers will benefit from this new regulation as manufacturers provide detailed cybersecurity information for certified products and services, including guidance on installation, the period of security support and the availability of security updates. In this view, the Cybersecurity Act will increase consumers’ trust in the products and services they choose, as certification provides assurance that those products and services are cybersecure.

Similarly, companies will benefit from the Cybersecurity Act by saving significant certification costs. One-stop-shop cybersecurity certification means that companies, and especially Small and Medium-sized Enterprises (“SMEs”), will not need to apply for certificates in different countries; a single certificate will be valid throughout the EU. Certification may then be perceived not as a market-entry barrier but as a competitive advantage. In addition, companies may certify their own products for a minimum level of cybersecurity.

Better Governance

To make future initiatives clearer and more transparent for industry, the Parliament requested that a Union rolling work program, setting out the strategic priorities for future certification requirements, form part of the cybersecurity certification framework’s governance.

Next Steps

The Parliament’s Committee on Industry, Research and Energy and the Council of the European Union must still formally approve the proposed agreement. If approved, it will then be published in the EU Official Journal. The Cybersecurity Act will enter into force twenty days following that publication.

The press releases of the Commission and of the Parliament can be found here.

Dutch DPA Publishes Post-GDPR Complaints Report

On December 13, 2018, the Dutch Data Protection Authority (“Autoriteit Persoonsgegevens”) (the “Dutch DPA”) published a report on the complaints it has received since the EU General Data Protection Regulation (“GDPR”) became applicable on May 25, 2018 (the “Report”). The GDPR gives data subjects the right to lodge a complaint with the relevant national supervisory authority when they believe that their personal data is processed in a way that violates the GDPR (see Article 77 of the GDPR).

View the Report and the press release (in Dutch).

Facts and Figures

In the past six months (between May 25, 2018 and November 25, 2018), 22,679 individuals contacted the Dutch DPA to obtain more information about the GDPR or to file a complaint. The Dutch DPA received 9,661 complaints from data subjects, of which 44% are still pending.

The Report states that 32% of the complaints relate to infringements of data subjects’ rights, such as the right of access and the right to erasure. Fifteen percent of the complaints concern what data subjects consider to be overreach in data collection, i.e., that more personal data is gathered than is necessary to achieve the purpose(s) underlying the collection. An additional 12% of complaints allege that companies impermissibly share individuals’ personal data, either without informing the data subject of such sharing or in disregard of the data subject’s wishes.

The Dutch DPA also indicated that it has been involved in 331 international complaints concerning companies with cross-border activities or with several establishments in Europe. Of these, 176 were filed directly with the Dutch DPA, which acted as lead supervisory authority for a total of 21 complaints. Separately, the Dutch DPA acted as a concerned supervisory authority in 119 international complaints; 36 complaints were transferred to the Dutch DPA by other national supervisory authorities because they related to companies established in the Netherlands.

Handling of the Complaints

The Report indicates that, in most cases, the Dutch DPA has responded to complaints by (1) sending a letter to the named company explaining the applicable requirement, (2) initiating mediation, or (3) discussing the alleged violation with the company along with actions to remediate it. The Dutch DPA indicated that most companies then adapted their behavior. According to the Report, 11 investigations stemming from complaints have been initiated.

To date, the Dutch DPA has primarily focused on resolving alleged rights violations and obliging companies to take remediating measures. The Report indicates, however, that in the future, complaints will more often lead to investigations and sanctions.

Most Affected Sectors

According to the Report, most complaints were filed against business service providers (41% of complaints), companies in the IT sector (12%), the government (10%), financial institutions (9%) and companies in the health care sector (9%).

ICO Notifies More Than 900 Organizations of Failure to Pay Required Data Protection Fee

EU data protection authorities (“DPAs”) are proving themselves willing enforcers of the GDPR, not just with regard to the most serious acts of non-compliance but also for errors of a more administrative nature. Under the previous regime, DPAs typically required companies to register their processing activities with the regulator; the GDPR now permits organizations to maintain data processing inventories internally, showing them to DPAs only when there is a particular need to do so. In the UK, the Information Commissioner’s Office (“ICO”) introduced a requirement for organizations to pay a “data protection fee,” which data controllers falling under the ICO’s scope must pay once a year. Companies that fail to pay the fee risk incurring a fine of up to £4,350 each.

Between September and November of this year, more than 900 organizations received notification from the ICO of its intent to fine them for failure to pay the data protection fee. The notifications went to organizations operating across a number of sectors, from construction to finance, and more than 100 companies have had fines levied against them, with the proceeds contributing to the UK Treasury’s Consolidated Fund. Those notified were given eight days to pay before the ICO took further legal action.

For small organizations, with no more than 10 members of staff and revenues of less than £632,000, the fee is limited to £40 per year, but for the larger organizations, reporting revenues of more than £36 million and employing more than 250 staff members, the required fee is a more sizeable £2,900, based on the increased level of risk the company and its data processing activities present. The fee supports the ICO, which now employs 670 staff members in the UK, in conducting its investigations, providing advice, and preparing guidance relating to the UK’s data protection regime. The specific charges levied are now set out in the UK’s Data Protection (Charges and Information) Regulations 2018.
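
The tiering can be expressed as a small lookup. The Python sketch below encodes only the two tiers quantified in this post; the fee for organizations falling between these thresholds is not given here, so the sketch leaves that tier undefined:

```python
# Fee tiers as described above; the intermediate tier's amount is not given in this post.
from typing import Optional

def annual_data_protection_fee_gbp(staff: int, turnover_gbp: float) -> Optional[int]:
    if staff <= 10 and turnover_gbp < 632_000:
        return 40      # smallest organizations
    if staff > 250 and turnover_gbp > 36_000_000:
        return 2_900   # largest organizations
    return None        # intermediate tier: amount not specified in the post

print(annual_data_protection_fee_gbp(8, 500_000))       # 40
print(annual_data_protection_fee_gbp(400, 50_000_000))  # 2900
```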

Australia and Chinese Taipei Join the APEC CBPR System

On November 23, 2018, both Australia and Chinese Taipei joined the APEC Cross-Border Privacy Rules (“CBPR”) system. The system is a regional multilateral cross-border transfer mechanism and an enforceable privacy code of conduct and certification developed for businesses by the 21 APEC member economies.

The Australian Attorney-General’s Department recently announced that APEC endorsed Australia’s application to participate and that the Department plans to work with both the Office of the Australian Information Commissioner and organizations to implement the CBPR system requirements in a way that ensures long-term benefits for Australian businesses and consumers.

In Chinese Taipei, the National Development Council announced that Chinese Taipei has joined the system. According to the announcement, Chinese Taipei’s participation will spur local enterprises to seek overseas business opportunities and help shape conditions conducive to cross-border digital trade.

Australia and Chinese Taipei become the seventh and eighth economies to participate in the system, joining the U.S., Mexico, Canada, Japan, South Korea and Singapore. Their decisions to join further highlight the growing international status of the CBPR system, which implements the nine high-level APEC Privacy Principles set forth in the APEC Privacy Framework. Several other APEC economies are actively considering joining.

Argentina DPA Issues Guidelines on Binding Corporate Rules

The Agency of Access to Public Information (Agencia de Acceso a la Información Pública) (“AAIP”) has approved a set of guidelines for binding corporate rules (“BCRs”), a mechanism that multinational companies may use for cross-border data transfers to affiliates in countries the AAIP does not consider to provide an adequate level of data protection.

As reported by IAPP, pursuant to Regulation No. 159/2018, published December 7, 2018, the guidelines require BCRs to bind all members of a corporate group, including employees, subcontractors and third-party beneficiaries. Members of the corporate group must be jointly liable to the data subject and the supervisory authority for any violation of the BCRs.

Other requirements include:

  • restrictions on the processing of special categories of personal data and on the creation of files containing personal data relating to criminal convictions and offenses;
  • protections such as providing for the right to object to the processing of personal data for the purpose of unsolicited direct marketing;
  • complaint procedures for data subjects that include the ability to institute a judicial or administrative complaint using their local venue; and
  • data protection training to personnel in charge of data processing activities.

BCRs also should contemplate the application of general data protection principles, especially the legal basis for processing, data quality, purpose limitation, transparency, security and confidentiality, data subjects’ rights, and restrictions on onward cross-border transfers to non-adequate jurisdictions. Organizations whose BCRs do not reflect the guidelines’ provisions must submit the relevant material to the AAIP for approval within 30 calendar days from the date of transfer; approval is not required if BCRs that track the guidelines are used.

Lisa Sotto, Head of Hunton’s Privacy and Cybersecurity Practice, Kicks Off FTC Data Security Panel

In connection with its hearings on data security, the Federal Trade Commission hosted a December 12 panel discussion on “The U.S. Approach to Consumer Data Security.” Moderated by the FTC’s Deputy Director for Economic Analysis James Cooper, the panel featured private practitioners Lisa Sotto, of Hunton Andrews Kurth, and Janis Kestenbaum; academics Daniel Solove (GW Law School) and David Thaw (University of Pittsburgh School of Law); and privacy advocate Chris Calabrese (Center for Democracy and Technology). Lisa set the stage with an overview of the U.S. data security framework, highlighting the complex web of federal and state rules and influential industry standards that results in a patchwork of overlapping mandates. Panelists debated the effect of current law and enforcement on companies’ data security programs before turning to the “optimal” framework for a U.S. data security regime. Among the details discussed was the establishment of a risk-based approach with a baseline set of standards and clear process requirements. While there was not uniform agreement on the specifics, the panelists all felt strongly that federal legislation was warranted, with the FTC taking on the role of principal enforcer.

View an on-demand recording of the hearing. For more information on the data security hearings, visit the FTC’s website.

AOL Successor Agrees to Pay $4.95 Million in COPPA Enforcement Action

On December 4, 2018, the New York Attorney General (“NY AG”) announced that Oath Inc., which was known as AOL Inc. (“AOL”) until June 2017 and is a subsidiary of Verizon Communications Inc., agreed to pay New York a $4.95 million civil penalty following allegations that it had violated the Children’s Online Privacy Protection Act (“COPPA”) by collecting and disclosing children’s personal information in conducting online auctions for advertising placement. This is the largest-ever COPPA penalty.

The NY AG alleged that AOL used its display ad exchange to conduct billions of auctions for ad space on websites that it knew to be directed to children under the age of 13 and subject to COPPA. AOL is said to have gained this knowledge from clients who flagged child-directed properties to AOL, and from its own internal reviews. In all, AOL is alleged to have conducted 2 billion auctions of display ad space on such websites.

The settlement requires AOL to (1) establish and maintain a comprehensive COPPA compliance program; (2) retain an objective, third-party professional to assess the privacy controls that the company has implemented; (3) implement and maintain functionality that enables website operators that sell ad inventory through AOL systems to indicate each website or portion of a website that is subject to COPPA; and (4) destroy all personal information collected from children. In a statement, Oath indicated that it is “wholly committed to protecting children’s privacy online” and agreed to make comprehensive reforms of its business practices to ensure that children are protected from improper targeted advertising online.

FTC Seeks Public Comment on Identity Theft Rules

On December 4, 2018, the Federal Trade Commission published a notice in the Federal Register indicating that it is seeking public comment on whether any amendments should be made to the FTC’s Identity Theft Red Flags Rule (“Red Flags Rule”) and the duties of card issuers regarding changes of address (“Card Issuers Rule”) (collectively, the “Identity Theft Rules”). The request for comment forms part of the FTC’s systematic review of all current FTC regulations and guides. These periodic reviews seek input from stakeholders on the benefits and costs of specific FTC rules and guides along with information about their regulatory and economic impacts.

The Red Flags Rule requires certain financial entities to develop and implement a written identity theft detection program that can identify and respond to the “red flags” that signal identity theft. The Card Issuers Rule requires that issuers of debit or credit cards (e.g., state credit unions, general retail merchandise stores, colleges and universities, and telecom companies) implement policies and procedures to assess the validity of address change requests if, within a short timeframe after receiving the request, the issuer receives a subsequent request for an additional or replacement card for the same account.

The FTC is seeking comments on multiple issues, including:

  • Is there a continuing need for the specific provisions of the Identity Theft Rules?
  • What benefits have the Identity Theft Rules provided to consumers?
  • What modifications, if any, should be made to the Identity Theft Rules to reduce any costs imposed on consumers?
  • What modifications, if any, should be made to the Identity Theft Rules to increase their benefits to businesses, including small businesses?
  • What evidence is available concerning the degree of industry compliance with the Identity Theft Rules?
  • What modifications, if any, should be made to the Identity Theft Rules to account for changes in relevant technology or economic conditions?

The comment period is open until February 11, 2019, and instructions on how to make a submission to the FTC are included in the notice.

Hunton Recognized in Chambers and Partners 2019 FinTech Guide

Hunton Andrews Kurth LLP is pleased to announce that the firm was recognized in the inaugural Chambers and Partners 2019 FinTech guide. The guide commends the firm for attaining an “excellent reputation for the strengths of its data protection and cybersecurity practice, where it counsels FinTech businesses on privacy issues in commercial contracts and transactional matters.”

In addition, Lisa Sotto, partner and chair of the Privacy and Cybersecurity practice, is one of only two lawyers ranked in the Band 1 category for USA: Legal: Data Protection & Cyber Security.

The Chambers and Partners FinTech guide provides expert legal commentary on key issues for businesses. The guide covers the important developments in the most significant jurisdictions.

CNIL Launches Public Consultation on Draft Standards on Data Processing for Managing Business Activities and Unpaid Invoices

On November 29, 2018, the French Data Protection Authority (the “CNIL”) launched an online public consultation regarding two new CNIL draft standards (“Referentials”) concerning the processing of personal data to manage (1) business activities and (2) unpaid invoices.

Background

Following the 2018 update to the French Data Protection Act implementing the EU General Data Protection Regulation (“GDPR”), the CNIL may issue guidelines, recommendations or standards called “Referentials.” These Referentials are not compulsory: they are mainly intended as guidance for carrying out specific data processing activities under the GDPR. Each Referential lists the purposes of the data processing in question, its legal basis, the types of personal data that may be processed, the data retention periods and the associated security measures. By providing this information, a Referential is also intended to aid data controllers in carrying out a data protection impact assessment (“DPIA”) as necessary. Data controllers may refer to a Referential to describe the measures they implement, or envision implementing, to comply with the GDPR’s necessity and proportionality requirements, to honor data subjects’ rights, and to address risks to data subjects’ rights and freedoms.

CNIL’s Draft Referential on Data Processing for Managing Business Activities

This draft Referential updates the CNIL’s Simplified Norm No. 48 on the management of customers and prospective customers. It therefore intends to cover standard customer data processing activities carried out by any data controller, except (1) health or educational institutions; (2) banking or similar institutions; (3) insurance companies; and (4) operators subject to approval by the French Online Gambling Regulatory Authority. It does not, however, cover the following customer data processing activities: (1) fraud detection and prevention; (2) preventing, on a temporary or permanent basis, data subjects from receiving or accessing services or goods (e.g., due to unpaid invoices); (3) profiling; (4) monitoring store traffic; and (5) enriching databases with information collected by third parties. Interestingly, the draft Referential refers to the CNIL’s December 2013 guidelines in advising how to comply with the EU/French cookie law rules, thereby confirming the validity of its previous guidelines even post-GDPR, pending the adoption of the draft ePrivacy Regulation.

CNIL’s Draft Referential on Data Processing for Managing Unpaid Invoices

This draft Referential intends to cover the processing of personal data for managing unpaid invoices. It does not cover the processing of customer data for detecting risks of non-payment, or to identify other infringements (such as discourtesy shown by customers).

The public consultation on the two draft Referentials will be open until January 11, 2019. The new Referentials will then likely be adopted by the CNIL in plenary session.

Privacy Blog Nominated for Best AmLaw Blog of 2018 – Please Vote to Win

Hunton Andrews Kurth’s Privacy & Information Security Law Blog has been nominated in The Expert Institute’s 2018 Best Legal Blog Contest for Best AmLaw Blog of 2018. For nearly 10 years, our award-winning privacy blog has provided readers with current information and legal commentary on news stories; breaking international, federal and state legislation; and other issues on privacy, data protection and cybersecurity. We appreciate your continued support and readership, and ask that you please take a moment to vote for our blog. Click here to vote.

FTC’s Upcoming Hearing Will Address U.S. Approach to Data Security

The Federal Trade Commission published the agenda for the ninth session of its Hearings on Competition and Consumer Protection in the 21st Century (“Hearings Initiative”), a wide-ranging series of public hearings. The ninth session, to take place on December 11-12, 2018, will focus on data security. Lisa Sotto, chair of Hunton Andrews Kurth’s Privacy and Cybersecurity practice, is one of five panel participants discussing “The U.S. Approach to Consumer Data Security.” The panel will be moderated by James Cooper, Deputy Director for Economic Analysis of the FTC’s Bureau of Consumer Protection.

Supreme Court of Pennsylvania Ruling on Common Law Duty to Protect Electronic Employee Data

On November 21, 2018, the Supreme Court of Pennsylvania ruled that a putative class action filed against UPMC (d/b/a The University of Pittsburgh Medical Center) should not have been dismissed.

The case arose from a data breach in which criminals accessed UPMC’s computer systems and stole the personal and financial information of 62,000 current and former UPMC employees. This information included names, birth dates, Social Security numbers, addresses, tax forms and bank account data, all of which the employees were required to provide as a condition of employment. The plaintiffs alleged that UPMC was negligent in the collection and storage of this information, and breached an implied contract in connection with the event. The trial court dismissed the case, which the intermediate appellate court affirmed.

Pennsylvania’s highest court, however, disagreed. The court held that: (1) an employer has a duty under Pennsylvania common law to use reasonable care to safeguard its employees’ sensitive personal information that it stores on Internet-accessible computer systems; and (2) Pennsylvania’s economic loss doctrine did not bar the plaintiffs’ negligence claim.

The court explained that it was not creating a new, affirmative duty. Rather, “the case is one involving application of an existing duty to a novel factual scenario.” In other words, the duty was presumed due to UPMC’s alleged risk-causing conduct. Indeed, the court stressed that due to the early procedural posture of the case, it was required to accept as true the plaintiffs’ allegations that UPMC’s conduct created the risk of the data breach. The presence of a third party’s criminal conduct also was not a superseding cause that cut off UPMC’s liability because UPMC’s alleged conduct created a situation where UPMC knew, or should have known, that a third party might try to compromise its network.

The court next found that the economic loss doctrine, as applied in Pennsylvania, did not preclude all negligence claims seeking purely “economic damages” (i.e., monetary damages that do not involve personal injury or property damage). After discussing prior Pennsylvania economic loss doctrine cases, the court concluded that the common law duty it had recognized existed independently from any contractual obligation between the parties, thus precluding application of the economic loss doctrine. As the court noted, this approach to the economic loss doctrine is not taken by all states.

CIPL Publishes Report on Artificial Intelligence and Data Protection in Tension

The Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth LLP recently published the first report in its project on Artificial Intelligence (“AI”) and Data Protection: Delivering Sustainable AI Accountability in Practice.

The report, entitled “Artificial Intelligence and Data Protection in Tension,” aims to describe in clear, understandable terms:

  • what AI is and how it is being used all around us today;
  • the role that personal data plays in the development, deployment and oversight of AI; and
  • the opportunities and challenges presented by AI to data protection laws and norms.

The report describes AI capabilities and examples of public and private uses of AI applications in society. It also looks closely at various tensions that exist between well-established data protection principles and the requirements of AI technologies.

The report concludes with six general observations:

  • Not all AI is the same;
  • AI is widely used in society today and is of significant economic and societal value;
  • AI requires substantial amounts of data to perform optimally;
  • AI requires data to identify and guard against bias;
  • The role of human oversight of AI is likely to change, and will need to change, for AI to deliver the greatest benefit to humankind; and
  • AI challenges some requirements of data protection law.

The report is a level-setting backdrop for the next phase of CIPL’s AI project – working with data protection officials, industry leaders and others to identify practical ways of addressing challenges and harnessing the opportunities presented by AI and data protection.

After this next phase, CIPL expects to release a second report, Delivering Sustainable AI Accountability in Practice, which will address some of the critical tools that companies and organizations are starting to develop and implement to promote accountability for their use of AI within existing legal and ethical frameworks, as well as reasonable interpretations of existing principles and laws that regulators can employ to achieve efficient, effective privacy protection in the AI context. The report will also touch on considerations for developing data protection laws that are cognizant of AI and other innovative technologies.

To read the first report in detail and to learn more about the observations detailed above, please see the full report.

UK ICO Issues Warning to Washington Post Over Cookie Consent Practices

On November 19, 2018, The Register reported that the UK Information Commissioner’s Office (“ICO”) issued a warning to the U.S.-based Washington Post over its approach to obtaining consent for cookies from readers accessing its service.

The Washington Post presents readers with three options to access its service: (1) free access to a limited number of articles dependent on consent to the use of cookies and tracking for the delivery of personalized ads; (2) a basic subscription consisting of paid access to an unlimited number of articles that is also dependent on consent to the use of cookies and tracking; or (3) a premium subscription consisting of paid access to an unlimited number of articles with no on-site advertising or third party ad tracking for a higher fee.

Responding to a complaint submitted by a reader of The Register, the ICO concluded that since The Washington Post has not offered a free alternative to accepting cookies, consent cannot be freely given and the newspaper is in contravention of Article 7(4) of the EU General Data Protection Regulation (“GDPR”). Article 7(4) provides that “when assessing whether consent is freely given, utmost account shall be taken of whether, inter alia, the performance of a contract, including the provision of a service, is conditional on consent to the processing of personal data that is not necessary for the performance of that contract.”

The ICO has issued a written warning to The Washington Post to ensure access to all three subscription levels without users having to consent to the use of cookies. Although The Washington Post is a U.S.-based company, Article 3(2) of the GDPR provides that the regulation applies to the processing of personal data of individuals in the EU by a controller or processor established outside the EU where the processing activities are related to the offering of goods or services to those individuals inside the EU.

Despite issuing a warning, the ICO has noted that if the newspaper decides not to change its practices for obtaining consent for cookies, there is nothing else the regulator can do on the matter. Aside from issues around resources to pursue cross-border enforcement, there continues to be uncertainty around the GDPR’s extraterritorial applicability and its enforceability against non-EU based organizations.

As we previously reported, the FTC and ICO signed a Memorandum of Understanding (the “Memorandum”) in 2014 to facilitate mutual assistance and the exchange of information in investigating and enforcing covered privacy violations. However, the term “covered privacy violation” refers to practices that violate the applicable privacy laws of one participant country to the Memorandum and that are the same or substantially similar to practices prohibited by privacy laws in the other participant country. As U.S. privacy law does not address the issue of cookie consent, the issue is unlikely to fall under the scope of the Memorandum.

The European Data Protection Board is expected to release guidance around the GDPR’s extraterritorial applicability in the coming weeks.

Illinois Supreme Court Hears Standing Arguments

On November 20, 2018, the Illinois Supreme Court heard arguments in a case that could shape future litigation under the Illinois Biometric Information Privacy Act (“BIPA”). BIPA requires companies to (i) provide prior written notice to individuals that their biometric data will be collected and the purpose for such collection, (ii) obtain a written release from individuals before collecting their biometric data and (iii) develop a publicly available policy that sets forth a retention schedule and guidelines for deletion once the biometric data is no longer used for the purpose for which it was collected (but for no more than three years after collection). BIPA also prohibits companies from selling, leasing or trading biometric data.

The plaintiff in the case, Stacy Rosenbach v. Six Flags Entertainment Corp., alleged that Six Flags Entertainment Corporation (“Six Flags”) violated BIPA by collecting her son’s fingerprint in connection with the purchase of a season pass, without first notifying her or obtaining her consent to the collection of her son’s biometric data. At the trial level, Six Flags argued that the case should be dismissed for failure to establish standing because the plaintiff did not allege that actual harm resulted from the company’s collection of her son’s fingerprint data. The case was appealed to the Second District Appellate Court, which ruled in Six Flags’ favor, holding that BIPA plaintiffs cannot rely on technical violations of the law, such as failure to obtain consent, to be “aggrieved” and have standing. The plaintiff appealed the case to the Illinois Supreme Court.

In oral arguments heard by the Illinois Supreme Court on Tuesday, Six Flags again argued that the plaintiff must allege more than just a technical violation of BIPA to establish standing. Three of the Court’s seven justices appeared to disagree with this argument, with one, Justice Robert Thomas, countering that “there seems to be at least a logical appeal” to ensuring that individuals are made aware that their biometric data will be collected, and that “the purpose [of BIPA] is so [an actual harm] won’t happen in the first place.” Justice Anne Burke joined, stating that it is “too late to wait” for a violation of the law to occur in the first place because at that point, a plaintiff “may never know [about the violation] and you can’t get your fingerprints back. It’s irreparable harm.”

The Second District Appellate Court’s ruling in favor of Six Flags diverges from a First District Appellate Court opinion in Klaudia Sekura v. Krishna Schaumburg Tan Inc., which held that plaintiffs have causes of action under BIPA even without allegations of actual harm. The Illinois Supreme Court’s ruling in Rosenbach is expected to set the standard for which plaintiffs have standing under BIPA in future litigation.

UK and EU Draft Withdrawal Agreement

On November 14, 2018, the UK government and the EU agreed upon the text of a draft Withdrawal Agreement in relation to the UK’s impending exit from the European Union on March 29, 2019. The draft Withdrawal Agreement provides for a transition period, running from the UK’s departure from the EU on March 29, 2019, to December 31, 2020, during which the UK will remain subject to a number of its EU membership obligations. The draft Withdrawal Agreement provides the following in relation to data protection law:

  • EU data protection law, including the General Data Protection Regulation (“GDPR”) and the e-Privacy Directive, will continue to apply to personal data of data subjects outside the UK that are (i) processed in the UK in accordance with the GDPR before the end of the transition period on December 31, 2020, and (ii) processed in the UK after the end of the transition period on the basis of the draft Withdrawal Agreement.
  • If the European Commission issues a declaration during the transition period stating that the UK provides an adequate level of protection, then EU data protection law (including the GDPR and the e-Privacy Directive) will no longer apply in the UK to personal data of data subjects outside the UK. If, however, such declaration of adequacy ceases to be applicable, the UK commits to ensuring an adequate level of protection for the processing of the relevant personal data that is essentially equivalent to that provided by EU data protection law. Although not explicitly stated in the text of the draft Withdrawal Agreement, this obligation appears to extend beyond the end of the transition period.
  • Notwithstanding the above, Chapter VII of the GDPR, relating to cooperation between supervisory authorities and the consistency mechanism, will not apply in the UK during the transition period. As such, organizations will not be permitted to designate the UK Information Commissioner’s Office (“ICO”) as lead authority for GDPR purposes. In addition, the ICO will, during the transition period, have a significantly limited role in relation to the European Data Protection Board. The ICO will be entitled to attend meetings of the European Data Protection Board in some cases, but will no longer have voting rights.

In practical terms, assuming that the draft Withdrawal Agreement is adopted in its current form, personal data flows between the EU and the UK will likely continue unrestricted during the transition period, until at least December 31, 2020. The draft Withdrawal Agreement itself does not, however, address the relationship between the UK and the EU after the end of the transition period, which will be subject to whatever final deal, if any, is agreed between the EU and the UK. As the draft Withdrawal Agreement is currently written, it appears to contemplate a declaration of adequacy in relation to the UK, which if issued would address transfers of personal data from the EU to the UK after the end of the transition period. As such, it appears that any immediate threat to personal data transfers between the UK and the EU has been staved off, and transfers are likely to continue unaffected during the transition period.

Before it can be formally concluded between the UK and the European Council, the draft Withdrawal Agreement must be approved by the UK Parliament. Following multiple resignations from Theresa May’s government yesterday, it looks increasingly unlikely that the draft Withdrawal Agreement will be approved in its current form. If the draft Withdrawal Agreement is not approved, then there remains the prospect of the UK leaving the EU without any transition period or immediate free trade agreement, or any arrangements in place to protect the free flow of personal data between the EU and UK. If, however, a new draft is proposed and agreed upon before the March deadline, it is possible that some of the non-contentious provisions (which may include those relating to data protection) could be carried over into that new proposal.

CIPL Publishes Legal Note on the ePrivacy Regulation and the EU Charter of Fundamental Rights

On November 12, 2018, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth LLP published a legal note on the ePrivacy Regulation and the EU Charter of Fundamental Rights. It was written for CIPL by Dr. Maja Brkan, assistant professor of EU law at Maastricht University, David Dumont, Counsel at Hunton Andrews Kurth, and Dr. Hielke Hijmans, CIPL’s Senior Policy Advisor. 

The note contributes to an important and recurring legal discussion on the proposed ePrivacy Regulation.

The proposal aims to protect the confidentiality of communications, and in particular addresses the confidentiality of content data and metadata of individuals and legal persons, implementing Article 7 of the EU Fundamental Rights Charter (“right to privacy”). In contrast, the GDPR implements Article 8 of the Charter (“right to data protection”).

The legal note argues that the difference between Articles 7 and 8 of the Charter has limited relevance in connection with the ePrivacy Regulation. It aims to demonstrate that EU law, and in particular the Charter, does not preclude a risk-based approach, nor the processing of content data and metadata on the basis of legitimate interest, provided that the necessary safeguards protecting individuals’ communications are put in place. Neither Article 7 nor Article 52.1 of the Charter enumerates the grounds for limitation of fundamental rights. They do not prescribe that the right to privacy can be limited only on the basis of particular justificatory grounds, such as consent of the user.

The note also addresses a few related issues, such as the sensitive nature of content data and metadata, as well as the robust protection the GDPR provides individuals when an organization relies on legitimate interest as a legal basis for processing electronic communications data, given the increased accountability measures organizations must take.

CIPL’s note also deals with the confidentiality of communications of legal persons and explains that this confidentiality is not a matter of privacy, but is protected under other EU law provisions.

EU Commission Responds to NTIA Request for Comment on Developing the Administration’s Approach to Consumer Privacy

On November 9, 2018, the European Commission (“the Commission”) submitted comments to the U.S. Department of Commerce’s National Telecommunications and Information Administration (“NTIA”) in response to its request for public comments on developing the administration’s approach to consumer privacy.

In its comments, the Commission welcomes and agrees with many of the high-level goals identified by NTIA, including harmonization of the legal landscape, incentivizing privacy research, employing a risk-based approach and creating interoperability at a global level. The Commission also welcomes that the key characteristics of a modern and flexible privacy regime (i.e., an overarching law, a core set of data protection principles, enforceable individual rights and an independent supervisory authority with effective enforcement powers) are also at the core of NTIA’s proposed approach to consumer privacy. The Commission structured its specific suggestions around these key characteristics.

In particular, the Commission makes specific suggestions around:

  • Harmonization: The Commission notes that overcoming regulatory fragmentation associated with an approach based on sectoral law in favor of a more harmonized approach would create a level playing field, and provide necessary certainty for organizations while ensuring consistent protection for individuals.
  • Ensuring Trust: The Commission recommends that ensuring trust should guide the formulation of U.S. privacy policy, and notes that giving individuals more control over their data will increase trust in organizations and in turn result in a greater willingness on the part of consumers to share data.
  • Data Protection Principles: The Commission commends NTIA on the inclusion of certain core data protection principles such as reasonable minimization, security, transparency and accountability, but suggests the further explicit inclusion of other principles such as lawful data processing (i.e., the requirement to process data pursuant to a legal basis, such as consent), purpose specification, accuracy and specific protections for sensitive categories of data.
  • Breach Notification: The Commission suggests the specific inclusion of a breach notification requirement to enable individuals to protect themselves from and mitigate any potential harm that might result from a data breach. While there are already state breach notification laws in place, the Commission believes organizations and individuals could benefit from the harmonization of such rules.
  • Individual Rights: The Commission believes that any proposal for a privacy regime should go beyond the inclusion of only traditional individual rights, such as access and correction, and should include other rights regarding automated decision-making (e.g., the right to explanation or to request human intervention) and rights around redress (e.g., the right to lodge a complaint and have it addressed, and the right to effective judicial redress).
  • Oversight and Enforcement: The Commission notes that the effective implementation of privacy rules critically depends on having robust oversight and enforcement by an independent and well-resourced authority. In this regard, the Commission recommends strengthening the FTC’s enforcement authority, the introduction of mechanisms to ensure effective resolution of individual complaints and the introduction of deterrent sanctions.

The Commission notes in its response that while this consultation only covers a first step in a process that might lead to federal action, it stands ready to provide further comments on a more developed proposal in the future.

NTIA’s request for comments closed on November 9, 2018, and NTIA will post the comments it received online shortly.

Privacy Advocacy Organization Files GDPR Complaints Against Data Brokers

On November 8, 2018, Privacy International (“Privacy”), a non-profit organization “dedicated to defending the right to privacy around the world,” filed complaints under the GDPR against consumer marketing data brokers Acxiom and Oracle. In the complaint, Privacy specifically requests that the Information Commissioner (1) conduct a “full investigation into the activities of Acxiom and Oracle,” including into whether the companies comply with the rights (i.e., right to access, right to information, etc.) and safeguards (i.e., data protection impact assessments, data protection by design, etc.) in the GDPR; and (2) “in light of the results of that investigation, [take] any necessary further [action]… that will protect individuals from wide-scale and systematic infringements of the GDPR.”

The complaint alleges that the companies’ processing of personal data comports with neither the consent and legitimate interest requirements of the GDPR nor the GDPR’s principles of:

  • transparency (specifically relating to sources, recipients and profiling);
  • fairness (considering individuals’ reasonable expectations, the lack of a direct relationship, and the opaque nature of processing);
  • lawfulness (including whether either company’s reliance on consent or legitimate interest is justified);
  • purpose limitation;
  • data minimization; and
  • accuracy.

The complaint emphasizes that Acxiom and Oracle are illustrative of the “systematic” problems in the data broker and AdTech ecosystems, and that it is “imperative that the Information Commissioner not only investigate[] these specific companies, but also take action in respect of other relevant actors in these industries and their practices.”

In addition to the complaint against Acxiom and Oracle, Privacy submitted two separate complaints against credit reference data brokers Experian and Equifax, and AdTech data brokers Quantcast, Tapad and Criteo.

BayLDA Publishes Review on Audits

On November 7, 2018, the Data Protection Authority of Bavaria for the Private Sector (the “BayLDA”) issued a press release describing audits completed and pending in Bavaria since the EU General Data Protection Regulation (“GDPR”) took effect.

The BayLDA initially focused on informing entities about changes brought by the GDPR. Later in the year, the BayLDA launched data protection investigations throughout Bavaria to check compliance, raise awareness of the risks inherent in the processing of personal data and encourage entities to protect this data effectively and adequately.

As of now, the BayLDA has audited a small number of entities. The audit structure is fairly predictable, beginning with a written examination that is followed by on-site visits to selected entities to verify the information provided. The BayLDA’s aim is to conduct active audits to explain the criteria to these entities and to detail what is expected of them. To this end, the BayLDA publishes the review letters sent to each entity to enable others to understand the requirements and how to comply.

The BayLDA has focused largely on cybersecurity issues (particularly on the security of online shops and ransomware in medical practices), the accountability of large companies, the duty of companies to disclose to job candidates the processing of their personal data during the application process and, finally, the implementation of the GDPR in small and medium-sized enterprises.

The BayLDA intends to continue its wave of audits, including through two investigative approaches it has commenced: first, auditing large, international companies to assess whether they comply with data protection regulations when selecting service providers and, in particular, whether they have implemented a reporting process in the event of a data breach; and second, focusing on the issue of “erasure of data,” particularly in connection with SAP systems.

CNIL Publishes DPIA Guidelines and List of Processing Operations Subject to DPIA

On November 6, 2018, the French Data Protection Authority (the “CNIL”) published its own guidelines on data protection impact assessments (the “Guidelines”) and a list of processing operations that require a data protection impact assessment (“DPIA”). Read the guidelines and list of processing operations (in French).

CNIL’s Guidelines

The Guidelines aim to complement guidelines on DPIA adopted by the Article 29 Working Party on October 4, 2017, and endorsed by the European Data Protection Board (“EDPB”) on May 25, 2018. The CNIL crafted its own Guidelines to specify the following:

  • Scope of the obligation to carry out a DPIA. The Guidelines describe the three examples of processing operations requiring a DPIA provided by Article 35(3) of the EU General Data Protection Regulation (“GDPR”). The Guidelines also list nine criteria the Article 29 Working Party identified as useful in determining whether a processing operation requires a DPIA, if that processing does not correspond to one of the three examples provided by the GDPR. In the CNIL’s view, as a general rule a processing operation meeting at least two of the nine criteria requires a DPIA (a simple illustration of this rule of thumb appears after this list). If the data controller considers that processing meeting two criteria is not likely to result in a high risk to the rights and freedoms of individuals, and therefore does not require a DPIA, the data controller should explain and document its decision not to carry out a DPIA and include in that documentation the views of the data protection officer (“DPO”), if appointed. The Guidelines make clear that a DPIA should be carried out if the data controller is uncertain. The Guidelines also state that processing operations lawfully implemented prior to May 25, 2018 (e.g., processing operations registered with the CNIL, exempt from registration or recorded in the register held by the DPO under the previous regime) do not require a DPIA within a period of three years from May 25, 2018, unless there has been a substantial change in the processing since its implementation.
  • Conditions in which a DPIA is to be carried out. The Guidelines state that DPIAs should be reviewed regularly—at minimum, every three years—to ensure that the level of risk to individuals’ rights and freedoms remains acceptable. This corresponds to the three-year period mentioned in the draft guidelines on DPIAs adopted by the Article 29 Working Party on April 4, 2017.
  • Situations in which a DPIA must be provided to the CNIL. The Guidelines specify that data controllers may rely on the CNIL’s sectoral guidelines (“Referentials”) to determine whether the CNIL must be consulted. If the data processing complies with a Referential, the data controller may take the position that there is no high residual risk and no need to seek prior consultation for the processing from the CNIL. If the data processing does not fully comply with the Referential, the data controller should assess the level of residual risk and the need to consult the CNIL. The Guidelines note that the CNIL may request DPIAs in case of inspections.
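
The CNIL’s two-of-nine rule of thumb lends itself to a compact illustration. The following Python sketch is ours, not the CNIL’s; the nine criteria are paraphrased from the Article 29 Working Party’s DPIA guidelines, and the function name is hypothetical:

```python
# The nine Article 29 Working Party criteria, paraphrased.
WP29_CRITERIA = {
    "evaluation or scoring",
    "automated decision-making with legal or similar effect",
    "systematic monitoring",
    "sensitive or highly personal data",
    "large-scale processing",
    "matching or combining datasets",
    "vulnerable data subjects",
    "innovative use of technology",
    "processing that prevents exercise of a right or service",
}

def dpia_required(criteria_met: set, uncertain: bool = False) -> bool:
    # The CNIL's general rule: a processing operation meeting at least
    # two of the nine criteria requires a DPIA; when the controller is
    # uncertain, it should carry one out anyway.
    assert criteria_met <= WP29_CRITERIA
    return len(criteria_met) >= 2 or uncertain

# e.g., a device that systematically monitors individuals via a novel technology:
dpia_required({"systematic monitoring", "innovative use of technology"})  # True
```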

CNIL’s List of Processing Operations Requiring a DPIA

The CNIL previously submitted a draft list of processing operations requiring a DPIA to the EDPB for its opinion. The CNIL adopted its final list on October 11, 2018, based on that opinion. The final list includes 14 types of processing operations for which a DPIA is mandatory. The CNIL provided concrete examples for each type of processing operation, including:

  • processing operations for the purpose of systematically monitoring employees’ activities, such as the implementation of data loss prevention tools, CCTV systems recording employees handling money, CCTV systems recording a warehouse stocking valuable items in which handlers are working, digital tachographs installed in road freight transport vehicles, etc.;
  • processing operations for the purpose of reporting professional concerns, such as the implementation of a whistleblowing hotline;
  • processing operations involving the profiling of individuals that may lead to their exclusion from the benefit of a contract or to the suspension or termination of the contract, such as processing to combat fraud involving (non-cash) means of payment;
  • profiling that involves data coming from external sources, such as a combination of data operated by data brokers and processing to customize online ads;
  • processing of location data on a large scale, such as a mobile app that collects users’ geolocation data, etc.

The CNIL’s list is non-exhaustive and may be regularly reviewed, depending on the CNIL’s assessment of the “high risks” posed by certain processing operations.

Next Steps

The CNIL is expected to soon publish its list of processing operations for which a DPIA is not required.

Medical Transcription Vendor Agrees to $200,000 Settlement with New Jersey Attorney General

On October 30, 2018, ATA Consulting LLC (doing business as Best Medical Transcription) agreed to a $200,000 settlement with the New Jersey Attorney General resulting from a server misconfiguration that allowed private medical records to be posted publicly online. All but $31,000 of the fine was suspended based on the company’s financial condition. Read the settlement.

The New Jersey Attorney General’s investigation found that a patient had discovered that a Google search revealed portions of her medical records, which were viewable without a password. The patient notified her medical provider, Virtua Medical Group (“Virtua”), which used medical record transcription services provided by Best Medical Transcription. The investigation concluded that a software update changed certain security restrictions previously implemented by Best Medical Transcription and permitted anonymous access (i.e., no password required) to the site where files containing patient medical information were stored. This misconfiguration permitted anyone to conduct a Google search to locate and download the complete files. The investigation found that approximately 1,650 records were exposed on the Internet in this manner.

In addition to the settlement payment, Best Medical Transcription was enjoined from committing future violations of various privacy and security requirements, including HIPAA, the Security Rule, the Breach Notification Rule and the Privacy Rule. Virtua previously agreed to pay a $418,000 fine and enhance its data security practices in connection with the incident.

CNIL Details Rules on Audience and Traffic Measuring in Publicly Accessible Areas

On October 17, 2018, the French data protection authority (the “CNIL”) published a press release detailing the rules applicable to devices that compile aggregated and anonymous statistics from personal data—for example, mobile phone identifiers (i.e., media access control or “MAC” address) —for purposes such as measuring advertising audience in a given space and analyzing flow in shopping malls and other public areas. Read the press release (in French).

The CNIL observed that more and more companies use such devices. In shopping malls, these devices can (1) compile traffic statistics and determine how many individuals have visited a shopping mall over a limited time range; (2) model the routes that individuals take through the shopping mall; and/or (3) calculate the rate of repeating visitors. In public areas, they can (1) determine how many individuals walked past an audience measuring device (e.g., an advertising panel); (2) determine the routes taken by these individuals from one advertising panel to another; (3) estimate the amount of time individuals stand in line; (4) assess the number of vehicles driving on a road, etc.

Against that background, the CNIL identified the three following scenarios:

Scenario 1 – When data is anonymized at short notice (i.e., within minutes of collecting the data)

The CNIL defines anonymization as a specific data processing operation that renders individuals no longer identifiable. (Such processing must comply with various criteria set forth in Opinion 05/2014 of the former Article 29 Working Party on anonymization techniques. According to the CNIL, this includes ensuring a high collision rate between several individuals—for instance, in the context of MAC-based audience measurement devices, the processing must map several different MAC addresses to the same output, so that no stored result corresponds to a single identifier.)

In this scenario, anonymization must be performed promptly, i.e., within minutes of collecting the data. In the CNIL’s view, this reduces the risk that an individual would be able to access identifying data. To that end, the CNIL recommends anonymizing the data within five minutes. After that period, no identifying data should be retained.
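
The CNIL does not prescribe a particular technique for this step. Purely by way of illustration, a collision-inducing reduction might look like the following Python sketch, in which MAC addresses are hashed into a deliberately small number of counting buckets and the raw address is discarded immediately; the bucket count, names and sample address are our own assumptions, not the CNIL’s:

```python
import hashlib

# Illustrative sketch only. Truncating the keyspace to a small number of
# buckets guarantees that many distinct MAC addresses share each bucket,
# so no stored value can single out one device.
NUM_BUCKETS = 1024  # deliberately small, to force a high collision rate

def count_visit(mac_address: str, counters: dict) -> None:
    digest = hashlib.sha256(mac_address.encode()).digest()
    bucket = int.from_bytes(digest[:4], "big") % NUM_BUCKETS
    counters[bucket] = counters.get(bucket, 0) + 1
    # The raw MAC address goes out of scope here; under Scenario 1 it
    # must not be retained more than a few minutes after collection.

counters = {}
count_visit("a4:5e:60:e4:12:9b", counters)  # hypothetical device
```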

The CNIL noted that data controllers may rely on their legitimate interest as a legal basis for the processing under the EU General Data Protection Regulation (“GDPR”). The CNIL recommended, however, that data controllers provide notice to individuals, using a layered approach in accordance with the guidelines of the former Article 29 Working Party on transparency under the GDPR. The CNIL provided an example of a notice that would generally satisfy the first layer of a layered privacy notice, though emphasized that notice should be tailored to the processing—particularly with respect to the individuals’ data protection rights. Since the data is anonymized, individuals cannot exercise their rights of access, rectification or restriction of processing, and the notice therefore does not have to mention these rights. However, individuals must be able to object to the collection of their data, and the notice should refer to that right of (prior) objection.

Scenario 2 – When data is immediately pseudonymized and then anonymized or deleted within 24 hours

In this second scenario, data controllers may rely on their legitimate interest as a legal basis for the processing provided that they:

  • Provide prior notice to individuals;
  • Implement mechanisms to allow individuals to object to the collection of their data (i.e., prior objection to the processing). These mechanisms should be accessible, functional, easy to use and realistic;
  • Set up procedures to allow individuals to exercise their rights of access, rectification and objection after data has been collected; and
  • Implement appropriate technical measures to protect the data, including a reliable pseudonymization process of MAC addresses (with the deletion of the raw data and the use of a salt or key). The pseudonymized data must be anonymized or deleted at the end of the day.
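
By way of illustration only, a pseudonymization process of the kind described in the last bullet might be sketched as follows, using a keyed hash whose salt is rotated and destroyed daily; the names and parameters are our own assumptions, not the CNIL’s:

```python
import hashlib
import hmac
import os

# Illustrative sketch only: MAC addresses are immediately replaced by a
# keyed hash, and the salt is destroyed at the end of each day, after
# which the stored pseudonyms can no longer be linked back to a device.
daily_salt = os.urandom(32)  # regenerated every day, never persisted

def pseudonymize(mac_address: str) -> str:
    return hmac.new(daily_salt, mac_address.encode(), hashlib.sha256).hexdigest()

def end_of_day(records: list, stats: dict) -> None:
    global daily_salt
    stats["unique_devices_today"] = len(set(records))  # anonymous aggregate
    records.clear()              # delete the day's pseudonymized records
    daily_salt = os.urandom(32)  # discard the old salt by overwriting it
```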

Further, the CNIL recommended using multiple modalities to provide notice to individuals, such as posting a privacy notice at entry and exit points of the shopping mall, on Wi-Fi access points, on every advertising device (e.g., on every advertising panel when the processing is carried out on the street), on the website of the shopping mall, or through a specific marketing campaign.

With respect to the individuals’ data protection rights, the CNIL made it clear that individuals who pass audience measuring devices must be able to object to the collection and further processing of their personal data. Companies wishing to install such a device must implement technical solutions that allow individuals to easily exercise this right of objection both a priori and a posteriori: these solutions must not only allow individuals to obtain the deletion of the data already collected (i.e., to exercise their right of objection a posteriori) but also prevent any further collection of their personal data (prior objection). In the CNIL’s view, the right of objection can be exercised using one of the following means:

  • Through a dedicated website or app on which individuals enter their MAC address to object to the processing. (The data controller is responsible for explaining to individuals how to obtain their MAC address so that they can effectively object to the processing of their data.) If an individual exercises his/her right of objection via this site or app, the data controller must delete all the data already collected and must no longer collect any data associated with that MAC address; or
  • Through a dedicated Wi-Fi network that allows the automatic collection of the devices’ MAC address for the purposes of objecting to the processing. If an individual exercises his/her right of objection via this network, the data controller must delete all the data that has been already pseudonymized and must not further collect the MAC address. The CNIL recommended using a clear and explicit name for that network such as “wifi_tracking_optout”.
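
Honoring both facets of the objection right implies keeping a persistent suppression record for opted-out devices. The following minimal sketch assumes a separate long-term key reserved for the opt-out list, so that an opted-out device can be recognized on later days without storing its raw MAC address; all names are hypothetical:

```python
import hashlib
import hmac

# Illustrative sketch only; the long-term key and all names are hypothetical.
OPTOUT_KEY = b"long-term-key-reserved-for-opt-outs"
optout_digests = set()

def _optout_digest(mac_address: str) -> str:
    return hmac.new(OPTOUT_KEY, mac_address.encode(), hashlib.sha256).hexdigest()

def register_optout(mac_address: str, stored_records: dict) -> None:
    digest = _optout_digest(mac_address)
    optout_digests.add(digest)        # a priori: no further collection
    stored_records.pop(digest, None)  # a posteriori: purge collected data

def collect(mac_address: str, stored_records: dict) -> None:
    digest = _optout_digest(mac_address)
    if digest in optout_digests:
        return  # the device has objected: record nothing
    stored_records[digest] = stored_records.get(digest, 0) + 1
```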

According to the CNIL, data controllers should not recommend that individuals turn off the Wi-Fi feature of their phone to avoid being tracked. Such a recommendation is inadequate for purposes of enabling individuals to exercise their right of objection.

Scenario 3 – All other cases

In the CNIL’s view, if the device implemented by the data controller does not strictly comply with the conditions listed in the two previous scenarios, the processing may only be implemented with the individuals’ consent. The CNIL stated that individuals must be able to withdraw consent, and that withdrawing consent should be as simple as granting consent. Individuals should also be able to exercise all the other GDPR data protection rights. In terms of notice, the CNIL recommended providing notice using multiple modalities (as in the second scenario).

Data Protection Impact Assessment and CNIL’s Authorization

The CNIL also reported that, in all the above scenarios, the processing will require a data protection impact assessment to be carried out prior to the implementation of the audience/traffic measuring devices, insofar as such devices enable the systematic monitoring of individuals through an innovative technical solution.

Additionally, the CNIL’s prior authorization may be required in certain cases.

New Ohio Law Creates Safe Harbor for Certain Breach-Related Claims

Effective November 2, 2018, a new Ohio breach law will provide covered entities a legal safe harbor for certain data breach-related claims brought in an Ohio court or under Ohio law if, at the time of the breach, the entity maintains and complies with a cybersecurity program that (1) contains administrative, technical and physical safeguards for the protection of personal information, and (2) reasonably conforms to one of the “industry-recognized” cybersecurity frameworks enumerated in the law.

The program must additionally be designed to (1) protect the security and confidentiality of the information, (2) protect against any anticipated threats or hazards to the security or integrity of the information and (3) protect against unauthorized access to and acquisition of the information that is likely to result in a material risk of identity theft or other fraud to the individual to whom the information relates. In determining the necessary scale and scope of the program, businesses should consider what is reasonable in light of the size and complexity of the covered entity, the nature and scope of its activities, the resources available to it, the sensitivity of the information to be protected, and the cost and availability of tools to improve information security and reduce vulnerabilities.

While this safe harbor will not apply to breach of contract claims or statutory violations in a breach suit, covered entities may raise this affirmative defense against tort claims alleging that a failure to implement reasonable information security controls resulted in a data breach. However, the covered entity will bear the burden of demonstrating that its program meets all of the requirements under the law. This may be hard for businesses to prove, since many of the frameworks provide generalizations regarding what is required, but not specifics, and since these frameworks do not tend to have formal certification processes. Moreover, because such frameworks are often revised to keep up with new technologies and risks, it may be difficult for businesses to conform to the updates within the statute-mandated, one-year time limit from the revision date.

This law is the first in the U.S. to offer an incentive to businesses that take steps to ensure that there are policies and procedures in place to protect against data breaches. It remains to be seen whether other states will enact similar laws.

Connecticut Requires 24 Months of Credit Monitoring for Certain Security Breaches

Effective October 1, 2018, Connecticut law requires organizations that experience a security breach affecting Connecticut residents’ Social Security numbers (“SSNs”) to provide 24 months of credit monitoring to affected individuals. Previously, Connecticut law required entities to provide 12 months of credit monitoring for breaches affecting SSNs.

The amendment was passed as part of Public Act 18-90, An Act Concerning Security Freezes on Credit Reports, Identity Theft Prevention Services and Regulations of Credit Rating Agencies. Among other requirements, the Act also eliminates fees for placing and lifting a security freeze and requires consumer reporting agencies to (1) act on requests related to credit freezes as soon as practicable, but no later than 5 days for requests to place a security freeze or 3 days for requests to remove a security freeze, and (2) offer to notify the other consumer reporting agencies of the request for a credit freeze on behalf of the consumer.

Webinar on the SAFETY Act and Cybersecurity: Protecting Your Reputation and Reducing Liability Risk

In 2002, Congress enacted the Supporting Anti-Terrorism by Fostering Effective Technologies Act (“the SAFETY Act”) to limit the liabilities that energy, financial, manufacturing and other critical infrastructure companies face in the event of a serious cyber or physical security attack.

Hunton Andrews Kurth LLP recently represented an electric utility in obtaining a first-of-its-kind enterprise-wide SAFETY Act Certification for its cybersecurity risk management program. Administered by the Department of Homeland Security, SAFETY Act Certification of a company’s enterprise-wide cybersecurity program can provide significant benefits, from cost savings to legal protections to a competitive advantage in the marketplace.

Join us for a webinar on November 14, 2018, at 12:30 p.m. EST, as we discuss why companies should consider SAFETY Act protection, and how to obtain it.

Canadian Regulator Issues Final Guidance on New Data Breach Reporting Requirements

On October 29, 2018, the Office of the Privacy Commissioner of Canada (the “OPC”) released final guidance (“Final Guidance”) regarding how businesses may satisfy the reporting and record-keeping obligations under Canada’s new data breach reporting law. The law, effective November 1, 2018, requires organizations subject to the federal Personal Information Protection and Electronic Documents Act (“PIPEDA”) to (1) report to the OPC breaches of security safeguards involving personal information “that pose a real risk of significant harm” to individuals, (2) notify affected individuals of the breach and (3) keep records of every breach of security safeguards, regardless of whether or not there is a real risk of significant harm.

As we previously reported, the OPC had published draft guidance for which it had requested public comment. Like the draft version, the Final Guidance includes information regarding how to assess the risk of significant harm, and regarding notice, reporting and recordkeeping requirements (i.e., timing, content and form). The Final Guidance adds a requirement that a record must also include either sufficient detail for the OPC to assess whether an organization correctly applied the real risk of significant harm standard, or a brief explanation as to why the organization determined there was not a real risk of significant harm.
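
The Final Guidance prescribes the content of a breach record rather than its format. Purely as an illustration, a record capturing the elements described above might be structured along the following lines (the field names are our own, not the OPC’s):

```python
from dataclasses import dataclass
from datetime import date

# Illustrative structure for a PIPEDA breach-of-safeguards record; the
# field names are our own assumptions, not prescribed by the OPC.
@dataclass
class BreachRecord:
    date_of_breach: date
    description: str                    # what happened and how
    personal_information_involved: str  # categories of data affected
    real_risk_of_significant_harm: bool
    # Either sufficient detail for the OPC to assess whether the "real
    # risk of significant harm" standard was correctly applied...
    harm_assessment_detail: str = ""
    # ...or a brief explanation of why no real risk was found.
    no_real_risk_rationale: str = ""
    reported_to_opc: bool = False
    individuals_notified: bool = False
```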

The Final Guidance additionally clarifies the following:

  • Who is responsible for reporting and keeping records of the breach? A business subject to PIPEDA must report breaches of security safeguards involving personal information “under its control.”
  • Who is “in control” of personal information? The Final Guidance notes that in general, when an organization (the “principal”) provides personal information to a third party processor (the “processor”), the principal may reasonably be found to be in control of the personal information it has transferred to the processor, triggering the reporting and record-keeping obligations of a breach that occurs with the processor. On the other hand, if the processor uses or discloses the same personal information for other purposes, it is no longer simply processing the personal information on behalf of the principal; it is instead acting as an organization “in control” of the information, and would thereby have the obligation to notify, report, and record. The Final Guidance acknowledges that determining who has personal information “under its control” must be assessed on a case-by-case basis, taking into account any relevant contractual arrangements and “commercial realities” between organizations, such as shifting roles and evolving business models. The Final Guidance recommends that principals ensure “sufficient contractual arrangements [are] in place with the processor to address compliance” with the PIPEDA breach reporting, notification and record-keeping obligations.
  • When do other entities besides affected individuals and the OPC need to be notified? If a breach triggers notification due to a real risk of significant harm, “any government institutions or organizations that the organization believes… may be able to reduce the risk of harm… or mitigate the harm” resulting from the breach must also be notified.

Though the privacy commissioner called the new law a “step in the right direction,” the commissioner also voiced concerns about the law, including that: (1) breach reports to the OPC do not contain the information that would allow for the regulator to assess the quality of an organization’s data security safeguards; (2) the lack of financial sanctions for inadequate data security safeguards misses an opportunity to incentivize organizations to prevent breaches; and (3) the government has not provided the OPC with enough resources to “analyze breach reports, provide advice and verify compliance.”

FERC Adopts Supply Chain Risk Management Reliability Standards

At its October monthly meeting, the Federal Energy Regulatory Commission (the “Commission”) adopted new reliability standards addressing cybersecurity risks associated with the global supply chain for Bulk Electric System (“BES”) Cyber Systems. The new standards expand the scope of the mandatory and enforceable cybersecurity standards applicable to the electric utility sector. They will require electric utilities and transmission grid operators to develop and implement plans that include security controls for supply chain management for industrial control systems, hardware, software and services. 

These standards have been in development for some time. The North American Electric Reliability Corporation (“NERC”) proposed them in September 2017 in response to an earlier Commission directive which identified potential supply chain threats to the utility sector. The reliability standards focus on the following four security objectives: (1) software integrity and authenticity; (2) vendor remote access protections; (3) information system planning; and (4) vendor risk management and procurement controls. The new standards will become effective on the first day of the first calendar quarter that is 18 months following the effective date of Order No. 850 (which will be 60 days after its publication in the Federal Register).

In addition to adopting NERC’s proposed standards, the Commission also directed NERC to expand them to include Electronic Access Control and Monitoring Systems (“EACMS”) associated with “medium” and “high” impact BES Cyber Systems within the scope of the supply chain risk management standards. NERC and others had opposed this expansion but were overruled by the Commission. NERC has 24 months to develop and file EACMS rules. By contrast, FERC decided not to require NERC to develop additional rules that would apply to Physical Access Control Systems (“PACS”) or Protected Cyber Assets (“PCAs”) at this time. Instead, NERC must study the cybersecurity supply chain risks presented by PACS and PCAs and report back to the Commission as part of a broader supply chain risk study.

CNIL Publishes Statistical Review of Data Breaches Since Entry into Application of GDPR

Recently, the French Data Protection Authority (the “CNIL”) published a statistical review of personal data breaches notified during the first four months following the EU General Data Protection Regulation’s (“GDPR”) entry into application. View the review (in French).

Types of breaches

Between May 25 and October 1, 2018, the CNIL received 742 notifications of personal data breaches that affected 33,727,384 individuals located in France or elsewhere. Of those, 695 notifications (roughly 94 percent) related to confidentiality breaches. In the CNIL’s view, this high proportion of confidentiality breaches may be explained by several factors:

  • In many cases, personal data breaches involve a loss of confidentiality in addition to integrity and/or availability issues.
  • Organizations often have the means to restore data within the 72-hour time limit after an integrity or availability breach.

Business areas affected

The accommodation and food services sector saw the highest number of breaches, with 185 notifications. This concentration is due to a specific case in which a booking service provider was affected by a data breach. That service provider immediately notified all its customers of the breach and took measures to help them comply with their obligations. As part of these measures, the service provider (1) reminded its customers of the context and the breach notification obligations, (2) provided them with a list of the supervisory authorities to be contacted depending on the country of establishment of each customer, a list of the data subjects to be contacted and a template letter, and (3) implemented a dedicated hotline. According to the CNIL, these measures reflect best practices that a service provider should implement when affected by a personal data breach.

Cause of the breaches

More than half of the notified breaches (421 notifications) were due to hacking via malicious software or phishing. 62 notified breaches were related to data sent to the wrong recipients, 47 notified breaches were due to lost or stolen devices, and 41 notified breaches were due to the unintentional publication of information. Most breaches were therefore the result of hacking and intentional theft attributable to a malicious third party, or of employees’ unintentional mistakes. In all other cases, the causes of the breach were unknown or undetermined by the notifying data controller, or the breach was the result of internal malicious actions. The CNIL advised that businesses should think about data security at the outset of their projects, regularly run security updates on operating systems, application servers and databases, and regularly inform staff of the risks and challenges raised by data security. This will help prevent the majority of these incidents.

CNIL’s approach

The CNIL also reported that it will adopt an aggressive approach when the data controller does not comply with its obligation to notify the breach within 72 hours after having become aware of it. Failure to comply with that obligation may lead to a fine of up to €10 million or 2 percent of total worldwide annual revenue. Conversely, if the CNIL receives the notification in a timely manner, the CNIL will adopt an approach that aims at helping the professionals involved take all the necessary measures to limit the consequences of a breach.

When necessary, the CNIL will contact organizations for the purposes of:

  • Verifying that adequate measures have been taken before or after the breach. In this respect, the CNIL may advise the data controller on any needed improvements, e.g. use of an appropriate encryption algorithm or the best way to manage passwords. The CNIL may also refer data controllers to the relevant police services or to the web platform to file a complaint.
  • Assessing the necessity to notify affected data subjects. For each notification, the CNIL assesses the risks to data subjects and may recommend notifying them of the breach. Since May 25, 2018, the CNIL’s injunction power has been used only once to order a data controller to notify affected data subjects. The CNIL did so by serving formal notice on the data controller, which complied with the notice.

Data Protection Authorities Endorse Guidelines on AI – Fairness, Transparency and Privacy Key Principles

On October 23, 2018, the 40th International Conference of Data Protection and Privacy Commissioners (the “Conference”) released a Declaration on Ethics and Data Protection in Artificial Intelligence (the “Declaration”). In it, the Conference endorsed several guiding principles as “core values” to protect human rights as the development of artificial intelligence (“AI”) continues apace. Key principles include:

  • AI and machine learning technologies should be designed, developed and used in the context of respect for fundamental human rights and in accordance with the “fairness principle,” including by considering the impact of AI on society at large.
  • AI systems’ transparency and intelligibility should be improved.
  • AI systems should be designed and developed responsibly, which entails proceeding from the principles of “privacy by default” and “privacy by design.”
  • Unlawful biases and discrimination that may result from the use of data in AI should be reduced and mitigated.

The Conference called for the establishment of international common governance principles on AI in line with these concepts. As an initial step toward that goal, the Conference announced a permanent working group on Ethics and Data Protection in Artificial Intelligence.

The Declaration’s authors are the French Commission Nationale de l’Informatique et des Libertés, the European Data Protection Supervisor and the Italian Garante per la protezione dei dati personali. It was co-sponsored by fifteen other organizations from nations across the world.

EU and U.S. Regulators Issue Joint Statement on the Status of the Second Annual EU-U.S. Privacy Shield Review

On October 19, 2018, European Commissioner for Justice, Consumers and Gender Equality Věra Jourová and U.S. Secretary of Commerce Wilbur Ross issued a joint statement regarding the second annual review of the EU-U.S. Privacy Shield framework, taking place in Brussels beginning October 18. The statement highlights the following:

  • a significant number of companies – over 4,000 – have become Privacy Shield-certified since the inception of the framework in 2016;
  • the appointment of three new members to the U.S. Privacy and Civil Liberties Oversight Board (“PCLOB”), as well as the PCLOB’s declassification of its report on a presidential directive that extended certain signals intelligence privacy protections to foreign citizens;
  • the regulators’ ongoing review of the functioning of the Privacy Shield Ombudsperson Mechanism, and the need for the U.S. to promptly appoint a permanent Under Secretary;
  • recent privacy incidents affecting U.S. and EU residents, with both U.S. and EU regulators reaffirming the “need for strong privacy enforcement to protect our citizens and ensure trust in the digital economy;” and
  • the Commerce Department’s promise to revoke the certification of companies that do not comply with the Privacy Shield’s principles.

The European Commission plans to publish a report on the functioning of the Privacy Shield by the end of 2018.

Federal Government and Private Sector to Collaborate through the Pipeline Cybersecurity Initiative

Earlier this month, the Department of Energy (“DOE”) and the Department of Homeland Security (“DHS”) co-chaired a meeting with industry leaders from the Oil and Natural Gas Subsector Coordinating Council (“ONG SCC”) in Washington, D.C. to address cybersecurity threats to pipelines. Together, DOE and DHS launched the Pipeline Cybersecurity Initiative, which will harness DHS’s cybersecurity resources, DOE’s energy sector expertise and the Transportation Security Administration’s (“TSA”) assessment of pipeline security to provide intelligence to natural gas companies and support ONG SCC’s efforts. “This meeting and the ones to follow will build upon the expanded cybersecurity measures in the recently updated Pipeline Security Guidelines and our collaboration with [DHS’s] National Risk Management Center to minimize the consequences of an attack or disruption,” said TSA Administrator David Pekoske. The Pipeline Cybersecurity Initiative has been warmly received and complements other efforts in the energy industry, such as those protecting the U.S. power grid, to enhance cybersecurity for critical infrastructure.

FTC Releases Staff Perspective on Informational Injuries

On October 19, 2018, the Federal Trade Commission announced the release of its Staff Perspective on the Informational Injury Workshop (the “Paper”). The Paper summarizes the outcomes of a workshop the FTC hosted on December 12, 2017 to discuss and better understand “informational injuries” (i.e., harm suffered by consumers as a result of privacy and security incidents, such as data breaches or unauthorized disclosures of data) in an effort to guide (1) future policy determinations related to consumer injury and (2) future application of the “substantial injury” prong in cases involving informational injury.

The Paper listed several examples of informational injuries, including medical identity theft, doxing, disclosure of private information and erosion of trust, and emphasized that the risks of such injuries should be balanced against the value of the information collection. In light of these risks, the workshop participants agreed on three factors that governments should consider in determining whether and when to intervene and address these injuries:

  • the sensitivity of the data at issue;
  • how the data at issue will be used; and
  • whether the data at issue is anonymized or identifiable.

Workshop participants further discussed (1) whether the definition of “injury” should include the risk of injury, (2) potential explanations of “the privacy paradox,” in which survey evidence indicates that consumers say they care about privacy but behave in a contrary way, and (3) the need for more research on a broad range of privacy and data security issues.

Regarding the last topic, workshop participants agreed that such research would inform government policymakers and law enforcers regarding how to prevent and remedy informational injuries without chilling innovation. The FTC hopes to encourage academic research in this area through its annual PrivacyCon conference, to take place in May 2019, and through its series of Hearings on Competition and Consumer Protection in the 21st Century, which explore the intersection between privacy, big data, competition and the FTC’s remedial authority to deter unfair and deceptive conduct in privacy and data security matters.

OCR Enters into Record Settlement with Anthem

Recently, the U.S. Department of Health and Human Services’ Office for Civil Rights (“OCR”) entered into a resolution agreement and record settlement of $16 million with Anthem, Inc. (“Anthem”) following Anthem’s 2015 data breach. That breach, affecting approximately 79 million individuals, was the largest breach of protected health information (“PHI”) in history.

Three years ago, in February 2015, OCR opened a compliance review of Anthem, the nation’s second largest health insurer, following media reports that Anthem had suffered a significant cyberattack. In March 2015, Anthem submitted a breach report to OCR detailing the cyberattack, indicating that it began after at least one employee responded to a spear phishing email. Attackers were able to download malicious files to the employee’s computer and gain access to other Anthem systems that contained individuals’ names, Social Security numbers, medical identification numbers, addresses, dates of birth, email addresses and employment information.

OCR investigated Anthem and found that it may have violated the HIPAA Privacy and Security Rules by failing to:

  • conduct an accurate and thorough risk analysis of the risks and vulnerabilities to the confidentiality, integrity and availability of electronic PHI (“ePHI”);
  • implement procedures to regularly review records of information system activity;
  • identify and respond to the security incident;
  • implement sufficient technical access procedures to protect access to ePHI; and
  • prevent unauthorized access to ePHI.

The resolution agreement requires Anthem to pay $16 million to OCR and enter into a Corrective Action Plan that obligates Anthem to:

  • conduct a risk analysis and submit it to OCR for review and approval;
  • implement a risk management plan to address and mitigate the risks and vulnerabilities identified in the risk analysis;
  • revise its policies and procedures to specifically address (1) the regular review of records of information system activity and (2) technical access to ePHI, such as network or portal segmentation and the enforcement of password management requirements, such as password age;
  • distribute the policies and procedures to all members of its workforce within 30 days of adoption;
  • report any events of noncompliance with its HIPAA policies and procedures; and
  • submit annual compliance reports for a period of two years.

In announcing the settlement, OCR Director Roger Severino noted that the record-breaking amount was merited, as Anthem had experienced the largest health data breach in U.S. history. “Unfortunately, Anthem failed to implement appropriate measures for detecting hackers who had gained access to their system to harvest passwords and steal people’s private information,” Severino said. He continued, “We know that large health care entities are attractive targets for hackers, which is why they are expected to have strong password policies and to monitor and respond to security incidents in a timely fashion or risk enforcement by OCR.”

The $16 million settlement with Anthem almost triples the previous record of $5.55 million, which OCR imposed in 2016 against Advocate Health Care Network. The settlement also comes two months after a U.S. District Court granted final approval of Anthem’s record $115 million class action settlement related to the breach.

CNIL Adopts Referentials on DPO Certification

On October 11, 2018, the French data protection authority (the “CNIL”) announced that it adopted two referentials (i.e., guidelines) on the certification of the data protection officer (“DPO”). View the announcement (in French). As a practical matter, both referentials are intended to apply to DPOs located in France and to French-speaking DPOs. The referentials include:

  • a certification referential that sets forth the conditions regarding the admissibility of DPO applications, and lists 17 qualifications that the DPO must have in order to be certified as a DPO by a certification body approved by the CNIL; and
  • an accreditation referential that outlines the criteria organizations must satisfy in order to be accredited by the CNIL as certification bodies.

View the certification referential and the accreditation referential (both in French).

Background

The French Data Protection Act, as amended on June 20, 2018 to supplement the GDPR, allows the CNIL to draft certification criteria and approve certification bodies for the purpose of certifying individuals as DPOs.

The CNIL adopted the referentials for the certification of DPOs on this basis, following a public consultation held from May 23, 2018 to June 22, 2018. The CNIL received about 200 contributions from DPOs (or prospective DPOs), data controllers and data processors in different industries, as well as certification bodies. According to the CNIL, this consultation helped it strike “the most appropriate balance” between the knowledge and skills that a DPO must have and the expectations of privacy professionals.

Certification of the DPO

The certification of a DPO based on the standards of the CNIL’s referential is not a prerequisite to being appointed as a DPO with the CNIL or to fulfilling the responsibilities of a DPO. It is a purely voluntary process that assists in demonstrating compliance with GDPR requirements. Article 37(5) of the GDPR requires that the DPO “shall be designated on the basis of professional qualities and, in particular, expert knowledge of data protection law and practices and the ability to fulfill the [DPO] tasks.”

In the CNIL’s view, the certificate is a vote of confidence not only for the organization that has a certified DPO, but also for its clients, vendors, employees or agents, since that organization will be able to demonstrate that the DPO has the required level of expertise and skills.

The certification will be available only to individuals (and not to legal persons). The CNIL itself will not grant the certification; it will be issued by certification bodies, the first of which the CNIL expects to accredit in 2019.

Prerequisites to Certification and Certification Criteria

To be eligible for certification, candidates will need to fulfill one of the following conditions:

  • professional experience of at least 2 years in projects, activities or tasks related to data protection and the tasks of a DPO; or
  • professional experience of at least 2 years in any field, with at least 35 hours of data protection training administered by a training body.

Candidates also will need to successfully complete a written test that will consist of at least 100 multiple choice questions, 30% of which will be presented in the form of case studies. These questions aim to test skills listed in the CNIL’s DPO certification referential, which include knowledge of fundamental data protection principles, the ability to draft and implement data protection policies, and the ability to assist with data protection impact assessments, among many other skills.

Successful candidates will obtain a certification that will be valid for three years, which may be renewed provided that the DPO passes the test again at the end of this three-year term. As the test will be available in French only, this voluntary certification mechanism is intended to apply to DPOs in France or French-speaking DPOs.

CIPL Responds to ICO Call for Views on Creating a Regulatory Sandbox

On October 11, 2018, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth LLP submitted comments to the UK Information Commissioner’s Office (“ICO”) in response to its call for views on creating a regulatory sandbox.

The regulatory sandbox concept is intended to provide a supervised safe space for piloting and testing innovative products, services, business models or delivery mechanisms in the real market, using the personal data of real individuals. The concept was first developed by the UK’s Financial Conduct Authority to enable regulated companies to experiment and innovate in the financial services space. The model may be particularly suited for and well received in the data protection community, where technical innovation has an impact on data protection and where there is an increasing recognition that compliance has to be treated as an iterative process.

In its comments, CIPL identifies the main benefits of participation in the regulatory sandbox, sets out various practical suggestions to maximize the prospects of success of the regulatory sandbox and lays out specific safeguards which the ICO should adopt in its deployment of the concept.

In particular, CIPL’s response details:

  • benefits of sandbox participation for individuals, organizations, the ICO itself, society and the economy;
  • actual and hypothetical examples of where sandbox participation may be helpful, both in the private and public sectors;
  • practical considerations around the operation of the sandbox concept and features that can be included to maximize the prospects of its success;
  • criteria for acceptance into the sandbox;
  • the relationship between the sandbox and data protection impact assessments; and
  • safeguards that must be considered to address real concerns that prospective sandbox participants are likely to raise.

CIPL has previously written about the potential of a regulatory sandbox model in data protection as a critical tool for innovation in digital society while ensuring data protection in its 2017 paper Regulating for Results: Strategies and Priorities for Leadership and Engagement.

CIPL to Host Side Event on Fairness in Data Protection at ICDPPC 2018

On October 23, 2018, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth LLP will host an official side event on The Concept of “Fairness” in Data Protection at the 40th International Conference of Data Protection and Privacy Commissioners in Brussels, Belgium.

The event will run twice and feature two sets of panelists. More information can be found on the event registration page.

At the session, data protection regulators and industry professionals will take a deep dive into questions such as:

  • What does “fair” mean in the context of data protection?
  • How should organizations make determinations about the “fairness” of their processing activities in a way that can reliably pass muster in the eyes of a data protection authority?
  • What are the measurable elements and proof points of fairness?
  • Are the measurable elements and proof points of fairness sufficiently universal and objective so that efforts to meet “fairness” requirements will not proceed under a cloud of legal uncertainty?

CIPL’s examination of “fairness” will occur in the context of its recently launched project on Artificial Intelligence and Data Protection: Delivering Sustainable AI Accountability in Practice.

California Enacts Blockchain Legislation

As reported on the Blockchain Legal Resource, California Governor Jerry Brown recently signed into law Assembly Bill No. 2658 for the purpose of further studying blockchain’s application to Californians. In doing so, California joins a growing list of states officially exploring distributed ledger technology.

Specifically, the law requires the Secretary of the Government Operations Agency to convene a blockchain working group prior to July 1, 2019. Under the new law, “blockchain” means “a mathematically secured, chronological and decentralized ledger or database.” In addition to including various representatives from state government, the working group is required to include appointees from the technology industry and non-technology industries, as well as appointees with backgrounds in law, privacy and consumer protection.

Under the new law, which has a sunset date of January 1, 2022, the working group is required to evaluate:

  • the uses of blockchain in state government and California-based businesses;
  • the risks, including privacy risks, associated with the use of blockchain by state government and California-based businesses;
  • the benefits associated with the use of blockchain by state government and California-based businesses;
  • the legal implications associated with the use of blockchain by state government and California-based businesses; and
  • the best practices for enabling blockchain technology to benefit the State of California, California-based businesses and California residents.

In doing so, the working group is required to seek “input from a broad range of stakeholders with a diverse range of interests affected by state policies governing emerging technologies, privacy, business, the courts, the legal community and state government.”

The working group is also tasked with delivering a report to the California Legislature by January 1, 2020, on the potential uses, risks and benefits of blockchain technology by state government and California businesses. Moreover, the report is required to include recommendations for amending relevant provisions of California law that may be impacted by the deployment of blockchain technology.

Hunton Insurance Head Comments on Hotel Data Breach Coverage Dispute

As reported on the Insurance Recovery Blog, Hunton Andrews Kurth insurance practice head Walter Andrews recently commented to the Global Data Review regarding the infirmities underlying an Orlando, Florida federal district court’s ruling that an insurer does not have to defend its insured for damage caused by a third-party data breach.

The decision in St. Paul Fire & Marine Ins. Co. v. Rosen Millennium Inc., which involved a claim for coverage under two general liability insurance policies, turned on whether customers’ credit card information obtained from the insured’s payment system had been “made known,” and by whom. According to the district court, the insurance policies required that the credit card information be “made known” by the insured; in this instance, however, the publication was made by the third-party hackers. As Andrews explained, although it was undisputed that Florida law controlled interpretation of Millennium’s policies, the district court based its decision on a prior case decided under South Carolina law, which differs from Florida law in many fundamental respects. “Florida state law makes it very clear that coverage is meant to be construed in favor of the policyholder where there is ambiguity,” Andrews said. “To me, it’s clear that there were two reasonable interpretations of the insurance policy here.”

Despite the outcome, Andrews noted that there are helpful takeaways from this decision for policyholders and prospective insureds facing potential exposure from cyber events: “Given how strenuously the insurers are fighting to deny coverage for data breach claims, a reasonable takeaway is that policyholders should consider getting very specific cyber insurance coverage.”

View the district court’s decision and Andrews’ comments to the Global Data Review.

CIPL Hosts Workshop on Accountability Under the GDPR in Paris

On October 5, 2018, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth LLP hosted a workshop on how to implement, demonstrate and incentivize accountability under the EU General Data Protection Regulation (“GDPR”), in collaboration with AXA in Paris, France. In addition to the workshop, on October 4, 2018, CIPL hosted a Roundtable on the Role of the Data Protection Officer (“DPO”) under the GDPR at Mastercard and a pre-workshop dinner at the Chanel School of Fashion, sponsored by Nymity.

Roundtable on the Role of the DPO Under the GDPR

On October 4, 2018, CIPL hosted a Roundtable on the Role of the DPO under the GDPR. The industry-only session consisted of an open discussion among CIPL members who have firsthand experience in carrying out the role and tasks of a DPO in diverse and complex multinational organizations. Following opening remarks by CIPL president Bojana Bellamy, participants discussed practical challenges, best practices and solutions for the effective exercise of the DPO’s functions. The Roundtable addressed issues such as the position of the DPO within the organization; independence and conflicts of interest; and the rights, duties and liability of the DPO. View the full list of discussion topics in the program agenda.

CIPL Pre-Workshop Dinner at Chanel School of Fashion

On the evening of October 4, 2018, CIPL hosted a pre-workshop dinner at the Chanel School of Fashion, sponsored by Nymity. The event brought together CIPL members and data protection authorities (“DPAs”) in advance of CIPL’s all-day accountability workshop. During the dinner, remarks were given by Bojana Bellamy, as well as by Anna Pouliou, Head of Privacy at Chanel, and Terry McQuay, president of Nymity and sponsor of the event.

CIPL Workshop on How to Implement, Demonstrate and Incentivize Accountability Under the GDPR

On October 5, 2018, CIPL hosted an all-day workshop on How to Implement, Demonstrate and Incentivize Accountability Under the GDPR, in collaboration with AXA. CIPL’s two newest papers on the central role of accountability in data protection formed the basis of the program, placing an emphasis on how accountability enables effective data protection and trust in the digital society, and on the need for DPAs to encourage and incentivize accountability. Over 100 CIPL members and invited guests attended the session, including more than 10 data privacy regulators.

Following opening remarks by Emmanuel Touzeau, Group Communication and Brand Director – GDPR Sponsor at AXA, and CIPL’s Bojana Bellamy, scene-setting keynotes by Peter Hustinx, former European Data Protection Supervisor, and Patrick Rowe, Deputy General Counsel at Accenture, laid the foundation for the day’s discussions.

The first panel on “Accountability under the GDPR” featured a wide-ranging discussion by DPAs and industry experts on the important role of accountability in data protection. The meaning of accountability and its role in enabling effective privacy protections for individuals while ensuring innovation by organizations informed the discussion, along with dialogue around the key elements of accountability and how specific requirements of the GDPR map to these core elements. An important topic of discussion during this session concerned how to reconcile the need for proactive engagement between companies and DPAs with enforcement practices.

The second panel on “How to Demonstrate Accountability Internally and Externally” progressed the discussion from what constitutes accountability to how to implement and demonstrate it in practice, both within an organization and externally to DPAs. Participants also discussed whether accountability should be showcased proactively and how it can be demonstrated by participation in accountability schemes such as Binding Corporate Rules and future GDPR certifications and codes of conduct.

The final session of the day on “Best Practices: How are DPAs Incentivizing Accountability?” considered how DPAs can incentivize accountability under the GDPR. A wide range of incentives that are – or could be – used to encourage organizations to implement strong accountability measures were discussed, along with those that feature in CIPL’s paper on incentivizing accountability.

The workshop formed part of CIPL’s ongoing work around the concept of accountability in data protection and reaching consensus on its essential elements. View the full workshop agenda. CIPL’s papers on The Case for Accountability: How it Enables Effective Data Protection and Trust in Digital Society and Incentivizing Accountability: How Data Protection Authorities and Law Makers Can Encourage Accountability are the latest papers in this initiative and form the foundation for more work on accountability to follow from CIPL.

EDPB Adopts Opinions on National DPIA Lists in the EU

The European Data Protection Board (“EDPB”) recently published 22 Opinions on the draft lists of the Supervisory Authorities (“SAs”) of EU Member States regarding which processing operations are subject to the requirement to conduct a data protection impact assessment (“DPIA”) under the EU General Data Protection Regulation (“GDPR”).

National DPIA Lists

Article 35(4) of the GDPR states that the SAs of the EU Member States must establish, publish and communicate to the EDPB a list of processing operations that trigger the DPIA requirement under the GDPR. The following EU Member States have submitted their lists: Austria, Belgium, Bulgaria, Czech Republic, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Malta, Netherlands, Poland, Portugal, Romania, Slovakia, Sweden and the United Kingdom.

In some cases, the EDPB requests that the SAs include processing activities in their list or specify additional criteria that, when combined, would satisfy the DPIA requirement. In other cases, the EDPB requests that the SAs remove some processing activities or criteria not considered to present a high risk to individuals. The purpose of the EDPB opinions is to ensure the consistent application of the GDPR’s DPIA requirement and to limit inconsistencies among EU Member States with respect to this requirement. The national lists will not be identical because, in establishing DPIA lists, the SAs must take into account their national or regional context and national legislation.

The EDPB has emphasized that the national DPIA lists aim to improve transparency for data controllers, but are not exhaustive. Importantly, the EDPB requests that national SAs include in their DPIA lists a clear reference to the high-risk criteria for conducting DPIAs established by the Article 29 Working Party in its guidance. The draft lists should rely on and complement these guidelines.

Next Steps

After receiving the EDPB’s opinions, the SAs have two weeks to (1) communicate to the EDPB whether they intend to amend their draft list or maintain it in its current form and (2) provide an explanation for such decision.

View the 22 Opinions of the EDPB on national DPIA lists.

Vizio Agrees to $17M Settlement to Resolve Smart TV Class Action Suit

Vizio, Inc. (“Vizio”), a California-based company best known for its internet-connected televisions, agreed to a $17 million settlement that, if approved, will resolve multiple proposed consumer class actions consolidated in California federal court. The suits’ claims, which are limited to the period between February 1, 2014 and February 6, 2017, involve data-tracking software Vizio installed on its smart TVs. The software allegedly identified content displayed on Vizio TVs and enabled Vizio to determine the date, time and channel of programs, and whether a viewer watched live or recorded content. The viewing patterns were connected to viewers’ IP addresses, though never, Vizio emphasized in its press release announcing the proposed settlement, to an individual’s name, address, or similar identifying information. According to Vizio, viewing data allows advertisers and programmers to develop content better aligned with consumers’ preferences and interests.

Among other claims, the suits allege that Vizio failed to adequately disclose its surveillance practices and obtain consumers’ express consent before collecting the information. The various suits, some of which were filed in 2015, were consolidated in California’s Central District in April 2016 and subsequently survived Vizio’s motion to dismiss. Vizio had argued that several of the claims were deficient, and contended that the injunctive relief claims were moot in light of a February 2017 consent decree resolving the Federal Trade Commission’s (“FTC”) complaint over Vizio’s collection and use of viewing data and other information. To settle the FTC case, Vizio agreed, among other things, to stop unauthorized tracking, to prominently disclose its TV viewing collection practices and to get consumers’ express consent before collecting and sharing viewing information.

The parties notified the district court in June that they had reached a settlement in principle, and on October 4, 2018, they jointly moved for preliminary settlement approval. Counsel for the consumers argued that the deal is fair because revenue that Vizio obtained from sharing consumers’ data will be fully disgorged, and class members who submit a claim will receive an estimated $13 to $31 each, based on a 2 to 5 percent claims rate. Vizio also agreed to provide non-monetary relief, including revised on-screen disclosures concerning its viewing data practices and the deletion of all viewing data collected prior to February 6, 2017. The relief remains pending until the court approves the settlement.

SEC Fines Broker-Dealer $1 Million in First Enforcement Action Under Identity Theft Rule

On September 26, 2018, the SEC announced a settlement with Voya Financial Advisers, Inc. (“Voya”), a registered investment advisor and broker-dealer, for violating Regulation S-ID, also known as the “Identity Theft Red Flags Rule,” as well as Regulation S-P, the “Safeguards Rule.” Together, Regulations S-ID and S-P are designed to require covered entities to help protect customers from the risk of identity theft and to safeguard confidential customer information. The settlement represents the first SEC enforcement action brought under Regulation S-ID.

I.  The Identity Theft Red Flags Rule

Regulation S-ID covers SEC-registered broker-dealers, investment companies and investment advisors and mandates a written identity theft program, including policies and procedures designed to:

  • identify relevant types of identity theft red flags;
  • detect the occurrence of those red flags;
  • respond appropriately to the detected red flags; and
  • periodically update the identity theft program.

Covered entities are also required to ensure the proper administration of their preventative programs.

II.  The Safeguards Rule

Rule 30(a) of Regulation S-P requires financial institutions to adopt written policies and procedures that address administrative, technical and physical safeguards to protect customer records and information. It further requires that those policies and procedures be reasonably designed to (1) ensure the security and confidentiality of customer records and information; (2) protect against anticipated threats or hazards to the security or integrity of customer records and information; and (3) protect against unauthorized access to or use of customer records or information that could result in substantial harm or inconvenience to any customer.

III.  The Voya Violations

According to the SEC’s order, cyber intruders successfully impersonated Voya contractor-representatives, gaining access to a web portal that housed the personally identifiable information (“PII”) of approximately 5,600 Voya customers. Over a six-day period, intruders called Voya’s service call center and requested that three representatives’ passwords be reset; the intruders then used the temporary passwords to create new customer profiles and access customer information and documents. The order indicated that, in two of the three cases, the phone number used to call the Voya service center had previously been flagged as associated with fraudulent activity.

Three hours after the first fraudulent reset, the targeted representative allegedly notified technical support that they had not requested the reset. While Voya did take some steps in response, the order found that those steps did not include terminating the fraudulent login sessions or imposing safeguards sufficient to prevent intruders from obtaining passwords for two additional representative accounts over the next several days.

The SEC determined that Voya violated the Identity Theft Red Flags Rule because, while it had adopted an Identity Theft Prevention Program in 2009, it did not review and update this program in response to changes in the technological environment. The SEC also found that Voya failed to provide adequate training to its employees. Finally, the SEC found that Voya’s Identity Theft Program lacked reasonable policies and procedures to respond to red flags. In addition to these violations, the SEC determined that Voya violated the Safeguards Rule by failing to adopt written policies and procedures reasonably designed to safeguard customer records and information.

IV.  Aftermath and Implications

While neither admitting nor denying the SEC’s findings, Voya agreed to a $1 million fine to settle the enforcement action and will engage an independent consultant to evaluate its policies and procedures for compliance with the Safeguards Rule, Identity Theft Red Flags Rule and related regulations. The SEC additionally ordered that Voya cease and desist from committing any violations of Regulations S-ID and S-P.

The Voya settlement demonstrates that the SEC is focused on protecting consumer information and ensuring that broker-dealers, investment companies and investment advisors comply with Regulation S-ID. The settlement also shows that having policies and procedures designed to protect customer information may not alone suffice; entities subject to Regulation S-ID should frequently evaluate the adequacy of their policies and procedures designed to identify and address “red flags,” and they should ensure that all relevant employees receive comprehensive training on identity theft. Such entities must also ensure that their compliance programs are frequently updated to address changes in technology and corresponding changes to the risk environment.

NIST Seeks Public Comment on Managing Internet of Things Cybersecurity and Privacy Risks

The U.S. Department of Commerce’s National Institute of Standards and Technology recently announced that it is seeking public comment on Draft NISTIR 8228, Considerations for Managing Internet of Things (“IoT”) Cybersecurity and Privacy Risks (the “Draft Report”). The document is to be the first in a planned series of publications that will examine specific aspects of the IoT topic.

The Draft Report is designed “to help federal agencies and other organizations better understand and manage the cybersecurity and privacy risks associated with their IoT devices throughout their lifecycles.” According to the Draft Report, “[m]any organizations are not necessarily aware they are using a large number of IoT devices. It is important that organizations understand their use of IoT because many IoT devices affect cybersecurity and privacy risks differently than conventional IT devices do.”

The Draft Report identifies three high-level considerations with respect to the management of cybersecurity and privacy risks for IoT devices as compared to conventional IT devices: (1) many IoT devices interact with the physical world in ways conventional IT devices usually do not; (2) many IoT devices cannot be accessed, managed or monitored in the same ways conventional IT devices can; and (3) the availability, efficiency and effectiveness of cybersecurity and privacy capabilities are often different for IoT devices than for conventional IT devices. The Draft Report also identifies three high-level risk mitigation goals: (1) protect device security; (2) protect data security; and (3) protect individuals’ privacy.

In order to address those considerations and risk mitigation goals, the Draft Report provides the following recommendations:

  • Understand the IoT device risk considerations and the challenges they may cause to mitigating cybersecurity and privacy risks for devices in the appropriate risk mitigation areas.
  • Adjust organizational policies and processes to address the cybersecurity and privacy risk mitigation challenges throughout the IoT device lifecycle.
  • Implement updated mitigation practices for the organization’s IoT devices as you would any other changes to practices.

Comments are due by October 24, 2018.

APEC Cross-Border Privacy Rules Enshrined in U.S.-Mexico-Canada Trade Agreement

On September 30, 2018, the U.S., Mexico and Canada announced a new trade agreement (the “USMCA”) aimed at replacing the North American Free Trade Agreement. Notably, the USMCA’s chapter on digital trade recognizes “the economic and social benefits of protecting the personal information of users of digital trade” and will require the U.S., Canada and Mexico (the “Parties”) to each “adopt or maintain a legal framework that provides for the protection of the personal information of the users[.]” The frameworks should include key principles such as: limitations on collection, choice, data quality, purpose specification, use limitation, security safeguards, transparency, individual participation and accountability.

In adopting such a framework, Article 19.8(2) directs the Parties to consider the principles and guidelines of relevant international bodies, such as the APEC Privacy Framework and the OECD Recommendation of the Council concerning Guidelines Governing the Protection of Privacy and Transborder Flows of Personal Data, and Article 19.8(6) formally recognizes the APEC Cross-Border Privacy Rules (the “APEC CBPRs”) within their respective legal systems:

Art. 19.8(6) Recognizing that the Parties may take different legal approaches to protecting personal information, each Party should encourage the development of mechanisms to promote compatibility between these different regimes. The Parties shall endeavor to exchange information on the mechanisms applied in their jurisdictions and explore ways to extend these or other suitable arrangements to promote compatibility between them. The Parties recognize that the APEC Cross-Border Privacy Rules system is a valid mechanism to facilitate cross-border information transfers while protecting personal information.

In addition, Article 19.14(1)(b) provides that “the Parties shall endeavor to… cooperate and maintain a dialogue on the promotion and development of mechanisms, including the APEC Cross-Border Privacy Rules, that further global interoperability of privacy regimes.”

The APEC CBPRs were developed by the 21 APEC member economies as a cross-border transfer mechanism and comprehensive privacy program for private sector organizations to enable the accountable free flow of data across the APEC region. Organizations must be certified by a third-party, APEC-recognized Accountability Agent to participate in the system. The CBPRs are binding and enforceable against participating companies.

The USMCA must still pass the U.S. Congress, the Canadian Parliament, and the Mexican Senate.

CNIL Publishes Initial Assessment on Blockchain and GDPR

Recently, the French Data Protection Authority (the “CNIL”) published its initial assessment of the compatibility of blockchain technology with the EU General Data Protection Regulation (“GDPR”) and proposed concrete solutions for organizations wishing to use blockchain technology when implementing data processing activities.

What is a Blockchain?

A blockchain is a database in which data is stored and distributed over a large number of computers, and in which all entries into that database (called “transactions”) are visible to all users of the blockchain. It is a technology that can be used to process personal data and is not a processing activity in itself.
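
As a rough illustration of this structure, the sketch below (in Python, with hypothetical names; it is not drawn from the CNIL’s assessment) builds a toy hash-linked ledger in which each block commits to the hash of its predecessor, so that altering an earlier entry breaks every later link:

```python
import hashlib
import json
import time


def block_hash(block):
    """SHA-256 over the block's canonical JSON encoding."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


def append_block(chain, transactions):
    """Append a block that commits to the hash of the previous block."""
    previous = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({
        "index": len(chain),
        "timestamp": time.time(),
        "transactions": transactions,  # visible to every user of the chain
        "previous_hash": previous,     # the link that makes the ledger tamper-evident
    })


chain = []
append_block(chain, [{"from": "alice", "to": "bob", "amount": 3}])
append_block(chain, [{"from": "bob", "to": "carol", "amount": 1}])

# Rewriting an earlier entry invalidates every later link.
chain[0]["transactions"][0]["amount"] = 300
assert chain[1]["previous_hash"] != block_hash(chain[0])
```

A real blockchain adds consensus, signatures and peer-to-peer distribution on top of this chaining, but the tamper-evident linkage shown here is what makes on-chain data effectively immutable, a property that drives the GDPR issues the CNIL discusses below.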

Scope of the CNIL’s Assessment

The CNIL made it clear that its assessment does not apply to (1) distributed ledger technology (DLT) solutions and (2) private blockchains.

  • DLT solutions are not blockchains and are too recent and rare to allow the CNIL to carry out a generic analysis.
  • Private blockchains are defined by the CNIL as blockchains under the control of a single party that alone determines who can join the network and who can participate in the consensus process of the blockchain (i.e., the process for determining which blocks get added to the chain and what the current state is). These private blockchains are simply classic distributed databases. They do not raise specific GDPR compliance issues, unlike public blockchains (i.e., blockchains that anyone in the world can read, send transactions to – and expect to see those transactions included if valid – and participate in the consensus process of) and consortium blockchains (i.e., blockchains subject to rules that define who can participate in the consensus process or even conduct transactions).

In its assessment, the CNIL first examined the role of the actors in a blockchain network as a data controller or data processor. The CNIL then issued recommendations to minimize privacy risks to individuals (data subjects) when their personal data is processed using blockchain technology. In addition, the CNIL examined solutions to enable data subjects to exercise their data protection rights. Lastly, the CNIL discussed the security requirements that apply to blockchain.

Role of Actors in a Blockchain Network

The CNIL made a distinction between the participants who have permission to write on the chain (called “participants”) and those who validate a transaction and create blocks by applying the blockchain’s rules so that the blocks are “accepted” by the community (called “miners”). According to the CNIL, participants, who decide to submit data for validation by miners, act as data controllers when (1) the participant is an individual and the data processing is not purely personal but is linked to a professional or commercial activity; or (2) the participant is a legal person and enters data into the blockchain.

If a group of participants decides to implement a processing activity on a blockchain for a common purpose, the participants should identify the data controller upstream, e.g., by (1) creating an entity and appointing that entity as the data controller, or (2) appointing the participant who takes the decisions for the group as the data controller. Otherwise, they could all be considered as joint data controllers.

According to the CNIL, data processors within the meaning of the GDPR may be (1) smart contract developers who process personal data on behalf of the participant (the data controller), or (2) miners who validate the recording of personal data in the blockchain. The qualification of miners as data processors may raise practical difficulties in the context of public blockchains, since it requires miners to execute with the data controller a contract containing all the elements provided for in Article 28 of the GDPR. The CNIL announced that it is conducting an in-depth analysis of this issue. In the meantime, the CNIL encouraged actors to use innovative solutions enabling them to ensure compliance with the obligations imposed on data processors by the GDPR.

How to Minimize Risks to Data Subjects

  • Assessing the appropriateness of using blockchain

As part of the Privacy by Design requirements under the GDPR, data controllers must consider in advance whether blockchain technology is appropriate for implementing their data processing activities. Blockchain technology is not necessarily the most appropriate technology for all processing of personal data, and it may make it difficult for the data controller to ensure compliance with the GDPR, in particular its cross-border data transfer restrictions. In the CNIL’s view, if the blockchain’s properties are not necessary to achieve the purpose of the processing, data controllers should give priority to other solutions that allow full compliance with the GDPR.

If it is appropriate to use blockchain technology, data controllers should use a consortium blockchain that ensures better control of the governance of personal data, in particular with respect to data transfers outside of the EU. According to the CNIL, the existing data transfer mechanisms (such as Binding Corporate Rules or Standard Contractual Clauses) are fully applicable to consortium blockchains and may be implemented easily in that context, while it is more difficult to use these data transfer mechanisms in a public blockchain.

  • Choosing the right format under which the data will be recorded

As part of the data minimization requirement under the GDPR, data controllers must ensure that the data is adequate, relevant and limited to what is necessary in relation to the purposes for which the data is processed.

In this respect, the CNIL recalled that the blockchain may contain two main categories of personal data, namely (1) the credentials of participants and miners and (2) additional data entered into a transaction (e.g., diploma, ownership title, etc.) that may relate to individuals other than the participants and miners.

The CNIL noted that it was not possible to further minimize the credentials of participants and miners since such credentials are essential to the proper functioning of the blockchain. According to the CNIL, the retention period of this data must necessarily correspond to the lifetime of the blockchain.

With respect to additional data, the CNIL recommended using solutions in which (1) data in cleartext form is stored outside of the blockchain and (2) only information proving the existence of the data is stored on the blockchain (i.e., cryptographic commitment, fingerprint of the data obtained by using a keyed hash function, etc.).

In situations in which none of these solutions can be implemented, and when this is justified by the purpose of the processing and the data protection impact assessment reveals that the residual risks are acceptable, the data could be stored either with a non-keyed hash function or, in the absence of alternatives, “in the clear.”
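
As a concrete illustration of the CNIL’s preferred format, the sketch below (in Python; the record, names and storage layer are hypothetical) keeps the cleartext record in an off-chain store controlled by the data controller and writes only a keyed-hash fingerprint (HMAC-SHA256) to the chain:

```python
import hashlib
import hmac
import os

off_chain_store = {}  # cleartext records, held off-chain by the data controller
key = os.urandom(32)  # secret key known only to the data controller


def fingerprint(record, key):
    """Keyed hash (HMAC-SHA256) proving the record's existence without revealing it."""
    return hmac.new(key, record, hashlib.sha256).hexdigest()


record = b"diploma: Jane Doe, M.Sc., 2018"  # hypothetical "additional data"
digest = fingerprint(record, key)

off_chain_store[digest] = record  # cleartext stays off the chain
on_chain_entry = digest           # only this fingerprint is recorded on-chain

# Anyone holding the record and the key can later verify the on-chain entry.
assert hmac.compare_digest(on_chain_entry, fingerprint(record, key))
```

Without the key, the on-chain digest reveals nothing about the underlying record, which is why the CNIL treats the keyed variant as preferable to a plain (non-keyed) hash.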

How to Ensure that Data Subjects Can Effectively Exercise Their Data Protection Rights

According to the CNIL, the exercise of the right to information, the right of access and the right to data portability does not raise any particular difficulties in the context of blockchain technology (i.e., data controllers may provide notice of the data processing and may respond to data subjects’ requests for access to their personal data or for data portability).

However, the CNIL recognized that it is technically impossible for data controllers to meet data subjects’ requests for erasure of their personal data when the data is entered into the blockchain: once in the blockchain system, the data can no longer be rectified or erased.

In this respect, the CNIL pointed out that technical solutions exist to move toward compliance with the GDPR where the data is stored on the blockchain using a cryptographic method (see above). Deleting (1) the data stored outside of the blockchain and (2) the verification elements stored on the blockchain would render the data almost inaccessible.
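
Continuing the hypothetical sketch above, an erasure request would then be handled by deleting the off-chain cleartext and destroying the controller’s secret key; the fingerprint remains immutably on the chain, but it can no longer be verified or linked back to the individual:

```python
def erase(digest):
    """Handle an erasure request for a record committed to the chain."""
    global key
    off_chain_store.pop(digest, None)  # delete the off-chain cleartext
    key = None                         # destroy the secret key
    # The on-chain digest cannot be removed, but without the record and
    # the key it no longer reveals or verifies anything.


erase(on_chain_entry)
```

In practice, a per-record key (rather than the single key used here for brevity) would allow individual records to be erased without affecting others.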

With respect to the right to rectification of personal data, the CNIL recommended that the data controller enter the updated data into a new block, since a subsequent transaction may cancel the first transaction even though the first transaction will still appear in the chain. The same solutions as those applicable to requests for erasure could be applied to inaccurate data if that data must be erased.

Security Requirements

The CNIL considered that the security requirements under the GDPR remain fully applicable in the blockchain context.

Next Steps

In the CNIL’s view, the challenges posed by blockchain technology call for a response at the European level. The CNIL announced that it will cooperate with other EU supervisory authorities to propose a robust and harmonized approach to blockchain technology.

Chipotle Consumer Plaintiffs’ Putative Class Case Survives in Part

On September 26, 2018, the U.S. District Court for the District of Colorado (the “Court”) refused to dismiss all putative class claims against Chipotle Mexican Grill, Inc. (“Chipotle”). This litigation arose from a 2017 data breach in which hackers stole customers’ payment card and other personal information by using malicious software to access the point-of-sale systems at Chipotle’s locations.

Chipotle moved to dismiss all claims, arguing that two of the named plaintiffs – Plaintiff Lawson and Plaintiff Baker – lacked standing and that all other plaintiffs failed to state a claim. The motion was first considered by a United States Magistrate Judge, who recommended granting only part of Chipotle’s requested relief. Both Plaintiffs and Chipotle objected to portions of the recommendation. The District Court Judge agreed with the recommendation in part.

The Court first found that Plaintiff Lawson’s allegations of debit card misuse, time spent obtaining a new debit card, inability to receive cash back awards on certain purchases, and the cost to expedite delivery of a new card for impending travel all demonstrated injury in fact sufficient for standing. It also determined that more than just Plaintiff Baker’s name and payment card number may have been stolen, thus alleging facts sufficient to establish an impending injury.

The District Court Judge further found that certain allegations failed to state claims. Specifically, the Court dismissed claims for: (1) negligence; (2) negligence per se; (3) violation of the Colorado Consumer Protection Act; (4) unjust enrichment; and (5) violation of the Illinois Uniform Deceptive Trade Practices Act. However, the following claims survived Chipotle’s dismissal efforts: (1) breach of implied contract; (2) fraudulent omission claims (under Arizona, California, and Illinois consumer protection laws); (3) violation of California’s Unfair Competition Law; and (4) various damages claims (under California, Illinois, and Missouri consumer protection laws).

View the Court’s order.

California Enacts New Requirements for Internet of Things Manufacturers

On September 28, 2018, California Governor Jerry Brown signed into law two identical bills regulating Internet-connected devices sold in California. S.B. 327 and A.B. 1906 (the “Bills”), aimed at the “Internet of Things,” require that manufacturers of connected devices—devices that are “capable of connecting to the Internet, directly or indirectly,” and are assigned an Internet Protocol or Bluetooth address, such as Nest’s thermostat—outfit the products with “reasonable” security features by January 1, 2020; or, in the Bills’ words: “equip [a] device with a reasonable security feature or features that are appropriate to the nature and function of the device, appropriate to the information it may collect, contain, or transmit, and designed to protect the device and any information contained therein from unauthorized access, destruction, use, modification, or disclosure[.]”

According to Bloomberg Law, the Bills’ non-specificity regarding what “reasonable” features include is intentional; it is up to manufacturers to decide what steps to take. Manufacturers argue that the Bills are egregiously vague and that they do not apply to companies that import and resell connected devices made in other countries under their own labels.

The Bills are opposed by the Custom Electronic Design & Installation Association, Entertainment Software Association and National Electrical Manufacturers Association. They are sponsored by Common Sense Kids Action; supporters include the Consumer Federation of America, Electronic Frontier Foundation and Privacy Rights Clearinghouse.

Four Companies Settle FTC Allegations Regarding False EU-U.S. Privacy Shield Certifications

On September 27, 2018, the Federal Trade Commission announced settlement agreements with four companies – IDmission, LLC (“IDmission”), mResource LLC (doing business as Loop Works, LLC) (“mResource”), SmartStart Employment Screening, Inc. (“SmartStart”) and VenPath, Inc. (“VenPath”) – over allegations that each company had falsely claimed to have a valid certification under the EU-U.S. Privacy Shield framework. The FTC alleged that SmartStart, VenPath and mResource continued to post statements on their websites about their participation in the Privacy Shield after allowing their certifications to lapse. IDmission had applied for a Privacy Shield certification but never completed the necessary steps to be certified.

In addition, the FTC alleged that both VenPath and SmartStart failed to comply with a provision under the Privacy Shield requiring companies that cease participation in the Privacy Shield framework to affirm to the Department of Commerce that they will continue to apply the Privacy Shield protections to personal information collected while participating in the program.

As part of the proposed settlements with the FTC, each company is prohibited from misrepresenting its participation in any privacy or data security program sponsored by the government or any self-regulatory or standard-setting organization, and must comply with FTC reporting requirements. Further, VenPath and SmartStart must either (1) continue to apply the Privacy Shield protections to personal information collected while participating in the Privacy Shield, (2) protect it by another means authorized by the Privacy Shield framework, or (3) return or delete the information within 10 days of the FTC’s order.

“Companies need to know that if they fail to honor their Privacy Shield commitments, or falsely claim participation in the Privacy Shield framework, we will hold them accountable,” said Andrew Smith, director of the FTC’s Bureau of Consumer Protection. “We have now brought enforcement actions against eight companies related to the Privacy Shield, and we will continue to aggressively enforce the Privacy Shield and other cross-border privacy frameworks.”

Update: On November 19, 2018, the Commission voted to give final approval to the settlements with the four companies.

CIPL Submits Comments on Draft Indian Data Protection Bill

On September 26, 2018, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth LLP submitted formal comments to the Indian Ministry of Electronics and Information Technology on the draft Indian Data Protection Bill 2018 (“Draft Bill”).

CIPL’s comments on the Draft Bill focus on several key issues that are of particular importance for any modern-day data protection law, including increased emphasis on accountability and the risk-based approach to data processing, interoperability with other data protection laws globally, the significance of having a variety of legal bases for processing and not overly relying on consent, the need for extensive and flexible data transfer mechanisms, and the importance of maximizing the effectiveness of the data protection authority.

Specifically, the comments address the following key issues:

  • the Draft Bill’s extraterritorial scope;
  • the standard for anonymization;
  • notice requirements;
  • accountability and the risk-based approach;
  • legal bases for processing, including the importance of the reasonable purposes ground;
  • sensitive personal data;
  • children’s data;
  • individual rights;
  • data breach notification;
  • Data Protection Impact Assessments;
  • record-keeping requirements and data audits;
  • Data Protection Officers;
  • the adverse effects of a data localization requirement;
  • cross-border transfers;
  • codes of practice; and
  • the timeline for adoption.

These comments were formed as part of CIPL’s ongoing engagement in India. In January 2018, CIPL responded to the Indian Ministry of Electronics and Information Technology’s public consultation on the White Paper of the Committee of Experts on a Data Protection Framework for India.

NTIA Seeks Public Comment on Approach to Consumer Privacy with an Eye Toward Building Better Privacy Protections

On September 26, 2018, the U.S. Department of Commerce’s National Telecommunications and Information Administration (“NTIA”) announced that it is seeking public comments on a proposed approach to advancing consumer privacy. The approach is divided into two parts: (1) a set of desired user-centric privacy outcomes of organizational practices, including transparency, control, reasonable minimization (of data collection, storage length, use and sharing), security, access and correction, risk management and accountability; and (2) a set of high-level goals that describe the outlines of the ecosystem that should be created to provide those protections, including harmonizing the regulatory landscape, balancing legal clarity and the flexibility to innovate, ensuring comprehensive application, employing a risk and outcome-based approach, creating mechanisms for interoperability with international norms and frameworks, incentivizing privacy research, ensuring that the Federal Trade Commission has the resources and authority to enforce, and ensuring scalability.

The NTIA is specifically looking to the public to respond with comments on the following questions:

  • Are there other outcomes or goals that should be included, or outcomes or goals that should be expanded upon as separate items?
  • Are the descriptions for the outcomes and goals clear, or are there any issues raised by how any of them are described?
  • Are there any risks that accompany the list of outcomes, the list of goals or the general approach taken?
  • Are there any aspects of the approach that could be implemented or enhanced through Executive action or non-regulatory actions, and if so, what actions?
  • Should further explorations be made regarding additional commercial data privacy-related issues, including any recommended focus and desired outcomes?
  • Are there any aspects of the approach that may be achieved by other means, such as through statutory changes?
  • Do any terms used in the approach require more precise definitions, including suggestions for better definitions and additional terms?
  • Do changes need to be made with regard to the FTC’s resources, processes and/or statutory authority?
  • If all or some of the outcomes or goals described in this approach were replicated by other countries, do you believe it would be easier for U.S. companies to provide goods and services in those countries?
  • Are there other ways to achieve U.S. leadership that are not included in the approach?
  • Are there any high-level goals in this approach that would be detrimental to achieving U.S. leadership?

Comments are due by October 26, 2018, and may be submitted by email. Additional information can be found in the Federal Register Notice.

Senate Commerce Committee Holds Hearing on Examining Consumer Privacy Protections

On September 26, 2018, the U.S. Senate Committee on Commerce, Science, and Transportation convened a hearing, “Examining Consumer Privacy Protections,” with representatives of major technology and communications firms to discuss approaches to protecting consumer privacy, how the U.S. might craft a federal privacy law, and companies’ experiences in implementing the EU General Data Protection Regulation (“GDPR”) and the California Consumer Privacy Act (“CCPA”).

After introductory remarks by Senator and Chairman of the Committee John Thune (R-SD) and Senator Bill Nelson (D-FL), representatives from AT&T, Amazon, Google, Twitter, Apple and Charter Communications provided testimony on the importance of protecting consumer privacy, the need for clear rules that still ensure the benefits that flow from the responsible use of data, and key principles that should be included in any federal privacy law. A question and answer session followed, with various senators posing a variety of questions to the witnesses, covering topics such as comparisons to global data privacy regimes, the current and potential future authority of the Federal Trade Commission, online behavioral advertising and political advertising, current privacy tools and issues surrounding children’s data.

Key views expressed by the witnesses at the hearing include:

  • support for the creation of a federal privacy law and a preference for preemption rather than a patchwork of different state privacy laws;
  • agreement that the FTC should be the regulator for a federal privacy law but the authority of the FTC under such a law should be discussed and examined further;
  • concern that a federal privacy law might simply copy the GDPR or the CCPA; a federal privacy law should seek to avoid the difficulties and unintended consequences created by those laws, and the U.S. should put its own stamp on what the law should be; and
  • agreement that a federal law should not be unduly burdensome for small and medium sized enterprises.

An archived webcast of the hearing is available on the Senate Commerce Committee’s website.

The hearing marked the first of several as the U.S. debates whether to adopt federal privacy legislation. The next hearing is scheduled for early October, when Andrea Jelinek, head of the European Data Protection Board, California privacy activist Alastair Mactaggart and representatives from consumer organizations will participate and answer questions on consumer privacy, the GDPR and the CCPA.

Uber Settles with 50 State Attorneys General for $148 Million In Connection with 2016 Data Breach

On September 26, 2018, Uber Technologies Inc. (“Uber”) agreed to a settlement (the “Settlement”) with all 50 U.S. state attorneys general (the “Attorneys General”) in connection with a 2016 data breach affecting the personal information (including driver’s license numbers) of approximately 607,000 Uber drivers nationwide, as well as approximately 57 million consumers’ email addresses and phone numbers. The Attorneys General alleged that after Uber learned of the breach, which occurred in November 2016, the company paid intruders a $100,000 ransom to delete the data. The Attorneys General alleged that Uber failed to promptly notify affected individuals of the incident, as required under various state laws, instead notifying affected customers and drivers of the breach one year later in November 2017. 

As reported by the Pennsylvania Office of the Attorney General, the Settlement will require Uber to pay $148 million to the Attorneys General, to be divided among the 50 states. In addition, Uber must undertake certain data security measures; specifically, Uber must:

  • comply with applicable breach notification and consumer protection laws regarding protecting personal information;
  • implement measures to protect user data stored on third-party platforms;
  • implement stricter internal password policies for employee access to Uber’s network;
  • develop and implement an overall data security policy to address the collection and protection of personal information, including assessing potential data security risks;
  • implement additional data security measures with respect to personal information stored on Uber’s network;
  • implement a corporate integrity program to ensure appropriate reporting channels for internal ethics concerns or complaints; and
  • engage a third-party expert to conduct regular assessments of Uber’s data security efforts and make recommendations for improvement, as appropriate.

The Settlement is pending court approval. In a statement, California Attorney General Xavier Becerra said, “Uber’s decision to cover up this breach was a blatant violation of the public’s trust. The company failed to safeguard user data and notify authorities when it was exposed. Consistent with its corporate culture at the time, Uber swept the breach under the rug in deliberate disregard of the law.”

We previously reported that the Federal Trade Commission modified a 2017 settlement with Uber after learning of the company’s response to the 2016 breach.

Update: In addition, as reported by Law360, on November 27, 2018, Uber was fined by both the UK Information Commissioner’s Office (“ICO”) and the Dutch Data Protection Authority (“DPA”). The ICO’s fine of £385,000 was a result of Uber’s failure to protect its customers’ personal information. The Dutch DPA fined Uber €600,000 “for violating the Dutch data breach regulation,” which requires notification of the breach within 72 hours.

CNIL Publishes Initial Assessment of GDPR Implementation

On September 25, 2018, the French Data Protection Authority (the “CNIL”) published the first results of its factual assessment of the implementation of the EU General Data Protection Regulation (GDPR) in France and in Europe. When making this assessment, the CNIL first recalled the current status of the French legal framework, and provided key figures on the implementation of the GDPR from the perspective of privacy experts, private individuals and EU supervisory authorities. The CNIL then announced that it will adopt new GDPR tools in the near future. Read the full factual assessment (in French).

Upcoming Consolidation of the French Legal Framework

The French Data Protection Act (the “Act”) and its implementing Decree were amended by a law and a Decree published on June 21 and August 3, 2018, respectively, in order to bring French law in line with the GDPR and implement the EU Data Protection Directive for Police and Criminal Justice Authorities. However, some provisions of the Act remain unchanged even though they are no longer applicable. In addition, the Act does not mention all of the new obligations imposed by the GDPR or the new rights of data subjects, and is therefore incomplete. The CNIL recalled that an ordinance is expected to be adopted by the end of this year to rewrite the Act and improve the readability of the French data protection framework.

Gradual Rolling Out of the GDPR by Privacy Experts

The CNIL noted that 24,500 organizations have appointed a data protection officer (“DPO”), corresponding to 13,000 individual DPOs (a single DPO may be appointed by several organizations). In comparison, only 5,000 DPOs were appointed under the previous data protection framework. Since May 25, 2018, the CNIL has also received approximately 7 data breach notifications per day, totaling more than 600 data breach notifications affecting 15 million individuals. The CNIL continues to receive a large number of authorization requests in the health sector (more than 100 requests filed since May 25, 2018, in particular for clinical trial purposes).

Individuals’ Unprecedented GDPR Awareness

Since May 25, 2018, the CNIL has received 3,767 complaints from individuals. This represents an increase of 64% compared to the number of complaints received during the same period in 2017, and can be explained by the widespread media coverage of the GDPR and of cases such as Cambridge Analytica. EU supervisory authorities are currently handling more than 200 cross-border complaints under the cooperation procedure provided for by the GDPR, and the CNIL is a concerned supervisory authority in most of these cases.

Effective European Cooperation Under the GDPR

The CNIL recalled that a total of 18 GDPR guidelines have been adopted at the EU level and that 7 guidelines are currently being drawn up by the European Data Protection Board (“EDPB”) (e.g., guidelines on the territorial scope of the GDPR, data transfers and video surveillance). Further, the IT platform chosen to support the cooperation and consistency procedures under the GDPR has been operational since May 25, 2018. With respect to Data Protection Impact Assessments (“DPIAs”), the CNIL has submitted to the EDPB a list of processing operations requiring a DPIA. Once the EDPB validates the list, the CNIL will publish it together with additional guidelines.

Next Steps

In terms of upcoming actions and initiatives, the CNIL announced that it will shortly propose the following new tools:

  • “Referentials” (i.e., guidelines) relating to the processing of personal data for HR and customer management purposes. These referentials are intended to update the CNIL’s well-established doctrine in light of the new requirements of the GDPR. The draft referentials will be open for public consultation, and the CNIL announced its intention to promote the finalized referentials at the EU level.
  • A Model Regulation regarding biometric data. Under Article 9(4) of the GDPR, EU Member States may maintain and introduce further conditions, including limitations, with regard to the processing of biometric data. France introduced such conditions by amending the French Data Protection Act to allow the processing of biometric data for the purposes of controlling access to a company’s premises and/or to the devices and apps used by staff members to perform their job duties, provided that the processing complies with the CNIL’s Model Regulation. Compliance with the Model Regulation constitutes an exception to the prohibition on processing biometric data.
  • A first certification procedure. In May 2018, the CNIL launched a public consultation on DPO certification, which ended on June 22, 2018. The CNIL will finalize the referentials relating to DPO certification by the end of this month.
  • Compliance packs. The CNIL confirmed that it will continue to adopt compliance packs (i.e., guidelines for a particular sector or industry). The CNIL also announced its intention to promote some of these compliance packs at the EU level (such as the compliance pack on connected vehicles) in order to develop a common European doctrine that could be endorsed by the EDPB.
  • Codes of conduct. A dozen codes of conduct are currently being prepared, in particular codes of conduct on medical research and cloud infrastructures.
  • A massive open online course. This course will help participants familiarize themselves with the fundamental principles of the GDPR.

CCPA Amendment Bill Signed Into Law

On September 23, 2018, California Governor Jerry Brown signed into law SB-1121 (the “Bill”), which makes limited substantive and technical amendments to the California Consumer Privacy Act of 2018 (“CCPA”). The Bill takes effect immediately,  and delays the California Attorney General’s enforcement of the CCPA until six months after publication of the Attorney General’s implementing regulations, or July 1, 2020, whichever comes first. 

We have previously posted about the modest changes that SB-1121 makes to the CCPA. As reported in BNA Privacy Law Watch, the California legislature may consider broader substantive changes to the CCPA in 2019.

ICO Issues First Enforcement Action Under the GDPR

The Information Commissioner’s Office (“ICO”) in the UK has issued the first formal enforcement action under the EU General Data Protection Regulation (“GDPR”) and the UK Data Protection Act 2018 (the “DPA”) against Canadian data analytics firm AggregateIQ Data Services Ltd. (“AIQ”). The enforcement action, in the form of an Enforcement Notice served under section 149 of the DPA, requires AIQ to “cease processing any personal data of UK or EU citizens obtained from UK political organizations or otherwise for the purposes of data analytics, political campaigning or any other advertising purposes.”

AIQ uses data to target online advertisements at voters, and its clients include UK political organizations, in particular Vote Leave, BeLeave, Veterans for Britain and the DUP Vote to Leave. These organizations provide personal data to AIQ for the purposes of targeting individuals with political advertising messages on social media.

Although AIQ is not established in the EU, the ICO determined that, because AIQ’s processing activities relate to the monitoring of data subjects’ behavior taking place within the EU, AIQ is subject to the GDPR under the territorial scope provisions of Article 3(2)(b).

AIQ was found to be in breach of Articles 5(1)(a) through 5(1)(c) and Article 6 of the GDPR for processing personal data in a way that data subjects were not aware of, for purposes they would not have expected, and without a lawful basis for processing. In addition, AIQ failed to provide the transparency information required under Article 14 of the GDPR.

AIQ is challenging the ICO’s decision and has exercised its right of appeal to the First-tier Tribunal under section 162(1)(c) of the DPA.

UK ICO Fines Equifax for 2017 Breach

Recently, the UK Information Commissioner’s Office (“ICO”) fined credit rating agency Equifax £500,000 for failing to protect the personal data of up to 15 million UK individuals. The data was compromised during a cyber attack that occurred between May 13 and July 30, 2017, which affected 146 million customers globally. Although Equifax’s systems in the U.S. were targeted, the ICO found that the credit agency’s UK arm, Equifax Ltd, failed to take appropriate steps to ensure that its parent firm, which processed this data on its behalf, had protected the information. The ICO investigation uncovered a number of serious contraventions of the UK Data Protection Act 1998 (the “DPA”), resulting in the ICO imposing on Equifax Ltd the maximum fine available.

The compromised UK data was controlled by Equifax Ltd and was processed by Equifax Ltd’s parent company and data processor, Equifax Inc. The breach affected Equifax’s Identity Verifier (“EIV”) dataset, which related to the EIV product, and its GCS dataset. The compromised data included names, telephone numbers, driver’s license numbers, financial details, dates of birth, security questions and answers (in plain text), passwords (in plain text) and credit card numbers (obscured). The ICO investigation found that there had been breaches of five of the eight data protection principles of the DPA. In particular, the ICO commented in detail on Equifax’s breaches of the fifth and seventh principles and noted the following:

  • Personal data processed for any purpose or purposes shall not be kept for longer than is necessary for that purpose or those purposes (Fifth Principle):
    • In 2016, Equifax Ltd moved the EIV product from the U.S. to be hosted in the UK. Once the EIV product had been migrated to the UK, it was no longer necessary to keep any of the EIV dataset, in particular the compromised UK data, on Equifax Inc.’s systems. The EIV dataset, however, was not deleted from Equifax’s U.S. systems and was subsequently compromised.
    • With respect to the GCS datasets stored on the U.S. system, Equifax Ltd was not sufficiently aware of the purpose(s) for which they were being processed until after the breach. In the absence of a lawful basis for processing (in breach of the First Principle of the DPA), the personal data should have been deleted. The data was not deleted, and Equifax Ltd failed to follow up or check that all UK data had been removed from Equifax’s U.S. systems.
  • Appropriate technical and organizational measures shall be taken against unauthorized or unlawful processing of personal data and against accidental loss or destruction of, or damage to, personal data (Seventh Principle):
    • Equifax Ltd failed to undertake an adequate risk assessment of the security arrangements that Equifax Inc. had in place, prior to transferring data to Equifax Inc. or following the transfer.
    • Equifax Ltd and Equifax Inc. had various data processing agreements in place; however, these agreements failed to (1) provide appropriate safeguards (not limited to security requirements), and (2) properly incorporate the EU Standard Contractual Clauses (in breach of the Eighth Principle of the DPA).
    • Equifax Ltd had a clear contractual right to audit Equifax Inc.’s compliance with its obligations under the aforementioned data processing agreements. Despite this right, Equifax Ltd failed to exercise it to check Equifax Inc.’s compliance with its obligations.
    • Communication procedures between Equifax Ltd and Equifax Inc. were deemed inadequate. In particular, this was highlighted by the delay of over one month between Equifax Inc. becoming aware of the breach and Equifax Ltd being informed of it.
    • Equifax Ltd failed to ensure that adequate security measures were in place, or to notice that Equifax Inc. had failed to take such measures, including:
      • failing to adequately encrypt personal data or protect user passwords. The ICO did not accept Equifax Ltd’s reasons (i.e., fraud prevention and password analysis) for storing passwords in a plaintext file, particularly as it was a direct breach of Equifax Ltd’s own Cryptology Standards, and the stated aims could be achieved by other more secure means (see the illustrative sketch following this list);
      • failing to address known IT vulnerabilities, including those identified and reported to senior employees. In particular, Equifax had been warned about a critical vulnerability in its systems by the U.S. Department of Homeland Security in March 2017. This vulnerability was given a score of 10.0 on the Common Vulnerability Scoring System (“CVSS”). A CVSS score of 10.0 is the highest score, indicating a critical vulnerability that requires immediate attention. Equifax Inc. failed to patch all vulnerable systems and this vulnerability in its consumer-facing disputes portal was exploited by the cyber attack; and
      • not having fully up-to-date software, failing to undertake sufficient and regular system scans, and failing to ensure appropriate network segregation (some UK data was stored together with U.S. data, making it difficult to differentiate).
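
The “other more secure means” referenced by the ICO reflect standard practice: passwords are ordinarily stored as salted, one-way hashes rather than in plain text, and aims such as fraud analysis can generally be met by comparing hashes instead of retaining the passwords themselves. The short Python sketch below is our own illustration of that approach using only the standard library; it is an assumed example, not drawn from the Enforcement Notice or from Equifax’s actual systems.

    # Illustrative only: salted one-way password storage as an
    # alternative to keeping passwords in a plaintext file.
    import hashlib
    import hmac
    import os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        """Derive a salted digest; store (salt, digest), never the raw password."""
        salt = os.urandom(16)  # a unique per-user salt defeats precomputed tables
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        """Re-derive the digest from the stored salt and compare in constant time."""
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return hmac.compare_digest(candidate, digest)

    # Usage: the stored (salt, digest) pair suffices to verify a login,
    # so the plaintext password never needs to be written to disk.
    salt, digest = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, digest)
    assert not verify_password("wrong guess", salt, digest)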

Since the breach occurred prior to May 25, 2018, it was dealt with in accordance with the DPA. The Equifax fine represents the maximum available under the DPA; in light of the aggravating factors identified by the ICO, including the number of affected data subjects, the type of data at risk and the multiple, systematic and serious inadequacies, it is likely that the fine would have been considerably higher had the EU General Data Protection Regulation been in force when the breach occurred.

New Federal Credit Freeze Law Eliminates Fees, Provides for Year-Long Fraud Alerts

Effective September 21, 2018, Section 301 of the Economic Growth, Regulatory Relief, and Consumer Protection Act (the “Act”) requires consumer reporting agencies to provide free credit freezes and year-long fraud alerts to consumers throughout the country. Under the Act, consumer reporting agencies must each set up a webpage designed to enable consumers to request credit freezes, fraud alerts, extended fraud alerts and active duty fraud alerts. The webpage must also give consumers the ability to opt out of the use of information in a consumer report to send the consumer a solicitation of credit or insurance. Consumers may find links to these webpages on the Federal Trade Commission’s Identity Theft website.

The Act also enables parents and guardians to freeze the credit of children under the age of 16. Guardians or conservators of incapacitated persons may also request credit freezes on their behalf.

Section 302 of the Act provides additional protections for active duty military. Under this section, consumer reporting agencies must offer free electronic credit monitoring to all active duty military.

For more information, read the FTC’s blog post.

Apple to Require Privacy Policies for All New Apps and App Updates

On August 30, 2018, Apple Inc. announced that, pursuant to a June update to its App Store Review Guidelines, each developer must provide a privacy policy as part of the app review process and satisfy specific content requirements in that policy. Effective October 3, 2018, all new apps and app updates must include a link to the developer’s privacy policy before they can be submitted for distribution to users through the App Store or through TestFlight external testing.

The privacy policy must detail what data the app gathers, how the data will be collected and how such data will be used. The policy also must confirm that third parties with whom an app shares user data will provide the same or equal protection of user data as stated in the app’s privacy policy and the App Store Review Guidelines. Lastly, the policy must explain the developer’s data retention policy, as well as include information on how users can revoke consent to data collection or request deletion of the collected user data. Developers will be able to edit an app’s privacy policy only when submitting a new version of the app.

Canadian Regulator Seeks Public Comment on Breach Reporting Guidance

As reported in BNA Privacy Law Watch, the Office of the Privacy Commissioner of Canada (the “OPC”) is seeking public comment on recently released guidance (the “Guidance”) intended to assist organizations with understanding their obligations under the federal breach notification mandate, which will take effect in Canada on November 1, 2018. 

Breach notification in Canada has historically been governed at the provincial level, with only Alberta requiring omnibus breach notification. As we previously reported, effective November 1, organizations subject to the federal Personal Information Protection and Electronic Documents Act (“PIPEDA”) will be required to notify affected individuals and the OPC of security breaches involving personal information “that pose a real risk of significant harm to individuals.” The Guidance, which is structured in a question-and-answer format, is intended to assist companies in complying with the new reporting obligation. The Guidance describes, among other information, (1) who is responsible for reporting a breach, (2) what types of incidents must be reported, (3) how to determine whether there is a “real risk of significant harm,” (4) what information must be included in a notification to the OPC and affected individuals, and (5) an organization’s recordkeeping requirements with respect to breaches of personal information, irrespective of whether such breaches are notifiable. The Guidance also contains a proposed breach reporting form for notifying the OPC pursuant to the new notification obligation.

The OPC is accepting public comment on the Guidance, including on the proposed breach reporting form. The deadline for interested parties to submit comments is October 2, 2018.

Software Company Settles with New Jersey AG Over Data Breach

On September 7, 2018, the New Jersey Attorney General announced a settlement with data management software developer Lightyear Dealer Technologies, LLC, doing business as DealerBuilt, resolving an investigation by the state Division of Consumer Affairs into a data breach that exposed the personal information of car dealership customers in New Jersey and across the country. The breach occurred in 2016, when a researcher exposed a gap in the company’s security and gained access to unencrypted files containing names, addresses, Social Security numbers, driver’s license numbers, bank account information and other data belonging to thousands of individuals, including at least 2,471 New Jersey residents.

To resolve the investigation, DealerBuilt agreed to undertake a number of changes to its security practices to help prevent similar breaches from occurring in the future, including:

  • the creation of an information security program to be implemented and maintained by a chief security officer;
  • the maintenance and implementation of encryption protocols for personal information stored on laptops or other portable devices or transmitted wirelessly;
  • the maintenance and implementation of policies that clearly define which users have authorization to access its computer network;
  • the maintenance of enforcement mechanisms to approve or disapprove access requests based on those policies; and
  • the maintenance of data security assessment tools, including vulnerability scans.

In addition to the above, DealerBuilt agreed to pay an $80,784 settlement amount, consisting of $49,420 in civil penalties and $31,364 in reimbursement of the Division’s attorneys’ fees, investigative costs and expert fees.

Read the consent order resolving the investigation.

NIST Launches Privacy Framework Effort

On September 4, 2018, the Department of Commerce’s National Institute of Standards and Technology (“NIST”) announced a collaborative project to develop a voluntary privacy framework to help organizations manage privacy risk. The announcement states that the effort is motivated by innovative new technologies, such as the Internet of Things and artificial intelligence, as well as the increasing complexity of network environments and detail of user data, which make protecting individuals’ privacy more difficult. “We’ve had great success with broad adoption of the NIST Cybersecurity Framework, and we see this as providing complementary guidance for managing privacy risk,” said Under Secretary of Commerce for Standards and Technology and NIST Director Walter G. Copan.

The goals for the framework stated in the announcement include providing an enterprise-level approach that helps organizations prioritize strategies for flexible and effective privacy protection solutions and bridge gaps between privacy professionals and senior executives so that organizations can respond effectively to these challenges without stifling innovation. To kick off the effort, the NIST has scheduled a public workshop on October 16, 2018, in Austin, Texas, which will occur in conjunction with the International Association of Privacy Professionals’ “Privacy. Security. Risk. 2018” conference. The Austin workshop is the first in a series planned to collect current practices, challenges and requirements in managing privacy risks in ways that go beyond common cybersecurity practices.

In parallel with the NIST’s efforts, the Department of Commerce’s National Telecommunications and Information Administration (“NTIA”) is “developing a domestic legal and policy approach for consumer privacy.” The announcement stated that the NTIA is coordinating its efforts with the department’s International Trade Administration “to ensure consistency with international policy objectives.”

Uber Data Breach Class Action Must Proceed to Arbitration

On September 5, 2018, the U.S. District Court for the Central District of California held that a class action arising from a 2016 Uber Technologies Inc. (“Uber”) data breach must proceed to arbitration. The case was initially filed after a 2016 data breach that affected approximately 600,000 Uber drivers and 57 million Uber customers. Upon registration with Uber, the drivers and customers entered into a service agreement that contained an arbitration provision. Based on this provision, the defendants moved to compel arbitration. They argued that the provision’s express language delegated the threshold issue of whether the case should be arbitrated (also called an issue of “substantive arbitrability”) to an arbitrator, not to the court. The plaintiffs countered, arguing that the arbitration clause was both inapplicable to the 2016 data breach and unconscionable, and that Uber customers did not receive reasonable notice of the electronic terms agreement when they registered.

The court rejected each of the plaintiffs’ arguments. First, citing Mohamed v. Uber Techs., Inc., 848 F.3d 1201, 1209 (9th Cir. 2016), the court held that the agreement’s language “clearly and unmistakably” delegated to the arbitrator the threshold and substantive issue of whether the 2016 breach was one that should be arbitrated. Second, whether the arbitration provision was unconscionable was similarly a question of substantive arbitrability “expressly delegated to the arbitrator.” Third, the court noted that the plaintiffs offered no evidence of confusion or lack of notice, and that many other courts had found similar electronic notice to be reasonable.

The case has been stayed pending completion of the arbitration.

Belgium Publishes Law Adapting the Belgian Legal Framework to the GDPR

On September 5, 2018, the Law of 30 July 2018 on the Protection of Natural Persons with regard to the Processing of Personal Data (the “Law”) was published in the Belgian Official Gazette.

This is the second step in adapting the Belgian legal framework to the EU GDPR after the Law of 3 December 2017 Creating the Data Protection Authority, which reformed the Belgian Data Protection Authority.

The Law is available in French and Dutch.

EU Begins Formal Approval for Japan Adequacy Decision

On September 5, 2018, the European Commission (the “Commission”) announced in a press release the launch of the procedure to formally adopt the Commission’s adequacy decision with respect to Japan.

The press release notes that the EU-Japan talks on personal data protection were completed in July 2018, and announces the publication of the draft adequacy decision and related documents which, among other things, set forth the additional safeguards Japan will accord EU personal data that is transferred to Japan. According to the release, Japan is undertaking a similar formal adoption process concerning the reciprocal adequacy findings between the EU and Japan.

The adequacy decision is intended to ensure that Japan provides privacy protections for EU personal data that are “essentially equivalent” to the EU standard. The key elements of the agreement include:

  • Specific safeguards to be applied by Japan to bridge the difference between EU and Japanese standards on issues such as sensitive data, onward transfer of EU data to third countries, and the right to access and rectification.
  • Enforcement by the Japan Personal Information Protection Commission.
  • Safeguards concerning access to EU personal data by Japanese public authorities for law enforcement and national security purposes.
  • A complaint-handling mechanism.

The press release also notes that the adequacy decision will complement the EU-Japan Economic Partnership Agreement by supporting free data flows between the EU and Japan and providing for privileged access to 127 million Japanese consumers.

Finally, the press release outlines the next four steps in the formal approval process:

  • Opinion from the European Data Protection Board.
  • Consultation of a committee composed of representatives from the EU Member States (comitology procedure).
  • Update of the European Parliament Committee on Civil Liberties, Justice and Home Affairs.
  • Adoption of the adequacy decision by the College of Commissioners.

CCPA Amended: Enforcement Delayed, Few Substantive Changes Made

On August 31, 2018, the California State Legislature passed SB-1121, a bill that delays enforcement of the California Consumer Privacy Act of 2018 (“CCPA”) and makes other modest amendments to the law. The bill now goes to the Governor for signing. The provisions of the CCPA will become operative on January 1, 2020. As we have previously reported, the CCPA introduces key privacy requirements for businesses. The Act was passed quickly by California lawmakers in an effort to remove a ballot initiative of the same name from the November 6, 2018, statewide ballot. The CCPA’s hasty passage resulted in a number of drafting errors and inconsistencies in the law, which SB-1121 seeks to remedy. The amendments to the CCPA are primarily technical, with few substantive changes.

Key amendments to the CCPA include:

  • Enforcement:
    • The bill extends by six months the deadline for the California Attorney General (“AG”) to draft and adopt the law’s implementing regulations, from January 1, 2020, to July 1, 2020. (CCPA § 1798.185(a)).
    • The bill delays the AG’s ability to bring enforcement actions under the CCPA until six months after publication of the implementing regulations or July 1, 2020, whichever comes first. (CCPA § 1798.185(c)).
    • The bill limits the civil penalties the AG can impose to $2,500 for each violation of the CCPA or up to $7,500 for each intentional violation, and states that a violating entity will be subject to an injunction. (CCPA § 1798.155(b)).
  • Definition of “personal information”: The CCPA includes a number of enumerated examples of “personal information” (“PI”), including IP address, geolocation data and web browsing history. The amendment clarifies that the listed examples would constitute PI only if the data “identifies, relates to, describes, is capable of being associated with, or could be reasonably linked, directly or indirectly, with a particular consumer or household.” (CCPA § 1798.140(o)(1)).
  • Private right of action:
    • The amendments clarify that a consumer may bring an action under the CCPA only for a business’s alleged failure to “implement and maintain reasonable security procedures and practices” that results in a data breach. (CCPA § 1798.150(c)).
    • The bill removes the requirement that a consumer notify the AG once the consumer has brought an action against a business under the CCPA, and eliminates the AG’s ability to instruct a consumer to not proceed with an action. (CCPA § 1798.150(b)).
  • GLBA, DPPA and CalFIPA exemptions: The original text of the CCPA exempted information subject to the Gramm-Leach-Bliley Act (“GLBA”) and the Driver’s Privacy Protection Act (“DPPA”) only to the extent the CCPA was “in conflict” with either statute. The bill removes the “in conflict” qualification and clarifies that data collected, processed, sold or disclosed pursuant to the GLBA, the DPPA or the California Financial Information Privacy Act (“CalFIPA”) is exempt from the CCPA’s requirements. The revisions also exempt such information from the CCPA’s private right of action provision. (CCPA §§ 1798.145(e), (f)).
  • Health information:
    • Health care providers: The bill adds an exemption for HIPAA-covered entities and providers of health care governed by the Confidentiality of Medical Information Act, “to the extent the provider or covered entity maintains patient information in the same manner as medical information or protected health information,” as described in the CCPA. (CCPA § 1798.145(c)(1)(B)).
    • PHI: The bill expands the category of exempted protected health information (“PHI”) governed by HIPAA and the Health Information Technology for Economic and Clinical Health Act to include PHI collected by both covered entities and business associates. The original text did not address business associates. (CCPA § 1798.145(c)(1)(A)).
    • Clinical trial data: The bill adds an exemption for “information collected as part of a clinical trial” that is subject to the Federal Policy for the Protection of Human Subjects (also known as the Common Rule) and is conducted in accordance with specified clinical practice guidelines. (CCPA § 1798.145(c)(1)(C)).
  • Notice of right of deletion: The original text of the CCPA stated that a business must disclose on its website or in its privacy policy a consumer’s right to request the deletion of her PI. The bill modifies this requirement, stating that a business must disclose the right to deletion “in a form that is reasonably accessible to consumers.” (CCPA § 1798.105(b)).
  • First Amendment protection: The bill adds a provision to the CCPA stating that the rights afforded to consumers and the obligations imposed on businesses under the CCPA do not apply if they “infringe on the noncommercial activities of a person or entity” as described in Art. I, Section 2(b) of the California Constitution, which addresses activities related to the free press. This provision is designed to prevent First Amendment challenges to the law. (CCPA § 1798.145(k)).
  • Preemption:
    • The bill adds to the CCPA’s preemption clause that the law will not apply in the event its application is preempted by, or in conflict with, the U.S. Constitution. The CCPA previously referenced only the California Constitution. (CCPA § 1798.196).
    • Certain provisions of the CCPA supersede and preempt laws adopted by local entities regarding the collection and sale of a consumer’s PI by a business. The bill makes such provisions of the Act operative on the date the bill becomes effective.

The California State Legislature is expected to consider more substantive changes to the law when it reconvenes in January 2019.

Senate Commerce Committee Members Rumored to be Discussing Online Privacy Bill

On August 29, 2018, Bloomberg Law reported that four Senate Commerce Committee members are discussing a potential online privacy bill. The bipartisan group consists of Senators Jerry Moran (R-KS), Roger Wicker (R-MS), Richard Blumenthal (D-CT) and Brian Schatz (D-HI), according to anonymous Senate aides.

Specific details of the possible bill are unknown. The proposal may compete with a bill being developed by Senate Commerce Committee Chairman John Thune (R-SD), and is a further indication of increased Congressional interest in enacting a broad online privacy bill. Such interest sharpened in a year of increased scrutiny and legal developments in the privacy arena, including the European Union’s General Data Protection Regulation and the recently enacted California Consumer Privacy Act of 2018.

Alongside these reported Congressional efforts, the Trump Administration, through the National Economic Council and the Commerce Department, is said to be developing an online privacy proposal to send to Congress.

California AG Voices Concern About State’s New Privacy Law

On August 22, 2018, California Attorney General Xavier Becerra raised significant concerns regarding the recently enacted California Consumer Privacy Act of 2018 (“CCPA”) in a letter addressed to the CCPA’s sponsors, Assemblyman Ed Chau and Senator Robert Hertzberg. Writing to “reemphasize what [he] expressed previously to [them] and [state] legislative leaders and Governor Brown,” Attorney General Becerra highlighted what he described as five primary flaws that, if unresolved, will undermine the intention behind and effective enforcement of the CCPA.

Most of the issues Attorney General Becerra pointed to were those he claimed impose unnecessary and/or onerous obligations on the Attorney General’s Office (“AGO”). For example, the CCPA requires the AGO to provide opinions, warnings and an opportunity to cure to a business before the business can be held accountable for a CCPA violation. Attorney General Becerra said that this effectively requires the AGO to provide unlimited legal counsel to private parties at taxpayer expense, and creates a potential conflict of interest by requiring the AGO to advise parties who may be violating Californians’ privacy rights.

In a similar vein, Attorney General Becerra noted that the CCPA gives consumers a limited right to sue if they become victims of a data breach, but otherwise does not include a private right of action for consumers to seek remedies to protect their privacy. That framework, Attorney General Becerra wrote, substantially increases the AGO’s need for enforcement resources. Likewise, the CCPA requires private plaintiffs to notify the Attorney General before filing suit. Attorney General Becerra criticized this requirement as both useless, since only courts may decide the merits of a case, and a drain on personnel and administrative resources.

Attorney General Becerra also pointed out that the CCPA’s civil penalty provisions purport to amend and modify the Unfair Competition Law’s civil penalty provision. The latter, however, was enacted by voters through a ballot proposition and thus cannot be amended through legislation. For that reason, Attorney General Becerra argued, the CCPA’s civil penalty provision is likely unconstitutional (the letter noted that the AGO has offered “corrective language” that replaces the CCPA’s current penalty provision with a stand-alone enforcement proposition).

Additionally, Attorney General Becerra took issue with the CCPA’s provision giving the AGO one year to conduct rulemaking for the CCPA. Attorney General Becerra noted that the CCPA provided no resources for the AGO to carry out the rulemaking or its implementation thereafter, and he called the existing deadline “simply unattainable.”

Plaintiffs File Class Action Lawsuit Against Nielsen Over Alleged False and Misleading Statements

On August 28, 2018, plaintiffs filed a class action lawsuit against Nielsen Holdings PLC (“Nielsen”) and some of its officers and directors for making allegedly materially false and misleading statements to investors about the impact of privacy regulations and third-party business partners’ privacy policies on the company’s revenues and earnings. The case was filed in the United States District Court for the Southern District of New York. 

The complaint alleges that Nielsen made false and/or misleading statements and/or failed to disclose that: (1) Nielsen recklessly disregarded its readiness for and the true risks of privacy-related regulations and policies, including the EU General Data Protection Regulation (“GDPR”), on its current and future financial and growth prospects; (2) Nielsen’s financial performance was far more dependent on Facebook and other third-party large data set providers than previously disclosed, and privacy policy changes affected the scope and terms of access Nielsen would have had to third-party data; and (3) access to Facebook and other third-party provider data was becoming increasingly restricted for Nielsen and Nielsen clients. Plaintiffs allege that, as a result, Nielsen’s public statements were materially false and misleading at all relevant times.

The complaint maintains that, because of Nielsen’s “material misrepresentations and omissions, Nielsen stock traded at artificially inflated prices.” The complaint further alleges that when Nielsen published its financial results for the second quarter of 2018 announcing that it missed revenue and earnings targets, its stock plummeted, which caused substantial harm to the plaintiffs who were investors in Nielsen stock. In that announcement, Nielsen cited the impact of the GDPR on the company’s results and announced that its CEO and Executive Chairman, Mitch Barns, would retire from the company at the end of 2018.

Read the complaint.

Sixth Circuit Declines Reconsideration of American Tooling Center’s “Spoofing” Win

Recently, the Sixth Circuit rejected Travelers Casualty & Surety Company’s request for reconsideration of the court’s July 13, 2018, decision confirming that the insured’s transfer of more than $800,000 to a fraudster after receipt of spoofed emails was a “direct” loss that was “directly caused by” the use of a computer under the terms of American Tooling Center’s (“ATC’s”) crime policy. In doing so, the court likewise confirmed that intervening steps by the insured, such as following the directions contained in the bogus emails, did not break the causal chain so as to defeat coverage for “direct” losses.

We’ve previously reported on ATC decisions on Hunton’s Insurance Recovery blog.

Second Circuit Stands By Medidata “Spoofing” Decision

As reported on Hunton’s Insurance Recovery blog, the Second Circuit has rejected Chubb subsidiary Federal Ins. Co.’s request for reconsideration of the court’s July 6, 2018, decision confirming that the insurer must cover Medidata’s $4.8 million loss under its computer fraud insurance policy. In July, the court determined that the loss resulted directly from the fraudulent emails. The insurer argued that the fraudster did not directly access Medidata’s computer systems, but the court again rejected that argument, finding that access indeed occurred when the “spoofing” code in emails sent to Medidata employees ended up in Medidata’s computer system.

View the Second Circuit’s summary order. Prior posts on the Medidata litigation and decisions are available through the following links:

July 10, 2018, Hunton Insurance Recovery Practice Head Explains Why Medidata Decision Affirming Phishing Coverage is “Common Sense”

July 9, 2018, 2nd Cir. Affirms Medidata’s Spoofing Loss is Covered Under Crime Policy’s Computer Fraud Provision

July 27, 2017, Hunton Insurance Head Walter Andrews Comments on Medidata Coverage Win

July 24, 2017, Chubb Owes $4.8M for Medidata Social Engineering Loss

Department of Commerce Updates Privacy Shield FAQs

Recently, the Department of Commerce updated its frequently asked questions (“FAQs”) on the EU-U.S. and Swiss-U.S. Privacy Shield Frameworks (collectively, the “Privacy Shield”) to provide additional clarification on a wide range of topics, including transfers of personal information to third parties, the application of the Privacy Shield Principles to data processors, and the relation of the Clarifying Lawful Overseas Use of Data Act (“CLOUD Act”) to the Privacy Shield. Certain key insights from the updated FAQs are outlined below:

  • Data processors. When responding to individuals seeking to exercise their rights under the Privacy Shield Principles, the FAQs state that a processor should respond pursuant to the instructions of the EU data controller. For example, in order to comply with the Choice Principle, a Privacy Shield-certified organization acting as a processor could, pursuant to the EU controller’s instructions, put individuals in contact with the controller that provides a choice mechanism or offer a choice mechanism directly.
  • Onward transfers. The FAQs also provide additional guidance for organizations preparing to come into compliance with the Accountability for Onward Transfer Principle. For example, the FAQs state that organizations may use contracts that fully reflect the requirements of the relevant standard contractual clauses adopted by the European Commission to fulfill the Accountability for Onward Transfer Principle’s contractual requirements.
  • CLOUD Act. The FAQs state that the CLOUD Act, which involves data transfers for law enforcement purposes, does not conflict with the Privacy Shield, which is unaffected by the enactment of the law.

View the full Privacy Shield FAQs.

FTC to Commence Hearings on Competition and Consumer Protection in the 21st Century

The Federal Trade Commission announced the opening dates of its Hearings on Competition and Consumer Protection in the 21st Century, a series of public hearings that will discuss whether broad-based changes in the economy, evolving business practices, new technologies or international developments might require adjustments to competition and consumer protection law, enforcement priorities and policy. The FTC and Georgetown University Law Center will co-sponsor two full-day sessions of hearings on September 13 and 14, 2018, to be held at the Georgetown University Law Center facility.

Panelists at the hearings will consider, among other topics, the regulation of consumer data and whether the U.S. economy has become more concentrated and less competitive. The FTC invites public comment on any of the issues.

More information is available on the FTC’s website.

Judge Grants Final Approval of Record Data Breach Settlement in Anthem Class Action

On August 15, 2018, U.S. District Judge Lucy Koh signed an order granting final approval of the record $115 million class action settlement agreed to by Anthem Inc. in June 2017. As previously reported, Judge Koh signed an order granting preliminary approval of the settlement in August 2017.

The settlement arose out of a 2015 data breach that exposed the personal information of more than 78 million individuals, including names, dates of birth, Social Security numbers and health care ID numbers. The terms of the settlement include, among other things, the creation of a pool of funds to provide credit monitoring and reimbursement for out-of-pocket costs for customers.

“The Court finds that the Settlement is fair, adequate, and reasonable,” Judge Lucy Koh wrote in her opinion.

Under the $115 million settlement, $51 million will go to the victims. Of the $51 million, $17 million is earmarked for credit-monitoring services, $15 million will go to customers who suffered out-of-pocket costs from the data breach, and $13 million will go to customers who demonstrate that they already have credit-monitoring services. The judge awarded the plaintiffs’ attorneys $31.05 million in legal fees. Additionally, the consulting firm appointed to administer the settlement received $23 million.

The settlement also requires Anthem to make certain changes to its data security systems and cybersecurity practices, including adopting encryption protocols for sensitive data, for at least three years.

The case is In re Anthem, Inc. Data Breach Litig., N.D. Cal., No. 15-md-02617, final approval 8/15/18.

California Lawmakers Consider Additional Resources For Attorney General’s Privacy Act Regulations

As reported in BNA Privacy Law Watch, a California legislative proposal would allocate additional resources to the California Attorney General’s office to facilitate the development of regulations required under the recently enacted California Consumer Privacy Act of 2018 (“CCPA”). CCPA was enacted in June 2018 and takes effect January 1, 2020. CCPA requires the California Attorney General to issue certain regulations prior to the effective date, including, among others, (1) to update the categories of data that constitute “personal information” under CCPA, and (2) certain additional regulations governing compliance (such as how a business may verify a consumer’s request made pursuant to CCPA). The proposal, which was presented in two budget bills, would allocate $700,000 and five staff positions to the California Attorney General’s office to aid in the development of the required regulations. The legislature is expected to pass the relevant funding measure by August 31, 2018. California Attorney General Xavier Becerra has stated that he expects his office will issue its final rules under CCPA in June 2019.

FTC Approves Changes to Video Game Industry’s Safe Harbor Program Under COPPA

On August 13, 2018, the Federal Trade Commission approved changes to the video game industry’s safe harbor guidelines under the Children’s Online Privacy Protection Act (“COPPA”) Rule. COPPA’s “safe harbor” provision enables industry groups to propose self-regulatory guidelines regarding COPPA compliance for FTC approval. 

The Entertainment Software Rating Board’s (“ESRB’s”) proposed modifications were opened to a notice-and-comment period between April and May of this year. The ESRB’s suggestions elicited five comments from individuals and consumer advocates, including a request that the ESRB retain language from the existing program that defines street-level geolocation information as personal information. The FTC approved the changes to the ESRB’s COPPA safe harbor program by a 5-0 vote; in announcing its decision, the FTC noted that the approved, revised guidelines include certain changes addressing issues raised by the commenters.

Ohio Law Provides Safe Harbor from Tort Claims Related to Data Breaches

On August 3, 2018, Ohio Governor John Kasich signed into law Senate Bill 220 (the “Bill”), which provides covered entities with an affirmative defense to tort claims, based on Ohio law or brought in an Ohio court, that allege or relate to the failure to implement reasonable information security controls which resulted in a data breach. According to the Bill, its purpose is “to be an incentive and to encourage businesses to achieve a higher level of cybersecurity through voluntary action.” The Bill will take effect 90 days after it is provided to the Ohio Secretary of State.

FTC Asks Whether to Expand Enforcement Power Over Corporate Privacy Practices

On August 6, 2018, the Federal Trade Commission published a notice seeking public comment on whether the FTC should expand its enforcement power over corporate privacy and data security practices. The notice, published in the Federal Register, follows FTC Chairman Joseph Simons’ declaration at a July 18 House subcommittee hearing that the FTC’s current authority to do so, under Section 5 of the FTC Act, is inadequate to deal with the privacy and security issues in today’s market.

The FTC asks for input by August 20, 2018. It also requests comment on expanding or evolving its authority in several other areas, including the intersection of privacy, big data and competition. Beginning in September 2018, the FTC will conduct a series of public hearings to consider “whether broad-based changes in the economy, evolving business practices, new technologies, or international developments might require adjustments to competition and consumer protection law, enforcement priorities, and policy.”

Unixiz Agrees to Settle Charges Under COPPA and the New Jersey Consumer Fraud Act

On August 3, 2018, California-based Unixiz Inc. (“Unixiz”) agreed to shut down its “i-Dressup” website pursuant to a consent order with the New Jersey Attorney General, which the company entered into to settle charges that it violated the Children’s Online Privacy Protection Act (“COPPA”) and the New Jersey Consumer Fraud Act. The consent order also requires Unixiz to pay a civil penalty of $98,618.

The charges stemmed from a 2016 data breach in which hackers compromised more than 2.2 million unencrypted usernames and passwords, including those associated with over 24,000 New Jersey residents’ accounts. The New Jersey Attorney General alleged that Unixiz had actual knowledge that the i-Dressup website (which allowed users to “dress, style and make-up animated characters in various outfits” and featured children’s games) had collected the personal information of over 10,000 children and failed to obtain verifiable parental consent for such collection, in violation of COPPA. The New Jersey Attorney General further alleged that the data breach resulted from Unixiz’s failure to appropriately safeguard user account information. Pursuant to the terms of the consent order, Unixiz agreed to pay a $98,618 civil penalty (suspended to $34,000 if, after two years, Unixiz undertakes certain steps to safeguard users’ personal information). Unixiz also agreed to shut down the i-Dressup website, comply with all applicable state and federal laws (including the New Jersey Consumer Fraud Act and COPPA), and implement policies and procedures to safeguard user account information.

CNIL Serves Formal Notice to Marketing Companies to Obtain User’s Consent for Processing Geolocation Data for Ad Targeting

On July 19, 2018, the French Data Protection Authority (“CNIL”) announced that it served a formal notice to two advertising startups headquartered in France, FIDZUP and TEEMO. Both companies collect personal data from mobile phones via software development kit (“SDK”) tools integrated into the code of their partners’ mobile apps (even when the apps are not in use) and process the data to conduct marketing campaigns on mobile phones.

The SDK technology enables TEEMO to collect the mobile advertising IDs and geolocation data of users every five minutes. This information is then correlated with the users’ interests, as determined by TEEMO’s retail partners, and used to send targeted ads to the users’ mobile phones. The SDK technology installed by FIDZUP in partners’ mobile apps collects the MAC addresses and advertising IDs of mobile phones. In parallel, FIDZUP has installed FIDZBOX devices in its partners’ points of sale, which collect data on the MAC addresses and WiFi signal strength of users’ mobile phones. FIDZUP then processes the data to send targeted, geolocated ads to users’ mobile phones whenever users walk by one of its partners’ points of sale.

A Breach of the Obligation to Obtain User’s Consent

Despite their claims, the CNIL found that the two companies do not obtain users’ consent in accordance with French data protection law and the EU General Data Protection Regulation (“GDPR”). The inspections carried out by the CNIL on several mobile apps revealed that:

  • Concerning TEEMO, users are not informed when downloading mobile apps that an SDK that will collect their data is integrated into the apps.
  • Concerning FIDZUP, users are not informed about the advertising targeting purposes of the processing or the data controller’s identity when installing the app. In addition, the information provided in the terms of use of the mobile apps or displayed on posters in stores is provided to users after the collection and processing of their data, whereas obtaining valid consent requires providing that information beforehand.

The CNIL also found that it was not possible to download the apps without the SDK technology.

Finally, the CNIL noted that, when users’ consent is sought for the processing of their geolocation data when installing the app, that consent is limited to the use of the data by the app. Consent is not sought for the collection of the data for marketing purposes via the SDK tools.

The CNIL therefore concluded that the data processed by TEEMO and FIDZUP for targeted marketing purposes is in fact processed without the users’ knowledge and consent in breach of French law and the GDPR.

A Breach of the Obligation to Define an Adequate Retention Period

The CNIL also found that TEEMO retains geolocation data for 13 months. In the CNIL’s view, this retention period is disproportionate in relation to the purpose of the processing. The CNIL stressed that the use of geolocation devices is especially intrusive, as they constantly track users in real time.

The CNIL’s Requests

The CNIL ordered TEEMO and FIDZUP to obtain users’ valid consent within three months (e.g., via a pop-up containing specific information and a tick-box to signify consent). The CNIL also ordered TEEMO to define a retention period for geolocation data that is proportionate to the purpose of the processing. Failure to do so within the prescribed time limit may result in sanctions, including a fine.
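
For illustration, the following Python sketch shows one way an app embedding such an SDK might gate marketing geolocation collection on the prior, specific consent the CNIL describes. Everything here (class, function and stub names) is hypothetical and does not reflect actual TEEMO or FIDZUP code.

    class ConsentRecord:
        """Hypothetical record of a user's purpose-specific consent choices."""
        def __init__(self):
            self.app_geolocation = False   # geolocation for the app's own features
            self.sdk_ad_targeting = False  # geolocation for the SDK's ad targeting

    def show_consent_popup(consent, user_ticked_box):
        # Sketch of the pop-up the CNIL describes: specific information about the
        # marketing purpose and the controller's identity, plus a tick-box, shown
        # BEFORE any data is collected via the SDK.
        print("This app embeds an advertising SDK that collects your location")
        print("every five minutes to serve targeted ads. Controller: <SDK vendor>.")
        consent.sdk_ad_targeting = user_ticked_box

    def read_gps():
        """Stub standing in for a platform location API."""
        return (48.8566, 2.3522)

    def collect_location_for_ads(consent):
        # Consent to the app's own geolocation use does NOT cover the SDK's
        # marketing use; each purpose needs its own prior, specific consent.
        if not consent.sdk_ad_targeting:
            return None  # no collection without valid consent
        return read_gps()

    consent = ConsentRecord()
    show_consent_popup(consent, user_ticked_box=False)
    assert collect_location_for_ads(consent) is None  # user declined: nothing collected

The key design point, per the CNIL's analysis, is that consent is collected per purpose and checked before any collection occurs, rather than inferred from the app's own geolocation permission.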

Supreme Court of Ireland to Review Facebook Privacy Case

On July 31, 2018, the Supreme Court of Ireland granted Facebook, Inc. (“Facebook”) leave to appeal a lower court’s ruling sending a privacy case to the Court of Justice of the European Union (the “CJEU”). Austrian privacy activist Max Schrems challenged Facebook’s data transfer practices, arguing that Facebook’s use of standard contractual clauses failed to adequately protect EU citizens’ data. Schrems, supported by Irish Data Protection Commissioner Helen Dixon, argued that the case belonged in the CJEU, the EU’s highest judicial body. The High Court agreed, and Facebook’s request to appeal followed.

In granting Facebook leave to appeal, the Supreme Court noted that “[i]t is in the interest of justice” that the Court hear its arguments. The hearing will take place within the next five months.

View the Supreme Court’s decision.

CIPL Submits Comments to EDPB’s Draft Guidelines on Certification and Identifying Certification Criteria in Accordance with Articles 42 and 43 GDPR

On July 10, 2018, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth LLP submitted formal comments to the European Data Protection Board (the “EDPB”) on its draft guidelines on certification and identifying certification criteria in accordance with Articles 42 and 43 of the GDPR (the “Guidelines”). The Guidelines were adopted by the EDPB on May 25, 2018, for public consultation.

CIPL highlights in its comments that in order to achieve the goals of certifications under the GDPR (i.e., to use certifications as an accountability tool to demonstrate compliance and as a cross-border data transfer mechanism), certification mechanisms should be based on a harmonized EU-wide minimum GDPR certification standard or template which is adaptable to different contexts. Such a baseline standard will enable both EU-wide general GDPR certifications as well as more narrow GDPR certifications customized for specific products, services, processes, industry sectors and/or jurisdictions. CIPL recommends that the EU-wide baseline standard should be developed by the European Commission and/or the EDPB in collaboration with certification bodies and industry.

In addition to basing GDPR certifications on a harmonized EU-wide minimum GDPR certification standard, CIPL underlines in its comments that certification mechanisms should:

  • permit the certifying of entire organizational privacy management programs, in addition to specific products, services and processes;
  • enable interoperability as much as possible with other, similar EU accountability schemes, as well as certification schemes in other countries and regions, such as the APEC Cross-Border Privacy Rules (“CBPR”) and Privacy Recognition for Processors (“PRP”);
  • be construed on the basis of a holistic approach that enables both national or EU compliance and cross-border compliance as part of one set of certification criteria.

To read the above recommendations in more detail, along with CIPL’s other recommendations on certification and identifying certification criteria in accordance with Articles 42 and 43 of the GDPR, view the full paper.

These comments follow CIPL’s related consultation response to the Article 29 Working Party’s Draft Guidelines on the Accreditation of Certification Bodies under the GDPR.

CIPL’s comments were developed based on input by the private sector participants in CIPL’s ongoing GDPR Implementation Project, which includes more than 92 individual private sector organizations. As part of this initiative, CIPL will continue to provide formal input about other GDPR topics prioritized by the EDPB.

India’s Draft Data Privacy Law Issued

On July 27, 2018, the Justice BN Srikrishna committee, formed by the Indian government in August 2017 with the goal of introducing a comprehensive data protection law in India, issued a report, A Free and Fair Digital Economy: Protecting Privacy, Empowering Indians (the “Committee Report”), and a draft data protection bill called the Personal Data Protection Bill, 2018 (the “Bill”). Noting that the Indian Supreme Court has recognized the right to privacy as a fundamental right, the Committee Report summarizes the existing data protection framework in India, and recommends that the government of India adopt a comprehensive data protection law such as that proposed in the Bill.

The Bill would establish requirements for the collection and processing of personal data, including particular limitations on the processing of sensitive personal data and on the length of time for which personal data may be retained. The Bill would require organizations to appoint a Data Protection Officer and to undergo annual third-party audits of their processing of personal data. Further, the Bill would require organizations to implement certain information security safeguards, including (where appropriate) de-identification and encryption, as well as safeguards to prevent misuse of, unauthorized access to, and modification, disclosure or destruction of personal data. The Bill also would require regulator notification and, in certain circumstances, individual notification in the event of a data breach. Noncompliance with the Bill could result in penalties of up to 50 million rupees (approximately USD 728,000) or two percent of global annual turnover for the preceding financial year, whichever is higher.
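
Because the penalty provision turns on a simple "whichever is higher" formula, it can be expressed in a few lines of code. The following Python sketch is illustrative only; the function name is our assumption, not part of the Bill.

    def max_penalty_inr(global_annual_turnover_inr):
        """Higher of INR 50 million or 2% of the preceding financial year's
        global annual turnover, per the draft Bill (illustrative only)."""
        fixed_floor = 50_000_000  # 50 million rupees
        turnover_share = 0.02 * global_annual_turnover_inr
        return max(fixed_floor, turnover_share)

    # Example: a company with INR 10 billion in annual turnover faces up to
    # INR 200 million, since 2% of turnover exceeds the 50 million rupee floor.
    print(max_penalty_inr(10_000_000_000))  # 200000000.0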

The Bill has been submitted for consideration to the Ministry of Electronics and Information Technology and is expected to be introduced in Parliament at a later date.

OCR Issues Guidance on Software Vulnerabilities and Patching

In its most recent cybersecurity newsletter, the U.S. Department of Health and Human Services’ Office for Civil Rights (“OCR”) provided guidance on identifying vulnerabilities in, and mitigating the associated risks of, software used to process electronic protected health information (“ePHI”). The guidance is outlined below:

  • Identifying software vulnerabilities. Every HIPAA-covered entity is required to perform a risk analysis that identifies risks and vulnerabilities to the confidentiality, integrity and availability of ePHI. Such entities must also implement measures to mitigate risks identified during the risk analysis. In its guidance, OCR indicated that mitigation activities could include installing available patches (where reasonable and appropriate) or, where patches are unavailable (such as in the case of obsolete or unsupported software), reasonable compensating controls, such as restricting network access.
  • Patching software. Patches may be applied to software and firmware on a wide range of devices, and the installation of vendor patches is typically routine. The installation of such updates, however, may result in unexpected events due to the interconnected nature of computer programs and systems. OCR recommends that organizations install patches for identified vulnerabilities in accordance with their security management processes. To help ensure the protection of ePHI during patching, OCR also identifies common steps in patch management, including evaluation, patch testing, approval, deployment, verification and testing (a simplified sketch of this cycle appears after this list).
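
The stages OCR names lend themselves to a checklist-style workflow. The Python sketch below walks a patch through each stage in order; the structure and names are our illustrative assumptions, not OCR-prescribed tooling.

    # Minimal sketch of a patch-management cycle following the stages OCR names.
    PATCH_STAGES = [
        "evaluation",     # does the patch apply to systems that touch ePHI?
        "patch testing",  # test in a non-production environment first
        "approval",       # obtain sign-off per the security management process
        "deployment",     # roll the patch out to production systems
        "verification",   # confirm the patch installed and closed the vulnerability
    ]

    def run_patch_cycle(patch_id, perform_stage):
        """Walk a patch through each stage; halt if any stage fails."""
        for stage in PATCH_STAGES:
            if not perform_stage(patch_id, stage):
                print(f"{patch_id}: halted at '{stage}'; remediate before continuing")
                return False
            print(f"{patch_id}: '{stage}' complete")
        return True

    # Example with a stub that approves every stage:
    run_patch_cycle("PATCH-0001", lambda patch, stage: True)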

In addition to the information contained in the guidance, OCR’s newsletter identifies a number of additional resources on vulnerability identification and patch management.

CIPL Issues Discussion Papers on the Central Role of Accountability in Data Protection

On July 23, 2018, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth LLP issued two new discussion papers on the Central Role of Organizational Accountability in Data Protection. The goal of these discussion papers is to show that organizational accountability is pivotal to effective data protection and essential for the digital transformation of the economy and society, and to emphasize that its many benefits should be actively encouraged and incentivized by data protection authorities (“DPAs”) and law and policy makers around the globe.

The Case for Accountability: How it Enables Effective Data Protection and Trust in Digital Society

The first discussion paper explains how accountability provides the necessary framework and tools for scalable compliance, fosters corporate digital responsibility beyond pure legal compliance, and empowers and protects individuals. It also details the benefits of implementing accountability to individuals, regulators and organizations.

Key areas of focus in the paper include:

  • the essential elements of accountability and the different approaches to implementing accountability;
  • how accountability applies in similar and different ways to data controllers and processors;
  • implementing and demonstrating accountability within an organization (e.g., through comprehensive internal privacy programs or verified or certified accountability mechanisms); and
  • the benefits of accountability, both by stakeholder and by type.

Incentivizing Accountability: How Data Protection Authorities and Law Makers Can Encourage Accountability

The second discussion paper explains why and how accountability should be specifically incentivized, particularly by DPAs and law makers. It argues that given the many benefits of accountability for all stakeholders, DPAs and law makers should encourage and incentivize organizations to implement accountability. They should not merely rely on the threat of sanctions to ensure legally required accountability, nor should they leave the implementation of heightened accountability (i.e., accountability beyond what is legally required) to various “internal” incentives of the organizations, such as improved customer trust and competitive advantage. Instead, DPAs and law makers should proactively provide additional external incentives, including on the grounds that accountability provides broader benefits to all stakeholders beyond just the organization itself and specifically helps DPAs carry out their many regulatory tasks.

Key areas of focus in the paper include:

  • why accountability measures should be incentivized;
  • who should incentivize accountability, namely, DPAs and law and policy makers; and
  • how accountability should be incentivized, including examples of what the incentives might be.

Brexit White Paper Addresses Data Protection-Related Issues

On July 12, 2018, British Prime Minister Theresa May presented her Brexit White Paper, “The Future Relationship Between the United Kingdom and the European Union” (the “White Paper”), to Parliament. The White Paper outlines the UK’s desired future relationship with the EU post-Brexit, and includes within its scope important data protection-related issues, including digital trade, data flows, cooperation on the development of Artificial Intelligence (“AI”) and the role of the Information Commissioner’s Office (“ICO”), as further discussed below:

  • Section 1.5 (Digital) recognizes the importance of the digital services trade between the UK and EU, while confirming that the UK will no longer be a part of the EU Digital Single Market. Consequently, the UK proposes a digital relationship that will allow both the UK and EU to capitalize on the growth of digital technologies globally. The proposed relationship covers:
    • Digital trade and eCommerce – On the basis that the digital economy relies on the free flow of data and that disruption of data flows would be economically costly, the UK’s proposal argues for (1) the removal of barriers to cross-border data flows, (2) a free, open and secure internet, and (3) the recognition of equivalent forms of electronic ID and authentication.
    • Telecommunications and digital infrastructure – The UK and the EU have a history of regulatory innovation in telecommunications; as such, the UK proposes to continue these joint commitments and the sharing of cyber threat information.
    • Digital technology – Trade should promote the development of new technologies, such as AI. At the same time, however, new technologies create shared challenges that require regulatory cooperation. The European Commission recently committed to establish a European AI Alliance to develop ethics guidelines for AI. After the UK leaves the EU, the UK’s Centre for Data Ethics and Innovation intends to participate in the European AI Alliance.
  • Chapter 2 (Security Partnership) explores the security relationship, which will change once the UK exits the EU. The UK is seeking an ambitious partnership covering a wide range of security interests, including foreign policy, defense, development, law enforcement and criminal justice cooperation. The UK proposes that this partnership include the swift, effective and efficient exchange of data to combat crime, including cyber threats. The UK acknowledges that there is no precedent for a non-EU country having access to EU data exchange tools. Despite this, the UK is proposing an agreement to combat crime through the exchange of sensitive information and data, including: (1) information about airline passengers; (2) alerts to police and border forces (with access to systems that allow for efficient responses); (3) exchanges of criminal record information; and (4) DNA, fingerprint and vehicle registration data.
  • Section 3.2 (Data Protection) argues for the continued exchange of personal data between the UK and EU with strong privacy protections for citizens, and ongoing cooperation between the ICO and EU Data Protection Authorities.
    • The continued exchange and protection of personal data – Given the importance of data flows for economic growth and security, the UK and the EU must maintain the ability to exchange data securely. This includes enabling consumers to purchase goods from the UK over the internet, and supporting the exchange of data between law enforcement agencies. The UK sees the EU’s adequacy framework as the right starting point for the continued free flow of data, but wants to go beyond the framework with a focus on stability and transparency, and regulatory cooperation.
    • Ongoing cooperation between data protection authorities – The ICO has played a significant part in the development of EU data protection laws and there should be ongoing cooperation between the ICO and EU Data Protection Authorities. This would help avoid unnecessary complexity, make it easier for EU citizens and UK nationals to enforce their rights across borders, and reduce administrative burdens for businesses.

Read the full white paper.

EU and Japan Agree on Reciprocal Adequacy

On July 17, 2018, the European Union and Japan successfully concluded negotiations on a reciprocal finding of an adequate level of data protection, thereby agreeing to recognize each other’s data protection systems as “equivalent.” This will allow personal data to flow safely between the EU and Japan, without being subject to any further safeguards or authorizations. 

This is the first time that the EU and a third country have agreed on a reciprocal recognition of an adequate level of data protection. So far, the EU has adopted only unilateral adequacy decisions covering 12 other jurisdictions—namely, Andorra, Argentina, Canada (for organizations subject to PIPEDA), the Faroe Islands, Guernsey, Israel, the Isle of Man, Jersey, New Zealand, Switzerland, Uruguay and the United States (under the EU-U.S. Privacy Shield)—which allow personal data to flow safely from the EU to those jurisdictions.

Background

On January 10, 2017, the European Commission (“the Commission”) published a communication addressed to the European Parliament and European Council on Exchanging and Protecting Personal Data in a Globalized World. As announced in this communication, the Commission launched discussions on possible adequacy decisions with “key trading partners,” starting with Japan and South Korea in 2017.

The discussions with Japan were facilitated by the amendments made to the Japanese Act on the Protection of Personal Information (Act No. 57 of 2003) that came into force on May 30, 2017. These amendments have modernized Japan’s data protection legislation and increased convergence with the European data protection system.

Key parts of the adequacy finding

Once adopted, the adequacy finding will cover personal data exchanged for commercial purposes between EU and Japanese businesses, as well as personal data exchanged for law enforcement purposes between EU and Japanese authorities, ensuring that in all such exchanges a high level of data protection is applied.

This adequacy finding was decided based on a series of additional safeguards that Japan will apply to EU citizens’ personal data when it is transferred to Japan, including the following measures:

  • expanding the definition of sensitive data;
  • facilitating the exercise of individuals’ rights of access to and rectification of their personal data;
  • increasing the level of protection for onward data transfers of EU data from Japan to a third country; and
  • establishing a complaint-handling mechanism, under the supervision of the Japanese data protection authority (the Personal Information Protection Commission), to investigate and resolve complaints from Europeans regarding access to their data by Japanese public authorities.

Next steps

The EU and Japan will launch their respective internal procedures for the adoption of the adequacy finding. The Commission plans to adopt its adequacy decision in fall 2018, following the usual procedure for adopting EU adequacy decisions. This involves (1) approval of the draft adequacy decision by the College of EU Commissioners; (2) an opinion from the EU Data Protection Authorities within the European Data Protection Board; (3) completion of a comitology procedure, in which the European Commission must obtain the green light from a committee composed of representatives of EU Member States; and (4) an update to the European Parliament Committee on Civil Liberties, Justice and Home Affairs. Once adopted, this will be the first adequacy decision under the EU General Data Protection Regulation.

View the Commission’s full press release and Q&As on the Japan adequacy decision.

Senators Ask FTC to Investigate Smart TV Manufacturers

On July 12, 2018, two U.S. Senators sent a letter to the Federal Trade Commission asking the agency to investigate the privacy policies and practices of smart TV manufacturers. In their letter, Senators Edward Markey (D-MA) and Richard Blumenthal (D-CT) note that smart TVs can “compile detailed profiles about users’ preferences and characteristics” which can then allow companies to personalize ads to be sent to “customers’ computers, phones or any other device that shares the smart TV’s internet connection.”

The Senators cite the history of unique privacy concerns raised by companies tracking the content viewers watch on TV. They also note the VIZIO case, in which the FTC settled allegations that VIZIO preinstalled software on its TVs to track consumers’ viewing data without their consent.

The letter concludes by reemphasizing the private nature of the content consumers watch on their smart TVs, and by stating that any company that collects data from consumers via their smart TVs should “comprehensively and consistently detail” what data will be collected and how it will be used. The letter also recommends that users be given the opportunity to affirmatively consent to the collection and use of their sensitive information.

China Publishes the Draft Regulations on the Classified Protection of Cybersecurity

On June 27, 2018, the Ministry of Public Security of the People’s Republic of China published the Draft Regulations on the Classified Protection of Cybersecurity (网络安全等级保护条例(征求意见稿)) (“Draft Regulation”) and is seeking comments from the public by July 27, 2018.

Pursuant to Article 21 of the Cybersecurity Law, the Draft Regulation establishes the classified protection of cybersecurity. The classified protection of information security scheme was previously implemented under the Administrative Measures for the Classified Protection of Information Security. The Draft Regulation extends the targets of security protection from computer systems alone to anything related to the construction, operation, maintenance and use of networks (such as cloud computing, big data, artificial intelligence, the Internet of Things, industrial control systems and the mobile Internet), except networks set up by individuals and families for personal use.

The obligations of network operators include, but are not limited to, (1) grade confirmation and filing; (2) security construction and rectification; (3) grade assessment; (4) self-inspection; (5) protection of network infrastructure, network operation, and data and information; (6) effective handling of network security incidents; and (7) guarding against network crimes, all of which vary according to the classified level at which the network operator is graded.

Network Operator Compliance

  • Classified Levels. The network operator must ascertain its network’s security level in the planning and design phase. Networks are classified into one of five levels according to the required degree of security protection.

Explanations of terms such as “object” and “degree of injury” can be found in the draft Information Security Technology - Guidelines for Grading of Classified Cybersecurity Protection, which closed for public comment on March 5, 2018.

  • Grading Review. The considerations for grading include network functions, scope of services, types of service recipients and types of data being processed. For networks graded at Level 2 or above, the operator is required to conduct an expert review and then obtain approval from any relevant industry regulator. Networks connected uniformly across provinces or nationwide must be graded, and their review organized, by the industry regulator.
  • Grading Filing. After the grading review confirms the classified level, any network graded at Level 2 or above must be filed with a public security authority at or above the county level. The filing certificate is issued after satisfactory review by the relevant public security authority. The timeline for the relevant public security authority to review such filings is not defined in the Draft Regulation and is left to the authority’s discretion.
  • General Obligations of Cybersecurity Protection. Most of the general cybersecurity obligations are stated in the Cybersecurity Law, and the Draft Regulation stipulates additional obligations, such as:
    • In the event of detection, blocking or elimination of illegal activity, network operators must prevent illegal activity from spreading and prevent the destruction or loss of evidence of crimes.
    • File network records.
    • Report online events to the local public security authority with jurisdiction within 24 hours. To prevent divulging state secrets, reports should be made to the local secrecy administration with jurisdiction at the same time.
  • Special Obligations of Security Protection. Networks graded at Level 3 or above are held to a higher standard, and their operators bear both general and special obligations, including:
    • designating the department of cybersecurity and forming a level-by-level examination system for any change of network, access, operation and maintenance provider;
    • reviewing the plan or strategy developed by professional technical personnel;
    • conducting a background check on key cybersecurity personnel, and confirming those personnel have relevant professional certificates;
    • managing the security of service providers;
    • dynamically monitoring the network and establishing a connection with the public security authority at the same level;
    • implementing redundancy, back-up and recovery measures for important network equipment, communications links and systems; and
    • establishing a classified assessment scheme, conducting such assessments, rectifying the results, and reporting the information to relevant authorities.
  • Online Testing Before Operation. Network operators at Level 2 or above must test the security of new networks before operation, and assessments must be performed at least once a year. For new networks at Level 3 or above, the classified assessment must be conducted by a cybersecurity classified assessment entity before operation and annually thereafter. Based on the results, network operators must rectify any risks and report to the public security authority that holds their filing records. (The sketch following this list illustrates how these level-based triggers might be encoded.)
  • Procurement. Network products used for the “important part” of a network must be evaluated by a professional assessment entity. If a product has an impact on national security, it must be checked by state cyberspace authorities and the relevant departments of the State Council. The Draft Regulation does not clearly define what the “important part” of a network means.
  • Maintenance. Maintenance of networks graded at Level 3 or above must be conducted within China. If business needs require cross-border maintenance, a cybersecurity evaluation and risk control measures must take place before such maintenance is performed. Maintenance records must be kept for inspection by the public security authority.
  • Protection of Data and Information Security. Network operators must protect the security of their data and information throughout collection, storage, transmission, use, supply and destruction, and must keep recovery and backup files in a separate location. The personal information protection requirements in the Draft Regulation are similar to those found under the Cybersecurity Law.
  • Protection of Encrypted Networks. Networks relating to state secrets are governed by encryption protection. Networks graded at Level 3 or above must be password protected, and operators must entrust relevant entities to test the security of the password application. Upon passing evaluation, the network can run online and must be evaluated once a year. The results of the evaluation must be filed with (1) the public security authority that holds the operator’s filing record and (2) the cryptography management authority where the operator is located.
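
Because many of these obligations switch on at Level 2 or Level 3, they can be summarized as simple threshold rules. The Python sketch below encodes our illustrative reading of the level-based triggers described above; it is not an official compliance tool, and each duty is paraphrased.

    def obligations_for_level(level):
        """Illustrative mapping of classified levels (1-5) to key Draft Regulation triggers."""
        if not 1 <= level <= 5:
            raise ValueError("classified protection levels run from 1 to 5")
        duties = ["general cybersecurity obligations under the Cybersecurity Law"]
        if level >= 2:
            duties += [
                "expert review of the grading, then industry-regulator approval",
                "filing with a public security authority at or above the county level",
                "security testing before operation; assessment at least once a year",
            ]
        if level >= 3:
            duties += [
                "special protection duties (personnel vetting, monitoring, redundancy)",
                "pre-operation and annual assessment by a classified assessment entity",
                "maintenance within China absent a prior cross-border security evaluation",
            ]
        return duties

    for duty in obligations_for_level(3):
        print("-", duty)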

Powers of the Competent Authorities

In addition to regular supervision and inspection, the Draft Regulation gives the competent authorities more powerful measures for handling investigations and emergencies. During an investigation, when necessary, the competent authorities may order the operator to block information transmission, temporarily shut down the network and back up relevant data. In case of an emergency, the competent authorities may order the operator to disconnect the network and shut down servers.

Penalties for Violations

For violations of the Draft Regulation’s security protection, technical maintenance, and data security and personal information protection requirements, the Draft Regulation draws on the liability provisions of the Cybersecurity Law. The penalties include rectification orders, fines, suspension of relevant business, business closure or website shutdown pending rectification, and revocation of relevant business permits and/or licenses.

Lenovo Reaches Proposed $8.3 Million Settlement Agreement

On July 11, 2018, computer manufacturer Lenovo Group Ltd. (“Lenovo”) agreed to a proposed $8.3 million settlement to resolve consumer class claims regarding pop-up ad software Lenovo pre-installed on its laptops. Lenovo issued a press release stating that, “while Lenovo disagrees with allegations contained in these complaints, we are pleased to bring this matter to a close after 2-1/2 years.”

In June 2014, Lenovo and Superfish, a software development company, entered into a profit-sharing agreement regarding Superfish’s VisualDiscovery ad-serving software. Lenovo pre-installed VisualDiscovery on a certain group of its laptops, which it began shipping in late summer of that year. According to the consumer class claims, VisualDiscovery accessed sensitive consumer data and riddled the laptops with security vulnerabilities.

The proposed settlement, filed in the U.S. District Court for the Northern District of California, requires Lenovo and Superfish to pay the class $7.3 million and $1 million, respectively. It will be finalized only with Judge Haywood Gilliam’s approval.

We previously reported on the FTC’s 2017 settlement with Lenovo regarding preinstalled laptop software.

CIPL Hosts Special Executive Retreat with APPA Privacy Commissioners on Accountable AI

During the week of June 25, 2018, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth LLP hosted its annual executive retreat in San Francisco, California. The annual event consisted of a closed pre-retreat session for CIPL members, a CIPL panel at the APPA Forum Open Session followed by a CIPL reception and dinner, and a special all-day workshop on accountable AI with data protection commissioner members of the Asia Pacific Privacy Authorities (“APPA”).

CIPL Pre-Retreat Closed Session

On June 25, 2018, CIPL hosted a closed pre-retreat session for members on global privacy developments at Hunton Andrews Kurth’s San Francisco office. The session consisted of a discussion of current EU General Data Protection Regulation (“GDPR”) implementation, compliance and enforcement issues, the impact of the GDPR on global organizational privacy management programs and the interrelation between the GDPR and ePrivacy. This was followed by a discussion of emerging new privacy laws in Latin America and India, the growth of the APEC Cross-Border Privacy Rules (“CBPR”) and Privacy Recognition for Processors systems, and developments in EU adequacy negotiations.

CIPL Panel at APPA Forum Open Session

On June 26, 2018, CIPL held a panel on Accountable and Interoperable Cross-border Data Flows as part of the 49th APPA Forum Open Session. The panel discussed the many requirements organizations have to contend with when transferring data across borders, the latest updates to the APEC CBPR system and the recently introduced U.S. Clarifying Lawful Overseas Use of Data Act. Data as a driver of innovation and the need for its ability to flow across jurisdictions was also discussed.

CIPL Special Executive Retreat Workshop on Accountable AI

On June 27, 2018, CIPL hosted a full-day workshop on accountable AI at Google’s offices in San Francisco. Data protection commissioner members of APPA attended and participated in the day-long event. The workshop marked the third major event to date in CIPL’s Project on Artificial Intelligence and Data Protection: Delivering Sustainable AI Accountability in Practice.

The workshop included several keynote addresses on AI from leading technologists and data scientists, followed by three panel sessions on current uses of AI in the public and private sectors, the key data protection challenges and risks associated with AI and the elements of accountable AI. Over 100 invited guests attended the session, including CIPL members, data privacy regulators from around the globe, business and technology leaders and academics.

The workshop commenced with leading AI technology experts and engineers from Intel, Google, Accenture and Microsoft sharing their industry insights and experiences on the growing array of current applications of AI as well as the trajectory of AI’s role in society going forward. A discussion of the technical aspects of AI, including the black box concept and neural nets, and their impact on data protection principles followed.

The second panel featured short presentations from data protection regulators on their current initiatives surrounding AI and on the wide variety of challenges that organizations will face as both AI technologies and privacy frameworks and regulations develop. Following presentations from the UK Information Commissioner’s Office, Japan Personal Information Protection Commission, Singapore Personal Data Protection Commission and the Office of the Privacy Commissioner for Personal Data, Hong Kong, regulators and experts from industry further discussed the challenges and risks associated with AI both universally and in specific jurisdictions.

The final panel of the day featured a discussion on the elements of accountable AI and potential solutions for enabling responsible data use in AI applications. Top privacy executives from leading CIPL member companies discussed their organizations’ current initiatives for delivering accountability in the AI context, as well as some of the key issues that organizations are working on, including transparency, the role of the user and accountability.

CIPL’s AI project aims to provide a more nuanced and detailed understanding of the opportunities presented by AI, potential misalignment with data protection laws and practical ways to address the issues through the lens of organizational accountability. For more details about the project, please see the project workplan.

Brazil’s Senate Passes General Data Protection Law

This post has been updated. 

As reported by Mundie e Advogados, on July 10, 2018, Brazil’s Federal Senate approved a Data Protection Bill of Law (the “Bill”). The Bill, which is inspired by the EU General Data Protection Regulation (“GDPR”), is expected to be sent to the Brazilian President in the coming days.

As reported by Mattos Filho, Veiga Filho, Marrey Jr e Quiroga Advogados, the Bill establishes a comprehensive data protection regime in Brazil and imposes detailed rules for the collection, use, processing and storage of personal data, both electronic and physical.

Key requirements of the Bill include:

  • National Data Protection Authority. The Bill calls for the establishment of a national data protection authority which will be responsible for regulating data protection, supervising compliance with the Bill and enforcing sanctions.
  • Data Protection Officer. The Bill requires businesses to appoint a data protection officer.
  • Legal Basis for Data Processing. Similar to the GDPR, the Bill provides that the processing of personal data may only be carried out where there is a legal basis for the processing, which may include, among other bases, where the processing is (1) done with the consent of the data subject, (2) necessary for compliance with a legal or regulatory obligation, (3) necessary for the fulfillment of an agreement, or (4) necessary to meet the legitimate interest of the data controller or third parties. The legal basis for data processing must be registered and documented. Processing of sensitive data (including, among other data elements, health information, biometric information and genetic data) is subject to additional restrictions.
  • Consent Requirements. Where consent of the data subject is relied upon for processing personal data, consent must be provided in advance and must be free, informed and unequivocal, and provided for a specific purpose. Data subjects may revoke consent at any time.
  • Data Breach Notification. The Bill requires notification of data breaches to the data protection authority and, in some circumstances, to affected data subjects.
  • Privacy by Design and Privacy Impact Assessments. The Bill requires organizations to adopt data protection measures as part of the creation of new products or technologies. The data protection authority will be empowered to require a privacy impact assessment in certain circumstances.
  • Data Transfer Restrictions. The Bill places restrictions on cross-border transfers of personal data. Such transfers are allowed (1) to countries deemed by the data protection authority to provide an adequate level of data protection, and (2) where effectuated using standard contractual clauses or other mechanisms approved by the data protection authority.

Noncompliance with the Bill can result in fines of up to two percent of gross sales, capped at 50 million reais (approximately USD 12.9 million) per violation. The Bill will take effect 18 months after it is published in Brazil’s Federal Gazette.
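
Note that, unlike the Indian draft bill discussed earlier, which sets a floor ("whichever is higher"), the Brazilian Bill caps the percentage-based fine at a fixed amount. A minimal Python sketch of that cap, with an illustrative function name:

    def max_fine_brl(gross_sales_brl):
        """Up to 2% of gross sales, capped at BRL 50 million per violation (illustrative)."""
        return min(0.02 * gross_sales_brl, 50_000_000)

    print(max_fine_brl(1_000_000_000))   # 20000000.0 (the 2% figure applies)
    print(max_fine_brl(10_000_000_000))  # 50000000.0 (the cap applies)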

Update: The Bill was signed into law in mid-August and is expected to take effect in early 2020.

Kenya Considers Data Protection Bill

On July 3, 2018, a draft bill (the “Data Protection Bill”) was introduced that would establish a comprehensive data protection regime in Kenya. The Data Protection Bill would require “banks, telecommunications operators, utilities, private and public companies and individuals” to obtain data subjects’ consent before collecting and processing their personal data. The Data Protection Bill also would impose certain data security obligations related to the collection, processing and storage of data, and would place restrictions on third-party data transfers. Violations of the Data Protection Bill could result in fines up to 500,000 shillings (USD 4,960) and a five-year prison term. According to BNA Privacy Law Watch, while the Data Protection Bill is a “private member’s bill,” the Kenyan government “is working on a separate data-protection policy and bill to be published this week,” with the goal of consolidating the two proposals.

European Parliament Calls for Suspension of EU-U.S. Privacy Shield Unless U.S. Can “Fully Comply”

On July 5, 2018, the European Parliament issued a nonbinding resolution (“the Resolution”) that calls on the European Commission to suspend the EU-U.S. Privacy Shield unless U.S. authorities can “fully comply” with the framework by September 1, 2018. The Resolution states that the data transfer mechanism does not provide the adequate level of protection for personal data as required by EU data protection law. The Resolution takes particular aim at potential access to EU residents’ personal data by U.S. national security agencies and law enforcement, citing the passage of the CLOUD Act as having “serious implications for the EU, as it is far-reaching and creates a potential conflict with the EU data protection laws.”

The Resolution also cites recent revelations surrounding Facebook and Cambridge Analytica, both Privacy Shield-certified companies, as “highlight[ing] the need for proactive oversight and enforcement actions,” including “systematic checks of the practical compliance of privacy policies with the Privacy Shield principles,” and calls on EU data protection authorities to suspend data transfers for non-compliant companies.

The Resolution comes on the heels of the FTC’s recent settlement with California company ReadyTech for alleged misrepresentations in its privacy policy about the status of the company’s Privacy Shield certification. The Resolution is nonbinding, and the European Commission, through its spokesperson, reportedly has stated that a suspension of the framework is not warranted at this time.

California Corporation Settles FTC Complaint Regarding EU-U.S. Privacy Shield Compliance Claim

On July 2, 2018, the Federal Trade Commission announced that California company ReadyTech Corporation (“ReadyTech”) agreed to settle FTC allegations that ReadyTech misrepresented it was in the process of being certified as compliant with the EU-U.S. Privacy Shield (“Privacy Shield”) framework for lawfully transferring consumer data from the European Union to the United States. The FTC finalized this settlement on October 17, 2018.

To join the Privacy Shield, companies must self-certify to the U.S. Department of Commerce their compliance with the Privacy Shield Principles and related requirements. The FTC’s administrative complaint against ReadyTech alleged that ReadyTech, which provides online and instructor-led training, falsely claimed on its website to be in the process of complying with the Privacy Shield. In reality, according to the FTC, ReadyTech had begun but failed to complete the certification process.

This is the FTC’s fourth case enforcing the Privacy Shield. ReadyTech’s settlement agreement provides, in part, that ReadyTech will not misrepresent its participation in any privacy or security program sponsored by a government or any self-regulatory or standard-setting organization.

Equifax Enters Into Consent Order with State Banking Regulators Regarding 2017 Data Breach

As reported in BNA Privacy Law Watch, on June 27, 2018, Equifax entered into a consent order (the “Order”) with eight state banking regulators (the “Multi-State Regulatory Agencies”), including those in New York and California, arising from the company’s 2017 data breach that exposed the personal information of 143 million consumers.

Equifax’s key obligations under the terms of the Order include: (1) developing a written risk assessment; (2) establishing a formal and documented Internal Audit Program that is capable of effectively evaluating IT controls; (3) developing a consolidated written Information Security Program and Information Security Policy; (4) improving oversight of its critical vendors and ensuring that sufficient controls are developed to safeguard information; (5) improving standards and controls for supporting the patch management function, including reducing the number of unpatched systems; and (6) enhancing oversight of IT operations as it relates to disaster recovery and business continuity.  The Order also requires Equifax to strengthen its Board of Directors’ oversight over the company’s information security program, including regular Board reviews of relevant policies and procedures.

Equifax must also submit to the Multi-State Regulatory Agencies a list of all remediation projects planned, in process or implemented in response to the 2017 data breach, as well as written reports outlining its progress toward complying with the provisions of the Order.

NYDFS Cybersecurity Regulation to Apply to Consumer Reporting Agencies

On June 25, 2018, the New York Department of Financial Services (“NYDFS”) issued a final regulation (the “Regulation”) requiring consumer reporting agencies with “significant operations” in New York to (1) register with NYDFS for the first time and (2) comply with the NYDFS’s cybersecurity regulation. Under the Regulation, consumer reporting agencies that reported on 1,000 or more New York consumers in the preceding year are subject to these requirements, and must register with NYDFS on or before September 1, 2018. The deadline for consumer reporting agencies to come into compliance with the cybersecurity regulation is November 1, 2018. In a statement, Governor Andrew Cuomo said, “Oversight of credit reporting agencies ensures that the personal private information of New Yorkers is less vulnerable to the threat of cyber attacks, providing them with peace of mind about their financial future.”

California Consumer Privacy Act Signed, Introduces Key Privacy Requirements for Businesses

On June 28, 2018, the Governor of California signed AB 375, the California Consumer Privacy Act of 2018 (the “Act”). The Act introduces key privacy requirements for businesses, and was passed quickly by California lawmakers in an effort to remove a ballot initiative of the same name from the November 6, 2018, statewide ballot. We previously reported on the relevant ballot initiative. The Act will take effect January 1, 2020.

Key provisions of the Act include:

  • Applicability. The Act will apply to any for-profit business that (1) “does business in the state of California”; (2) collects consumers’ personal information (or on behalf of which such information is collected) and that alone, or jointly with others, determines the purposes and means of the processing of consumers’ personal information; and (3) satisfies one or more of the following thresholds: (a) has annual gross revenues in excess of $25 million, (b) alone or in combination annually buys, receives for the business’s commercial purposes, sells, or shares for commercial purposes, the personal information of 50,000 or more consumers, households or devices, or (c) derives 50 percent or more of its annual revenue from selling consumers’ personal information (collectively, “Covered Businesses”). A sketch following this list illustrates these thresholds.
  • Definition of Personal Information. Personal information is defined broadly as “information that identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.” This definition of personal information aligns more closely with the EU General Data Protection Regulation’s definition of personal data. The Act includes a list of enumerated examples of personal information, which includes, among other data elements, name, postal or email address, Social Security number, government-issued identification number, biometric data, Internet activity information and geolocation data, as well as “inferences drawn from any of the information identified” in this definition.
  • Right to Know
    • Upon a verifiable request from a California consumer, a Covered Business must disclose (1) the categories and specific pieces of personal information the business has collected about the consumer; (2) the categories of sources from which the personal information is collected; (3) the business or commercial purposes for collecting or selling personal information; and (4) the categories of third parties with whom the business shares personal information.
    • In addition, upon verifiable request, a business that sells personal information about a California consumer, or that discloses a consumer’s personal information for a business purpose, must disclose (1) the categories of personal information that the business sold about the consumer; (2) the categories of third parties to whom the personal information was sold (by category of personal information for each third party to whom the personal information was sold); and (3) the categories of personal information that the business disclosed about the consumer for a business purpose.
    • The above disclosures must be made within 45 days of receipt of the request using one of the prescribed methods specified in the Act. The disclosure must cover the 12-month period preceding the business’s receipt of the verifiable request. The 45-day period may be extended when reasonably necessary, so long as the consumer receives notice of the extension within the first 45-day period. Importantly, the disclosures must be made in a “readily useable format that allows the consumer to transmit this information from one entity to another entity without hindrance.”
  • Exemption. Covered Businesses will not be required to make the disclosures described above to the extent the Covered Business discloses personal information to another entity pursuant to a written contract with such entity, provided the contract prohibits the recipient from selling the personal information, or retaining, using or disclosing the personal information for any purpose other than performance of services under the contract. In addition, the Act provides that a business is not liable for a service provider’s violation of the Act, provided that, at the time the business disclosed personal information to the service provider, the business had neither actual knowledge nor reason to believe that the service provider intended to commit such a violation.
  • Disclosures and Opt-Out. The Act will require Covered Businesses to provide notice to consumers of their rights under the Act (e.g., their right to opt out of the sale of their personal information), a list of the categories of personal information collected about consumers in the preceding 12 months, and, where applicable, that the Covered Business sells or discloses their personal information. If the Covered Business sells consumers’ personal information or discloses it to third parties for a business purpose, the notice must also include lists of the categories of personal information sold and disclosed about consumers, respectively. Covered Businesses will be required to make this disclosure in their online privacy notice. Covered Businesses must separately provide a clear and conspicuous link on their website that says, “Do Not Sell My Personal Information,” and provide consumers a mechanism to opt out of the sale of their personal information, a decision which the Covered Business must respect. Businesses also cannot discriminate against consumers who opt out of the sale of their personal information, but can offer financial incentives for the collection of personal information.
  • Specific Rules for Minors. If a business has actual knowledge that a consumer is less than 16 years of age, the Act prohibits the business from selling that consumer’s personal information unless (1) the consumer is between 13 and 16 years of age and has affirmatively authorized the sale (i.e., opted in); or (2) the consumer is less than 13 years of age and the consumer’s parent or guardian has affirmatively authorized the sale.
  • Right to Deletion. The Act will require a business, upon verifiable request from a California consumer, to delete specified personal information that the business has collected about the consumer and direct any service providers to delete the consumer’s personal information. However, there are several enumerated exceptions to this deletion requirement. Specifically, a business or service provider is not required to comply with the consumer’s deletion request if it is necessary to maintain the consumer’s personal information to:
    • Complete the transaction for which the personal information was collected, provide a good or service requested by the consumer, or reasonably anticipated, within the context of a business’s ongoing business relationship with the consumer, or otherwise perform a contract with the consumer.
    • Detect security incidents; protect against malicious, deceptive, fraudulent or illegal activity; or prosecute those responsible for that activity.
    • Debug to identify and repair errors that impair existing intended functionality.
    • Exercise free speech, ensure the right of another consumer to exercise his or her right of free speech, or exercise another right provided for by law.
    • Comply with the California Electronic Communications Privacy Act.
    • Engage in public or peer-reviewed scientific, historical or statistical research in the public interest (when deletion of the information is likely to render impossible or seriously impair the achievement of such research) if the consumer has provided informed consent.
    • Enable solely internal uses that are reasonably aligned with the consumer’s expectations based on the consumer’s relationship with the business.
    • Comply with a legal obligation.
    • Otherwise use the consumer’s personal information, internally, in a lawful manner that is compatible with the context in which the consumer provided the information.
  • Enforcement
    • The Act is enforceable by the California Attorney General and authorizes a civil penalty up to $7,500 per violation.
    • The Act provides a private right of action only in connection with “certain unauthorized access and exfiltration, theft, or disclosure of a consumer’s nonencrypted or nonredacted personal information,” as defined in the state’s breach notification law, if the business failed “to implement and maintain reasonable security procedures and practices appropriate to the nature of the information to protect the personal information.”
      • In this case, the consumer may bring an action to recover damages up to $750 per incident or actual damages, whichever is greater.
      • The statute also directs the court to consider certain factors when assessing the amount of statutory damages, including the nature, seriousness, persistence and willfulness of the defendant’s misconduct, the number of violations, the length of time over which the misconduct occurred, and the defendant’s assets, liabilities and net worth.

Prior to initiating any action against a business for statutory damages, a consumer must provide the business with 30 days’ written notice of the consumer’s allegations. If, within those 30 days, the business cures the alleged violation and provides an express written statement that the violations have been cured, the consumer may not initiate an action for individual or class-wide statutory damages. These limitations do not apply to actions initiated solely for actual pecuniary damages suffered as a result of the alleged violation.
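By way of illustration only, the following Python sketch shows one way a Covered Business might record and honor opt-outs of sale. The names used here (OptOutRegistry, records_eligible_for_sale, the consumer_id field) are hypothetical, and the Act does not prescribe any particular technical implementation; this is a minimal sketch of the required behavior, not a compliance solution.

```python
from datetime import datetime, timezone


class OptOutRegistry:
    """Hypothetical store of consumers' opt-out-of-sale choices."""

    def __init__(self):
        self._opted_out = {}  # consumer_id -> UTC timestamp of the opt-out

    def record_opt_out(self, consumer_id: str) -> None:
        # Triggered when a consumer uses the "Do Not Sell My Personal
        # Information" link; the choice must be respected going forward.
        self._opted_out[consumer_id] = datetime.now(timezone.utc)

    def record_reauthorization(self, consumer_id: str) -> None:
        # A consumer may later expressly re-authorize the sale.
        self._opted_out.pop(consumer_id, None)

    def may_sell(self, consumer_id: str) -> bool:
        return consumer_id not in self._opted_out


def records_eligible_for_sale(records: list, registry: OptOutRegistry) -> list:
    """Exclude records of opted-out consumers from any proposed sale."""
    return [r for r in records if registry.may_sell(r["consumer_id"])]
```

A production system would also need to persist these choices durably, propagate them to any downstream recipients and, per the Act, avoid discriminating against consumers who exercise the opt-out.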

Protection of Personal Data Now a Constitutional Right in Chile

As reported in BNA Privacy Law Watch, a new law makes data protection a constitutional right in Chile. The measure, which was enacted by the National Congress of Chile, lists “protection of one’s personal data” as an individual right under the Constitution’s Article 19. As a result of this measure, Chilean courts must expedite privacy-related cases under constitutional protection. For more information, read the full article.

California Assembly Bill Aims to Avert State Ballot Initiative Related to Privacy

On June 21, 2018, California lawmakers introduced AB 375, the California Consumer Privacy Act of 2018 (the “Bill”). If enacted and signed by the Governor by June 28, 2018, the Bill would introduce key privacy requirements for businesses, but would also result in the removal of a ballot initiative of the same name from the November 6, 2018, statewide ballot. We previously reported on the relevant ballot initiative.

The Bill expands some of the requirements in the ballot initiative. For example, if enacted, the Bill would require businesses to disclose (e.g., in their privacy notices) the categories of personal information they collect about California consumers and the purposes for which that information is used. The Bill also would require businesses to disclose, upon a California consumer’s verifiable request, the categories and specific pieces of personal information they have collected about the consumer, as well as the business purposes for collecting or selling the information and the categories of third parties with whom it is shared. The Bill would require businesses to honor consumers’ requests to delete their data and to opt out of the sale of their personal information, and would prohibit a business from selling the personal information of a consumer under the age of 16 without explicit (i.e., opt-in) consent.

A significant difference between the Bill and the ballot initiative is that the Bill would give the California Attorney General exclusive authority to enforce most of its provisions (whereas the ballot initiative provides for a private right of action with statutory damages of up to $3,000 per violation). One exception would be that a private right of action would exist in the event of a data breach in which the California Attorney General declines to bring an action.

If enacted, the Bill would take effect January 1, 2020.

Supreme Court Holds Warrant Required to Obtain Historical Cell Phone Location Information

On June 22, 2018, the United States Supreme Court held in Carpenter v. United States that law enforcement agencies must obtain a warrant supported by probable cause to obtain historical cell-site location information (“CSLI”) from third-party providers. The government argued in Carpenter that it could access historical CSLI through a court order alone under the Stored Communications Act (the “SCA”). Under 18 U.S.C. § 2703(d), obtaining an SCA court order for stored records only requires the government to “offer specific and articulable facts showing that there are reasonable grounds to believe” that the records sought “are relevant and material to an ongoing criminal investigation.” However, in a split 5-4 decision, the Supreme Court held that the Fourth Amendment requires law enforcement agencies to obtain a warrant supported by probable cause to obtain historical CSLI.

In Carpenter, the FBI obtained a court order under the SCA for historical CSLI. These records were used to convict the defendant, Carpenter, of robbing a number of stores, including the cell phone provider that ultimately provided the relevant records. Carpenter argued that accessing his CSLI without a warrant constituted a Fourth Amendment violation. The government argued that historical CSLI constituted routinely collected business records protected by the Supreme Court’s third-party doctrine (established in U.S. v. Miller and Smith v. Maryland), which provided that the public did not have a reasonable expectation of privacy for certain records held by third-party service providers. Siding with Carpenter, however, the Court held, “A majority of the court has already recognized that individuals have a reasonable expectation of privacy in the whole of their physical movements…Allowing government access to cell-site records—which hold for many Americans the ‘privacies of life,’—contravenes that expectation.”

Chief Justice Roberts was joined in the majority opinion by Justices Ginsburg, Breyer, Sotomayor and Kagan. Justices Kennedy, Thomas, Alito and Gorsuch dissented, each offering separate dissenting opinions.

Virginia Amends Breach Notification Law Applicable to Income Tax Information

On July 1, 2018, HB 183, which amends Virginia’s breach notification law, will come into effect (the “amended law”). The amended law will require income tax return preparers who prepare individual Virginia income tax returns to notify the state’s Department of Taxation (the “Department”) if they discover or are notified of a breach of “return information.” Under the amended law, “return information” is defined as “a taxpayer’s identity and the nature, source, or amount of his income, payments, receipts, deductions, exemptions, credits, assets, liabilities, net worth, tax liability, tax withheld, assessments, or tax payments.”

If an income tax return preparer must notify the Department of a breach, then the preparer must provide the Department with the name and taxpayer identification number of any affected taxpayer, as well as the preparer’s name and preparer tax identification number.

Iowa and Nebraska Enact Information Security Laws

Recently, Iowa and Nebraska enacted information security laws applicable to personal information. Iowa’s law applies to operators of online services directed at and used by students in kindergarten through grade 12, whereas Nebraska’s law applies to all commercial entities doing business in Nebraska that own or license Nebraska residents’ personal information.

In Iowa, effective July 1, 2018, HF 2354 will impose information security requirements on operators of websites, online services, online applications or mobile applications who have actual knowledge that their sites, services or applications are designed, marketed and used primarily for kindergarten through grade 12 school purposes (“Operators”). Under the law, Operators will be required to implement and maintain information security procedures and practices consistent with industry standards and applicable state and federal laws to protect students’ personal information from unauthorized access, destruction, use, modification or disclosure. Operators also are prohibited from selling or renting students’ information. The law does not apply to “general audience” websites, online services, online applications or mobile applications.

In Nebraska, effective July 18, 2018, LB757 requires commercial entities that conduct business in Nebraska and own, license or maintain computerized data that includes Nebraska residents’ personal information to implement and maintain reasonable security procedures and practices, including safeguards for the disposal of personal information. Under the law, commercial entities also must require, by contract, that their service providers institute and maintain reasonable security procedures and practices (the service provider provision applies to contracts entered into on or after the effective date of the law). A violation of the information security requirements under the law is subject to the penalty provisions of the state’s Consumer Protection Act, but expressly does not give rise to a private cause of action.

California Ballot Initiative to Establish Disclosure and Opt-Out Requirements for Consumers’ Personal Information

On November 6, 2018, California voters will consider a ballot initiative called the California Consumer Privacy Act (“the Act”). The Act is designed to give California residents (i.e., “consumers”) the right to request from businesses (see “Applicability” below) the categories of personal information the business has sold or disclosed to third parties, with some exceptions. The Act would also require businesses to disclose in their privacy notices consumers’ rights under the Act, as well as how consumers may opt out of the sale of their personal information if the business sells consumer personal information. Key provisions of the Act include:

  • Definition of Personal Information. Personal information is defined broadly as “information that identifies, relates to, describes, references, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or device.” The Act includes a list of enumerated examples of personal information, which includes, among other data elements, name, postal or email address, Social Security number, government-issued identification number, biometric data, Internet activity information and geolocation data.
  • Applicability. The Act would apply to any for-profit business that “does business in the state of California” and (1) has annual gross revenues in excess of $50 million; (2) annually sells, alone or in combination, the personal information of 100,000 or more consumers or devices; or (3) derives 50 percent or more of its annual revenue from selling consumers’ personal information (collectively, “Covered Businesses”).
  • Right to Know. The Act would require Covered Businesses to disclose, upon a verifiable request from a California consumer, the categories of personal information the business has collected about the consumer, as well as the categories of personal information sold and/or disclosed for a business purpose to third parties. The Act would also require Covered Businesses to identify (i.e., provide the name and contact information for) the third parties to whom the Covered Business has sold or disclosed, for a business purpose, consumers’ personal information. Covered Businesses would be required to comply with such requests free of charge within 45 days of receipt, but would not be required to provide this information more than once within a 12-month period.
  • Exemption. Based on a carve-out in the definition of “third party” (which is defined to exclude (1) “the business that collects personal information from consumers under this Act” or (2) “a person to whom the business discloses a consumer’s personal information for a business purpose pursuant to a written contract”), Covered Businesses would not be required to make the disclosures described above to the extent the Covered Business discloses personal information to another entity pursuant to a written contract with such entity, provided the contract prohibits the recipient from selling the personal information, or retaining, using or disclosing the personal information for any purpose other than performance of services under the contract.
  • Disclosures and Right to Opt Out. The Act would require Covered Businesses to provide notice to consumers of their rights under the Act, and, where applicable, that the Covered Business sells their personal information. If the Covered Business sells consumers’ personal information, the notice must disclose that fact and state that consumers have a right to opt out of the sale of their personal information. Covered Businesses would be required to make this disclosure in their online privacy notice and must separately provide a clear and conspicuous link on their website that says, “Do Not Sell My Personal Information” and provides an opt-out mechanism. If a consumer opts out, the Covered Business would be required to stop selling the consumer’s personal information unless the consumer expressly re-authorizes such sale.
  • Liability for Security Breaches. Pursuant to the Act, if a Covered Business suffers a “breach of the security of the system” (as defined in California’s breach notification law), the Covered Business may be held liable for a violation of the Act if the Covered Business “failed to implement and maintain reasonable security procedures and practices, appropriate to the nature of the information, to protect personal information.”
  • Enforcement. The Act would establish a private right of action and expressly provides that a violation of the Act establishes injury-in-fact without the need to show financial harm. The Act establishes maximum statutory damages of $3,000 per violation or actual damages, whichever is higher. Separately, the Act also would be enforceable by the California Attorney General and would authorize a civil penalty of up to $7,500 per violation. The Act also contains whistleblower enforcement provisions.

If passed, the Act would take effect November 7, 2018, but would “only apply to personal information collected or sold by a business on or after” August 7, 2019.

Chicago Introduces Data Protection Ordinance

Recently, the Personal Data Collection and Protection Ordinance (“the Ordinance”) was introduced to the Chicago City Council. The Ordinance would require businesses to (1) obtain prior opt-in consent from Chicago residents to use, disclose or sell their personal information, (2) notify affected Chicago residents and the City of Chicago in the event of a data breach, (3) register with the City of Chicago if they qualify as “data brokers,” (4) provide specific notification to mobile device users for location services and (5) obtain prior express consent to use geolocation data from mobile applications. 

Key provisions of the Ordinance include:

  • Opt-in Consent to Use and Share Personal Information. Website operators and online service providers must obtain individuals’ prior opt-in consent before using, disclosing or selling the personal information of Chicago residents. Upon request, businesses must disclose to the individual (or their designee) the personal information they maintain about the individual.
  • Security Breach Notification. The Ordinance also imposes breach notification obligations on businesses that process personal information of Chicago residents. Businesses are generally required to notify affected residents or, if they do not own the affected personal information, the data owners within 15 days of discovering the breach. Businesses must also notify the City of Chicago regarding the timing, content and distribution of the notices to individuals and number of affected individuals.
  • Data Broker Registration. Data brokers, defined as commercial entities that collect, assemble and possess personal information about Chicago residents who are not their customers or employees in order to trade the information, must register with the City of Chicago. Data brokers must submit an annual report to the City, including, among other items, (1) the number of Chicago residents whose personal information the brokers collected in the previous year and (2) the name and nature of the businesses with which the brokers shared personal information.
  • Mobile Devices with Location Services Functionality. Retailers that sell or lease mobile devices with location services functionality must provide notice about the functionality in the form and substance prescribed by the Ordinance.
  • Location-enabled Mobile Applications. A business must generally obtain an individual’s affirmative express consent before collecting, using, storing or disclosing the individual’s geolocation information from a mobile application. This requirement is subject to various exceptions, such as in certain instances to allow a parent or guardian to locate their minor child.

Depending on the requirement, the Ordinance allows for a private right of action and specifies fines to address violations.

Colorado Amends Data Breach Notification Law and Enacts Data Security Requirements

Recently, Colorado’s governor signed into law House Bill 18-1128 “concerning strengthening protections for consumer data privacy” (the “Bill”), which takes effect September 1, 2018. Among other provisions, the Bill (1) amends the state’s data breach notification law to require notice to affected Colorado residents and the Colorado Attorney General within 30 days of determining that a security breach occurred, imposes content requirements for the notice to residents and expands the definition of personal information; (2) establishes data security requirements applicable to businesses and their third-party service providers; and (3) amends the state’s law regarding disposal of personal identifying information.

Key breach notification provisions of the Bill include:

  • Definition of Personal Information: The Bill amends Colorado’s breach notification law to define “personal information” as a Colorado resident’s first name or first initial and last name in combination with one or more of the following data elements: (1) Social Security number; (2) student, military or passport identification number; (3) driver’s license number or identification card number; (4) medical information; (5) health insurance identification number; or (6) biometric data. The amended law’s definition of “personal information” also includes a Colorado resident’s (1) username or email address in combination with a password or security questions and answers that would permit access to an online account and (2) account number or credit or debit card number in combination with any required security code, access code or password that would permit access to that account.
  • Attorney General Notification: If an entity must notify Colorado residents of a data breach, and reasonably believes that the breach has affected 500 or more residents, it must also provide notice to the Colorado Attorney General. Notice to the Attorney General is required even if the covered entity maintains its own procedures for security breaches as part of an information security policy or pursuant to state or federal law.
  • Timing: Notice to affected Colorado residents and the Colorado Attorney General must be made within 30 days after determining that a security breach occurred.
  • Content Requirements: The Bill also requires that notice to affected Colorado residents include (1) the date, estimated date or estimated date range of the breach; (2) a description of the personal information acquired or reasonably believed to have been acquired; (3) contact information for the entity; (4) the toll-free numbers, addresses and websites for consumer reporting agencies and the FTC; and (5) a statement that the Colorado resident can obtain information from the FTC and the credit reporting agencies about fraud alerts and security freezes. If the breach involves a Colorado resident’s username or email address in combination with a password or security questions and answers that would permit access to an online account, the entity must also direct affected individuals to promptly change their password and security questions and answers, or to take other steps appropriate to protect the individual’s online account with the entity and all other online accounts for which the individual used the same or similar information. (A simple checklist sketch of these content elements appears after this list.)
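For illustration, the Bill’s content requirements can be treated as a checklist. The Python sketch below models the required elements as a simple data structure; the field names and the is_complete helper are hypothetical conveniences, and the Bill does not mandate any particular format or tooling.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ColoradoBreachNotice:
    """Illustrative model of the notice content the Bill requires."""
    breach_date_or_range: str    # date, estimated date or estimated date range
    info_description: str        # personal information acquired or believed acquired
    entity_contact_info: str     # contact information for the notifying entity
    cra_and_ftc_contacts: str    # toll-free numbers, addresses and websites for CRAs and the FTC
    fraud_alert_statement: str   # statement on fraud alerts and security freezes
    credential_guidance: Optional[str] = None  # required only for login-credential breaches


def is_complete(notice: ColoradoBreachNotice, involves_credentials: bool) -> bool:
    """Check that every element required for this breach is present."""
    required = [
        notice.breach_date_or_range,
        notice.info_description,
        notice.entity_contact_info,
        notice.cra_and_ftc_contacts,
        notice.fraud_alert_statement,
    ]
    if involves_credentials:
        required.append(notice.credential_guidance)
    return all(required)
```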

Key data security and disposal provisions of the Bill include:

  • Definition of Personal Identifying Information: The Bill defines personal identifying information as “a social security number; a personal identification number; a password; a pass code; an official state or government-issued driver’s license or identification card number; a government passport number; biometric data…; an employer, student, or military identification number; or a financial transaction device.”
  • Applicability: The information security and disposal provisions of the Bill apply to “covered entities,” defined as persons that maintain, own or license personal identifying information in the course of the person’s business, vocation or occupation.
  • Protection of Personal Identifying Information: The Bill requires a covered entity that maintains, owns or licenses personal identifying information to implement and maintain reasonable security procedures and practices appropriate to the nature of the personal identifying information it holds, and the nature and size of the business and its operations.
  • Third-Party Service Providers: Under the Bill, a covered entity that discloses information to a third-party service provider must require the service provider to implement and maintain reasonable security procedures and practices that are (1) appropriate to the nature of the personal identifying information disclosed and (2) reasonably designed to help protect the personal identifying information from unauthorized access, use, modification, disclosure or destruction. A covered entity does not need to require a third-party service provider to do so if the covered entity agrees to provide its own security protection for the information it discloses to the provider.
  • Written Disposal Policy: The Bill requires covered entities to create a written policy for the destruction or proper disposal of paper and electronic documents containing personal identifying information that requires the destruction of those documents when they are no longer needed. A covered entity is deemed in compliance with this section of the Bill if it is regulated by state or federal law and maintains procedures for disposal of personal identifying information pursuant to that law.

Vermont Enacts Nation’s First Data Broker Legislation

Recently, Vermont enacted legislation (H.764) that regulates data brokers who buy and sell personal information. Vermont is the first state in the nation to enact this type of legislation.

  • Definition of Data Broker. The law defines a “data broker” broadly as “a business, or unit or units of a business, separately or together, that knowingly collects and sells or licenses to third parties the brokered personal information of a consumer with whom the business does not have a direct relationship.”
  • Definition of “Brokered Personal Information.” “Brokered personal information” is defined broadly to mean one or more of the following computerized data elements about a consumer, if categorized or organized for dissemination to third parties: (1) name, (2) address, (3) date of birth, (4) place of birth, (5) mother’s maiden name, (6) unique biometric data, including fingerprints, retina or iris images, or other unique physical or digital representations of biometric data, (7) name or address of a member of the consumer’s immediate family or household, (8) Social Security number or other government-issued identification number, or (9) other information that, alone or in combination with the other information sold or licensed, would allow a reasonable person to identify the consumer with reasonable certainty.
  • Registration Requirement. The law requires data brokers to register annually with the Vermont Attorney General and pay a $100 annual registration fee.
  • Disclosures to State Attorney General. Data brokers must disclose annually to the State Attorney General information regarding their practices related to the collection, storage or sale of consumers’ personal information. Data brokers also must disclose annually their practices, if any, for allowing consumers to opt out of the collection, storage or sale of their personal information. Further, the law requires data brokers to report annually the number of data breaches experienced during the prior year and, if known, the total number of consumers affected by the breaches. There are additional disclosure requirements if the data broker knowingly possesses brokered personal information of minors, including a separate statement detailing the data broker’s practices for the collection, storage and sale of that information and applicable opt-out policies. Importantly, the law does not require data brokers to offer consumers the ability to opt out.
  • Information Security Program. The law requires data brokers to develop, implement and maintain a written, comprehensive information security program that contains appropriate physical, technical and administrative safeguards designed to protect consumers’ personal information.
  • Elimination of Fees for Security Freezes. The law eliminates fees associated with a consumer placing or lifting a security freeze. Previously, Vermont law allowed for fees of up to $10 to place, and up to $5 to lift temporarily or remove, a security freeze.
  • Enforcement. A violation of the law is considered an unfair and deceptive act in commerce in violation of Vermont’s consumer protection law.
  • Effective Date. The registration and data security obligations take effect January 1, 2019, while the other provisions of the law take effect immediately.

In a statement, Vermont Attorney General T.J. Donovan said, “This bill not only saves [Vermonters] money, but it gives them information and tools to help them keep their personal information secure.”

Vietnam Approves New Cybersecurity Law

On June 12, 2018, Vietnam’s parliament approved a new cybersecurity law that contains data localization requirements, among other obligations. Technology companies doing business in the country will be required to operate a local office and store information about Vietnam-based users within the country. The law also requires social media companies to remove offensive content from their online service within 24 hours at the request of the Ministry of Information and Communications and the Ministry of Public Security’s cybersecurity task force. Companies could face substantial penalties for failure to disclose information upon governmental request. In addition, the law bans internet users in Vietnam from organizing people for anti-state purposes and imposes broad restrictions on using speech to distort the country’s history or achievements. As reported in BNA Privacy Law Watch, the law will take effect on January 1, 2019.

Louisiana Amends Data Breach Notification Law, Eliminates Fees for Security Freezes

Recently, Louisiana amended its Database Security Breach Notification Law (the “amended law”). Notably, the amended law (1) expands the definition of personal information and requires notice to affected Louisiana residents within 60 days, and (2) imposes data security and destruction requirements on covered entities. The amended law goes into effect on August 1, 2018.

Key breach notification provisions of the amended law include:

  • Definition of Personal Information: Under the amended law, “personal information” is now defined as a resident’s first name or first initial and last name together with one or more of the following data elements, when the name or the data element is not encrypted or redacted: (1) Social Security number; (2) driver’s license number or state identification card number; (3) account number, credit or debit card number, together with any required security code, access code or password that would permit access to the individual’s financial account; (4) passport number; and (5) biometric data, such as fingerprints, voice prints, eye retina or iris, or other unique biological characteristic, that is used to authenticate the individual’s identity.
  • Timing: The amended law requires that notice must be made to affected residents in the most expedient time possible and without unreasonable delay, but no later than 60 days from the discovery of a breach. This timing requirement also applies to third parties who are required to notify the owner or licensee of the personal information of a breach.
  • Delays: Under the amended law, entities must provide written notification to the Louisiana Attorney General within the 60-day period if notification is delayed due to (1) the entity’s determination that “measures are necessary to determine the scope of the breach, prevent further disclosures and restore the reasonable integrity of the system” or (2) law enforcement’s determination that notification would impede a criminal investigation. The Attorney General will allow an extension after receiving a written explanation of the reasons for delay.
  • Substitute Notification: The amended law lowers the thresholds for substitute notification, which may take the form of email notice, a posting on the entity’s website and notification to major statewide media. Specifically, substitute notification is permitted if (1) the cost of providing notifications would exceed $100,000 (previously the threshold was $250,000); (2) the number of affected individuals exceeds 100,000 (previously the threshold was 500,000); or (3) the entity does not have sufficient contact information.
  • Harm Threshold Documentation: Notification is not required if the entity determines that there is no reasonable likelihood of harm to Louisiana residents. The amended law requires that this written determination and supporting documentation be maintained for five years from the date the breach was discovered. The Attorney General may request the documentation.

Key data security and destruction provisions of the amended law include:

  • “Reasonable” Security Procedures: The amended law creates a new requirement that entities that conduct business in Louisiana or own or license computerized personal information about Louisiana residents must maintain “reasonable security procedures and practices” to protect personal information. In addition, the security procedures and practices must be “appropriate to the nature of the information.” The amended law does not describe specifically what practices would meet these standards.
  • Data Destruction Requirement: The amended law creates a new requirement that, when Louisiana residents’ personal information owned or licensed by a business is “no longer to be retained,” “all reasonable steps” must be taken to destroy it. For instance, the personal information must be shredded or erased, or the personal information must be otherwise modified to “make it unreadable or undecipherable.”

Separately, on May 15, 2018, SB127 was signed by the governor and took immediate effect. The bill prohibits credit reporting agencies from charging a fee for placing, reinstating, temporarily lifting or revoking a security freeze.

DOE and DHS Assess U.S. Readiness to Manage Potential Cyber Attacks

On May 30, 2018, the federal government released a report that identifies gaps in assets and capabilities required to manage the consequences of a cyber attack on the U.S. electric grid. The assessment is a result of the U.S. Department of Energy (“DOE”) and the U.S. Department of Homeland Security’s (“DHS”) combined efforts to assess the potential scope and duration of a prolonged power outage associated with a significant cyber incident and the United States’ readiness to manage the consequences of such an incident.

DOE and DHS caution, “as cyber capabilities become more readily available over time, state and non-state actors will continue to seek and develop techniques, tactics, and procedures to use against U.S. interests.” They note that the National Security Agency has already identified intrusions into critical industrial control systems by entities with the apparent technical capability to take down power grids, water systems and other critical infrastructure. While no lasting damage from cyber attacks and intrusions targeting U.S. electrical utilities has been observed, the assessment references a December 2015 cyber attack on three Ukrainian electricity distribution companies. The attacks were executed within thirty minutes of each other and caused outages of up to six hours for 225,000 customers. DOE and DHS report that large-scale or long-duration attacks in the United States could impact public health and safety, as well as result in economic costs of billions of dollars.

Although the report concludes that the U.S. government is generally well prepared, it identifies gaps around enhancing cyber incident response capacity, developing high-priority plans, augmenting scarce and critical resources, and understanding and characterizing response efforts to catastrophic incidents. DOE and DHS organize these gaps under seven categories: (1) Cyber Situational Awareness and Incident Impact Analysis, (2) Roles and Responsibilities under Cyber Response Frameworks, (3) Cybersecurity Integration into State Energy Assurance Planning, (4) Electric Cybersecurity Workforce and Expertise, (5) Supply Chain and Trusted Partners, (6) Public-Private Cybersecurity Information Sharing, and (7) Resources for National Cybersecurity Preparedness.

Among its recommendations, the report emphasizes the importance of public-private cybersecurity information sharing: “DOE should work with DHS, industry partners, and other relevant organizations to better define information needs and reporting thresholds through an assessment of voluntary and mandatory reporting requirements.” The report credits the federal government for taking significant steps to enhance existing planning structures for responding to cyber incidents in the last two years, but stresses the importance of closing the identified gaps.

Bulgarian Presidency Presents Progress Report and Points for Debate on ePrivacy

On January 10, 2017, the EU Commission adopted a proposal for a Regulation on Privacy and Electronic Communications (“ePR”). On June 8, 2018, the Council of the European Union’s Bulgarian Presidency presented a progress report (the “Report”) on the draft ePR to the Transport, Telecommunications and Energy Council. The Report reflects on the amendments presented in the May 2018 Examination of the Presidency text. The Report is split into two sections: Annex I, a progress report, and Annex II, questions for the policy debate.

Annex I reports on the following features, among other things:

  • State of play in the Council: The Presidency discusses the scope of the ePR and its link with the EU General Data Protection Regulation (“GDPR”), particularly where the protection provided by the ePR ends and protection provided by the GDPR becomes relevant.
  • Processing of electronic communications data (content and metadata) (Article 6(1)): The permitted processing of electronic communications data to maintain and restore the security of electronic communication networks and services now includes processing in relation to security risks.
  • Processing of electronic communications metadata (Article 6(2), (3a)): The Presidency has introduced new permissions for processing electronic communications metadata, including processing for the purposes of network management and optimization, as well as for statistical counting. These new grounds for processing are accompanied by appropriate safeguards.
  • Protection of terminal equipment information (Article 8): The prohibition on the processing of information from end-users’ terminal equipment remains, but the exceptions now include processing for the purposes of preventing fraud or detecting technical faults.
  • Privacy settings (Article 10): The Presidency introduces significant changes: software providers are obliged to offer privacy settings only at the time of installation or first use, and to inform users when updates change the privacy settings and how users may adjust them.
  • Data retention related elements (Articles 2 and 11): The Report touches on the issue of data retention and highlights that discussions have taken place in two joint meetings of the Working Party Telecommunications and Information Society and the Friends of Presidency, Working Party on Information Exchange and Data Protection. Exclusions have been included in the text relating to national security and defense.
  • Exceptions to presentation and restriction of calling, connected line identification, access to emergency services (Article 13): There is discussion over the power of emergency services to override an end-user’s choice to reject incoming calls without calling line identification in the case of an emergency. Emergency services may also use the end-user’s terminal equipment when called in order to identify that end-user’s location.
  • Unsolicited and direct marketing communications (Article 16): The Report highlights that, following extensive discussions, the new text allows for EU Member States to have the ability to establish laws with a maximum period of time that an organization can use its own customers’ contact details for direct marketing purposes.
  • Supervisory authorities (Article 18): The Report highlights that, similar to the GDPR, supervisory authorities shall have the responsibility for monitoring the application of the ePR, and should have the power to provide remedies and impose administrative fines. The Report states that most delegations seek further flexibility with regard to supervisory authorities and that requirements stemming from Article 8(3) of the Charter and Article 16(2) of the TFEU should be considered.

Annex II lists three questions for which the Presidency would like to seek guidance from the Ministers. These questions relate to permitted processing of metadata, protection of terminal equipment, privacy settings and the balance between innovation and the safeguarding of confidentiality.

Read the full report. Read the May 2018 amendments to the Proposal.

Eleventh Circuit Vacates FTC Data Security Order

On June 6, 2018, the U.S. Court of Appeals for the Eleventh Circuit vacated a 2016 Federal Trade Commission (“FTC”) order compelling LabMD to implement a “comprehensive information security program that is reasonably designed to protect the security, confidentiality, and integrity of personal information collected from or about consumers.” The Eleventh Circuit agreed with LabMD that the FTC order was unenforceable because it did not direct the company to stop any “unfair act or practice” within the meaning of Section 5(a) of the Federal Trade Commission Act (the “FTC Act”).

The case stems from allegations that LabMD, a now-defunct clinical laboratory for physicians, failed to protect the sensitive personal information (including medical information) of consumers, resulting in two specific security incidents. One such incident occurred when a third party informed LabMD that an insurance-related report, which contained personal information of approximately 9,300 LabMD clients (including names, dates of birth and Social Security numbers), was available on a peer-to-peer (“P2P”) file-sharing network.

Following an FTC appeal process, the FTC ordered LabMD to implement a comprehensive information security program that included:

  • designated employees accountable for the program;
  • identification of material internal and external risks to the security, confidentiality and integrity of personal information;
  • reasonable safeguards to control identified risks;
  • reasonable steps to select service providers capable of safeguarding personal information, and requiring them by contract to do so; and
  • ongoing evaluation and adjustment of the program.

In its petition for review of the FTC order, LabMD asked the Eleventh Circuit to decide (1) whether its alleged failure to implement reasonable data security practices constituted an unfair practice within the meaning of Section 5 of the FTC Act and (2) whether the FTC’s order was enforceable if it did not direct LabMD to stop committing any specific unfair act or practice.

The Eleventh Circuit assumed, for purposes of its ruling, that LabMD’s failure to implement a reasonably designed data-security program constituted an unfair act or practice within the meaning of Section 5 of the FTC Act. However, the court held that the FTC’s cease and desist order, which was predicated on LabMD’s general negligent failure to act, was not enforceable. The court noted that the prohibitions contained in the FTC’s cease and desist orders and injunctions “must be stated with clarity and precision,” otherwise they may be unenforceable. The court found that in LabMD’s case, the cease and desist order contained no prohibitions nor instructions to the company to stop a specific act or practice. Rather, the FTC “command[ed] LabMD to overhaul and replace its data-security program to meet an indeterminable standard of reasonableness.” The court took issue with the FTC’s scheme of “micromanaging,” and concluded that the cease and desist order “mandate[d] a complete overhaul of LabMD’s data-security program and [said] precious little about how this [was] to be accomplished.” The court also noted that the FTC’s prescription was “a scheme Congress could not have envisioned.”

Oregon Amends Data Breach Notification Law

On June 2, 2018, Oregon’s amended data breach notification law (“the amended law”) went into effect. Among other changes, the amended law broadens the applicability of breach notification requirements, prohibits fees for security freezes and related services provided to consumers in the wake of a breach and adds a specific notification timing requirement.

Key provisions of the amended law include:

  • Definition of Personal Information: Oregon’s definition of personal information now includes the consumer’s first name or initial and last name combined with “any other information or combination of information that a person reasonably knows or should know would permit access to the consumer’s financial account.”
  • Expanded Scope of Application: Instead of applying only to persons who “own or license” personal information that they use in the course of their business, the amended law now also applies to any person who “otherwise possesses” such information and uses it in the course of their business. It also requires notice when an organization receives a notice of breach from another person that “maintains or otherwise possesses personal information on the person’s behalf.” Persons who maintain or otherwise possess information on behalf of another must “notify the other person as soon as is practicable after discovering a breach of security.”
  • Notice Requirements: The amended law adds a new notice deadline. Notice of a breach of security must be given in the “most expeditious manner possible, without unreasonable delay,” and not later than 45 days after discovering or being notified of the security breach. Also, while the amended law exempts entities that are required to provide breach notification under certain other requirements (e.g., federal laws such as HIPAA), such entities are now required to provide the Attorney General with any notice sent to consumers or regulators in compliance with such other requirements.
  • Providing Credit Monitoring Services: If organizations offer consumers credit monitoring services or identity theft prevention or mitigation services in connection with their notice of a breach, they cannot make those services contingent on the consumer providing a credit or debit card number, or accepting another service that the person offers to provide for a fee. The terms and conditions of any contract for the provision of these services must embody these requirements.
  • Prohibiting Fees for Security Freezes: Under the amended law, consumer reporting agencies are prohibited from charging a consumer a fee for “placing, temporarily lifting or removing a security freeze on the consumer’s report,” creating or deleting protective records, placing or removing security freezes on protected records, or replacing identification numbers, passwords or similar devices that the agency previously provided.

FTC Posts Blog on Data Deletion Rule under COPPA

On May 31, 2018, the Federal Trade Commission published on its Business Blog a post addressing the easily missed data deletion requirement under the Children’s Online Privacy Protection Act (“COPPA”).

The post cautions that companies must review their data retention policies in order to comply with the data retention and deletion rule. Under Section 312.10 of the COPPA Rule, an online service operator may retain personal information of a child “for only as long as is reasonably necessary to fulfill the purposes for which the information was collected.” After that, the operator must delete the information, using reasonable measures to ensure secure deletion.

The FTC explains that a thorough review of data retention policies is crucial for compliance, as the deletion requirement is triggered without an express request from parents. Companies must verify, among other items, when the data ceases to be necessary for the initial purpose for which it was collected, and what they do with the data at that point. For instance, the FTC illustrates, a subscription-based children’s app provider would want to ask what it does with the data when a parent closes an account, a subscription is not renewed or an account becomes inactive. If the information is still necessary for billing purposes, the company must determine how much longer it needs the information.

The FTC provides the following questions that companies should ask to ensure compliance; a rough sketch of how such a review might be automated appears after the list:

  • What types of personal information do you collect from children?
  • What is your stated purpose for collecting the information?
  • How long do you need to retain the information for the initial purpose?
  • Does the purpose for using the information end with an account deletion, subscription cancellation or account inactivity?
  • When it’s time to delete information, are you doing it securely?
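By way of illustration, the sketch below shows one way an operator might automate such a retention review in Python. The retention windows, field names and secure_delete hook are assumptions made up for this example; the COPPA Rule does not prescribe any particular mechanism, only that the information be deleted, with reasonable measures, once it is no longer reasonably necessary.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention windows; an operator must set these based on how
# long each stated collection purpose actually requires the data.
RETENTION_AFTER_CLOSE = timedelta(days=90)  # e.g., winding up billing
INACTIVITY_CUTOFF = timedelta(days=365)     # treat the account as abandoned


def purpose_remains(account: dict, now: datetime) -> bool:
    """Return True while at least one collection purpose is still live.

    Accounts are assumed to carry timezone-aware datetimes in the
    `last_seen` and `closed_at` fields.
    """
    if account["status"] == "active":
        return now - account["last_seen"] < INACTIVITY_CUTOFF
    if account["status"] == "closed":
        return now - account["closed_at"] < RETENTION_AFTER_CLOSE
    return False


def retention_sweep(accounts: list, secure_delete) -> None:
    """Delete a child's personal information once no purpose remains.

    `secure_delete` should take reasonable measures to protect against
    unauthorized access to the information in connection with its deletion.
    """
    now = datetime.now(timezone.utc)
    for account in accounts:
        if not purpose_remains(account, now):
            secure_delete(account["child_data"])
```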

EDPB Publishes Guidelines on Certification and Derogations under the GDPR

On May 30, 2018, the European Data Protection Board (“EDPB”), replacing the Article 29 Working Party, published the final version of Guidelines 2/2018 on derogations in the context of international data transfers and draft Guidelines 1/2018 on certification under the EU General Data Protection Regulation (“GDPR”). 

What are the Guidelines about?

Guidelines on Derogations

Derogations under Article 49 of the GDPR are exemptions from the general principle that personal data may only be transferred to countries outside of the European Economic Area (“EEA”) or to an international organization if an adequate level of data protection is provided for in that country or by that international organization, or if appropriate safeguards have been adduced and data subjects enjoy enforceable and effective rights to ensure that the level of protection guaranteed by the GDPR is not undermined.

The Guidelines provide clarification to the following derogations:

  • The data subject has explicitly consented to the proposed transfer, after having been informed of the possible risks of such transfers for the data subject due to the absence of an adequacy decision and appropriate safeguards (Article 49 (1)(a)). In particular, the Guidelines focus on the specific elements required for consent to be considered a valid legal ground for international data transfers to third countries and international organizations.
  • The transfer is necessary for the performance of a contract between the data subject and data controller, or for the implementation of pre-contractual measures taken at the data subject’s request (Article 49 (1)(b)).
  • The transfer is necessary for the conclusion or performance of a contract concluded in the interest of the data subject between the data controller and another natural or legal person (Article 49 (1)(c)).
  • The transfer is necessary for important reasons of public interest (Article 49 (1)(d)).
  • The transfer is necessary for the establishment, exercise or defense of legal claims (Article 49 (1)(e)).
  • The transfer is necessary in order to protect the vital interests of data subjects or other persons (Article 49 (1)(f)).
  • The transfer is made from a public register (Article 49 (1)(g) and (2)).
  • The transfer is necessary for the purposes of compelling legitimate interests pursued by the data exporter that are not overridden by the interests or rights and freedoms of the data subjects (Article 49(1) and (2)).

The Guidelines note that personal data may only be transferred under these derogations if the transfer is occasional and not repetitive (i.e., the transfer “may happen more than once, but not regularly,” and may occur “under random, unknown circumstances and within arbitrary time intervals”). Derogations may therefore not be used to legitimize data transfers where, for example, the data importer is granted direct access to a database on a general basis. The Guidelines also stress the importance of conducting a necessity test to assess the possible use of most of the above derogations. This necessity test requires the EU/EEA data exporter to evaluate whether a transfer of personal data can be considered necessary for the specific purpose of the derogation to be used. Finally, the Guidelines note that a transfer in response to a decision from a non-EEA authority is only lawful if that transfer complies with the data transfer rules of the GDPR. In particular, where there is an international agreement (e.g., a mutual legal assistance treaty (“MLAT”)), the EDPB recommends that EU companies should generally refuse direct requests and refer the requesting non-EEA authority to the existing MLAT or agreement.

Guidelines on Certification

Certification mechanisms are accountability tools that companies may choose to implement to demonstrate compliance with the GDPR. The Guidelines on certification aim to identify overarching criteria that may be relevant to all types of certification mechanisms issued in accordance with the GDPR, thereby helping EU Member States, supervisory authorities and national accreditation bodies establish a more consistent, harmonized approach for the implementation of certification mechanisms.

The Guidelines on certification consist of six core sections:

  • Section 1 explains the purpose of certification and the key concepts of the certification provisions in the GDPR.
  • Sections 2 and 3 explore the role of the EU supervisory authorities and certification bodies that may issue certification.
  • Section 4 discusses the approval of certification criteria by the competent EU supervisory authority, as well as the approval of criteria by the EDPB, which may result in a common certification, the European Data Protection Seal, intended to prevent fragmentation among data protection certifications.
  • Section 5 highlights what can be certified under the GDPR and the documentation needed to ensure that the assessment and its results are clearly recorded.
  • Section 6 provides guidance for defining certification criteria.

The EDPB is accepting comments on the Guidelines on certification until July 12, 2018. The EDPB will publish separate guidelines to address the identification of criteria to approve certification mechanisms as data transfer tools to third countries or international organizations.

Data Protection Officers: The Unsung Heroes of the GDPR

On May 29, 2018, Bojana Bellamy published a letter on the importance and value of data protection officers (“DPOs”) on the International Association of Privacy Professionals’ Privacy Perspectives blog, entitled A Letter to the Unsung Hero of the GDPR (the “Letter”). The Letter acknowledges the herculean efforts and boundless commitment DPOs and those in a similar role have demonstrated in preparing their organizations for the GDPR.

In the Letter, Bellamy recounts the immense number of tasks that DPOs have to engage in on a daily basis as part of the many different roles that they must play in balancing company data uses with data protection laws and customs. Among these tasks and roles, the Letter highlights that DPOs have to navigate vast and complex privacy laws and understand their organizations’ data usages and methods inside and out. They have to be able to communicate proficiently with technology teams and data scientists while being able to explain complex technical concepts to colleagues without a technological background. They must play the role of an ethicist to determine whether a processing activity is fair, and be a guardian of human rights while also enabling innovation.

The Letter also serves as a thank you note to DPOs for their efforts, and credits them with being the catalyst for making data privacy more than just a compliance issue for many organizations, but also a key business and strategic issue. The Letter calls for DPOs to receive the recognition they deserve, not only from their respective organizations, but also from regulators, privacy activists, media and customers.

To read Bellamy’s full acknowledgment of data protection officers and their efforts, please see the full Letter.

Department of Energy Announces New Efforts in Energy Sector Cybersecurity

On May 14, 2018, the Department of Energy (“DOE”) Office of Electricity Delivery & Energy Reliability released its Multiyear Plan for Energy Sector Cybersecurity (the “Plan”). The Plan is significantly guided by DOE’s 2006 Roadmap to Secure Control Systems in the Energy Sector and 2011 Roadmap to Achieve Energy Delivery Systems Cybersecurity. Taken together with DOE’s recent announcement creating the new Office of Cybersecurity, Energy Security, and Emergency Response (“CESER”), the Plan makes clear that DOE is asserting its position as the energy sector’s congressionally recognized sector-specific agency (“SSA”) on cybersecurity.

Multiyear Plan for Energy Sector Cybersecurity

Under development over the last year, the Plan aligns with President Trump’s Executive Order 13800, which calls on the government to engage with critical infrastructure owners and operators to identify authorities and capabilities that agencies could employ to support critical infrastructure cybersecurity. To this end, the Plan lays out DOE’s integrated strategy to reduce cyber risks to the U.S. energy sector. The Plan seeks to leverage strong partnerships with the private sector to: (1) strengthen today’s cyber systems and risk management capabilities and (2) develop innovative solutions for tomorrow’s inherently secure and resilient systems. It identifies three goals to accomplish these priorities: (1) strengthen energy sector cybersecurity preparedness, (2) coordinate incident response and recovery and (3) accelerate game-changing research, development and demonstration of resilient delivery systems.

Office of Cybersecurity, Energy Security, and Emergency Response

Featured heavily in the Plan is CESER, which was announced by DOE Secretary Perry on February 14, 2018. The announcement stated that CESER would be led by an Assistant Secretary, whom the Administration has yet to nominate, and that President Trump’s FY 19 budget requested $96 million for the new office.

DOE Undersecretary Mark Menezes testified to Congress that “initially, the office will be comprised of the work we currently do” under existing programs. Indeed, DOE’s FY 19 budget request indicates that CESER will be formed from existing reliability programs in the Office of Electricity Delivery & Energy Reliability, which will be renamed the Office of Electricity Delivery (“OE”). OE will maintain the Transmission Reliability, Resilient Distribution Systems, Energy Storage, and Transmission Permitting and Technical Assistance programs, while CESER will inherit the Cybersecurity for Energy Delivery Systems (“CEDS”) program, currently led by Deputy Assistant Secretary Henry S. Kenchington, and the Infrastructure Security and Energy Restoration (“ISER”) program, currently headed by Deputy Assistant Secretary Devon Streit.

CEDS forms the core of DOE’s work on energy sector cybersecurity and aligns with the Plan’s goals of increasing energy cyber preparedness and developing new cybersecurity technologies. Besides conducting cybersecurity research and development, CEDS also oversees DOE’s primary programs for sharing cybersecurity information with the private sector. This includes the Cybersecurity Risk Information Sharing Program (“CRISP”), which facilitates timely bi-directional sharing of cyber threat information in order to monitor energy sector IT networks. At present, 75% of U.S. electric utilities participate in CRISP. CEDS also includes the Cybersecurity for Operational Technology Environment (“CYOTE”) pilot project, which applies lessons learned from CRISP to monitor operating technology (“OT”) networks. According to the budget request, DOE intends to improve both CRISP and CYOTE by integrating utility data into the Intelligence Community environment to enhance threat information. The request also states that DOE will create a new “Advanced Industrial Control System Analysis Center” within CEDS that will “span the DOE laboratory network and work in collaboration with private sector partners to use the analysis of energy sector supply chain component and model impacts to address system threats and vulnerabilities through technical solutions, share information about findings, and develop mitigation and response solutions.”

ISER provides technical expertise to support the resiliency of critical infrastructure assets key to energy sector operations, and addresses the Plan’s goal of coordinating incident response. ISER’s focus is operational and spans all hazards facing the energy sector. However, the DOE budget notes that in the next fiscal year, ISER will “build out its effective, timely, and coordinated cyber incident management capability” and “envisions” forming a team of at least six cyber energy responders to support incident response within the energy sector.

DOE’s Emerging Role in Energy Sector Cybersecurity

Under the Trump Administration, DOE is moving cybersecurity higher on the Department’s agenda. To be sure, the Plan and CESER reshuffle existing resources rather than create entirely new programs. But it is clear that DOE intends to flex its position under the Fixing America’s Surface Transportation Act (“FAST Act”) as the energy sector SSA on cybersecurity.

DOE’s efforts come as the Department of Homeland Security (“DHS”) is also raising its profile on cybersecurity. Utilizing authority under the Cybersecurity Information Sharing Act, passed just weeks after the FAST Act in 2015, DHS has designated its National Cybersecurity and Communications Integration Center (“NCCIC”) as a certified portal to accept cybersecurity information. As such, entities enjoy liability protection for sharing cybersecurity information with the NCCIC through programs like Automated Indicator Sharing (“AIS”) and the even more robust Cyber Information Sharing and Collaboration Program (“CISCP”).

Those within the energy sector can utilize both DOE’s and DHS’s information sharing programs to strengthen their cybersecurity. Coordination with the NCCIC and sharing through AIS or CISCP provide access to the government’s cross-sectoral cybersecurity activities, though reports indicate that businesses have been slow to adopt AIS. DOE’s CRISP and CYOTE programs, tailored specifically to electricity, represent a more specialized package of information sharing for electricity sub-sector stakeholders.

DHS and DOE can be expected to continue asserting jurisdictional claims over cybersecurity issues. Hopefully, this will represent little more than the traditional rivalry between government agencies, and result in complementary rather than competing federal cybersecurity programs.

FTC Approves Settlement with PayPal Regarding Alleged Venmo Privacy Misrepresentations

On May 24, 2018, the Federal Trade Commission granted final approval to a settlement (the “Final Settlement”) with PayPal, Inc., to resolve charges that PayPal’s peer-to-peer payment service, Venmo, misled consumers regarding certain restrictions on the use of its service, as well as the privacy of transactions. The proposed settlement was announced on February 27, 2018. In its complaint, the FTC alleged that Venmo misrepresented its information security practices by stating that it “uses bank-grade security systems and data encryption to protect your financial information.” In fact, the FTC alleged, Venmo violated the Gramm-Leach-Bliley Act’s (“GLBA’s”) Safeguards Rule by failing to (1) have a written information security program; (2) assess the risks to the security, confidentiality and integrity of customer information; and (3) implement basic safeguards such as providing security notifications to users that their passwords were changed. The complaint also alleged that Venmo (1) misled consumers about their ability to transfer funds to external bank accounts, and (2) misrepresented the extent to which consumers could control the privacy of their transactions, in violation of the GLBA Privacy Rule.

The Final Settlement prohibits Venmo from misrepresenting “any material restrictions on the use of its service, the extent of control provided by any privacy settings, and the extent to which Venmo implements or adheres to a particular level of security.” Venmo also must make certain transaction- and privacy-related disclosures to consumers and refrain from violating the Privacy Rule and Safeguards Rule. Venmo is required to obtain biennial third-party assessments of its compliance with the Rules for 10 years, which, according to the FTC, is “[c]onsistent with past cases involving violations of Gramm-Leach-Bliley Act Rules.”

HHS Publishes Advance Notices of Proposed Rulemaking on Accounting of Disclosures and Civil Monetary Penalties

The Department of Health and Human Services (“HHS”) recently published two advance notices of proposed rulemaking that address the accounting of disclosures and the potential distribution of civil monetary penalties to affected individuals.

The first notice of proposed rulemaking would solicit the public’s views on modifying the HIPAA Privacy Rule as necessary to implement the accounting of disclosures provisions of the Health Information Technology for Economic and Clinical Health (“HITECH”) Act. HHS had previously published a notice of proposed rulemaking on these provisions in 2011, which it is now withdrawing.

The second notice of proposed rulemaking would solicit the public’s views on “establishing a methodology under which an individual who is harmed by an offense punishable under HIPAA may receive a percentage of any civil money penalty or monetary settlement collected with respect to the offense.” This is required under Section 13410(c)(3) of the HITECH Act, but has not been implemented. HHS has collected almost $40 million since it began imposing civil monetary penalties, a considerable sum that could be distributed to affected individuals.

Arizona Amends Data Breach Notification Law

On April 11, 2018, Arizona amended its data breach notification law (the “amended law”). The amended law will require persons, companies and government agencies doing business in the state to notify affected individuals within 45 days of determining that a breach has resulted in or is reasonably likely to result in substantial economic loss to affected individuals. The old law only required notification “in the most expedient manner possible and without unreasonable delay.” The amended law also broadens the definition of personal information and requires regulatory notice and notice to the consumer reporting agencies (“CRAs”) under certain circumstances.

Key provisions of the amended law include:

  • Definition of Personal Information. Under the amended law, the definition of “personal information” now includes an individual’s first name or initial and last name in combination with one or more of the following “specified data elements:” (1) Social Security number; (2) driver’s license or non-operating license number; (3) a private key that is unique to an individual and that is used to authenticate or sign an electronic record; (4) financial account number or credit or debit card number in combination with any required security code, access code or password that would allow access to the individual’s financial account; (5) health insurance identification number; (6) medical or mental health treatment information or diagnoses by a health care professional; (7) passport number; (8) taxpayer identification or identity protection personal identification number issued by the Internal Revenue Service; and (9) unique biometric data generated from a measurement or analysis of human body characteristics to authenticate an individual when the individual accesses an online account. The amended law also defines “personal information” to include “an individual’s user name or e-mail address, in combination with a password or security question and answer, which allows access to an online account.”
  • Harm Threshold. Pursuant to the amended law, notification to affected individuals, the Attorney General and the CRAs is not required if the breach has not resulted in, and is not reasonably likely to result in, substantial economic loss to affected individuals.
  • Notice to the Attorney General and Consumer Reporting Agencies. If the breach requires notification to more than 1,000 individuals, notification must also be made to the Attorney General and the three largest nationwide CRAs.
  • Timing. Notifications to affected individuals, the Attorney General and the CRAs must be issued within 45 days of determining that a breach has occurred.
  • Substitute Notice. Where the cost of making notifications would exceed $50,000, the affected group exceeds 100,000 individuals, or there is insufficient contact information for notice, the amended law now requires that substitute notice be made by (1) sending a written letter to the Attorney General demonstrating the facts necessary for substitute notice and (2) conspicuously posting the notice on the breached entity’s website for at least 45 days. Under the amended law, substitute notice no longer requires email notice to affected individuals and notification to major statewide media.
  • Penalty Cap. The Attorney General may impose up to $500,000 in civil penalties for knowing and willful violations of the law in relation to a breach or series of related breaches. The Attorney General also is entitled to recover restitution for affected individuals.

Senator Wyden Calls for FCC Investigation into Company Sharing Location Data

On May 8, 2018, Senator Ron Wyden (D–OR) demanded that the Federal Communications Commission investigate the alleged unauthorized tracking of Americans’ locations by Securus Technologies, a company that provides phone services to prisons, jails and other correctional facilities. Securus allegedly purchases real-time location data from a third-party location aggregator and provides the data to law enforcement without obtaining judicial authorization for the disclosure of the data. In turn, the third-party location aggregator obtains the data from wireless carriers. Federal law restricts how and when wireless carriers can share certain customer information with third parties, including law enforcement. Wireless carriers are prohibited from sharing certain customer information, including location data, unless the carrier has obtained the customer’s consent or the sharing is otherwise required by law.

To access real-time location data from Securus, Senator Wyden’s letter alleges, correctional officers can enter any U.S. wireless phone number and upload a document purporting to be an “official document giving permission” to obtain real-time location data about the wireless customer. According to the letter, Securus does not take any steps to verify that the documents actually provide judicial authorization for the real-time location surveillance. The letter requests that the FCC investigate Securus’ practices and the wireless carriers’ failure to maintain exclusive control over law enforcement access to their customers’ location data. The letter also calls for a broader investigation into the customer consent that each wireless carrier requires from other companies before sharing customer location information and other data. Separately, Senator Wyden also sent a letter to the major wireless carriers requesting an investigation into the safeguards in place to prevent the unauthorized sharing of wireless customer information.

In response, the FCC confirmed that it has opened an investigation into LocationSmart, reportedly the third-party vendor that sold the location data to Securus. Senator Wyden provided comment to the website Ars Technica that the “location aggregation industry” has functioned with “essentially no oversight,” and urged the FCC to “expand the scope of this investigation and to more broadly probe the practice of third parties buying real-time location data on Americans.”

Irish Data Protection Bill in Final Committee Stage Before the Irish Legislature

On May 16, 2018, the Irish Data Protection Bill 2018 (the “Bill”) entered the final committee stage in Dáil Éireann (the lower house and principal chamber of the Irish legislature). The Bill was passed by the Seanad (the upper house of the legislature) at the end of March 2018. In the current stage, final statements on the Bill will be made before it is signed into law by the President.

The Bill gives effect to Ireland’s national choices in areas where the EU General Data Protection Regulation (“GDPR”) leaves Member States a margin of maneuver, and specifies the investigative and enforcement powers of the Irish Data Protection Commission. The Bill also transposes Directive 2016/680 (the Law Enforcement Directive) into Irish law.

Key highlights of the Bill include:

  • Data Protection Commission: The Bill establishes the Data Protection Commission, which replaces the current Office of the Data Protection Commissioner. The Bill permits the appointment of three Commissioners, one of whom will act as Chair and hold the casting vote where a decision of the Commission is tied.
  • Children’s Data: The Bill provides that, for the purposes of data protection regulation in Ireland, a child is a person under 18 years of age. The initial draft of the Bill specified 13 years as the age of digital consent in the context of Article 8 of the GDPR; in the previous committee stage, however, the age was amended to 16 years. A review of the provision is to take place three years after it comes into operation. The Bill further specifies that processing children’s data for purposes of direct marketing, profiling or micro-targeting is an offense punishable by administrative fines.
  • Common Travel Area: The Bill provides that processing and disclosure of personal data for purposes of preserving the Common Travel Area (between Ireland, the United Kingdom of Great Britain and Northern Ireland, the Channel Islands and the Isle of Man) is lawful where the controller is an airline or ship operator.
  • Further Processing: The Bill states that processing of personal data or sensitive data for a purpose other than that for which the data was originally collected is lawful where the processing is necessary to (1) prevent a threat to national security, defense or public security; (2) prevent, detect, investigate or prosecute criminal offenses; (3) provide or obtain legal advice or for legal claims and proceedings; or (4) establish, exercise or defend legal rights.
  • Sensitive Data: The Bill outlines circumstances additional to those of Article 9 of the GDPR in which the processing of special categories of data is permitted. These include the processing of (1) special categories of data for purposes of providing or obtaining legal advice, for legal claims and proceedings, or to establish, exercise or defend legal rights; (2) political opinion data in the course of electoral activities for the purpose of compiling data on people’s political opinions, by a political party, a candidate for election or a holder of elective political office in Ireland, and by the Referendum Commission in the performance of its functions; (3) special categories of data where necessary and proportionate for the administration of justice or the performance of a function conferred on a person by or under an enactment or by the Constitution; and (4) health data where necessary and proportionate for insurance, pension or property mortgaging purposes.
  • Right to Access Results of Examinations and Appeals: The Bill specifically provides for a right of access to examination results, examination scripts and the results of an examination appeal.
  • Enforced Access Requests: The Bill provides that a person who requires an individual to make an access request in connection with the individual’s recruitment as an employee, the individual’s continued employment, or a contract for the provision of services by the individual to that person will be guilty of an offense and subject to a fine or imprisonment.
  • Right to Object to Direct Marketing: The Bill exempts direct mailing carried out in the course of electoral activities, subject to certain conditions, from the right to object to direct marketing.
  • Administrative Fines: The Bill specifies that where the Commission decides to impose an administrative fine on a controller or processor that is a public authority or public body, but is not a public authority or public body acting as an undertaking within the meaning of the Competition Act 2002, the amount of the administrative fine shall not exceed €1,000,000. Previous drafts of the Bill exempted such public authorities and public bodies from administrative fines.
  • Representative Actions: The Bill permits a data protection action to be brought on behalf of a data subject by a non-profit body, organization or association, and the court hearing the action shall have the power to grant the data subject relief by way of injunction, declaration or compensation for the damage suffered by the plaintiff as a result of the infringement. Previous editions of the Bill did not permit recovery in the form of damages.

Belgian Privacy Commission Releases 2017 Annual Activity Report

On May 2, 2018, the Belgian Privacy Commission (the “Belgian DPA”) published its Annual Activity Report for 2017 (the “Annual Report”), highlighting its main accomplishments for the past year.

In 2017, the Belgian DPA focused on the following topics:

EU General Data Protection Regulation (“GDPR”). The Belgian DPA issued a number of documents to help companies and organizations with their GDPR compliance efforts, including a Recommendation on the internal register of processing activities (with a template register), FAQs on Codes of Conduct, and a Recommendation on the appointment and role of a Data Protection Officer.

Facebook case. Despite the changes Facebook made to its cookie policy and practices following the 2015 Recommendation, the Belgian DPA considered that the social networking site still did not obtain valid consent from the individuals concerned. In April 2017, the Belgian DPA issued a new set of Recommendations providing additional guidelines for external website operators that use Facebook technologies and services, as well as several recommendations for Internet users who wish to protect themselves against Facebook’s tracking practices.

Other issues. The Belgian DPA also worked on the following issues, among others:

  • The Belgian DPA issued an unfavorable Opinion on the long-term retention of fingerprints, recalling that data minimization must remain the norm.
  • The Belgian DPA criticized the mass screening of festival visitors and the screening of asylum-seekers using data available on social media platforms, highlighting the lack of free consent and insufficient privacy safeguards.
  • In addition, the Belgian DPA published a Report on Big Data with recommendations for assessing projects in practice in light of the GDPR.

Finally, the Annual Report states that the Belgian DPA processed 4,934 requests or complaints (an increase of 443 from 2016), including requests for information, mediation and control. Most requests for information related to the use of CCTV, applicable privacy principles, data subjects’ rights, the right to one’s image and notifications via the public register.

Read the Annual Activity Report for 2017 (in French and Dutch) and the press release (in French and Dutch).

CIPL Publishes Study on How the ePrivacy Regulation Will Affect the Design of Digital Services

On May 14, 2018, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth LLP published a study on how the ePrivacy Regulation will affect the design and user experiences of digital services (the “Study”). The Study was prepared by Normally, a data product and service design studio that CIPL had asked for an independent expert opinion on user experience design.

Using examples, the Study examines established user experience design principles, as well as potential new principles that support designing with user privacy in mind. The Study further details how Articles 6, 8 and 10 of the proposed ePrivacy Regulation will affect user experience design, and emphasizes that greater flexibility in how the ePrivacy Regulation is formulated would facilitate designs that enhance end users’ options and experience.

Key findings of the Study include:

  • While principles for good user experience design already exist (i.e., designs must be timely, efficient, personal and convenient), more work is needed to create privacy-centered design principles.
  • Design principles which enable design for privacy could include (1) transparent designs which ensure a user is informed to engage meaningfully; (2) empowering designs which enable users to make active choices and control their personal data; and (3) conscientious designs which recognize users can be lax with their privacy and proactively remind individuals of their choices and their ability to control or adjust them.
  • Article 6 of the ePrivacy Regulation—which requires that communications content only be processed with consent for a specific purpose—creates specific design challenges for many digital services (e.g., obtaining group consent for group message chats and the ePrivacy Regulation’s applicability to smart messaging).
  • Article 8 of the ePrivacy Regulation—which states that service providers may collect information from a user’s device or use the device’s processing and storage capabilities only if it is technically essential or if the user has given consent for a specific purpose—creates specific design issues in respect of obtaining cookie consent. Cookie walls risk creating fatigue for users and can hinder the “open web.”
  • Article 10 of the ePrivacy Regulation requires software providers to allow users to prevent third parties from collecting information from their device or using the device’s processing and storage capabilities, and suggests that the best moment for providers to exercise this responsibility is at installation. By frontloading such choices at installation, users are asked to make blanket privacy choices before interacting with the digital services those decisions will affect, inhibiting a fully informed choice.
  • In order to find solutions to the problems posed by the proposed ePrivacy Regulation, designers need more freedom to select and sequence privacy controls throughout the user experience, not just upfront. Distributing the controls across the user journey avoids overloading the onboarding experience, helps users engage with privacy settings through contextual relevance and allows for user understanding to build over time.

To read more about the Study’s key findings, along with all its other conclusions, please see the full Study.

FTC Issues Warning Letters for Potential COPPA Violations

On April 27, 2018, the Federal Trade Commission issued two warning letters to foreign marketers of geolocation tracking devices for violations of the U.S. Children’s Online Privacy Protection Act (“COPPA”). The first letter was directed to a Chinese company, Gator Group, Ltd., which sold the “Kids GPS Gator Watch” (marketed as a child’s first cellphone); the second was sent to a Swedish company, Tinitell, Inc., which markets an app for children that works with a mobile phone worn like a watch. Both products collect a child’s precise geolocation data, and the Gator Watch includes geofencing “safe zones.”

Importantly, in commenting on its ability to reach foreign companies that target U.S. children, the FTC stated that “[t]he COPPA Rule applies to foreign-based websites and online services that are involved in commerce in the United States. This would include, among others, foreign-based sites or services that are directed to children in the United States, or that knowingly collect personal information from children in the United States.”

In both letters, the FTC warned that it had specifically reviewed the foreign operators’ online services and had identified potential COPPA violations (i.e., a failure to provide direct notice or obtain parental consent prior to collecting geolocation data). The FTC stated that it expected the companies to come into compliance with COPPA, including, in the case of Tinitell, which had stopped marketing the watch, adherence to COPPA’s ongoing obligation to keep children’s data secure.

St. Kitts and Nevis Passes the Data Protection Bill 2018

On May 4, 2018, St. Kitts and Nevis’ legislators passed the Data Protection Bill 2018 (the “Bill”). The Bill was passed to promote the protection of personal data processed by public and private bodies.

Attorney General the Honourable Vincent Byron explained that the Bill is largely derived from the Organization of Eastern Caribbean States model and “seeks to ensure that personal information in the custody or control of an organization, whether it be a public group like the government, or private organization, shall not be disclosed, processed or used other than the purpose for which it was collected, except with the consent of the individual or where exemptions are clearly defined.”

Read more about the Bill.

National Standard on Personal Information Security Goes into Effect in China

On May 1, 2018, the Information Security Technology – Personal Information Security Specification (the “Specification”) went into effect in China. The Specification is not binding and cannot be used as a direct basis for enforcement. However, enforcement agencies in China can still use the Specification as a reference or guideline in their administration and enforcement activities. For this reason, the Specification should be taken seriously as a best practice in personal data protection in China, and should be complied with where feasible.

The Specification constitutes a best practices guide for the collection, retention, use, sharing and transfer of personal information, and for the handling of related information security incidents. It includes (without limitation) basic principles for personal information security, notice and consent requirements, security measures, rights of data subjects and requirements related to internal administration and management. The Specification establishes a definition of sensitive personal information, and provides specific requirements for its collection and use.

Read our previous blog post from January 2018 for a more detailed description of the Specification.

Mobile Phone Maker BLU Settles FTC Privacy and Data Security Claims

On April 30, 2018, the Federal Trade Commission announced that BLU Products, Inc. (“BLU”), a mobile phone manufacturer, agreed to settle charges that the company allowed ADUPS Technology Co. Ltd. (“ADUPS”), a third-party service provider based in China, to collect consumers’ personal information without their knowledge or consent, notwithstanding the company’s promises that it would keep the relevant information secure and private. The relevant personal information allegedly included, among other information, text message content and real-time location information. On September 6, 2018, the FTC gave final approval to the settlement in a unanimous 5-0 vote.

The FTC’s complaint alleged that BLU falsely claimed that the company (1) limited third-party collection of data from users’ devices to information needed to perform requested services, and (2) implemented appropriate physical, technical and administrative safeguards to protect consumers’ personal information. The FTC alleged that BLU in fact failed to implement appropriate security procedures to oversee the security practices of its service providers, including ADUPS, and that as a result, ADUPS was able to (and did in fact) collect sensitive personal information from BLU devices without consumers’ knowledge or consent. ADUPS allegedly collected text message contents, call and text logs with full telephone numbers, contact lists, real-time location data, and information about applications used and installed on consumers’ BLU devices. The FTC alleged that BLU’s lack of oversight allowed ADUPS to collect this information notwithstanding the fact that ADUPS did not need this information to perform the relevant services for BLU. The FTC further alleged that preinstalled ADUPS software on BLU devices “contained common security vulnerabilities that could enable attackers to gain full access to the devices.”

The terms of the settlement prohibit BLU from misrepresenting the extent to which it protects the privacy and security of personal information and require the company to implement and maintain a comprehensive security program. The company also must undergo biennial third-party assessments of its security program for 20 years and is subject to certain recordkeeping and compliance monitoring requirements.

Article 29 Working Party Releases Updated Standard Application Forms for BCRs

On April 11, 2018, the Article 29 Working Party (the “Working Party”) adopted two Recommendations on the Standard Application for Approval of Data Controller or Processor Binding Corporate Rules for the Transfer of Personal Data (the “Recommendations”). Binding Corporate Rules (“BCRs”) are one of the mechanisms offered to companies to transfer data outside the European Economic Area to a country which does not provide an adequate level of protection for the data according to Article 45 of the GDPR. These Recommendations, in the form of questionnaires, are intended to help BCR applicants demonstrate how they fulfill the requirements of Article 47 of the GDPR.

In addition, the Working Party released a Working Document Setting Forth a Co-Operation Procedure for the Approval of “Binding Corporate Rules” for Controllers and Processors under the GDPR (the “Working Document”). The Working Document specifically addresses the scenario in which a group of undertakings or group of enterprises engaged in a joint economic activity, with entities in more than one Member State, wishes to submit draft BCRs to a Supervisory Authority (“SA”). In this case, the GDPR does not provide any guidance on how the group of undertakings or enterprises should determine its choice of SA as BCR Lead. The BCR Lead acts as the point of contact for the applicant during the approval process and also manages the application throughout the cooperation phase with the relevant SAs. The Working Document therefore provides a series of criteria to identify the appropriate BCR Lead, such as the location of the group’s European headquarters or the location of the company with delegated data protection responsibilities. Finally, the Working Document details the cooperation procedure to be followed between the BCR Lead and the other relevant SAs for the approval of BCRs.

The Recommendations are available for data controllers and for data processors. View the Working Document.

FTC Nominees Confirmed by Senate

On April 26, 2018, the U.S. Senate confirmed by unanimous consent all five pending nominees to the Federal Trade Commission. Once the nominees are installed, the agency will have a full complement of Commissioners for the first time in nearly three years. The FTC will be comprised of three Republicans — Joseph Simons (Chairman), Noah Joshua Phillips and Christine Wilson — and two Democrats — Rebecca Kelly Slaughter and Rohit Chopra.

FTC Commissioners serve staggered seven-year terms. April 27, 2018 is Democrat Terrell McSweeny’s last day at the FTC, and Simons will take over her seat, which expires in 2024. Christine Wilson, who is slated to take over Acting Chair Maureen Ohlhausen’s seat, must wait until Ohlhausen’s departure from the agency. Ohlhausen has been nominated to the U.S. Court of Federal Claims, but has not yet been confirmed.

Belgian Privacy Commission Issues Recommendation on Data Protection Impact Assessment

The Belgian Privacy Commission (the “Belgian DPA”) recently released a Recommendation (in French and Dutch) on Data Protection Impact Assessment (“DPIA”) and the prior consultation requirements under Articles 35 and 36 of the EU General Data Protection Regulation (“GDPR”) (the “Recommendation”). The Recommendation aims to provide guidance on the core elements and requirements of a DPIA, the different actors involved and specific provisions.

Key takeaways from the Recommendation are summarized below:

  • Why conduct a DPIA? The Belgian DPA states that the obligation to conduct a DPIA in certain circumstances should be understood in light of two central principles of the GDPR, namely the principle of accountability and the risk-based approach.
  • When is a DPIA required? The Belgian DPA indicates that carrying out a DPIA is not mandatory for every processing operation. Instead, a DPIA is only required where a type of processing is “likely to result in a high risk to the rights and freedoms of natural persons.” For this assessment, the Belgian DPA refers to the Guidelines of the Article 29 Working Party (“Working Party”) and, in particular, to the nine criteria set out in the Guidelines for determining whether a processing of personal data is likely to create a high risk for the rights and freedoms of individuals. According to the Belgian DPA, if a processing operation meets two of the criteria on this list, a DPIA must be conducted.
  • When should a DPIA be conducted? The Belgian DPA stresses that the DPIA must be carried out before any processing of personal data begins, and serves as a tool to help make decisions concerning the processing.
  • What are the essential elements of a DPIA? A DPIA must contain a systematic description of the contemplated processing and its purposes, including at a minimum a clear description of the processing, the personal data involved, the categories of recipients, the retention period of the data and the media (e.g., software, network, paper, etc.) on which the data are stored. The DPIA must also include an evaluation of the necessity and proportionality of the processing activities with regard to the purposes of the processing, taking into account several criteria. Additionally, the DPIA must include a risk assessment covering the identification, analysis and evaluation of the risks. Companies may choose any method for this assessment as long as it leads to an objective evaluation of the risks, though the Belgian DPA recommends favoring existing risk management methods. Finally, the DPIA must include the measures envisaged to address those risks, such as the safeguards, security measures and tools implemented to ensure the protection of the data and compliance with the GDPR.
  • Prior consultation of the Supervisory Authorities (“SAs”). The Belgian DPA states that the GDPR requires a prior consultation of the SAs only when the residual risk is high. If the risks can be mitigated, then a prior consultation is not mandatory.

The Belgian DPA also makes additional recommendations, including, inter alia:

  • Similar or joint processing activities. A single DPIA could be used to assess multiple processing operations that are similar in terms of nature, scope, context, purpose and risks.
  • Monitoring and review. The controller should, where necessary, conduct a periodic review of the processing activity to assess whether the processing remains consistent with the DPIA that was performed. Such a review must take place, at a minimum, where the risk resulting from the processing operations changes.
  • Preexistent processing. For processing activities predating May 25, 2018, conducting a DPIA is only required if the risk(s) change after May 25, 2018 (e.g., a new technology is used or personal data are used for another purpose). However, the Belgian DPA recommends, as a best practice, also conducting DPIAs for existing processing activities that are likely to result in a high risk to the rights and freedoms of individuals.

Finally, the Recommendation includes annexes:

  • Annex 1: The Belgian DPA recommends some minimal characteristics for appropriate risk management.
  • Annex 2: The Belgian DPA provides a draft list of processing activities requiring a DPIA. The list includes, inter alia, the processing of biometric data to identify individuals in a public area; the collection of personal data from third parties for the purpose of making decisions (including decisions to refuse or terminate) regarding a contract to which an individual is party; the large-scale processing of personal data of vulnerable individuals (e.g., children); and the large-scale processing of personal data in which individuals’ behavior is observed, collected, established or influenced in a systematic manner using automated means, including for advertising purposes.
  • Annex 3: The Belgian DPA provides a draft list of processing activities that are exempt from a DPIA, including, inter alia, processing activities by private entities that are necessary to meet their legal obligations (subject to conditions); the processing of personal data for payroll and HR management purposes; and the processing of personal data for client and vendor management purposes, subject to certain conditions.