California Enacts Blockchain Legislation

As reported on the Blockchain Legal Resource, California Governor Jerry Brown recently signed into law Assembly Bill No. 2658 for the purpose of further studying blockchain’s potential applications for Californians. In doing so, California joins a growing list of states officially exploring distributed ledger technology.

Specifically, the law requires the Secretary of the Government Operations Agency to convene a blockchain working group prior to July 1, 2019. Under the new law, “blockchain” means “a mathematically secured, chronological and decentralized ledger or database.” In addition to including various representatives from state government, the working group is required to include appointees from the technology industry and non-technology industries, as well as appointees with backgrounds in law, privacy and consumer protection.

Under the new law, which has a sunset date of January 1, 2022, the working group is required to evaluate:

  • the uses of blockchain in state government and California-based businesses;
  • the risks, including privacy risks, associated with the use of blockchain by state government and California-based businesses;
  • the benefits associated with the use of blockchain by state government and California-based businesses;
  • the legal implications associated with the use of blockchain by state government and California-based businesses; and
  • the best practices for enabling blockchain technology to benefit the State of California, California-based businesses and California residents.

In doing so, the working group is required to seek “input from a broad range of stakeholders with a diverse range of interests affected by state policies governing emerging technologies, privacy, business, the courts, the legal community and state government.”

The working group is also tasked with delivering a report to the California Legislature by January 1, 2020, on the potential uses, risks and benefits of blockchain technology by state government and California businesses. Moreover, the report is required to include recommendations for amending relevant provisions of California law that may be impacted by the deployment of blockchain technology.

Hunton Insurance Head Comments on Hotel Data Breach Coverage Dispute

As reported on the Insurance Recovery Blog, Hunton Andrews Kurth insurance practice head Walter Andrews recently commented to the Global Data Review regarding the infirmities underlying an Orlando, Florida federal district court’s ruling that an insurer does not have to defend its insured for damage caused by a third-party data breach.

The decision in St. Paul Fire & Marine Ins. Co. v. Rosen Millennium Inc., which involved a claim for coverage under two general liability insurance policies, turned on whether customers’ credit card information obtained from the insured’s payment system had been “made known,” and by whom. According to the district court, the insurance policies required that the credit card information be “made known” by the insured; in this instance, however, the publication was made by the third-party hackers. As Andrews explained, although it was undisputed that Florida law controlled the interpretation of Millennium’s policies, the district court based its decision on a prior case decided under South Carolina law, which differs from Florida law in many fundamental respects. “Florida state law makes it very clear that coverage is meant to be construed in favor of the policyholder where there is ambiguity,” Andrews said. “To me, it’s clear that there were two reasonable interpretations of the insurance policy here.”

Despite the outcome, Andrews noted that there are helpful takeaways from this decision for policyholders and prospective insureds facing potential exposure from cyber events: “Given how strenuously the insurers are fighting to deny coverage for data breach claims, a reasonable takeaway is that policyholders should consider getting very specific cyber insurance coverage.”

View the district court’s decision and Andrews’ comments to the Global Data Review.

CIPL Hosts Workshop on Accountability Under the GDPR in Paris

On October 5, 2018, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth LLP hosted a workshop on how to implement, demonstrate and incentivize accountability under the EU General Data Protection Regulation (“GDPR”), in collaboration with AXA in Paris, France. In addition to the workshop, on October 4, 2018, CIPL hosted a Roundtable on the Role of the Data Protection Officer (“DPO”) under the GDPR at Mastercard and a pre-workshop dinner at the Chanel School of Fashion, sponsored by Nymity.

Roundtable on the Role of the DPO Under the GDPR

On October 4, 2018, CIPL hosted a Roundtable on the Role of the DPO under the GDPR. The industry-only session consisted of an open discussion among CIPL members who have firsthand experience in carrying out the role and tasks of a DPO in diverse and complex multinational organizations. Following opening remarks by CIPL president Bojana Bellamy, participants discussed practical challenges, best practices and solutions for the effective exercise of the DPO’s functions. The Roundtable addressed issues such as the position of the DPO within the organization, independence and conflicts of interest, and the rights, duties and liability of the DPO. View the discussion topics as listed in the program agenda.

CIPL Pre-Workshop Dinner at Chanel School of Fashion

On the evening of October 4, 2018, CIPL hosted a pre-workshop dinner at the Chanel School of Fashion, sponsored by Nymity. The event brought together CIPL members and data protection authorities (“DPAs”) in advance of CIPL’s all-day accountability workshop. During the dinner, remarks were given by Bojana Bellamy, as well as by Anna Pouliou, Head of Privacy at Chanel, and Terry McQuay, President of Nymity and sponsor of the event.

CIPL Workshop on How to Implement, Demonstrate and Incentivize Accountability Under the GDPR

On October 5, 2018, CIPL hosted an all-day workshop on How to Implement, Demonstrate and Incentivize Accountability Under the GDPR, in collaboration with AXA. CIPL’s two newest papers on the central role of accountability in data protection formed the basis of the program, placing an emphasis on how accountability enables effective data protection and trust in the digital society, and on the need for DPAs to encourage and incentivize accountability. Over 100 CIPL members and invited guests attended the session, including over 10 data privacy regulators.

Following opening remarks by Emmanuel Touzeau, Group Communication and Brand Director – GDPR Sponsor at AXA, and CIPL’s Bojana Bellamy, introductory scene-setting keynotes by Peter Hustinx, former European Data Protection Supervisor, and Patrick Rowe, Deputy General Counsel at Accenture, laid the foundation for the day’s discussions.

The first panel on “Accountability under the GDPR” featured a wide-ranging discussion by DPAs and industry experts on the important role of accountability in data protection. The meaning of accountability and its role in enabling effective privacy protections for individuals while ensuring innovation by organizations informed the discussion, along with dialogue around the key elements of accountability and how specific requirements of the GDPR map to these core elements. An important topic of discussion during this session concerned how to reconcile the need for proactive engagement between companies and DPAs with enforcement practices.

The second panel on “How to Demonstrate Accountability Internally and Externally” progressed the discussion from what constitutes accountability to how to implement and demonstrate it in practice, both within an organization and externally to DPAs. Participants also discussed whether accountability should be showcased proactively and how it can be demonstrated by participation in accountability schemes such as Binding Corporate Rules and future GDPR certifications and codes of conduct.

The final session of the day on “Best Practices: How are DPAs Incentivizing Accountability?” considered how DPAs can incentivize accountability under the GDPR. A wide range of incentives that are – or could be – used to encourage organizations to implement strong accountability measures were discussed, along with those that feature in CIPL’s paper on incentivizing accountability.

The workshop formed part of CIPL’s ongoing work on the concept of accountability in data protection and on reaching consensus on its essential elements. View the full workshop agenda. CIPL’s papers on The Case for Accountability: How it Enables Effective Data Protection and Trust in Digital Society and Incentivizing Accountability: How Data Protection Authorities and Law Makers Can Encourage Accountability are the latest papers in this initiative and form the foundation for further work on accountability from CIPL.

EDPB Adopts Opinions on National DPIA Lists in the EU

The European Data Protection Board (“EDPB”) recently published 22 Opinions on the draft lists of the Supervisory Authorities (“SAs”) of EU Member States setting out which processing operations are subject to the requirement of conducting a data protection impact assessment (“DPIA”) under the EU General Data Protection Regulation (“GDPR”).

National DPIA Lists

Article 35(4) of the GDPR states that the SAs of the EU Member States must establish, publish and communicate to the EDPB a list of processing operations that trigger the DPIA requirement under the GDPR. The following EU Member States have submitted their lists: Austria, Belgium, Bulgaria, Czech Republic, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Malta, Netherlands, Poland, Portugal, Romania, Slovakia, Sweden and the United Kingdom.

In some cases, the EDPB requests that the SAs include processing activities in their list or specify additional criteria that, when combined, would satisfy the DPIA requirement. In other cases, the EDPB requests that the SAs remove some processing activities or criteria not considered to present a high risk to individuals. The purpose of the EDPB opinions is to ensure the consistent application of the GDPR’s DPIA requirement and to limit inconsistencies among EU Member States with respect to this requirement. The national lists will not be identical because, in establishing DPIA lists, the SAs must take into account their national or regional context and national legislation.

The EDPB has emphasized that the national DPIA lists aim to improve transparency for data controllers but are not exhaustive. Importantly, the EDPB requests that national SAs include in their DPIA lists a clear reference to the high-risk criteria for conducting DPIAs established by the Article 29 Working Party in its guidance. The draft lists should rely on and complement those guidelines.

Next Steps

After receiving the EDPB’s opinions, the SAs have two weeks to (1) communicate to the EDPB whether they intend to amend their draft list or maintain it in its current form and (2) provide an explanation for such decision.

View the 22 Opinions of the EDPB on national DPIA lists.

Vizio Agrees to $17M Settlement to Resolve Smart TV Class Action Suit

Vizio, Inc. (“Vizio”), a California-based company best known for its internet-connected televisions, agreed to a $17 million settlement that, if approved, will resolve multiple proposed consumer class actions consolidated in California federal court. The suits’ claims, which are limited to the period between February 1, 2014 and February 6, 2017, involve data-tracking software Vizio installed on its smart TVs. The software allegedly identified content displayed on Vizio TVs and enabled Vizio to determine the date, time and channel of programs, and whether a viewer watched live or recorded content. The viewing patterns were connected to viewers’ IP addresses, though never, Vizio emphasized in its press release announcing the proposed settlement, to an individual’s name, address or similar identifying information. According to Vizio, viewing data allows advertisers and programmers to develop content better aligned with consumers’ preferences and interests.

Among other claims, the suits allege that Vizio failed to adequately disclose its surveillance practices and obtain consumers’ express consent before collecting the information. The various suits, some of which were filed in 2015, were consolidated in California’s Central District in April 2016 and subsequently survived Vizio’s motion to dismiss. Vizio had argued that several of the claims were deficient, and contended that the injunctive relief claims were moot in light of a February 2017 consent decree resolving the Federal Trade Commission’s (“FTC”) complaint over Vizio’s collection and use of viewing data and other information. To settle the FTC case, Vizio agreed, among other things, to stop unauthorized tracking, to prominently disclose its TV viewing collection practices and to get consumers’ express consent before collecting and sharing viewing information.

The parties notified the district court in June that they had reached a settlement in principle. On October 4, 2018, they jointly moved for preliminary settlement approval. Counsel for the consumers argued that the deal is fair because the revenue Vizio obtained from sharing consumers’ data will be fully disgorged, and class members who submit a claim will receive a pro rata share of the settlement estimated at between $13 and $31, assuming a claims rate of 2 to 5 percent. Vizio also agreed to provide non-monetary relief, including revised on-screen disclosures concerning its viewing data practices and deletion of all viewing data collected prior to February 6, 2017. The relief is contingent on the court’s approval of the settlement.
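For readers who want to check the arithmetic, the short Python sketch below works through the pro rata payout math. Only the $17 million gross fund and the 2 to 5 percent claims-rate range are reported above; the net fund and class size used here are hypothetical illustration values, chosen to show how a fixed fund divided among fewer claimants yields a larger individual share.

    # Back-of-the-envelope sketch of the pro rata payout arithmetic. The net
    # fund and class size are hypothetical illustration values, not figures
    # from the filing; only the $17M gross fund and the 2-5% claims-rate
    # range are reported in the coverage above.
    NET_FUND = 10_000_000     # hypothetical: the $17M fund less fees and costs
    CLASS_SIZE = 16_000_000   # hypothetical: number of class members

    def per_claimant(claims_rate: float) -> float:
        """Pro rata share: a fixed net fund split among those who file."""
        claimants = CLASS_SIZE * claims_rate
        return NET_FUND / claimants

    for rate in (0.02, 0.05):
        print(f"{rate:.0%} claims rate -> ${per_claimant(rate):.2f} per claimant")
    # 2% claims rate -> $31.25 per claimant
    # 5% claims rate -> $12.50 per claimant

The inverse relationship is the point: the fewer class members who file claims, the larger each claimant’s share, which is why the estimated range runs from roughly $31 at a 2 percent claims rate down to roughly $13 at 5 percent.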

SEC Fines Broker-Dealer $1 Million in First Enforcement Action Under Identity Theft Rule

On September 26, 2018, the SEC announced a settlement with Voya Financial Advisers, Inc. (“Voya”), a registered investment advisor and broker-dealer, for violating Regulation S-ID, also known as the “Identity Theft Red Flags Rule,” as well as Regulation S-P, the “Safeguards Rule.” Together, Regulations S-ID and S-P are designed to require covered entities to help protect customers from the risk of identity theft and to safeguard confidential customer information. The settlement represents the first SEC enforcement action brought under Regulation S-ID.

I.  The Identity Theft Red Flags Rule

Regulation S-ID covers SEC-registered broker-dealers, investment companies and investment advisors and mandates a written identity theft program, including policies and procedures designed to:

  • identify relevant types of identity theft red flags;
  • detect the occurrence of those red flags;
  • respond appropriately to the detected red flags; and
  • periodically update the identity theft program.

Covered entities are also required to ensure the proper administration of their preventative programs.
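As a rough illustration only, the Python sketch below shows what the “detect” and “respond” elements of such a program might look like in code; the flagged-number rule, names and data structures are hypothetical and are not drawn from Regulation S-ID or any SEC order.

    # Hypothetical sketch of the "detect" and "respond" red-flag elements.
    # The rule (checking a caller's number against a fraud-flag list before
    # resetting a password) is illustrative, not a regulatory requirement.
    from dataclasses import dataclass, field

    FLAGGED_NUMBERS = {"+1-555-0100"}  # numbers previously tied to fraud

    @dataclass
    class ResetRequest:
        representative_id: str
        caller_number: str

    @dataclass
    class RedFlagLog:
        events: list = field(default_factory=list)

    def handle_reset(req: ResetRequest, log: RedFlagLog) -> str:
        """Detect a red flag (flagged caller) and respond by escalating."""
        if req.caller_number in FLAGGED_NUMBERS:
            log.events.append(f"red flag: call from {req.caller_number}")
            return "escalate"  # respond: no temporary password is issued
        return "issue-temporary-password"

    log = RedFlagLog()
    print(handle_reset(ResetRequest("rep-017", "+1-555-0100"), log))  # escalate
    print(log.events)

The fourth element, periodic updating of the program, is what keeps such rules current as fraud patterns change; the SEC’s findings below turn in part on that element.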

II.  The Safeguards Rule

Rule 30(a) of Regulation S-P requires financial institutions to adopt written policies and procedures that address administrative, technical and physical safeguards to protect customer records and information. It further requires that those policies and procedures be reasonably designed to (1) ensure the security and confidentiality of customer records and information; (2) protect against anticipated threats or hazards to the security or integrity of customer records and information; and (3) protect against unauthorized access to or use of customer records or information that could result in substantial harm or inconvenience to any customer.

III.  The Voya Violations

According to the SEC’s order, cyber intruders successfully impersonated Voya contractor-representatives, gaining access to a web portal that housed the personally identifiable information (“PII”) of approximately 5,600 Voya customers. Over a six-day period, intruders called Voya’s service call center and requested that three representatives’ passwords be reset; the intruders then used the temporary passwords to create new customer profiles and access customer information and documents. The order indicated that, in two of the three cases, the phone number used to call the Voya service center had previously been flagged as associated with fraudulent activity.

Three hours after the first fraudulent reset, the targeted representative allegedly notified technical support that they had not requested the reset. While Voya did take some steps in response, the order found that those steps did not include terminating the fraudulent login sessions or imposing safeguards sufficient to prevent intruders from obtaining passwords for two additional representative accounts over the next several days.

The SEC determined that Voya violated the Identity Theft Red Flags Rule because, while it had adopted an Identity Theft Prevention Program in 2009, it did not review and update this program in response to changes in the technological environment. The SEC also found that Voya failed to provide adequate training to its employees. Finally, the SEC found that Voya’s Identity Theft Program lacked reasonable policies and procedures to respond to red flags. In addition to these violations, the SEC determined that Voya violated the Safeguards Rule by failing to adopt written policies and procedures reasonably designed to safeguard customer records and information.

IV.  Aftermath and Implications

While neither admitting nor denying the SEC’s findings, Voya agreed to a $1 million fine to settle the enforcement action and will engage an independent consultant to evaluate its policies and procedures for compliance with the Safeguards Rule, Identity Theft Red Flags Rule and related regulations. The SEC additionally ordered that Voya cease and desist from committing any violations of Regulations S-ID and S-P.

The Voya settlement demonstrates that the SEC is focused on protecting consumer information and on ensuring that broker-dealers, investment companies and investment advisors comply with Regulation S-ID. It also shows that merely having policies and procedures designed to protect customer information may not suffice; entities subject to Regulation S-ID should frequently evaluate the adequacy of their policies and procedures designed to identify and address “red flags,” and they should ensure that all relevant employees receive comprehensive training on identity theft. Such entities must also ensure that their compliance programs are updated frequently to address changes in technology and corresponding changes to the risk environment.

NIST Seeks Public Comment on Managing Internet of Things Cybersecurity and Privacy Risks

The U.S. Department of Commerce’s National Institute of Standards and Technology recently announced that it is seeking public comment on Draft NISTIR 8228, Considerations for Managing Internet of Things (“IoT”) Cybersecurity and Privacy Risks (the “Draft Report”). The document is to be the first in a planned series of publications that will examine specific aspects of the IoT topic.

The Draft Report is designed “to help federal agencies and other organizations better understand and manage the cybersecurity and privacy risks associated with their IoT devices throughout their lifecycles.” According to the Draft Report, “[m]any organizations are not necessarily aware they are using a large number of IoT devices. It is important that organizations understand their use of IoT because many IoT devices affect cybersecurity and privacy risks differently than conventional IT devices do.”

The Draft Report identifies three high-level considerations with respect to the management of cybersecurity and privacy risks for IoT devices as compared to conventional IT devices: (1) many IoT devices interact with the physical world in ways conventional IT devices usually do not; (2) many IoT devices cannot be accessed, managed or monitored in the same ways conventional IT devices can; and (3) the availability, efficiency and effectiveness of cybersecurity and privacy capabilities are often different for IoT devices than for conventional IT devices. The Draft Report also identifies three high-level risk mitigation goals: (1) protect device security; (2) protect data security; and (3) protect individuals’ privacy.

In order to address those considerations and risk mitigation goals, the Draft Report provides the following recommendations:

  • Understand the IoT device risk considerations and the challenges they may cause to mitigating cybersecurity and privacy risks for devices in the appropriate risk mitigation areas.
  • Adjust organizational policies and processes to address the cybersecurity and privacy risk mitigation challenges throughout the IoT device lifecycle.
  • Implement updated mitigation practices for the organization’s IoT devices as the organization would implement any other change to its practices.

Comments are due by October 24, 2018.

APEC Cross-Border Privacy Rules Enshrined in U.S.-Mexico-Canada Trade Agreement

On September 30, 2018, the U.S., Mexico and Canada announced a new trade agreement (the “USMCA”) aimed at replacing the North American Free Trade Agreement. Notably, the USMCA’s chapter on digital trade recognizes “the economic and social benefits of protecting the personal information of users of digital trade” and will require the U.S., Canada and Mexico (the “Parties”) to each “adopt or maintain a legal framework that provides for the protection of the personal information of the users[.]” The frameworks should include key principles such as: limitations on collection, choice, data quality, purpose specification, use limitation, security safeguards, transparency, individual participation and accountability.

In adopting such a framework, Article 19.8(2) directs the Parties to consider the principles and guidelines of relevant international bodies, such as the APEC Privacy Framework and the OECD Recommendation of the Council concerning Guidelines Governing the Protection of Privacy and Transborder Flows of Personal Data, and Article 19.8(6) formally recognizes the APEC Cross-Border Privacy Rules (the “APEC CBPRs”) as a valid transfer mechanism within the Parties’ respective legal systems:

Art. 19.8(6) Recognizing that the Parties may take different legal approaches to protecting personal information, each Party should encourage the development of mechanisms to promote compatibility between these different regimes. The Parties shall endeavor to exchange information on the mechanisms applied in their jurisdictions and explore ways to extend these or other suitable arrangements to promote compatibility between them. The Parties recognize that the APEC Cross-Border Privacy Rules system is a valid mechanism to facilitate cross-border information transfers while protecting personal information.

In addition, Article 19.14(1)(b) provides that “the Parties shall endeavor to… cooperate and maintain a dialogue on the promotion and development of mechanisms, including the APEC Cross-Border Privacy Rules, that further global interoperability of privacy regimes.”

The APEC CBPRs were developed by the 21 APEC member economies as a cross-border transfer mechanism and comprehensive privacy program for private-sector organizations to enable the accountable free flow of data across the APEC region. To participate in the system, an organization must be certified by a third-party, APEC-recognized Accountability Agent. The CBPRs are binding on and enforceable against participating companies.

The USMCA must still pass the U.S. Congress, the Canadian Parliament, and the Mexican Senate.

CNIL Publishes Initial Assessment on Blockchain and GDPR

Recently, the French Data Protection Authority (“CNIL”) published its initial assessment of the compatibility of blockchain technology with the EU General Data Protection Regulation (GDPR) and proposed concrete solutions for organizations wishing to use blockchain technology when implementing data processing activities.

What is a Blockchain?

A blockchain is a database in which data is stored and distributed over a large number of computers, and in which all entries into that database (called “transactions”) are visible to all users of the blockchain. It is a technology that can be used to process personal data and is not a processing activity in itself.
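By way of illustration only (this sketch is not part of the CNIL’s assessment), the core mechanics can be reduced to a few lines of Python: each block commits to the hash of the previous block, so any alteration of an earlier entry is detectable by every user of the chain.

    # Toy hash-chained ledger (illustrative only): each block stores the hash
    # of the previous block, making past entries tamper-evident.
    import hashlib
    import json
    import time

    def block_hash(block: dict) -> str:
        """Hash a block's contents deterministically."""
        payload = json.dumps(block, sort_keys=True).encode("utf-8")
        return hashlib.sha256(payload).hexdigest()

    def append_block(chain: list, transactions: list) -> None:
        """Append a block whose hash commits to the previous block."""
        prev = chain[-1]["hash"] if chain else "0" * 64
        block = {"index": len(chain), "time": time.time(),
                 "transactions": transactions, "prev_hash": prev}
        block["hash"] = block_hash(block)  # hash of all the fields above
        chain.append(block)

    def is_valid(chain: list) -> bool:
        """Recompute every hash and check each link to the previous block."""
        for i, block in enumerate(chain):
            body = {k: v for k, v in block.items() if k != "hash"}
            prev = chain[i - 1]["hash"] if i else "0" * 64
            if block["prev_hash"] != prev or block["hash"] != block_hash(body):
                return False
        return True

    chain: list = []
    append_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
    append_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
    print(is_valid(chain))                       # True
    chain[0]["transactions"][0]["amount"] = 500  # tamper with an old entry
    print(is_valid(chain))                       # False: tampering detected

Real blockchains layer a consensus process on top of this structure to decide which blocks are appended; the resulting immutability is precisely what creates the GDPR friction the CNIL addresses below.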

Scope of the CNIL’s Assessment

The CNIL made it clear that its assessment does not apply to (1) distributed ledger technology (“DLT”) solutions or (2) private blockchains.

  • DLT solutions are not blockchains and are too recent and rare to allow the CNIL to carry out a generic analysis.
  • Private blockchains are defined by the CNIL as blockchains under the control of a party that has sole control over who can join the network and who can participate in the consensus process of the blockchain (i.e., the process for determining which blocks get added to the chain and what the current state is). These private blockchains are simply classic distributed databases. They do not raise specific GDPR compliance issues, unlike public blockchains (i.e., blockchains that anyone in the world can read or send transactions to, and expect to see included if valid, and anyone in the world can participate in the consensus process) and consortium blockchains (i.e., blockchains subject to rules that define who can participate in the consensus process or even conduct transactions).

In its assessment, the CNIL first examined the role of the actors in a blockchain network as a data controller or data processor. The CNIL then issued recommendations to minimize privacy risks to individuals (data subjects) when their personal data is processed using blockchain technology. In addition, the CNIL examined solutions to enable data subjects to exercise their data protection rights. Lastly, the CNIL discussed the security requirements that apply to blockchain.

Role of Actors in a Blockchain Network

The CNIL made a distinction between the participants who have permission to write on the chain (called “participants”) and those who validate a transaction and create blocks by applying the blockchain’s rules so that the blocks are “accepted” by the community (called “miners”). According to the CNIL, the participants, who decide to submit data for validation by miners, act as data controllers when (1) the participant is an individual and the data processing is not purely personal but is linked to a professional or commercial activity; or (2) the participant is a legal person and enters data into the blockchain.

If a group of participants decides to implement a processing activity on a blockchain for a common purpose, the participants should identify the data controller upstream, e.g., by (1) creating an entity and appointing that entity as the data controller, or (2) appointing the participant who takes the decisions for the group as the data controller. Otherwise, they could all be considered as joint data controllers.

According to the CNIL, data processors within the meaning of the GDPR may be (1) smart contract developers who process personal data on behalf of the participant – the data controller, or (2) miners who validate the recording of the personal data in the blockchain. The qualification of miners as data processors may raise practical difficulties in the context of public blockchains, since that qualification requires miners to execute with the data controller a contract that contains all the elements provided for in Article 28 of the GDPR. The CNIL announced that it is currently examining this issue in depth. In the meantime, the CNIL encouraged actors to use innovative solutions that enable them to ensure compliance with the obligations imposed on data processors by the GDPR.

How to Minimize Risks to Data Subjects

  • Assessing the appropriateness of using blockchain

As part of the Privacy by Design requirements under the GDPR, data controllers must consider in advance whether blockchain technology is appropriate for implementing their data processing activities. Blockchain technology is not necessarily the most appropriate technology for all processing of personal data, and it may make it difficult for the data controller to ensure compliance with the GDPR, in particular its cross-border data transfer restrictions. In the CNIL’s view, if the blockchain’s properties are not necessary to achieve the purpose of the processing, data controllers should give priority to other solutions that allow full compliance with the GDPR.

If it is appropriate to use blockchain technology, data controllers should use a consortium blockchain that ensures better control of the governance of personal data, in particular with respect to data transfers outside of the EU. According to the CNIL, the existing data transfer mechanisms (such as Binding Corporate Rules or Standard Contractual Clauses) are fully applicable to consortium blockchains and may be implemented easily in that context, while it is more difficult to use these data transfer mechanisms in a public blockchain.

  • Choosing the right format under which the data will be recorded

As part of the data minimization requirement under the GDPR, data controllers must ensure that the data is adequate, relevant and limited to what is necessary in relation to the purposes for which the data is processed.

In this respect, the CNIL recalled that the blockchain may contain two main categories of personal data, namely (1) the credentials of participants and miners, and (2) additional data entered into a transaction (e.g., a diploma or a title of ownership) that may relate to individuals other than the participants and miners.

The CNIL noted that it was not possible to further minimize the credentials of participants and miners since such credentials are essential to the proper functioning of the blockchain. According to the CNIL, the retention period of this data must necessarily correspond to the lifetime of the blockchain.

With respect to additional data, the CNIL recommended using solutions in which (1) the data in cleartext form is stored outside of the blockchain and (2) only information proving the existence of the data is stored on the blockchain (e.g., a cryptographic commitment or a fingerprint of the data obtained using a keyed hash function).

In situations in which none of these solutions can be implemented, and when this is justified by the purpose of the processing and a data protection impact assessment has shown that the residual risks are acceptable, the data could be stored either with a non-keyed hash function or, in the absence of alternatives, “in the clear.”
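The following Python sketch illustrates the keyed-hash approach the CNIL describes; it is a minimal illustration under stated assumptions, not a design prescribed by the CNIL, and the record names and the dictionary standing in for an off-chain database are hypothetical.

    # Illustrative sketch: keep cleartext off-chain; write only a keyed-hash
    # fingerprint (an HMAC) on-chain. Names and storage are hypothetical.
    import hashlib
    import hmac
    import secrets

    off_chain_store: dict = {}  # stand-in for an off-chain database

    def commit(record_id: str, data: bytes, key: bytes) -> str:
        """Store cleartext off-chain; return the fingerprint for the chain."""
        off_chain_store[record_id] = data
        return hmac.new(key, data, hashlib.sha256).hexdigest()

    def verify(record_id: str, fingerprint: str, key: bytes) -> bool:
        """Prove the off-chain record matches the on-chain commitment."""
        data = off_chain_store.get(record_id)
        if data is None:
            return False  # cleartext gone: the digest alone reveals nothing
        expected = hmac.new(key, data, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, fingerprint)

    key = secrets.token_bytes(32)  # the key is also kept off-chain
    fp = commit("diploma-42", b"BSc awarded 2018", key)  # fp goes on-chain
    print(verify("diploma-42", fp, key))   # True

    # Erasure (discussed in the next section): delete the off-chain cleartext
    # and the key, and the immutable on-chain digest becomes unusable.
    del off_chain_store["diploma-42"]
    print(verify("diploma-42", fp, key))   # False

Because the on-chain fingerprint is computed with a secret key, deleting the off-chain cleartext and the key leaves nothing on the chain from which the data can be recovered or even confirmed, which is the basis of the quasi-erasure technique described in the next section.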

How to Ensure that Data Subjects Can Effectively Exercise Their Data Protection Rights

According to the CNIL, the exercise of the right to information, the right of access and the right to data portability does not raise any particular difficulties in the context of blockchain technology (i.e., data controllers may provide notice of the data processing and may respond to data subjects’ requests for access to their personal data or for data portability).

However, the CNIL recognized that it is technically impossible for data controllers to meet data subjects’ requests for erasure of their personal data when the data is entered into the blockchain: once in the blockchain system, the data can no longer be rectified or erased.

In this respect, the CNIL pointed out that technical solutions exist to move towards compliance with the GDPR. This is the case if the data is stored on the blockchain using a cryptographic method (see above). In this case, the deletion of (1) the data stored outside of the blockchain and (2) the verification elements stored on the blockchain, would render the data almost inaccessible.

With respect to the right to rectification of personal data, the CNIL recommended that the data controller enter the updated data into a new block, since a subsequent transaction may cancel the first transaction even though the first transaction will still appear in the chain. The same solutions as those applicable to requests for erasure could be applied to inaccurate data if that data must be erased.
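A short Python sketch (hypothetical names; one common pattern rather than anything mandated by the CNIL) makes the mechanics concrete: the ledger itself is append-only, but the current state is derived by replaying transactions in order, so a later rectification supersedes an earlier, inaccurate entry even though that entry remains visible in the chain.

    # Illustrative append-only log: later transactions supersede earlier ones
    # in the derived state, but nothing already recorded is ever modified.
    ledger: list = []  # stand-in for the blockchain's transaction log

    def record(subject: str, field_name: str, value: str) -> None:
        """Append a transaction; existing entries are never altered."""
        ledger.append({"subject": subject, "field": field_name, "value": value})

    def current_state() -> dict:
        """Replay the log in order: the most recent entry wins per field."""
        state: dict = {}
        for tx in ledger:
            state[(tx["subject"], tx["field"])] = tx["value"]
        return state

    record("alice", "address", "1 Old Street")    # inaccurate entry
    record("alice", "address", "2 New Avenue")    # rectification transaction
    print(current_state()[("alice", "address")])  # 2 New Avenue
    print(len(ledger))  # 2: the first, inaccurate entry still appears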

Security Requirements

The CNIL considered that the security requirements under the GDPR remain fully applicable in the blockchain context.

Next Steps

In the CNIL’s view, the challenges posed by blockchain technology call for a response at the European level. The CNIL announced that it will cooperate with other EU supervisory authorities to propose a robust and harmonized approach to blockchain technology.

Chipotle Consumer Plaintiffs’ Putative Class Case Survives in Part

On September 26, 2018, the U.S. District Court for the District of Colorado (“the Court”) refused to dismiss all putative class claims against Chipotle Mexican Grill, Inc. (“Chipotle”). This litigation arose from a 2017 data breach in which hackers stole customers’ payment card and other personal information by using malicious software to access the point-of-sale systems at Chipotle’s locations. 

Chipotle moved to dismiss all claims, arguing that two of the named plaintiffs – Plaintiff Lawson and Plaintiff Baker – lacked standing and that all other plaintiffs failed to state a claim. The motion was first considered by a United States Magistrate Judge, who recommended granting only part of Chipotle’s requested relief. Both Plaintiffs and Chipotle objected to portions of the recommendation. The District Court Judge agreed with the recommendation in part.

The Court first found that Plaintiff Lawson’s allegations of debit card misuse, time spent obtaining a new debit card, inability to receive cash back awards on certain purchases, and the cost to expedite delivery of a new card for impending travel all demonstrated injury in fact sufficient for standing. It also determined that more than just Plaintiff Baker’s name and payment card number may have been stolen, and thus that Baker alleged facts sufficient to establish an impending injury.

The District Court Judge further found that certain allegations failed to state claims. Specifically, the Court dismissed claims for: (1) negligence; (2) negligence per se; (3) violation of the Colorado Consumer Protection Act; (4) unjust enrichment; and (5) violation of the Illinois Uniform Deceptive Trade Practices Act. However, the following claims survived Chipotle’s dismissal efforts: (1) breach of implied contract; (2) fraudulent omission claims (under Arizona, California, and Illinois consumer protection laws); (3) violation of California’s Unfair Competition Law; and (4) various damages claims (under California, Illinois, and Missouri consumer protection laws).

View the Court’s Order.

California Enacts New Requirements for Internet of Things Manufacturers

On September 28, 2018, California Governor Jerry Brown signed into law two identical bills regulating Internet-connected devices sold in California. S.B. 327 and A.B. 1906 (the “Bills”), aimed at the “Internet of Things,” require that manufacturers of connected devices—devices that are “capable of connecting to the Internet, directly or indirectly,” and are assigned an Internet Protocol or Bluetooth address, such as Nest’s thermostat—outfit the products with “reasonable” security features by January 1, 2020; or, in the Bills’ words: “equip [a] device with a reasonable security feature or features that are appropriate to the nature and function of the device, appropriate to the information it may collect, contain, or transmit, and designed to protect the device and any information contained therein from unauthorized access, destruction, use, modification, or disclosure[.]”

According to Bloomberg Law, the Bills’ non-specificity regarding what “reasonable” features include is intentional; it is up to the manufacturers to decide what steps to take. Manufacturers argue that the Bills are egregiously vague and that they do not apply to companies that import and resell connected devices made in other countries under their own labels.

The Bills are opposed by the Custom Electronic Design & Installation Association, Entertainment Software Association and National Electrical Manufacturers Association. They are sponsored by Common Sense Kids Action; supporters include the Consumer Federation of America, Electronic Frontier Foundation and Privacy Rights Clearinghouse.

Four Companies Settle FTC Allegations Regarding False EU-U.S. Privacy Shield Certifications

On September 27, 2018, the Federal Trade Commission announced a settlement agreement with four companies – IDmission, LLC (“IDmission”), mResource LLC (doing business as Loop Works, LLC) (“mResource”), SmartStart Employment Screening, Inc. (“SmartStart”) and VenPath, Inc. (“VenPath”) – over allegations that each company had falsely claimed to have valid certifications under the EU-U.S. Privacy Shield framework. The FTC alleged that SmartStart, VenPath and mResource continued to post statements on their websites about their participation in the Privacy Shield after allowing their certifications to lapse. IDmission had applied for a Privacy Shield certification but never completed the necessary steps to be certified.

In addition, the FTC alleged that both VenPath and SmartStart failed to comply with a provision under the Privacy Shield requiring companies that cease participation in the Privacy Shield framework to affirm to the Department of Commerce that they will continue to apply the Privacy Shield protections to personal information collected while participating in the program.

As part of the proposed settlements with the FTC, each company is prohibited from misrepresenting its participation in any privacy or data security program sponsored by the government or any self-regulatory or standard-setting organization, and must comply with FTC reporting requirements. Further, VenPath and SmartStart must either (1) continue to apply the Privacy Shield protections to personal information collected while participating in the Privacy Shield, (2) protect the information by another means authorized by the Privacy Shield framework, or (3) return or delete the information within 10 days of the FTC’s order.

“Companies need to know that if they fail to honor their Privacy Shield commitments, or falsely claim participation in the Privacy Shield framework, we will hold them accountable,” said Andrew Smith, director of the FTC’s Bureau of Consumer Protection. “We have now brought enforcement actions against eight companies related to the Privacy Shield, and we will continue to aggressively enforce the Privacy Shield and other cross-border privacy frameworks.”

CIPL Submits Comments on Draft Indian Data Protection Bill

On September 26, 2018, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth LLP submitted formal comments to the Indian Ministry of Electronics and Information Technology on the draft Indian Data Protection Bill 2018 (“Draft Bill”).

CIPL’s comments on the Draft Bill focus on several key issues that are of particular importance for any modern-day data protection law, including increased emphasis on accountability and the risk-based approach to data processing, interoperability with other data protection laws globally, the significance of having a variety of legal bases for processing and not overly relying on consent, the need for extensive and flexible data transfer mechanisms, and the importance of maximizing the effectiveness of the data protection authority.

Specifically, the comments address the following key issues:

  • the Draft Bill’s extraterritorial scope;
  • the standard for anonymization;
  • notice requirements;
  • accountability and the risk-based approach;
  • legal bases for processing, including the importance of the reasonable purposes ground;
  • sensitive personal data;
  • children’s data;
  • individual rights;
  • data breach notification;
  • Data Protection Impact Assessments;
  • record-keeping requirements and data audits;
  • Data Protection Officers;
  • the adverse effects of a data localization requirement;
  • cross-border transfers;
  • codes of practice; and
  • the timeline for adoption.

These comments were formed as part of CIPL’s ongoing engagement in India. In January 2018, CIPL responded to the Indian Ministry of Electronics and Information Technology’s public consultation on the White Paper of the Committee of Experts on a Data Protection Framework for India.

NTIA Seeks Public Comment on Approach to Consumer Privacy with an Eye Toward Building Better Privacy Protections

On September 26, 2018, the U.S. Department of Commerce’s National Telecommunications and Information Administration (“NTIA”) announced that it is seeking public comments on a proposed approach to advancing consumer privacy. The approach is divided into two parts: (1) a set of desired user-centric privacy outcomes of organizational practices, including transparency, control, reasonable minimization (of data collection, storage length, use and sharing), security, access and correction, risk management and accountability; and (2) a set of high-level goals that describe the outlines of the ecosystem that should be created to provide those protections, including harmonizing the regulatory landscape, balancing legal clarity and the flexibility to innovate, ensuring comprehensive application, employing a risk and outcome-based approach, creating mechanisms for interoperability with international norms and frameworks, incentivizing privacy research, ensuring that the Federal Trade Commission has the resources and authority to enforce, and ensuring scalability.

The NTIA is specifically looking to the public to respond with comments on the following questions:

  • Are there other outcomes or goals that should be included, or outcomes or goals that should be expanded upon as separate items?
  • Are the descriptions for the outcomes and goals clear, or are there any issues raised by how any of them are described?
  • Are there any risks that accompany the list of outcomes, the list of goals or the general approach taken?
  • Are there any aspects of the approach that could be implemented or enhanced through Executive action or non-regulatory actions, and if so, what actions?
  • Should further explorations be made regarding additional commercial data privacy-related issues, including any recommended focus and desired outcomes?
  • Are there any aspects of the approach that may be achieved by other means, such as through statutory changes?
  • Do any terms used in the approach require more precise definitions, including suggestions for better definitions and additional terms?
  • Do changes need to be made with regard to the FTC’s resources, processes and/or statutory authority?
  • If all or some of the outcomes or goals described in this approach were replicated by other countries, do you believe it would be easier for U.S. companies to provide goods and services in those countries?
  • Are there other ways to achieve U.S. leadership that are not included in the approach?
  • Are there any high-level goals in this approach that would be detrimental to achieving U.S. leadership?

Comments are due by October 26, 2018, and may be submitted by email. Additional information can be found in the Federal Register Notice.

Senate Commerce Committee Holds Hearing on Examining Consumer Privacy Protections

On September 26, 2018, the U.S. Senate Committee on Commerce, Science, and Transportation convened a hearing on Examining Consumer Privacy Protections with representatives of major technology and communications firms to discuss approaches to protecting consumer privacy, how the U.S. might craft a federal privacy law, and companies’ experiences in implementing the EU General Data Protection Regulation (“GDPR”) and the California Consumer Privacy Act (“CCPA”).

After introductory remarks by Senator and Chairman of the Committee John Thune (R-SD) and Senator Bill Nelson (D-FL), representatives from AT&T, Amazon, Google, Twitter, Apple and Charter Communications provided testimony on the importance of protecting consumer privacy, the need for clear rules that still ensure the benefits that flow from the responsible use of data, and key principles that should be included in any federal privacy law. A question and answer session followed, with various senators posing a variety of questions to the witnesses, covering topics such as comparisons to global data privacy regimes, the current and potential future authority of the Federal Trade Commission, online behavioral advertising and political advertising, current privacy tools and issues surrounding children’s data.

Key views expressed by the witnesses from the hearing include:

  • support for the creation of a federal privacy law and a preference for preemption rather than a patchwork of different state privacy laws;
  • agreement that the FTC should be the regulator for a federal privacy law but the authority of the FTC under such a law should be discussed and examined further;
  • concern that a federal privacy law might attempt to copy the GDPR or the CCPA; a federal privacy law should seek to avoid the difficulties and unintended consequences created by these laws, and the U.S. should put its own stamp on what the law should be; and
  • agreement that a federal law should not be unduly burdensome for small and medium-sized enterprises.

An archived webcast of the hearing is available on the Senate Commerce Committee’s website.

The hearing marked the first of several as the U.S. debates whether to adopt federal privacy legislation. The next hearing is scheduled for early October, when Andrea Jelinek, head of the European Data Protection Board, Alastair Mactaggart, the California privacy activist, and representatives from consumer organizations will participate and answer questions on consumer privacy, the GDPR and the CCPA.

Uber Settles with 50 State Attorneys General for $148 Million In Connection with 2016 Data Breach

On September 26, 2018, Uber Technologies Inc. (“Uber”) agreed to a settlement (the “Settlement”) with all 50 U.S. state attorneys general (the “Attorneys General”) in connection with a 2016 data breach affecting the personal information (including driver’s license numbers) of approximately 607,000 Uber drivers nationwide, as well as approximately 57 million consumers’ email addresses and phone numbers. The Attorneys General alleged that after Uber learned of the breach, which occurred in November 2016, the company paid intruders a $100,000 ransom to delete the data. They further alleged that Uber failed to promptly notify affected individuals of the incident, as required under various state laws, instead notifying affected customers and drivers of the breach one year later, in November 2017.

As reported by the Pennsylvania Office of the Attorney General, the Settlement will require Uber to pay $148 million to the Attorneys General, to be divided among the 50 states. In addition, Uber must undertake certain data security measures, including commitments to:

  • comply with applicable breach notification and consumer protection laws regarding protecting personal information;
  • implement measures to protect user data stored on third-party platforms;
  • implement stricter internal password policies for employee access to Uber’s network;
  • develop and implement an overall data security policy to address the collection and protection of personal information, including assessing potential data security risks;
  • implement additional data security measures with respect to personal information stored on Uber’s network;
  • implement a corporate integrity program to ensure appropriate reporting channels for internal ethics concerns or complaints; and
  • engage a third-party expert to conduct regular assessments of Uber’s data security efforts and make recommendations for improvement, as appropriate.

The Settlement is pending court approval. In a statement, California Attorney General Xavier Becerra said, “Uber’s decision to cover up this breach was a blatant violation of the public’s trust. The company failed to safeguard user data and notify authorities when it was exposed. Consistent with its corporate culture at the time, Uber swept the breach under the rug in deliberate disregard of the law.”

We previously reported that the Federal Trade Commission modified a 2017 settlement with Uber after learning of the company’s response to the 2016 breach.

CNIL Publishes Initial Assessment of GDPR Implementation

On September 25, 2018, the French Data Protection Authority (the “CNIL”) published the first results of its factual assessment of the implementation of the EU General Data Protection Regulation (GDPR) in France and in Europe. When making this assessment, the CNIL first recalled the current status of the French legal framework, and provided key figures on the implementation of the GDPR from the perspective of privacy experts, private individuals and EU supervisory authorities. The CNIL then announced that it will adopt new GDPR tools in the near future. Read the full factual assessment (in French).

Upcoming Consolidation of the French Legal Framework

The French Data Protection Act (the “Act”) and its implementing Decree were amended by a law and a Decree published on June 21 and August 3, 2018, respectively, in order to bring French law in line with the GDPR and to implement the EU Data Protection Directive for Police and Criminal Justice Authorities. However, some provisions of the Act remain unchanged even though they are no longer applicable. In addition, the Act does not mention all of the new obligations imposed by the GDPR or the new rights of data subjects, and is therefore incomplete. The CNIL recalled that an ordinance is expected to be adopted by the end of this year to rewrite the Act and improve the readability of the French data protection framework.

Gradual Roll-Out of the GDPR by Privacy Experts

The CNIL noted that 24,500 organizations have appointed a data protection officer (“DPO”), which corresponds to 13,000 individual DPOs (a single DPO may be appointed by several organizations). In comparison, only 5,000 DPOs were appointed under the previous data protection framework. Since May 25, 2018, the CNIL has also received approximately 7 data breach notifications per day, totaling more than 600 data breach notifications affecting 15 million individuals. The CNIL continues to receive a large number of authorization requests in the health sector (more than 100 requests filed since May 25, 2018, in particular for clinical trial purposes).

Individuals’ Unprecedented GDPR Awareness

Since May 25, 2018, the CNIL has received 3,767 complaints from individuals. This represents an increase of 64% compared to the number of complaints received during the same period in 2017, and can be explained by the widespread media coverage of the GDPR and of cases such as Cambridge Analytica. EU supervisory authorities are currently handling more than 200 cross-border complaints under the cooperation procedure provided for by the GDPR, and the CNIL is a “supervisory authority concerned” in most of these cases.

Effective European Cooperation Under the GDPR

The CNIL recalled that a total of 18 GDPR guidelines have been adopted at the EU level and that 7 guidelines are currently being drawn up by the European Data Protection Board (“EDPB”) (e.g., guidelines on the territorial scope of the GDPR, data transfers and video surveillance). Further, the IT platform chosen to support the cooperation and consistency procedures under the GDPR has been in operation since May 25, 2018. With respect to Data Protection Impact Assessments (“DPIAs”), the CNIL has submitted to the EDPB a list of processing operations requiring a DPIA. Once the list is validated by the EDPB, the CNIL will publish it along with additional guidelines.

Next Steps

In terms of the CNIL’s upcoming actions or initiatives, the CNIL announced that it will shortly propose the following new tools:

  • “Referentials” (i.e., guidelines) relating to the processing of personal data for HR and customer management purposes. These referentials are intended to update the CNIL’s well-established doctrine in light of the new requirements of the GDPR. The draft referentials will be open for public consultation, and the CNIL announced its intention to promote the finalized referentials at the EU level.
  • A Model Regulation regarding biometric data. According to Article 9(4) of the GDPR, EU Member States may maintain and introduce further conditions, including limitations, with regard to the processing of biometric data. France introduced such conditions by amending the French Data Protection Act to allow the processing of biometric data for the purposes of controlling access to a company’s premises and/or to devices and apps used by staff members to perform their job duties, provided that the processing complies with the CNIL’s Model Regulation. Compliance with that Model Regulation constitutes an exception to the prohibition on processing biometric data.
  • A first certification procedure. In May 2018, the CNIL launched a public consultation on the certification of the DPO, which ended on June 22, 2018. The CNIL will finalize the referentials relating to the certification of the DPO by the end of this month.
  • Compliance packs. The CNIL confirmed that it will continue to adopt compliance packs (i.e., guidelines for a particular sector or industry). The CNIL also announced its intention to promote some of these compliance packs at the EU level (such as the compliance pack on connected vehicles) in order to develop a common European doctrine that could be endorsed by the EDPB.
  • Codes of conduct. A dozen codes of conduct are currently being prepared, in particular codes of conduct on medical research and cloud infrastructures.
  • A massive open online course. This course will help participants familiarize themselves with the fundamental principles of the GDPR.

CCPA Amendment Bill Signed Into Law

On September 23, 2018, California Governor Jerry Brown signed into law SB-1121 (the “Bill”), which makes limited substantive and technical amendments to the California Consumer Privacy Act of 2018 (“CCPA”). The Bill takes effect immediately and delays the California Attorney General’s enforcement of the CCPA until six months after publication of the Attorney General’s implementing regulations, or July 1, 2020, whichever comes first.

We have previously posted about the modest changes that SB-1121 makes to the CCPA. As reported in BNA Privacy Law Watch, the California legislature may consider broader substantive changes to the CCPA in 2019.

ICO Issues First Enforcement Action Under the GDPR

The Information Commissioner’s Office (“ICO”) in the UK has issued the first formal enforcement action under the EU General Data Protection Regulation (GDPR) and the UK Data Protection Act 2018 (the “DPA”) against Canadian data analytics firm AggregateIQ Data Services Ltd. (“AIQ”). The enforcement action, in the form of an Enforcement Notice served under section 149 of the DPA, requires AIQ to “cease processing any personal data of UK or EU citizens obtained from UK political organizations or otherwise for the purposes of data analytics, political campaigning or any other advertising purposes.”

AIQ uses data to target online advertisements at voters, and its clients include UK political organizations, in particular Vote Leave, BeLeave, Veterans for Britain and the DUP Vote to Leave. These organizations provide personal data to AIQ for the purposes of targeting individuals with political advertising messages on social media.

Although AIQ is not established in the EU, the ICO determined that because AIQ’s processing activities relate to the monitoring of data subjects’ behavior taking place within the EU, AIQ is subject to the GDPR under the territorial scope provisions of Article 3(2)(b).

AIQ was found to be in breach of Article 5(1)(a)-(c) and Article 6 of the GDPR for processing personal data in a way that data subjects were not aware of, for a purpose they would not have expected, and without a lawful basis for processing. In addition, AIQ failed to provide the transparency information required under Article 14 of the GDPR.

AIQ is challenging the ICO’s decision and has exercised its right of appeal to the First-tier Tribunal under section 162(1)(c) of the DPA.

UK ICO Fines Equifax for 2017 Breach

Recently, the UK Information Commissioner’s Office (“ICO”) fined credit rating agency Equifax £500,000 for failing to protect the personal data of up to 15 million UK individuals. The data was compromised during a cyber attack that occurred between May 13 and July 30, 2017, which affected 146 million customers globally. Although Equifax’s systems in the U.S. were targeted, the ICO found the credit agency’s UK arm, Equifax Ltd, failed to take appropriate steps to ensure that its parent firm, which processed this data on its behalf, had protected the information. The ICO investigation uncovered a number of serious contraventions of the UK Data Protection Act 1998 (the “DPA”), resulting in the ICO imposing on Equifax Ltd the maximum fine available.

The compromised UK data was controlled by Equifax Ltd and was processed by Equifax Ltd’s parent company and data processor, Equifax Inc. The breach affected Equifax’s Identity Verifier (“EIV”) dataset, which related to the EIV product, and its GCS dataset. The compromised data included names, telephone numbers, driver’s license numbers, financial details, dates of birth, security questions and answers (in plain text), passwords (in plain text) and credit card numbers (obscured). The ICO investigation found breaches of five of the eight data protection principles of the DPA. In particular, the ICO commented in detail on Equifax’s breaches of the Fifth and Seventh Principles and noted the following:

  • Personal data processed for any purpose or purposes shall not be kept for longer than is necessary for that purpose or those purposes (Fifth Principle):
    • In 2016, Equifax Ltd moved the EIV product from the U.S. to be hosted in the UK. Once the EIV product had been migrated to the UK, it was no longer necessary to keep any of the EIV dataset, in particular the compromised UK data, on Equifax Inc.’s systems. The EIV dataset, however, was not deleted from Equifax’s U.S. systems and was subsequently compromised.
    • With respect to the GCS datasets stored on the U.S. system, Equifax Ltd was not sufficiently aware of the purpose(s) for which they were being processed until after the breach. In the absence of a lawful basis for processing (in breach of the First Principle of the DPA), the personal data should have been deleted. The data was not deleted, and Equifax Ltd failed to follow up or check that all UK data had been removed from Equifax’s U.S. systems.
  • Appropriate technical and organizational measures shall be taken against unauthorized or unlawful processing of personal data and against accidental loss or destruction of, or damage to, personal data (Seventh Principle):
    • Equifax Ltd failed to undertake an adequate risk assessment of the security arrangements that Equifax Inc. had in place, prior to transferring data to Equifax Inc. or following the transfer.
    • Equifax Ltd and Equifax Inc. had various data processing agreements in place; however, these agreements failed to (1) provide appropriate safeguards (not limited to security requirements), and (2) properly incorporate the EU Standard Contractual Clauses (in breach of the Eighth Principle of the DPA).
    • Equifax Ltd had a clear contractual right to audit Equifax Inc.’s compliance with its obligations under the aforementioned data processing agreements. Despite this right, Equifax Ltd failed to exercise it to check Equifax Inc.’s compliance with its obligations.
    • Communication procedures between Equifax Ltd and Equifax Inc. were deemed inadequate. In particular, this was highlighted by the delay of over one month between Equifax Inc. becoming aware of the breach and Equifax Ltd being informed of it.
    • Equifax Ltd failed to ensure adequate security measures were in place, or to notice that Equifax Inc. had failed to take such measures, including:
      • failing to adequately encrypt personal data or protect user passwords. The ICO did not accept Equifax Ltd’s reasons (i.e., fraud prevention and password analysis) for storing passwords in a plaintext file, particularly as doing so directly breached Equifax Ltd’s own Cryptology Standards and the stated aims could be achieved by other, more secure means (for one illustration, see the sketch following this list);
      • failing to address known IT vulnerabilities, including those identified and reported to senior employees. In particular, Equifax had been warned about a critical vulnerability in its systems by the U.S. Department of Homeland Security in March 2017. The vulnerability was given a score of 10.0 on the Common Vulnerability Scoring System (“CVSS”), the highest possible score, indicating a critical vulnerability requiring immediate attention. Equifax Inc. failed to patch all vulnerable systems, and the vulnerability in its consumer-facing disputes portal was exploited in the cyber attack; and
      • not having fully up-to-date software, failing to undertake sufficient and regular system scans, and failing to ensure appropriate network segregation (some UK data was stored together with U.S. data, making it difficult to differentiate).
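
As an aside for technically minded readers, the sketch below illustrates one of the “more secure means” alluded to above: storing only a salted, deliberately slow hash of each password rather than the password itself. This is our own minimal illustration of a standard technique, not a description of Equifax’s systems or of anything prescribed in the ICO’s notice.

    # Illustrative sketch only: a standard salted-hash scheme of the kind the
    # ICO's "more secure means" comment points toward; not Equifax's actual design.
    import hashlib
    import hmac
    import os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        # Derive a slow PBKDF2-HMAC-SHA256 digest with a fresh random salt.
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 600_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        # Recompute the digest and compare in constant time.
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 600_000)
        return hmac.compare_digest(candidate, digest)

    # Only the salt and digest are ever persisted, so a stolen credentials file
    # does not reveal the passwords themselves.
    salt, digest = hash_password("example-password")
    assert verify_password("example-password", salt, digest)
    assert not verify_password("wrong-guess", salt, digest)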

Since the breach occurred prior to May 25, 2018, it was dealt with in accordance with the DPA rather than the GDPR. The Equifax fine represents the maximum available under the DPA, but given the aggravating factors identified by the ICO, including the number of affected data subjects, the type of data at risk and the multiple, systematic and serious inadequacies, it is likely that the fine would have been considerably higher had the EU General Data Protection Regulation been in force when the breach occurred.

New Federal Credit Freeze Law Eliminates Fees, Provides for Year-Long Fraud Alerts

Effective September 21, 2018, Section 301 of the Economic Growth, Regulatory Relief, and Consumer Protection Act (the “Act”) requires consumer reporting agencies to provide free credit freezes and year-long fraud alerts to consumers throughout the country. Under the Act, consumer reporting agencies must each set up a webpage designed to enable consumers to request credit freezes, fraud alerts, extended fraud alerts and active duty fraud alerts. The webpage must also give consumers the ability to opt out of the use of information in a consumer report to send the consumer a solicitation of credit or insurance. Consumers may find links to these webpages on the Federal Trade Commission’s Identity Theft website.

The Act also enables parents and guardians to freeze the credit of children under age 16. Guardians or conservators of incapacitated persons may also request credit freezes on their behalf.

Section 302 of the Act provides additional protections for active duty military. Under this section, consumer reporting agencies must offer free electronic credit monitoring to all active duty military.

For more information, read the FTC’s blog post.

Apple to Require Privacy Policies for All New Apps and App Updates

On August 30, 2018, Apple Inc. announced that a June update to its App Store Review Guidelines will require each developer to provide a privacy policy as part of the app review process, and that the policy must satisfy specific content requirements. Effective October 3, 2018, all new apps and app updates must include a link to the developer’s privacy policy before they can be submitted for distribution to users through the App Store or through TestFlight external testing.

The privacy policy must detail what data the app gathers, how the data will be collected and how it will be used. The policy also must confirm that third parties with whom an app shares user data will provide the same or equal protection of user data as stated in the app’s privacy policy and the App Store Review Guidelines. Lastly, the policy must explain the developer’s data retention policy and include information on how users can revoke consent to data collection or request deletion of the collected user data. Developers will be able to edit an app’s privacy policy only when submitting a new version of the app.

Canadian Regulator Seeks Public Comment on Breach Reporting Guidance

As reported in BNA Privacy Law Watch, the Office of the Privacy Commissioner of Canada (the “OPC”) is seeking public comment on recently released guidance (the “Guidance”) intended to assist organizations with understanding their obligations under the federal breach notification mandate, which will take effect in Canada on November 1, 2018. 

Breach notification in Canada has historically been governed at the provincial level, with only Alberta requiring omnibus breach notification. As we previously reported, effective November 1, organizations subject to the federal Personal Information Protection and Electronic Documents Act (“PIPEDA”) will be required to notify affected individuals and the OPC of security breaches involving personal information “that pose a real risk of significant harm to individuals.” The Guidance, which is structured in a question-and-answer format, is intended to assist companies in complying with the new reporting obligation. The Guidance describes, among other information, (1) who is responsible for reporting a breach, (2) what types of incidents must be reported, (3) how to determine whether there is a “real risk of significant harm,” (4) what information must be included in a notification to the OPC and affected individuals, and (5) an organization’s recordkeeping requirements with respect to breaches of personal information, irrespective of whether such breaches are notifiable. The Guidance also contains a proposed breach reporting form for notifying the OPC pursuant to the new notification obligation.

The OPC is accepting public comment on the Guidance, including on the proposed breach reporting form. The deadline for interested parties to submit comments is October 2, 2018.

Software Company Settles with New Jersey AG Over Data Breach

On September 7, 2018, the New Jersey Attorney General announced a settlement with data management software developer Lightyear Dealer Technologies, LLC, doing business as DealerBuilt, resolving an investigation by the state Division of Consumer Affairs into a data breach that exposed the personal information of car dealership customers in New Jersey and across the country. The breach occurred in 2016, when a researcher exposed a gap in the company’s security and gained access to unencrypted files containing names, addresses, social security numbers, driver’s license numbers, bank account information and other data belonging to thousands of individuals, including at least 2,471 New Jersey residents.

To resolve the investigation, DealerBuilt agreed to undertake a number of changes to its security practices to help prevent similar breaches from occurring in the future, including:

  • the creation of an information security program to be implemented and maintained by a chief security officer;
  • the maintenance and implementation of encryption protocols for personal information stored on laptops or other portable devices or transmitted wirelessly;
  • the maintenance and implementation of policies that clearly define which users have authorization to access its computer network;
  • the maintenance of enforcement mechanisms to approve or disapprove access requests based on those policies; and
  • the maintenance of data security assessment tools, including vulnerability scans.

In addition to the above, DealerBuilt agreed to pay an $80,784 settlement amount, consisting of $49,420 in civil penalties and $31,364 in reimbursement of the Division’s attorneys’ fees, investigative costs and expert fees.

Read the consent order resolving the investigation.

NIST Launches Privacy Framework Effort

On September 4, 2018, the Department of Commerce’s National Institute of Standards and Technology (“NIST”) announced a collaborative project to develop a voluntary privacy framework to help organizations manage privacy risk. The announcement states that the effort is motivated by innovative new technologies, such as the Internet of Things and artificial intelligence, as well as the increasing complexity of network environments and detail of user data, which make protecting individuals’ privacy more difficult. “We’ve had great success with broad adoption of the NIST Cybersecurity Framework, and we see this as providing complementary guidance for managing privacy risk,” said Under Secretary of Commerce for Standards and Technology and NIST Director Walter G. Copan.

The announcement states that the framework’s goals include providing an enterprise-level approach that helps organizations prioritize strategies for flexible and effective privacy protection, and bridging the gap between privacy professionals and senior executives so that organizations can respond effectively to these challenges without stifling innovation. To kick off the effort, NIST has scheduled a public workshop on October 16, 2018, in Austin, Texas, to be held in conjunction with the International Association of Privacy Professionals’ “Privacy. Security. Risk. 2018” conference. The Austin workshop is the first in a series planned to collect current practices, challenges and requirements for managing privacy risks in ways that go beyond common cybersecurity practices.

In parallel with NIST’s efforts, the Department of Commerce’s National Telecommunications and Information Administration (“NTIA”) is “developing a domestic legal and policy approach for consumer privacy.” The announcement stated that the NTIA is coordinating its efforts with the department’s International Trade Administration “to ensure consistency with international policy objectives.”

Uber Data Breach Class Action Must Proceed to Arbitration

On September 5, 2018, the U.S. District Court for the Central District of California held that a class action arising from a 2016 Uber Technologies Inc. (“Uber”) data breach must proceed to arbitration. The case was initially filed after a 2016 data breach that affected approximately 600,000 Uber drivers and 57 million Uber customers. Upon registration with Uber, the drivers and customers entered into a service agreement that contained an arbitration provision. Based on this provision, the defendants moved to compel arbitration. They argued that the provision’s express language delegated the threshold issue of whether the case should be arbitrated (also called an issue of “substantive arbitrability”) to an arbitrator, not to the court. The plaintiffs countered, arguing that the arbitration clause was both inapplicable to the 2016 data breach and unconscionable, and that Uber customers did not receive reasonable notice of the electronic terms agreement when they registered.

The court rejected each of the plaintiffs’ arguments. First, citing Mohammed v. Uber Techs., Inc., 848 F.3d 1201, 1209 (9th Cir. 2016), the court held that the agreement’s language “clearly and unmistakably” delegated to the arbitrator the threshold and substantive issue of whether the 2016 breach was one that should be arbitrated. Second, whether the arbitration provision was unconscionable was similarly a question of substantive arbitrability “expressly delegated to the arbitrator.” Third, the court noted that the plaintiffs offered no evidence of confusion or lack of notice, and that many other courts had found similar electronic notice to be reasonable.

The case has been stayed pending completion of the arbitration.

Belgium Publishes Law Adapting the Belgian Legal Framework to the GDPR

On September 5, 2018, the Law of 30 July 2018 on the Protection of Natural Persons with regard to the Processing of Personal Data (the “Law”) was published in the Belgian Official Gazette.

This is the second step in adapting the Belgian legal framework to the EU GDPR after the Law of 3 December 2017 Creating the Data Protection Authority, which reformed the Belgian Data Protection Authority.

The Law is available in French and Dutch.

EU Begins Formal Approval for Japan Adequacy Decision

On September 5, 2018, the European Commission (the “Commission”) announced in a press release the launch of the procedure to formally adopt the Commission’s adequacy decision with respect to Japan.

The press release notes that the EU-Japan talks on personal data protection were completed in July 2018, and announces the publication of the draft adequacy decision and related documents, which, among other things, set forth the additional safeguards Japan will accord EU personal data transferred to Japan. According to the release, Japan is undertaking a similar formal adoption process concerning the reciprocal adequacy findings between the EU and Japan.

The adequacy decision is intended to ensure that Japan provides privacy protections for EU personal data that are “essentially equivalent” to the EU standard. The key elements of the agreement include:

  • Specific safeguards to be applied by Japan to bridge the differences between EU and Japanese standards on issues such as sensitive data, onward transfer of EU data to third countries, and the rights of access and rectification.
  • Enforcement by the Japan Personal Information Protection Commission.
  • Safeguards concerning access to EU personal data by Japanese public authorities for law enforcement and national security purposes.
  • A complaint-handling mechanism.

The press release also notes that the adequacy decision will complement the EU-Japan Economic Partnership Agreement by supporting free data flows between the EU and Japan and providing for privileged access to 127 million Japanese consumers.

Finally, the press release also outlines the next four steps in the formal approval process:

  • Opinion from the European Data Protection Board.
  • Consultation of a committee composed of representatives from the EU Member States (comitology procedure).
  • Update of the European Parliament Committee on Civil Liberties, Justice and Home Affairs.
  • Adoption of the adequacy decision by the College of Commissioners.

CCPA Amended: Enforcement Delayed, Few Substantive Changes Made

On August 31, 2018, the California State Legislature passed SB-1121, a bill that delays enforcement of the California Consumer Privacy Act of 2018 (“CCPA”) and makes other modest amendments to the law. The bill now goes to the Governor for signing. The provisions of the CCPA will become operative on January 1, 2020. As we have previously reported, the CCPA introduces key privacy requirements for businesses. The Act was passed quickly by California lawmakers in an effort to remove a ballot initiative of the same name from the November 6, 2018, statewide ballot. The CCPA’s hasty passage resulted in a number of drafting errors and inconsistencies in the law, which SB-1121 seeks to remedy. The amendments to the CCPA are primarily technical, with few substantive changes.

Key amendments to the CCPA include:

  • Enforcement:
    • The bill extends by six months the deadline for the California Attorney General (“AG”) to draft and adopt the law’s implementing regulations, from January 1, 2020, to July 1, 2020. (CCPA § 1798.185(a)).
    • The bill delays the AG’s ability to bring enforcement actions under the CCPA until six months after publication of the implementing regulations or July 1, 2020, whichever comes first. (CCPA § 1798.185(c)).
    • The bill limits the civil penalties the AG can impose to $2,500 for each violation of the CCPA, or up to $7,500 for each intentional violation, and states that a violating entity will be subject to an injunction. (CCPA § 1798.155(b)).
  • Definition of “personal information”: The CCPA includes a number of enumerated examples of “personal information” (“PI”), including IP address, geolocation data and web browsing history. The amendment clarifies that the listed examples would constitute PI only if the data “identifies, relates to, describes, is capable of being associated with, or could be reasonably linked, directly or indirectly, with a particular consumer or household.” (CCPA § 1798.140(o)(1)).
  • Private right of action:
    • The amendments clarify that a consumer may bring an action under the CCPA only for a business’s alleged failure to “implement and maintain reasonable security procedures and practices” that results in a data breach. (CCPA § 1798.150(c)).
    • The bill removes the requirement that a consumer notify the AG once the consumer has brought an action against a business under the CCPA, and eliminates the AG’s ability to instruct a consumer to not proceed with an action. (CCPA § 1798.150(b)).
  • GLBA, DPPA, CIPA exemptions: The original text of the CCPA exempted information subject to the Gramm-Leach-Bliley Act (“GLBA”) and the Driver’s Privacy Protection Act (“DPPA”) only to the extent the CCPA was “in conflict” with either statute. The bill removes the “in conflict” qualification and clarifies that data collected, processed, sold or disclosed pursuant to the GLBA, the DPPA or the California Information Privacy Act is exempt from the CCPA’s requirements. The revisions also exempt such information from the CCPA’s private right of action provision. (CCPA §§ 1798.145(e), (f)).
  • Health information:
    • Health care providers: The bill adds an exemption for HIPAA-covered entities and providers of health care governed by the Confidentiality of Medical Information Act, “to the extent the provider or covered entity maintains patient information in the same manner as medical information or protected health information,” as described in the CCPA. (CCPA § 1798.145(c)(1)(B)).
    • PHI: The bill expands the category of exempted protected health information (“PHI”) governed by HIPAA and the Health Information Technology for Economic and Clinical Health Act to include PHI collected by both covered entities and business associates. The original text did not address business associates. (CCPA § 1798.145(c)(1)(A)).
    • Clinical trial data: The bill adds an exemption for “information collected as part of a clinical trial” that is subject to the Federal Policy for the Protection of Human Subjects (also known as the Common Rule) and is conducted in accordance with specified clinical practice guidelines. (CCPA § 1798.145(c)(1)(C)).
  • Notice of right of deletion: The original text of the CCPA stated that a business must disclose on its website or in its privacy policy a consumer’s right to request the deletion of her PI. The bill modifies this requirement, stating that a business must disclose the right to deletion “in a form that is reasonably accessible to consumers.” (CCPA § 1798.105(b)).
  • First Amendment protection: The bill adds a provision to the CCPA stating that the rights afforded to consumers and the obligations imposed on businesses under the CCPA do not apply if they “infringe on the noncommercial activities of a person or entity” as described in Art. I, Section 2(b) of the California Constitution, which addresses activities related to the free press. This provision is designed to prevent First Amendment challenges to the law. (CCPA § 1798.150(k)).
  • Preemption:
    • The bill adds to the CCPA’s preemption clause that the law will not apply in the event its application is preempted by, or in conflict with, the U.S. Constitution. The CCPA previously referenced only the California Constitution. (CCPA § 1798.196).
    • Certain provisions of the CCPA supersede and preempt laws adopted by local entities regarding the collection and sale of a consumer’s PI by a business. The bill makes such provisions of the Act operative on the date the bill becomes effective.

The California State Legislature is expected to consider more substantive changes to the law when it reconvenes in January 2019.

Senate Commerce Committee Members Rumored to be Discussing Online Privacy Bill

On August 29, 2018, Bloomberg Law reported that four Senate Commerce Committee members are discussing a potential online privacy bill. The bipartisan group consists of Senators Jerry Moran (R-KS), Roger Wicker (R-MS), Richard Blumenthal (D-CT) and Brian Schatz (D-HI), according to anonymous Senate aides.

Specific details of the possible bill are unknown. The proposal may compete with a bill being developed by Senate Commerce Committee Chairman John Thune (R-SD), and is a further indication of increased Congressional interest in enacting a broad online privacy bill. Such interest sharpened in a year of increased scrutiny and legal developments in the privacy arena, including the European Union’s General Data Protection Regulation and the recently enacted California Consumer Privacy Act of 2018.

Alongside these reported Congressional efforts, the Trump Administration, through the National Economic Council and the Commerce Department, is said to be developing an online privacy proposal to send to Congress.

California AG Voices Concern About State’s New Privacy Law

On August 22, 2018, California Attorney General Xavier Becerra raised significant concerns regarding the recently enacted California Consumer Privacy Act of 2018 (“CCPA”) in a letter addressed to the CCPA’s sponsors, Assemblyman Ed Chau and Senator Robert Hertzberg. Writing to “reemphasize what [he] expressed previously to [them] and [state] legislative leaders and Governor Brown,” Attorney General Becerra highlighted what he described as five primary flaws that, if unresolved, will undermine the intention behind and effective enforcement of the CCPA.

Most of the issues Attorney General Becerra pointed to were those he claimed impose unnecessary and/or onerous obligations on the Attorney General’s Office (“AGO”). For example, the CCPA requires the AGO to provide opinions, warnings and an opportunity to cure to a business before the business can be held accountable for a CCPA violation. Attorney General Becerra said that this effectively requires the AGO to provide unlimited legal counsel to private parties at taxpayer expense, and creates a potential conflict of interest by requiring the AGO to advise parties who may be violating Californians’ privacy rights.

In a similar vein, Attorney General Becerra noted that the CCPA gives consumers a limited right to sue if they become victims of a data breach, but otherwise does not include a private right of action for consumers to seek remedies to protect their privacy. That framework, Attorney General Becerra wrote, substantially increases the AGO’s need for enforcement resources. Likewise, the CCPA requires private plaintiffs to notify the Attorney General before filing suit. Attorney General Becerra criticized this requirement as both serving no purpose, since only courts may decide the merits of a case, and draining personnel and administrative resources.

Attorney General Becerra also pointed out that the CCPA’s civil penalty provisions purport to amend and modify the Unfair Competition Law’s civil penalty provision. The latter, however, was enacted by voters through a ballot proposition and thus cannot be amended through legislation. For that reason, Attorney General Becerra argued, the CCPA’s civil penalty provision is likely unconstitutional (the letter noted that the AGO has offered “corrective language” that replaces the CCPA’s current penalty provision with a stand-alone enforcement proposition).

Additionally, Attorney General Becerra took issue with the CCPA’s provision giving the AGO one year to conduct rulemaking for the CCPA. Attorney General Becerra noted that the CCPA provided no resources for the AGO to carry out the rulemaking or to implement the law thereafter, and he called the existing deadline “simply unattainable.”

Plaintiffs File Class Action Lawsuit Against Nielsen Over Alleged False and Misleading Statements

On August 28, 2018, plaintiffs filed a class action lawsuit against Nielsen Holdings PLC (“Nielsen”) and some of its officers and directors for making allegedly materially false and misleading statements to investors about the impact of privacy regulations and third-party business partners’ privacy policies on the company’s revenues and earnings. The case was filed in the United States District Court for the Southern District of New York. 

The complaint alleges that Nielsen made false and/or misleading statements and/or failed to disclose that: (1) Nielsen recklessly disregarded its readiness for and the true risks of privacy-related regulations and policies, including the EU General Data Protection Regulation (“GDPR”), on its current and future financial and growth prospects; (2) Nielsen’s financial performance was far more dependent on Facebook and other third-party large data set providers than previously disclosed, and privacy policy changes affected the scope and terms of access Nielsen would have had to third-party data; and (3) access to Facebook and other third-party provider data was becoming increasingly restricted for Nielsen and Nielsen clients. Plaintiffs allege that, as a result, Nielsen’s public statements were materially false and misleading at all relevant times.

The complaint maintains that, because of Nielsen’s “material misrepresentations and omissions, Nielsen stock traded at artificially inflated prices.” The complaint further alleges that when Nielsen published its financial results for the second quarter of 2018 announcing that it missed revenue and earnings targets, its stock plummeted, which caused substantial harm to the plaintiffs who were investors in Nielsen stock. In that announcement, Nielsen cited the impact of the GDPR on the company’s results and announced that its CEO and Executive Chairman, Mitch Barns, would retire from the company at the end of 2018.

Read the complaint.

Sixth Circuit Declines Reconsideration of American Tooling Center’s “Spoofing” Win

Recently, the Sixth Circuit rejected Travelers Casualty & Surety Company’s request for reconsideration of the court’s July 13, 2018, decision confirming that the insured’s transfer of more than $800,000 to a fraudster after receipt of spoofed emails was a “direct” loss that was “directly caused by” the use of a computer under the terms of American Tooling Center’s (“ATC’s”) crime policy. In doing so, the court likewise confirmed that intervening steps by the insured, such as following the directions contained in the bogus emails, did not break the causal chain so as to defeat coverage for “direct” losses.

We’ve previously reported on ATC decisions on Hunton’s Insurance Recovery blog.

Second Circuit Stands By Medidata “Spoofing” Decision

As reported on Hunton’s Insurance Recovery blog, the Second Circuit has rejected Chubb subsidiary Federal Ins. Co.’s request for reconsideration of the court’s July 6, 2018, decision confirming that the insurer must cover Medidata’s $4.8 million loss under its computer fraud insurance policy. In July, the court determined that the loss resulted directly from the fraudulent emails. The court again rejected the insurer’s argument that the fraudster did not directly access Medidata’s computer systems, finding that access indeed occurred when the “spoofing” code in emails sent to Medidata employees ended up in Medidata’s computer system.

View the Second Circuit’s summary order. Prior posts on the Medidata litigation and decisions are available through the following links:

July 10, 2018, Hunton Insurance Recovery Practice Head Explains Why Medidata Decision Affirming Phishing Coverage is “Common Sense”

July 9, 2018, 2nd Cir. Affirms Medidata’s Spoofing Loss is Covered Under Crime Policy’s Computer Fraud Provision

July 27, 2017, Hunton Insurance Head Walter Andrews Comments on Medidata Coverage Win

July 24, 2017, Chubb Owes $4.8M for Medidata Social Engineering Loss