- Have a quiet word with the auditor/s about it, ideally before it gets written up and finalized in writing. Discuss the issue – talk it through, consider various perspectives. Negotiate a pragmatic mutually-acceptable resolution, or at least form a better view of the sticking points.
- Have a quiet word with your management and specialist colleagues about it, before the audit gets reported. Discuss the issue. Agree how you will respond and try to resolve this. Develop a cunning plan and gain their support to present a united front. Ideally, get management ready to demonstrate that they are committed to fixing this e.g. with budget proposals, memos, project plans etc. to substantiate their commitment, and preferably firm timescales or agreed deadlines.
- Gather your own evidence to strengthen your case. For example:
- If you believe an issue is irrelevant to certification since there is no explicit requirement in 27001, identify the relevant guidance about the audit process from ISO/IEC 27007 plus the section of 27001 that does not state the requirement (!)
- If the audit finding is wrong, prove it wrong with credible counter-evidence, counter-examples etc. Quality of evidence does matter but quantity plays a part. Engage your extended team, management and the wider business in the hunt.
- If it’s a subjective matter, try to make it more objective e.g. by gathering and evaluating more evidence, more examples, more advice from other sources etc. ‘Stick to the facts’. Be explicit about stuff. Choose your words carefully.
- Ask us for second opinions and guidance e.g. on the ISO27k Forum and other social media, industry peers etc.
- Wing-it. Duck-and-dive. Battle it out. Cut-and-thrust. Wear down the auditor’s resolve and push for concessions, while making limited concessions yourself if you must. Negotiate using concessions and promises in one area to offset challenges and complaints in another. Agree on and work towards a mutually-acceptable outcome (such as, um, being certified!).
- Be up-front about it. Openly challenge the audit process, findings, analysis etc. Provide counter-evidence and arguments. Challenge the language/wording. Push the auditors to their limit. [NB This is a distinctly risky approach! Experienced auditors have earned their stripes and are well practiced at this, whereas it may be your first time. As a strategy, it could go horribly wrong, so what’s your fallback position? Do you feel lucky, punk?]
- Suck it up! Sometimes, the easiest, quickest, least stressful, least risky (in terms of being certified) and perhaps most business-like response is to accept it, do whatever you are being asked to do by the auditors and move on. Regardless of its validity for certification purposes, the audit point might be correct and of value to the business. It might actually be something worth doing … so swallow your pride and get it done. Try not to grumble or bear a grudge. Re-focus on other more important and pressing matters, such as celebrating your certification!
- Negotiate a truce. Challenge and discuss the finding and explore possible ways to address it. Get senior management to commit to whichever solution/s work best for the business and simultaneously persuade/convince the auditors (and/or their managers) of that.
- Push back informally by complaining to the certification body’s management and/or the body that accredited them. Be prepared to discuss the issue and substantiate your concerns with some evidence, more than just vague assertions and generalities.
- Push back hard. Review your contract with the certification body for anything useful to your case. Raise a formal complaint with the certification body through your senior management … which means briefing them and gaining their explicit support first. Good luck with that. You’ll need even stronger, more explicit evidence here. [NB This and the next bullet are viable options even after you have been certified … but generally, by then, nobody has the energy to pursue it and risk yet more grief.]
- Push back even harder. Raise a complaint with the accreditation body about the certification body’s incompetence through your senior management … which again means briefing them and gaining their explicit support first, and having the concrete evidence to make a case. Consider enlisting the help of your lawyers and compliance experts willing to get down to the brass tacks, and with the experience to build and present your case.
- Delay things. Let the dust settle. Review, reconsider, replan. Let your ISMS mature further, particularly in the areas that the auditors were critical of. Raise your game. Redouble your efforts. Use your metrics and processes fully.
- Consider engaging a different certification body (on the assumption that they won’t raise the same concerns … nor any others: they might be even harder to deal with!).
- Consider engaging different advisors, consultants and specialists. Review your extended ISMS team. Perhaps push for more training, to enhance the team’s competence in the problem areas. Perhaps broaden ‘the team’ to take on-board other specialists from across the business. Raise awareness.
- Walk away from the whole mess. Forget about certification. Go back to your cave to lick your wounds. Perhaps offer your resignation, accepting personal accountability for your part in the situation. Or fire someone else!
- Boundary/borderline cases, when decisions about which level is appropriate are arbitrary but the implications can be significant;
- Dynamics - something that is a medium level right now may turn into a high or a low at some future point, perhaps when a certain event occurs;
- Context e.g. determining the sensitivity of information for deliberate internal distribution is not the same as for unauthorized access, especially external leakage and legal discovery (think: internal email);
- Dependencies and linkages e.g. an individual data point has more value as part of a time sequence or data set ...
- ... and aggregation e.g. a structured and systematic compilation of public information aggregated from various sources can be sensitive;
- Differing perspectives, biases and prejudices, plus limited knowledge, misunderstandings, plain mistakes and secret agendas of those who classify stuff, almost inevitably bringing an element of subjectivity to the process despite the appearance of objectivity;
- And the implicit "We've classified it and [maybe] done something about securing it ... so we're done here. Next!". It's dismissive.
- Sensitivity, confidentiality or privacy expectations;
- Source e.g. was it generated internally, found on the web, or supplied by a third party?;
- Trustworthiness, credibility and authenticity - could it have been faked?;
- Accuracy and precision, which matter a great deal for some applications;
- Criticality for the business, safety, stakeholders, the world ...;
- Timeliness or freshness, age and history, hinting at the information lifecycle;
- Extent of distribution, whether known and authorized or not;
- Utility and value to various parties - not just the current or authorized possessors;
- Probability and impact of various incidents i.e. the information risks;
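The attribute list above suggests treating information as a record with many dimensions rather than a single label. As a minimal sketch (all field names, levels and the roll-up rule are illustrative assumptions, not a prescribed scheme), such metadata might look like this:

```python
from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class InfoAsset:
    """Illustrative record for an information asset, capturing several of
    the attributes discussed above instead of one classification label."""
    name: str
    sensitivity: Level       # confidentiality/privacy expectations
    source: str              # e.g. "internal", "web", "third party"
    trustworthiness: Level   # could it have been faked?
    accuracy: Level          # precision requirements
    criticality: Level       # importance to business, safety, stakeholders
    age_days: int            # timeliness/freshness, hinting at lifecycle
    distribution: str        # e.g. "internal only", "public"

    def overall_priority(self) -> int:
        # Naive roll-up: take the highest of the level-valued attributes.
        return max(self.sensitivity.value, self.criticality.value,
                   self.trustworthiness.value)

asset = InfoAsset(name="customer-db", sensitivity=Level.HIGH,
                  source="internal", trustworthiness=Level.MEDIUM,
                  accuracy=Level.HIGH, criticality=Level.HIGH,
                  age_days=30, distribution="internal only")
print(asset.overall_priority())  # 3
```

The point of the sketch is simply that a multi-attribute record makes the subjective judgments explicit and reviewable, whereas a single high/medium/low label hides them.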
"A common tactic of authoritarian regimes is to make laws which are next to impossible to abide by, then not enforce them. This creates a culture where it's perfectly acceptable to ignore such laws, yet the regime may use selective enforcement to punish dissenters -- since legally, everyone is delinquent."
"In many cases, policies are written in such a difficult way that they simply cannot be effectively absorbed by employees. Instead of communicating risks, dangers and good practices in clear and comprehensive instructions, businesses often give employees multipage documents that everyone signs but very few read – and even less understand."
- Lack of scope: ‘security policies’ are typically restricted to IT/cyber security matters, leaving substantial gaps, especially in the wider aspects of information risk and security such as human factors, fraud, privacy, intellectual property and business continuity.
- Lack of consistency: policies that were drafted by various people at various times for various reasons, and may have been updated later by others, tend to drift apart and become disjointed. It is not uncommon to find bald contradictions, gross discrepancies or conflicts. Security-related obligations or expectations are often scattered liberally across the organization, partly on the corporate intranet, partly embedded in employment contracts, employee handbooks, union rulebooks, printed on the back of staff/visitor passes and so on.
- Lack of awareness: policies are passive, formal and hence rather boring written documents - dust-magnets. They take some effort to find, read and understand. Unless they are accompanied by suitable standards, procedures, guidelines and other awareness materials, and supported by structured training, awareness and compliance activities to promote and bring them to life, employees can legitimately claim that they didn’t even know of their existence - which indeed they often do when facing disciplinary action.
- Lack of accountability: if it is unclear who owns the policies and to whom they apply, noncompliance is the almost inevitable outcome. This, in turn, makes it risky for the organization to discipline, sack or prosecute people for noncompliance, even if the awareness, compliance and enforcement mechanisms are in place. Do your policies have specific owners and explicit responsibilities, including their promotion through awareness and training? Are people - including managers - actually held to account for compliance failures and incidents?
- Lack of compliance: policy compliance and enforcement activities tend to be minimalist, often little more than sporadic reviews and the occasional ticking-off. Circulating a curt reminder to staff shortly before the auditors arrive, or shortly after a security incident, is not uncommon. Policies that are simply not enforced for some reason are merely worthless, whereas those that are literally unenforceable (including those where strict compliance would be physically impossible or illegal) can be a liability: management believes they have the information risks covered while in reality they do not. Badly-written, disjointed and inconsistent security policies are worse than useless.
If you wait to become 800-171 compliant, you won’t win contracts. That was the message we wanted to make loud and clear to over 200 federal contractors during last week’s Washington Technology (WT) webcast, Inside NIST 800-171: Cyber Requirements and the Risk of Non-Compliance. Currently, all DoD contractors that handle, process or store sensitive types […]
The post When it Comes to NIST 800-171 Compliance – There’s ‘On Time’ and There’s ‘Lombardi Time’ appeared first on The State of Security.
The whole point of IT security is to minimize risk, and risk is, ultimately, a financial reality. A well-run organization practices risk mitigation by not only using the best tools, services and methods for maximizing data security, but also increasingly by augmenting great security with the right cyber insurance.
As we know, the cyberthreat landscape is in a constant state of change. It’s a contest between evolving threats on the one side, and the security knowledge, options, resources, products and services on the other. The insurance landscape is also in a constant state of change. Yet too many organizations treat this kind of insurance as either unnecessary, or as a necessary, but generic, turn-key, set-it-and-forget-it checkbox item. In fact, it’s an important, complicated and necessary financial service that needs to be frequently reviewed, reconsidered and updated.
With new and evolving threats to your organization’s financial well-being, it’s time to rethink what you know about cyber insurance.
Why Most Companies Aren’t Covered
Cyber insurance is a relatively new phenomenon for most companies. Only 38 percent of organizations are covered by data insurance, according to Spiceworks, a social exchange for IT services. Of those covered, around 45 percent have had coverage for less than two years and only 24 percent have been covered for more than five years. Furthermore, only 11 percent of those without insurance plan to buy a policy within the next two years.
That means knowledge about and experience with insurance is understandably incomplete at most organizations. As a result, corporate leadership is often unsure about its value or about the specifics of coverage.
Unfamiliarity with the finer points of insurance is also evident in the Spiceworks survey. Among the organizations not covered, the top reasons for not yet purchasing cyber insurance are that it isn’t a priority at the organization (41 percent), lack of budget (40 percent), lack of knowledge about insurance (36 percent), and the absence of any regulatory requirement to carry it (34 percent).
This lack of understanding is very troublesome given the average total cost of a data breach ranges from $2.2 million to $6.9 million, according to the “2018 Cost of a Data Breach Study” from the Ponemon Institute and IBM Security. For bigger breaches at larger companies, the cost can soar into the hundreds of millions of dollars.
A wide gap exists between the actual need for insurance and the perceived need. It’s time to change that.
Insurance Against Hacks? You Don’t Know the Half of It
Most people in the industry would say that the point of cyber insurance is to protect against the financial hit from an attack, right? This may be true, but not always.
Verizon’s “2018 Data Breach Investigations Report” investigated more than 53,000 incidents and more than 2,000 confirmed breaches. It found that around 73 percent of data breaches were perpetrated by external attackers, while 28 percent involved employees and other insiders.
Unfortunately, insurance coverage sometimes focuses on external hacks to the exclusion of “inside jobs,” accidents, service provider errors and other non-hacking causes.
Going back to the Spiceworks report, policies can vary greatly: Liability is covered by 78 percent of cyber insurance policies, electronic data by 75 percent and legal or investigative fees by 69 percent. But only around 52 percent of those policies cover loss of income or cyber extortion losses, and only 35 percent cover damage to reputation.
In addition, according to U.K. insurance governance company Mactavish, cyber insurance policies commonly suffer from eight major flaws:
- They cover attacks or hacks, but may not cover accidents and errors;
- They cover only costs required by law, but may not cover the total incident costs;
- Coverage is limited to the time of the network interruption, but may not cover business disruption;
- They may limit or exclude systems delivered by outsourced service providers;
- They may exclude software or systems in development or beta;
- They may not cover incidents caused by contractors;
- Notification requirements may be too complicated; and
- They may only cover insurer-appointed advisers and specialists.
When considering your options for cyber insurance, keep an eye out for these common exceptions to ensure you’re picking the plan that best fits your business needs.
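One way to make that review systematic is to keep the common exceptions as a checklist and flag whatever a candidate policy leaves uncovered. A minimal sketch (the gap descriptions and the example policy data are illustrative assumptions, not any insurer's actual terms):

```python
# The eight common gaps identified above, phrased as coverage areas to check.
COMMON_GAPS = [
    "accidents and errors",
    "total incident costs beyond legal minimums",
    "business disruption beyond the network interruption itself",
    "systems delivered by outsourced service providers",
    "software or systems in development or beta",
    "incidents caused by contractors",
    "workable notification requirements",
    "freedom to choose your own advisers and specialists",
]

def uncovered_gaps(policy_covers):
    """Return the common gaps a candidate policy does not explicitly cover."""
    return [gap for gap in COMMON_GAPS if gap not in policy_covers]

# Example: a hypothetical policy that addresses only three of the eight areas.
policy = {
    "accidents and errors",
    "incidents caused by contractors",
    "workable notification requirements",
}
print(len(uncovered_gaps(policy)))  # 5 gaps left to negotiate or accept
```

However the checklist is implemented, the exercise is the same: enumerate the exceptions before signing, not after a claim is denied.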
How Compliance Complicates Coverage
In addition to focusing on data breaches, organizations must pay attention to a complex and evolving regulatory environment. Enterprises now face a new world of regulatory compliance around privacy, from the General Data Protection Regulation (GDPR) to the California Consumer Privacy Act (CCPA), which will go into effect on Jan. 1, 2020.
It’s tempting to respond to this by saying, “We’ll just comply, of course, and all will be well.” But it’s not that simple. Fines for noncompliance could be enormous, and companies can be fined for not only violations or potential violations of user privacy, but also for how personal data is collected, stored, processed and even how the collection is communicated to the public.
All this is new, and it’s likely that in the coming years, many organizations will be slapped with hefty fines for misunderstanding the laws’ fine print, how they express and organize their privacy policies, how user data is processed, and other peripheral or secondary matters.
Bringing it back to cyber insurance, many policies will not cover fines or other costs if the violation is around the processing of data or communication of policy. Some U.S. states even ban insurance coverage for regulatory fines of any kind, and insurance companies strike that coverage in those states. Compliance is becoming an increasingly relevant aspect of insurance, but many insurance policies just don’t fully cover it.
It’s Often a Matter of Interpretation
One problem with an unsophisticated approach to insurance is that organizations can accept policies that don’t cover them. Another problem is having a different interpretation of those policies than the provider, which can be a costly misunderstanding.
One interesting example is what I call the “act of war” clause: Many policies will cover a breach, unless that breach is the result of an “act of war” by a nation state.
That sounds reasonable. The trouble is, some of the most sophisticated and damaging exploits are developed by these threat actors. Some are created by one government, modified by another, then deployed by who-knows. This could provide a loophole for insurance providers that don’t want to pay up. They can argue that a hack enabled by malware developed by a foreign government means the attack was an “act of war,” and therefore not covered under the policy.
How to Find Cyber Insurance Coverage That Fits
The important takeaway here is to not make assumptions about coverage. Read the fine print. Pay special attention to liabilities around compliance, including fines.
Ideally, the right insurance offers cyber risk mitigation that offsets some or all of the costs when recovering from a breach or other security event. The right policy will compensate for not only lost business during business or network interruption, but also lawsuits and even extortion costs.
It’s also important to understand that insurance won’t cover you if you’re not protecting yourself with great security software, systems and policies. If your company is negligent with security, the insurance companies won’t pay.
First, make sure you’ve got strong cybersecurity systems, tools and procedures in place. Then, shop around for the cyber insurance plan that works best for you — and read the fine print. Negotiate for a policy that truly and fully covers all possible financial loss for everything having to do with data — from attacks to accidents to compliance. Lastly, review your coverage regularly as cyber risks evolve.
Eleven organizations are asking major retailers in the United States to stop selling Internet-connected devices that don’t meet minimum security and privacy requirements.
I’ve been working in the security industry for mumble mumble years, and one recurring problem I’ve noticed is that security is often considered an add-on to business initiatives. This is neither new, nor surprising. And while the “customer-first” approach is not really a new talking point for most companies, “customer-obsessed” became a major business initiative for many in 2018. This is due to a number of factors — increased brand visibility via social media, changing buyer behaviors and evolving data privacy legislation, to name a few — and doesn’t show any signs of changing in 2019.
What Does It Mean to Be Customer-First?
Contrary to what many businesses seem to believe, customer obsession doesn’t mean sending six emails in two weeks to make sure your customer is happy with his or her purchase and requesting a good review or rating. Being customer-first simply means listening to your customers’ needs. It requires you to quickly adjust and react to meet those needs — or, ideally, anticipate them and proactively offer solutions to your customers’ issues.
Most of all, customer obsession requires trust. To build trust among your end users, security must be the foundation of every customer-first initiative. In fact, I’d argue that organizations must be security-obsessed to effectively deliver on their customer-first plans.
Prioritize Security to Build Customer Trust
The benefits of a customer-first business approach are clear: increased loyalty to your brand, revenue gains, etc. It is also apparent why security is so important: No organization wants to suffer the consequences of a data breach. However, by looking deeper into what a security-first approach to a customer-first culture looks like, you’ll quickly uncover the complexity of this issue.
First, there is a distinct difference between checking the boxes of your security requirements (i.e., compliance) and truly making your customers’ welfare a top priority. Of course, adherence to security and privacy regulations is essential. Without these standardized compliance policies, companies could measure success in a variety of ways, which would look different to everyone. And if we’re being honest, meeting compliance regulations is often more about avoiding penalties than improving your business.
Second, your brand is more than just your product or service; it encompasses the way your company looks, feels, talks and spends money and is representative of its culture and beliefs. In other words, your brand is about the way people feel when they interact with your company. According to Forrester Research, today’s buyers are increasingly looking at these other characteristics when they make decisions about the products or services they use.
This is where security becomes essential. If you want to instill trust among your end users, you need to go beyond standard compliance measures. Security must become a foundation of your company culture and your customer-first initiatives. It must be threaded into every business initiative, corporate policy, department and individual. This means technology purchases should be made with your end users’ security in mind, as well as your employee data and corporate assets.
It also means evaluating your business partners and the policies they have in place to ensure they fall within your standards. For example, are you considering moving critical business technology to the cloud as part of your digital transformation initiatives? If so, what do you know about your cloud provider’s security precautions? Are you working with advertisers or marketing organizations that interact with your end users? If so, do you know how they handle your customers’ and prospects’ personal data?
How to Develop a Strong Security Culture
Operating a business that is customer-first is ambitious. It’s also really, really hard. By making security a cultural tenet throughout your organization, you communicate to your customers that your brand is trustworthy, your business has integrity and that they matter to you. So how do you do it?
Design Collaboration Into Your Security Strategy With Open Solutions
The threat-solution cycle is a familiar one: A new security event occurs, the news covers it, a new company emerges to solve the problem, your company deploys the solution and then a new security event occurs. The entire industry is stuck in a vicious cycle that we, as vendors, have created. To break this cycle we need to take a page from our adversaries. Share intelligence with our peers and our competitors. Learn from other industries. Use open technology that integrates multiple sources of data. Only then are we equipped to uncover risks to our customers that hide among the chaos.
Build Security Muscle Memory
Many organizations are spending a lot of money on security awareness training, which is great. However, the best training is useless if employees are bypassing security measures for convenience. Make security processes required, enforceable and, above all, easily incorporated into the daily life of your users.
Shift Your Perspective
Security strategy is often an afterthought to business initiatives that cut costs, increase revenue and improve efficiency. Security is, after all, a cost. But a good security culture can set your company apart. It can be the champion or the killer for your brand, particularly in an era where customers’ buying motivations have shifted.
Right now, brand loyalty is an asset. A recent Harris Poll survey found that 75 percent of respondents will not buy from a company, no matter how great the products are, if they don’t trust it to protect their data. Stability, integrity and corporate responsibility are key factors in purchasing decisions. Making security a strategic pillar of your company’s brand is a tremendous responsibility, but one that will go a long way toward establishing trust among your users.
The Best Way to Grow Your Business
A customer-first approach is, arguably, the business initiative that can impact your bottom line the most. Understanding and proactively addressing your customers’ security and privacy concerns shows that you’re not just trying to sell a product or service, but that you are responsible with their data and operate with integrity. In an era where brand integrity matters, security-first is the best way to grow your business.
The post Why You Need a Security-First Culture to Deliver on Your Customer-First Goals appeared first on Security Intelligence.
In June 2017, the cybersecurity world changed. As soon as NotPetya began infecting systems in Ukraine and spreading across Europe and beyond, it became clear that the intent of this worm wasn’t espionage, distributing malware or holding data for ransom. Rather, it was designed to destroy data, shut down systems and create havoc.
One of the most severely impacted organizations was global shipping giant Maersk, which transports 20 percent of the world’s trade goods. When Maersk’s systems went down, it sent shockwaves around the world and caused security observers to shudder. NotPetya was apparently a cyberweapon launched against Ukraine, but a far greater number of countries and organizations became collateral damage.
It was a wake-up call for Maersk, according to Andy Powell, who joined the company as its new chief information security officer (CISO) in June 2018, a year after the NotPetya attack.
“What Maersk was very strong at was our ability to recover,” Powell said in a fireside chat with IBM Security General Manager Mary O’Brien on Tuesday, the opening night of the 2019 IBM Think conference. “Balancing business resilience with preventative measures means that any company can address some of these high-end attacks, but you’ve got to accept that some of them are going to get through. And therefore, you need to be able to recover your business.”
While cybersecurity inevitably changed in the wake of NotPetya, it’s continuing a rapid transformation as businesses digitize and create ever more data. O’Brien and Powell discussed these profound shifts during their chat, along with Kevin Baker, CISO of Westfield Insurance, who underscored the impacts of digital transformation on data security, risk and compliance.
Lessons in Resiliency and Agile Security
In the age of cloud and connected everything, the volume of data being produced has exploded, along with opportunities for greater insights, innovation and new business models. This digital transformation has broad implications for security.
“Our clients want to know where their containers are, they want to know what part of the process is involved, they want to know information around what they’re moving,” Powell said. “We can provide that as part of the transformation.”
To secure digital innovation for clients, alongside its legacy systems, Maersk’s security team has taken an agile approach. Security is frequently seen as a roadblock to innovation, Powell said. Bringing together project teams and the security organization helps speed innovations to market by building security into the process from the beginning.
“The reality is the security people need to be working with them in those teams to actually integrate security from day one, and that’s starting to really pay off, because we’re no longer seen as the outsiders,” Powell said. “We’re seen as somebody who is prepared to adopt the culture and work with them. That teamed approach is very important.”
Focus on Data Security, Risk and Compliance
Ohio-based Westfield Insurance, with $4.9 billion in assets, has been in business since 1848. That means “a lot of data,” Baker said during the Think fireside chat.
“Because of digitization, it’s a veritable explosion of data. Our job is to know what data we have, where it is, how many copies of it we have, where it’s moving, who can access it and what the criticality of that data is so we can focus on data that has a regulatory import,” Baker said.
Baker’s team focuses on governance and risk, monitoring existing regulations like the New York Department of Financial Services (NYDFS) cybersecurity regulation. And they look to the horizon for emerging compliance risks, such as California’s data privacy law, which will take effect in January 2020.
The California Consumer Privacy Act (CCPA) follows in the footsteps of the European Union (EU)’s General Data Protection Regulation (GDPR) with strict data privacy mandates, including a “right to be forgotten,” whereby companies will be required to destroy certain types of customer data.
“‘Forget me’ is a new capability that we have to solve for,” Baker said. “So we’re looking for ways that we can tag the data, move the security control down at the data element, and use the same tagging and process in multiple ways. It’s more than data classification, but it starts there.”
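The element-level tagging Baker describes can be pictured with a small sketch: each stored element carries tags, including the data subject it relates to, so a "forget me" request can locate and destroy every copy. All names and the in-memory structure here are illustrative assumptions, not Westfield's actual design:

```python
from collections import defaultdict

class TaggedStore:
    """Toy store where security control sits at the data element:
    every element is tagged with the data subject it concerns."""

    def __init__(self):
        self._data = {}                      # element_id -> record
        self._by_subject = defaultdict(set)  # subject_id -> element_ids

    def put(self, element_id, value, subject_id, tags=()):
        self._data[element_id] = {"value": value, "subject": subject_id,
                                  "tags": set(tags)}
        self._by_subject[subject_id].add(element_id)

    def forget(self, subject_id):
        """Destroy every element tagged with this data subject."""
        removed = self._by_subject.pop(subject_id, set())
        for eid in removed:
            del self._data[eid]
        return len(removed)

store = TaggedStore()
store.put("e1", "jane@example.com", "subj-42", tags={"pii", "email"})
store.put("e2", "555-0100", "subj-42", tags={"pii", "phone"})
store.put("e3", "policy-notes", "subj-7")
print(store.forget("subj-42"))  # 2 elements destroyed; subj-7 data untouched
```

The hard part in practice, as Baker notes, is applying the same tagging consistently across every copy and system the data moves through, which is why it "starts" with classification but does not end there.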
How Can Digital Transformation Help Reduce Complexity?
Digital transformation in business — through the adoption of technologies such as the cloud, artificial intelligence, and mobile and smart devices — has had major implications for the security industry as well. Although security products have made strides in protecting businesses beyond the traditional firewall, complexity is a hidden cost of innovation.
“We believe the No. 1 challenge is the complexity that we — the vendors and our clients — have jointly created,” O’Brien said during her chat at the IBM Think conference, her first as IBM Security general manager. “We got here because we let the latest threat of the day or requirement drive our technology and our strategy. So every time there was a new attack, a new merger, a new regulation, we created a new tool.”
The second problem of security innovation, O’Brien added, is that these products are created, purchased and deployed in silos. They are not integrated and don’t naturally talk to each other. According to O’Brien, it’s time to eliminate this complexity to enable business innovation and transformation.
This past October, IBM Security launched IBM Security Connect, a simple, open and connected cloud platform that can automatically access security data no matter where it resides. This enables security teams to take advantage of existing investments, from IBM or other vendors, without compromising effectiveness.
“You have insights today, but not total insights,” O’Brien said. “But because Connect can tap into your existing data wherever it is, you will see the full picture of your security situation without having to migrate your data or manually integrate it.”
For his part, Baker said limiting the number of tools but integrating them across multiple vendor systems is key to making strides toward his team’s data security goals.
“We elected to use not more security tools, but fewer security tools. We chose tools that were on their own pretty powerful, things like IBM’s QRadar and Guardium. Then we integrated that with other vendors,” Baker explained. “We use these tools to create our own link and do our own analysis. Not just the net-new data, but even the legacy data, and then to analyze that data as a single unit, to track the most critical data. We know that we can’t track it all. We need to zero in on what’s important.”
There was no shortage of talking points on data protection in 2018, from concerns over data risk and compliance requirements to the challenges of operational complexities. When we surveyed some of the most prominent trends and themes from the last year, three topics stood out among the many facets of these core cybersecurity challenges: regulatory compliance, data breach protection and risk management.
As we settle into 2019, let’s take a closer look at what we learned in the past year and explore how organizations around the world can improve their data security posture in the long term.
Navigating Your GDPR Compliance Journey
When the General Data Protection Regulation (GDPR) took effect last May, companies were seeking guidance and best practices to address their compliance challenges. Although this sense of urgency is beginning to diminish, the demand for data privacy controls will only increase as organizations across industries and geographies adjust to the post-GDPR world.
In January 2020, the California Consumer Privacy Act (CCPA) will go into effect, and Brazil’s data protection law, Lei Geral de Proteção de Dados Pessoais (LGPD), will kick in the following month. Many of the processes and requirements — not to mention the benefits — associated with GDPR compliance will be highly relevant to organizations’ preparations for these new regulations. In the year ahead, security teams should continue to focus on:
- GDPR readiness: Complying with GDPR can require changes across nearly every aspect of your business, from customer communications to social media interactions and data protection processes for handling and storing personal and financial information. Analyze your GDPR readiness and kick-start compliance with this five-phase GDPR action plan.
- How to report a breach: The GDPR requires companies to report a breach within 72 hours of their becoming aware of it, where feasible — an unprecedented timeline. Be sure to understand the requirements for reporting a breach, from the root cause to the assessment of the scope and the mitigation action plan.
- GDPR and business success: Beyond the challenges and demands of compliance, the GDPR can be good for your business. When managed appropriately, compliance can help drive the organization to a more robust and future-proof security posture.
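The 72-hour reporting window starts the moment the organization becomes aware of the breach, so the notification deadline is a simple fixed offset from that timestamp. A minimal sketch (the awareness time used below is a hypothetical example):

```python
from datetime import datetime, timedelta, timezone

def notification_deadline(aware_at: datetime) -> datetime:
    """Return the latest time a GDPR Article 33 notification is due:
    72 hours after the controller becomes aware of the breach."""
    return aware_at + timedelta(hours=72)

# Hypothetical example: breach discovered at 09:30 UTC on 4 March
aware = datetime(2019, 3, 4, 9, 30, tzinfo=timezone.utc)
print(notification_deadline(aware).isoformat())  # 2019-03-07T09:30:00+00:00
```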
Data Protection Is a Hot Topic as Breaches Soar
Given that 27 percent of organizations will experience a recurring material breach in the next two years — coupled with the rapid proliferation of attack vectors such as the internet of things (IoT) — it’s no surprise that data security was top of mind for security professionals in 2018. Below are some of the salient themes:
- Avoiding breaches: Data breaches are on the rise, due in part to an increase in the number of attack vectors created by complex IT environments. Yet many of these breaches are preventable. While every organization’s challenges are different, some of the most common data security mistakes can put enterprise and customer data at serious risk.
- Responsibility: Who is responsible for data risk management? Blamestorming — the unpleasant, often futile process of pointing fingers — often follows a breach. By determining who is ultimately accountable before a breach, the C-suite can help prevent a breach in the first place and avoid the blamestorming.
- Maintaining control over data: With the increasing number of ransomware variants, it’s critical to augment ongoing user education with technical controls and processes for optimal protection. Yet these measures can only do so much; technologies and processes that deliver preventive protection and instant remediation can help you maintain control of your data in the face of an attack.
Gain the Upper Hand Through Risk Management
Hand in hand with concerns about breaches, organizations are proactively seeking ways to understand, reduce and mitigate the risks that lead to these breaches. The third most popular topic covered a variety of risk mitigation and management themes that can help organizations on their journey toward smarter data protection, including:
- Formalizing processes: Proactively finding and protecting the crown jewels is the only pre-emptive advantage organizations have in the battle of the breach. Creating and deploying formal risk management processes can help organizations evaluate information assets and the vulnerabilities that threaten to compromise them.
- Structured versus unstructured data: Both structured and unstructured data are core business assets. That’s why it’s important to understand the differences between them and key considerations for assessing the risk levels for both structured and unstructured data when building a data protection strategy.
As you grapple with today’s data privacy, protection and risk management challenges — and prepare for tomorrow’s — these lessons, best practices and expert opinions from 2018 can help guide your security strategy and improve your data protection posture in 2019 and beyond.
The post What Have We Learned About Data Protection After Another Year of Breaches? appeared first on Security Intelligence.
Data breaches and privacy violations are now commonplace. Unfortunately, the consequences for US companies involved can be complicated. A company’s obligation to a person affected by a data breach depends in part on the laws of the state where the person resides. A person may be entitled to free credit monitoring for a specified period of time or may have the right to be notified of the breach sooner than somebody living in another state. … More
The post Is 2019 the year national privacy law is established in the US? appeared first on Help Net Security.
The Swiss government last week announced the launch of a public bug bounty program for its electronic voting systems, with rewards of up to $50,000.
A survey conducted by the Ponemon Institute on behalf of security solutions provider TUV Rheinland OpenSky analyzes the security, safety and privacy challenges and concerns related to the convergence between information technology (IT), operational technology (OT), and industrial internet of things (IIoT).
Huawei's top executive in Europe brushed off Western critics and defended the company's track record against accusations that it could serve as front for Chinese spying.
More than 59,000 personal data breaches were reported to European data protection regulators in the first eight months following the enforcement of GDPR; based on available data, the precise figure is 59,430.
- Information risks and information opportunities are the possibilities of information being exploited in a negative and positive sense, respectively. The negative sense is the normal/default meaning of risk in our field, in other words the possibility of harmful consequences arising from incidents involving information, data, IT and other ‘systems’, devices, IT and social networks, intellectual property, knowledge etc. This blog piece is an example of positively exploiting information: I am deliberately sharing information in order to inform, stimulate and educate people, for the benefit of the wider ISO27k user community (at least, that's my aim!).
- Business risks and business opportunities arise from the use of information, data, IT and other ‘systems’, devices, IT and social networks, intellectual property, knowledge etc. to harm or further the organization’s business objectives, respectively. The kind of manipulative social engineering known as ‘marketing’ and ‘advertising’ is an example of the beneficial use of information for business purposes. The need for the organization to address its information-related compliance obligations is an example that could be a risk (e.g. being caught out and penalized for noncompliance) or an opportunity (e.g. not being caught and dodging the penalties) depending on circumstances.
- The ISMS itself is subject to risks and opportunities. Risks here include sub-optimal approaches and failure to gain sufficient support from management, leading to lack of resources and insufficient implementation, severely curtailing the capability and effectiveness of the ISMS, meaning that information risks are greater and information opportunities are lower than would otherwise have been achieved. Opportunities include fostering a corporate security culture through the ISMS leading to strong and growing support for information risk management, information security, information exploitation and more.
- There are further risks and opportunities in a more general sense. The possibility of gaining an ISO/IEC 27001 compliance certificate that will enhance the organization’s reputation and lead to more business, along with the increased competence and capabilities arising from having a compliant ISMS, is an example of an opportunity that spans the three perspectives above. ‘Opportunities for improvement’ involve possible changes to the ISMS, the information security policies and procedures, other controls, security metrics etc. in order to make the ISMS work better, where ‘work better’ is highly context-dependent. This is the concept of continuous improvement, gradual evolution, maturity, proactive governance and systematic management of any management system. Risks here involve anything that might prevent or slow down the ongoing adaptation and maturity processes, for example if the ISMS metrics are so poor (e.g. irrelevant, unconvincing, badly conceived and designed, or the measurement results so utterly disappointing) that management loses confidence in the ISMS and decides on a different approach, or simply gives up on the whole thing as a bad job. Again, the opportunities go beyond the ISMS to include the business, its information, its objectives and constraints etc.
By passing the California Consumer Privacy Act (CCPA), which goes into effect on January 1, 2020, the Golden State is taking a major step in the protection of consumer data. The new law gives consumers insight into and control of their personal information collected online. This follows a growing number of privacy concerns around corporate access to and sales of personal information with leading tech companies like Facebook and Google. The bill was signed by … More
The UK data protection regulator (the Information Commissioner's Office – ICO) launched a wide-ranging investigation into the use of personal information for political purposes following the Facebook/Cambridge Analytica affair. It resulted in the publication of a lengthy report titled 'Democracy disrupted? Personal information and political influence' in July 2018, and a fine on Facebook set at the maximum amount possible – £500,000 ($645,000).
SecurityWeek RSS Feed
Microsoft has announced a number of new capabilities and improvements for tools used by enterprise administrators. New Microsoft 365 security and compliance centers The new Microsoft 365 security center allows security administrators and other risk management professionals to manage and take full advantage of Microsoft 365 intelligent security solutions for identity and access management, threat protection, information protection, and security management. The new Microsoft 365 compliance center allows compliance, privacy, and risk management professionals to … More
The post Microsoft rolls out new tools for enterprise security and compliance teams appeared first on Help Net Security.
A U.S. judge has rejected the settlement between Yahoo and users impacted by the massive data breaches suffered by the company, citing, among other things, inadequate disclosure of the settlement fund and high attorney fees.
Co-authored by Fabrizio Petriconi.
In the ever-expanding digital ecosystem, having secure and efficient access to resources is critical to both using and delivering services. But if you’re a gatekeeper managing a large number of identities and resources, your primary concern is who has access and how that access is being used.
Identity governance is the intelligent management of user identities to support enterprise IT and regulatory compliance. By collecting and analyzing identity data, you can improve visibility into access, prioritize compliance actions with insights based on risks and make better decisions with clear, actionable intelligence.
Certify Access to Reduce Risk
If you use a business-activity-based approach to risk modeling, you’ll make life a bit easier for your auditors, risk compliance managers and, ultimately, yourself. The core aspects of identity management include automatic and manual provisioning, tracking user roles and life cycles, and understanding business workflow.
Most importantly, establishing accurate access certification at the start — and then continuously reviewing it — can help with your risk modeling efforts. You’ll want to prevent users from accumulating unnecessary privileges, so even if you have had an identity management solution in place for years, it’s a good idea to use certification campaigns as a cleaning tool to ensure everyone is only accessing what they need to do their jobs.
How to Avoid Common Access Certification Issues
It takes a certain amount of diligence for access certification to be useful. Approvers are often overwhelmed by too many certification requests, or those certifications are complex and difficult to parse out. It’s easy to see why an approver might simply “select all,” click “approve,” and conclude his or her activity.
Obviously, this approach should be avoided, and in some countries, it is not compliant with regulations. Let’s look at some recommendations for both static, or predefined, cadences and dynamic events, which occur in response to specific activities such as hiring, job shifts and similar user changes.
Recommendations for Static Events
- Once a year, conduct a complete certification in which each manager certifies all the rights of the members of their team.
- Group or divide access for certain applications or business areas to simplify and focus the reviewer’s attention.
- Do not validate access assigned by automatic and/or default policies.
- Delegate campaigns involving very technical or complicated access to skilled reviewers with subject-matter expertise.
- Activate specific campaigns that group homogeneous users (for example, those with the same duties or the same departmental membership).
Recommendations for Dynamic Events
- On a quarterly basis, run delta certifications in which managers certify only the authorization changes since the previous quarter.
- Activate continuous campaigns that trigger reviews on specific events, such as a user moving from one department to another or changing business functions.
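A delta certification boils down to comparing two entitlement snapshots and surfacing only the changes for review. A minimal sketch, assuming entitlements can be exported as user-to-permission mappings (all names below are hypothetical):

```python
def entitlement_delta(previous: dict, current: dict) -> dict:
    """Compare two quarterly entitlement snapshots (user -> set of
    permissions) and return only what changed, per user."""
    delta = {}
    for user in previous.keys() | current.keys():
        before = previous.get(user, set())
        after = current.get(user, set())
        granted, revoked = after - before, before - after
        if granted or revoked:
            delta[user] = {"granted": granted, "revoked": revoked}
    return delta

# Hypothetical snapshots: only alice's new grant needs certifying
q1 = {"alice": {"crm_read"}, "bob": {"hr_read"}}
q2 = {"alice": {"crm_read", "crm_admin"}, "bob": {"hr_read"}}
print(entitlement_delta(q1, q2))
# {'alice': {'granted': {'crm_admin'}, 'revoked': set()}}
```

Managers then review only the `granted` and `revoked` sets rather than every standing entitlement, which is what keeps the quarterly cadence manageable.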
Improve the Content of Your Access Certification Campaigns
As noted, when a certification tool does not offer simple language descriptions that clearly explain the business relevance of roles, users, access permissions and resources involved in the process, approvers may not know what they are certifying.
To create quality descriptions, you should:
- Rely on system owners, since they are the ones who have a thorough understanding of their resources.
- Define roles with explicit names. For example, if a role is assigned to a manager of engineering, use the name “manager_of_engineering” and not simply “mgr” or “L3mgr.” This can be done manually or using role-mining techniques, in which the tool itself proposes a name based on the attributes of the identity, such as department or location.
- Highlight the business activities to which users are contributing.
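The naming convention above can be generated mechanically from identity attributes rather than typed by hand. A small sketch of that idea, using hypothetical attribute values:

```python
import re

def role_name(title: str, department: str) -> str:
    """Build an explicit, self-describing role name from identity
    attributes, e.g. 'manager_of_engineering' rather than 'mgr'."""
    def slug(text: str) -> str:
        return re.sub(r"[^a-z0-9]+", "_", text.lower()).strip("_")
    return f"{slug(title)}_of_{slug(department)}"

print(role_name("Manager", "Engineering"))   # manager_of_engineering
print(role_name("L3 Admin", "IT Ops"))       # l3_admin_of_it_ops
```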
Get It Right
In any case, even after taking all the necessary precautions, access certification can be complex and time-consuming. It’s probably clear by now that to be effective in activating certification campaigns, you need to not only activate the technical solution, but also establish a compliance-oriented culture. Educating approvers on the importance of access certification is also critical to maintain regulatory compliance.
When you consider the commitment of stakeholders and adopt and enforce industry best practices, intelligent identity governance enables you to streamline full provisioning and self-service requests, eliminate manual audits, quickly identify compliance violations and risky behavior, and automate the myriad labor-intensive processes associated with managing user identities. With the digital ecosystem expanding every day, business and security leaders need this level of visibility and control to make better decisions about who can access what data and systems on enterprise networks.
Commercial and open-source system configurations such as Windows, Linux and Oracle do not always have all the necessary security measures in place to be deployed immediately into production. These configurations often have features and functionalities enabled by default, which can make them less secure, especially given the sophistication and resourcefulness of today’s cybercriminals.
A system hardening program can help address this issue by disabling or removing unnecessary features and functionalities. This enables security teams to proactively minimize vulnerabilities, enhance system maintenance, support compliance and, ultimately, reduce the system’s overall attack surface.
Unfortunately, many companies lack a formal system hardening program because they have neither an accurate IT asset inventory nor the resources to holistically maintain or even begin a program. An ideal system hardening program can successfully track, inventory and manage the various platforms and assets deployed within an IT environment throughout their life cycles. Without this information, it is nearly impossible to fully secure configurations and verify that they are hardened.
Planning and Implementing Your System Hardening Program
System hardening is more than just creating configuration standards; it also involves identifying and tracking assets in an environment, establishing a robust configuration management methodology, and configuring and maintaining system parameters to expected values. To manage and promote system hardening throughout your organization, start by initiating an enterprisewide program plan. Most companies are engaged in various stages of a plan, but suffer from inconsistent approaches and execution.
A plan builds on the premise that hardening standards will address the most common platforms, such as Windows, Linux and Oracle, and IT asset classes, such as servers, databases, network devices and workstations. These standards will generally address approximately 80 percent of the platforms and IT asset classes deployed in an environment; the remaining 20 percent may be unique and require additional research or effort to validate the most appropriate hardening standard and implementation approach. By adopting the 80/20 rule, hardening will become more consistent, provide better coverage and increase the likelihood of continued success.
Let’s take a closer look at the components of a system hardening program plan and outline the steps you can take to get started on your hardening journey, gain companywide support of your strategy and see the plan through to completion.
1. Confirm Platforms and IT Asset Classes
First things first: Determine the types of platforms and IT asset classes deployed within your environment. For example, list and document the types of server versions, such as Windows 2016, Windows 2012 R2, Red Hat Enterprise Linux or Ubuntu, and the types of desktop versions, such as Windows 7 and Apple macOS. Then, list the types of database versions, such as MySQL, Oracle 12c and MongoDB. The IT asset inventory should be able to report on the data needed to create the platform and IT asset class list. However, some companies struggle to maintain an IT asset inventory that accurately accounts for, locates and tracks the IT assets in their environment.
If there isn’t an up-to-date IT asset inventory to report from, review network vulnerability scan reports to create a list of platforms and asset classes. The scan reports will help verify and validate existing platforms and IT asset classes in your environment, as well as devices that may be unique to your company or industry. Interviewing IT tower leads can also support this information-gathering exercise, as can general institutional knowledge about what is deployed.
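If the platform list has to come from scan reports, the aggregation itself is straightforward: group detected hosts by operating system and asset class. A sketch, assuming scan output has already been exported to a list of records (the field names are hypothetical):

```python
from collections import Counter

def platform_inventory(scan_records: list) -> Counter:
    """Tally (platform, asset_class) pairs from exported scan data
    to produce the candidate list for hardening standards."""
    return Counter((r["os"], r["asset_class"]) for r in scan_records)

# Hypothetical export rows from a vulnerability scanner
records = [
    {"host": "srv01", "os": "Windows Server 2016", "asset_class": "server"},
    {"host": "srv02", "os": "Windows Server 2016", "asset_class": "server"},
    {"host": "db01", "os": "Oracle Linux 7", "asset_class": "database"},
]
for (platform, asset_class), count in platform_inventory(records).items():
    print(f"{platform} / {asset_class}: {count}")
```

Each distinct (platform, asset class) pair in the tally becomes a line item needing its own hardening standard in step 3.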
2. Determine the Scope of Your Project
Once you’ve documented the platforms and IT asset classes, you can determine the full scope of the system hardening program. From a security perspective, all identified platforms and IT asset classes should be in scope, but if any platform or IT asset class is excluded, document a formalized rationale or justification for the exception.
Any platform or IT asset class not included in the hardening scope will likely increase the level of risk within the environment unless compensating controls or countermeasures are implemented.
3. Establish Configuration Standards
Next, develop new hardening builds or confirm existing builds for all in-scope platforms and IT asset classes. Create this documentation initially from industry-recognized, authoritative sources. The Center for Internet Security (CIS), for example, publishes hardening guides for configuring more than 140 systems, and the Security Technical Implementation Guides (STIGs) — the configuration standards for U.S. Department of Defense systems — can be universally applied. Both of these sources are free to the public. It is generally best to apply one set of hardening standards from an industry-recognized authority across all applicable platforms and IT asset classes whenever possible.
This is the step in the plan where you’ll reference the in-scope listing of all platforms and IT asset classes. For each line item on the list, there should be a corresponding hardening standard document. Start with the industry-recognized source hardening standards and customize them as necessary with the requisite stakeholders.
As an example, let’s say the Microsoft Windows Server 2008 platform needs a hardening standard and you’ve decided to leverage the CIS guides. First, download the Microsoft Windows Server 2008 guide from the CIS website. After orienting the Windows Server team to the overall program plan objectives, send the hardening guide for review in advance of scheduled meetings. Then, walk through the hardening guide with the Windows Server team to determine whether the configuration settings are appropriate.
During these discussions, the team should be able to verify which configuration settings are currently in place, what is not in place, and what may violate company policy for pre- and postproduction server images. If there are hardening guide configuration settings that are not already in place, conduct formal testing to ensure that these changes will not degrade performance, lead to outages or cause other problems within the production environment.
Let’s take the configuration setting “Cryptographic Services to Automatic,” a Microsoft Windows Server 2008 hardening standard from the CIS guide, for example. If this configuration setting is not already in place, can it be implemented? If it cannot be implemented, document the reason why it causes problems as determined through testing, whether it violates company policy or anything else that’s applicable. Note this particular configuration setting as an exception in the overall hardening standard documentation for future reference.
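The per-setting review described above amounts to diffing actual values against the baseline and recording documented exceptions. A minimal sketch of that comparison, with hypothetical setting names and values:

```python
def check_baseline(actual: dict, baseline: dict, exceptions: set) -> list:
    """Compare actual configuration values against the hardening
    baseline; documented exceptions are reported but not failed."""
    findings = []
    for setting, expected in baseline.items():
        value = actual.get(setting)
        if value == expected:
            continue
        status = "exception" if setting in exceptions else "fail"
        findings.append((setting, expected, value, status))
    return findings

# Hypothetical CIS-style settings for a Windows Server 2008 image
baseline = {"Cryptographic Services": "Automatic", "Telnet": "Disabled"}
actual = {"Cryptographic Services": "Manual", "Telnet": "Disabled"}
print(check_baseline(actual, baseline, exceptions={"Cryptographic Services"}))
# [('Cryptographic Services', 'Automatic', 'Manual', 'exception')]
```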
4. Implement Your System Hardening Standards
After you’ve established the hardening build and maintenance documentation and conducted any necessary configuration testing, implement the hardening standards accordingly. The preproduction “golden,” or base, images should be hardened initially to proactively disable or remove unnecessary features prior to deploying in production. Starting with the preproduction images should be less time- and labor-intensive because only one image per platform typically needs to be hardened, removing the need for a change management process or scheduled downtime.
Once a particular platform image is hardened, that image can be used to re-image the postproduction platforms already deployed in the environment. The hardened configuration changes can be deployed with configuration management tools, depending on the platform. For example, the Windows team can implement a vast array of configuration settings throughout the environment it manages with Group Policy. If you cannot make automated hardening changes globally for some or all platforms, you’ll need to physically visit these systems individually and manually apply the configuration changes.
5. Monitor and Maintain Your Program
An effective system hardening program requires the support of all IT and security teams throughout the company. The success of such a program has as much to do with people and processes as it does with technology. Since system hardening is inherently interdisciplinary and interdepartmental, a variety of skill sets are needed to carry it out. Hardening is a team effort that requires extensive coordination.
It’s important to appoint a hardening lead to ensure accountability and responsibility for the management and oversight of the program. This individual should possess the drive to achieve results, a knack for problem-solving, and the ability to direct others in collaboration and teamwork. The system hardening lead is ultimately responsible for the success of the program and should provide the focus, support and momentum necessary to achieve its objectives.
Still, accountability for hardening-related activities should be formally assigned to the teams best suited to ensure their completion and maintenance. The information security team should help facilitate improvements when gaps are identified and serve in a governance role by monitoring the hardening practices of all teams, challenging poor processes and approaches, and verifying compliance against hardening standards. If configuration management tools are not available, verify compliance using vulnerability scans.
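When configuration management tooling is absent, the governance check can be as simple as turning scan findings into a per-platform compliance rate that the security team tracks over time. A sketch with hypothetical finding records:

```python
def compliance_rate(findings: list) -> dict:
    """Summarize hardening-related scan findings into a pass rate
    per platform (finding = platform, check id, passed flag)."""
    totals = {}
    for platform, _check, passed in findings:
        ok, total = totals.get(platform, (0, 0))
        totals[platform] = (ok + int(passed), total + 1)
    return {p: ok / total for p, (ok, total) in totals.items()}

# Hypothetical scan output: two checks per platform
findings = [
    ("Windows Server 2016", "CIS-1.1", True),
    ("Windows Server 2016", "CIS-2.3", False),
    ("RHEL 7", "CIS-1.1", True),
    ("RHEL 7", "CIS-5.2", True),
]
print(compliance_rate(findings))
# {'Windows Server 2016': 0.5, 'RHEL 7': 1.0}
```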
All this complexity demands a great deal of synchronization. The roles and responsibilities must be clearly delineated so teams can focus their efforts on activities that truly advance the hardening program plan.
System Hardening Has Never Been So Crucial
Implementing and managing an effective system hardening program requires leadership, security knowledge and execution. Obtaining executive commitment, management support and sufficient investment for the program is also crucial. If you carefully choose a combination of easy-to-implement platforms and IT asset classes and more challenging, longer-term hardening efforts, you’ll see incremental improvements in program execution and support.
Companies everywhere and across industries face an ever-accelerating rate of change in both the threat and technology landscapes, making system hardening more crucial than ever. A hardening program isn’t built in a day, but an effective, thoughtfully constructed plan can significantly lower your company’s risk posture.
The post How to Build a System Hardening Program From the Ground Up appeared first on Security Intelligence.
The purpose of Data Privacy Day is to raise awareness and promote privacy and data protection best practices. Data Privacy Day began in the United States and Canada in January 2008 as an extension of the Data Protection Day celebration in Europe. Data Privacy Day is observed annually on Jan. 28. Cindy Provin, CEO of nCipher Security, commented that these high-profile policy developments are sending a signal that the days of using personal data for commercial advantage … More
In May 2017, the Equifax data breach compromised critical credit and identity data for 56 percent of American adults, 15 million UK citizens and 20,000 Canadians. The Ponemon Institute estimates that the total cost to Equifax could approach $600M in direct expenses and fines. That doesn’t include the cost of the security upgrades required to […]… Read More
The post Regulatory Fines, Prison Time Render “Check Box” Security Indefensible appeared first on The State of Security.
Organizations worldwide that invested in maturing their data privacy practices are now realizing tangible business benefits from these investments, according to Cisco’s 2019 Data Privacy Benchmark Study. The study validates the link between good privacy practice and business benefits as respondents report shorter sales delays as well as fewer and less costly data breaches. Business benefits of privacy investments The GDPR, which focused on increasing protection for EU residents’ privacy and personal data, became enforceable … More
The post GDPR-ready organizations see lowest incidence of data breaches appeared first on Help Net Security.
A survey of 600 data center experts from APAC, Europe and North America reveals that two in five organizations that store their data in-house spend more than $100,000 storing useless IT hardware that could pose a security or compliance risk. Astonishingly, 54 percent of these companies have been cited at least once or twice by regulators or governing bodies for noncompliance with international data protection laws. Fines of up to $1.5 million could be issued … More
The post Organizations waste money storing useless IT hardware appeared first on Help Net Security.
The U.S. Department of Homeland Security (DHS) on Tuesday issued an emergency directive instructing federal agencies to prevent and respond to DNS hijacking attacks.
Russia's media watchdog Roskomnadzor launched "administrative proceedings" Monday against US social media giants Facebook and Twitter, accusing them of not complying with Russian law, news agencies reported.
On 21 January 2019, the French National Data Protection Commission (CNIL) imposed a financial penalty of €50 million against Google under the GDPR. This is the first time the CNIL has applied the new sanction limits provided by the GDPR. The amount of the penalty, and the decision to publicize it, are justified by the severity of the infringements observed regarding the essential principles of the GDPR: transparency, information and consent. Here are some reactions … More
The post Industry reactions to Google’s €50 million GDPR violation fine appeared first on Help Net Security.
The Payment Card Industry Security Standards Council (PCI SSC) this week announced new security standards for the design, development and maintenance of payment software.
Onapsis, a company specializing in cybersecurity and compliance solutions for enterprise resource planning (ERP) products, on Wednesday announced that it has entered a definitive agreement to acquire competitor Virtual Forge.
The U.S. Department of Health and Human Services (“HHS”) recently announced the publication of “Health Industry Cybersecurity Practices: Managing Threats and Protecting Patients” (the “Cybersecurity Practices”). The Cybersecurity Practices were developed by the Healthcare & Public Health Sector Coordinating Councils Public Private Partnership, a group comprised of over 150 cybersecurity and healthcare experts from government and private industry.
The Cybersecurity Practices are currently composed of four volumes: (1) the Main Document, (2) a Technical Volume of cybersecurity practices for small healthcare organizations, (3) a Technical Volume of cybersecurity practices for medium and large healthcare organizations, and (4) a Resources and Templates Volume. The Cybersecurity Practices also will include a Cybersecurity Practices Assessments Toolkit, but that is still under development.
The Main Document provides an overview of prominent cyber attacks against healthcare organizations and statistics on the costs of such attacks—such as that in 2017, cyber attacks cost small and medium-sized businesses an average of $2.2 million—and lists the five most common cybersecurity threats that impact the healthcare industry: (1) email phishing attacks, (2) ransomware attacks, (3) loss or theft of equipment or data, (4) insider, accidental or intentional data loss and (5) attacks against connected medical devices that may affect patient safety. The Main Document describes real world scenarios exemplifying each threat, lists “Threat Quick Tips,” analyzes the vulnerabilities that lead to such threats, discusses the impact of such threats and provides practices for healthcare organizations (and their employees) to consider to counter such threats. The Main Document concludes by noting that it is essential for healthcare organizations and government to distribute “relevant, actionable information that mitigates the risk of cyber-attacks” and argues for a “culture change and an acceptance of the importance and necessity of cybersecurity as an integrated part of patient care.”
The two Technical Volumes list the following 10 cybersecurity practices for small, medium and large healthcare organizations:
- email protection systems;
- endpoint protection systems;
- access management;
- data protection and loss prevention;
- asset management;
- network management;
- vulnerability management;
- incident response;
- medical device security; and
- cybersecurity policies.
The Technical Volumes also list cybersecurity sub-practices and advice for healthcare organizations to follow, noting the distinction that the guidance for small healthcare organizations focuses on cost-effective solutions, while medium and large organizations may have more “complicated ecosystems of IT assets.”
Finally, the Resources and Templates Volume maps the 10 cybersecurity practices and sub-practices to the NIST Cybersecurity Framework. It also provides templates, such as a Laptop, Portable Device, and Remote Use Policy and Procedure; a Security Incident Response Plan; an Access Control Procedure; and a Privacy and Security Incident Report.
In announcing the Cybersecurity Practices, the HHS Acting Chief Information Security Officer stated that cybersecurity is “the responsibility of every organization working in healthcare and public health. In all of our efforts, we must recognize and leverage the value of partnerships among government and industry stakeholders to tackle the shared problems collaboratively.”
- What do you do in the area of X
- Tell me about X
- Show me the policies and procedures relating to X
- Show me the documentation arising from or relating to X
- Show me the X system from the perspectives of a user, manager and administrator
- Who are the users, managers and admins for X
- Who else can access or interact or change X
- Who supports X and how good are they
- Show me what happens if X
- What might happen if X
- What else might cause X
- Who might benefit or be harmed if X
- What else might happen, or has ever happened, after X
- Show me how X works
- Show me what’s broken with X
- Show me how to break X
- What stops X from breaking
- Explain the controls relating to X
- What are the most important controls relating to X, and why is that
- Talk me through your training in X
- Does X matter
- In the grand scheme of things, is X important relative to, say, Y and Z
- Is X an issue for the business, or could it be
- Could X become an issue for the business if Y
- Under what circumstances might X be a major problem
- When might X be most problematic, and why
- How big is X - how wide, how heavy, how numerous, how often ...
- Is X right, in your opinion
- Is X sufficient and appropriate, in your opinion
- What else can you tell me about X
- Talk me through X
- Pretend I am clueless: how would you explain X
- What causes X
- What are the drivers for X
- What are the objectives and constraints relating to X
- What are the obligations, requirements and goals for X
- What should or must X not do
- What has X achieved to date
- What could or should X have achieved to date
- What led to the situation involving X
- What’s the best/worst thing about X
- What’s the most/least successful or effective thing within, about or without X
- Walk or talk me through the information/business risks relating to X
- What are X’s strengths and weaknesses, opportunities and threats
- What are the most concerning vulnerabilities in X
- Who or what might threaten X
- How many changes have been made in X
- Why and how is X changed
- What is the most important thing about X
- What is the most valuable information in X
- What is the most voluminous information in X
- How accurate is X …
- How complete is X …
- How up-to-date is X …
- … and how do you know that (show me)
- Under exceptional or emergency conditions, what are the workarounds for X
- Over the past X months/years, how many Ys have happened … how and why
- If X was compromised in some way, or failed, or didn’t perform as expected etc., what would/might happen
- Who might benefit from or be harmed by X
- What has happened in the past when X failed, or didn’t perform as expected etc.
- Why hasn’t X been addressed already
- Why didn’t previous efforts fix X
- Why does X keep coming up
- What might be done to improve X
- What have you personally tried to address X
- What about your team, department or business unit: what have they done about X
- If you were the Chief Exec, Managing Director or god, what would you do about X
- Have there been any incidents caused by or involving X and how serious were they
- What was done in response – what changed and why
- Who was involved in the incidents
- Who knew about the incidents
- How would we cope without X
- If X was to be replaced, what would be on your wishlist for the replacement
- Who designed/built/tested/approved/owns X
- What is X made of: what are the components, platforms, prerequisites etc.
- What versions of X are in use
- Show me the configuration parameters for X
- Show me the logs, alarms and alerts for X
- What does X depend on
- What depends on X
- If X was preceded by W or followed by Y, what would happen to Z
- Who told you to do ... and why do you think they did that
- How could X be done more efficiently/effectively
- What would be the likely or possible consequences of X
- What would happen if X wasn’t done at all, or not properly
- Can I have a read-only account on system X to conduct some enquiries
- Can I have a full-access account on test system X to do some audit tests
- Can I see your test plans, cases, data and results
- Can someone please restore the X backup from last Tuesday
- Please retrieve tape X from the store, show me the label and lend me a test system on which I can explore the data content
- If X was so inclined, how could he/she cause chaos, or benefit from his/her access, or commit fraud/theft, or otherwise exploit things
- If someone was utterly determined to exploit, compromise or harm X, highly capable and well resourced, what might happen, and how might we prevent them succeeding
- If someone did exploit X, how might they cover their tracks and hide their shenanigans
- If X had been exploited, how would we find out about it
- How can you prove to me that X is working properly
- Would you say X is top quality or perfect, and if not why not
- What else is relevant to X
- What has happened recently in X
- What else is going on now in X
- What are you thinking about or planning for the mid to long term in relation to X
- How could X be linked or integrated with other things
- Are there any other business processes, links, network connections, data sources etc. relating to X
- Who else should I contact about X
- Who else ought to know about the issues with X
- A moment ago you/someone else told me about X: so what about Y
- I heard a rumour that Y might be a concern: what can you tell me about Y
- If you were me, what aspects of X would concern you the most
- If you were me, what else would you ask, explore or conclude about X
- What is odd or stands out about X
- Is X good practice
- What is it about X that makes you most uncomfortable
- What is it about this audit that makes you most uncomfortable
- What is it about me that makes you most uncomfortable
- What is it about this situation that makes you most uncomfortable
- What is it about you that makes me most uncomfortable
- Is there anything else you’d like to say
On December 20, 2018, the Department of Commerce updated its frequently asked questions (“FAQs”) on the EU-U.S. and Swiss-U.S. Privacy Shield Frameworks (collectively, the “Privacy Shield”) to clarify the effect of the UK’s planned withdrawal from the EU on March 29, 2019. The FAQs provide information on the steps Privacy Shield participants must take to receive personal data from the UK in reliance on the Privacy Shield after Brexit.
The deadline for implementing the steps identified in the FAQs depends on whether the UK and EU are able to finalize an agreement for the UK’s withdrawal from the EU. To the extent the UK and EU reach an agreement regarding withdrawal, thereby implementing a Transition Period in which EU data protection law will continue to apply to the UK, Privacy Shield participants will have until December 31, 2020, to implement the relevant changes to their public-facing Privacy Shield commitments described in the FAQs and below. To the extent no such agreement is reached, participants must implement the changes by March 29, 2019.
According to the FAQs, a Privacy Shield participant who would like to continue to receive personal data from the UK following the relevant deadline (as described above) must update any language regarding its public commitment to comply with the Privacy Shield to include an affirmative statement that its commitment under the Privacy Shield will extend to personal data received from the UK in reliance on the Privacy Shield. In addition, Privacy Shield participants who plan to receive Human Resources (“HR”) data from the UK in reliance on the Privacy Shield must also update their HR Privacy Policies. The FAQs further state that if a Privacy Shield participant opts to make such public commitments to continue receiving UK personal data in reliance on the Privacy Shield, the participant will be required to cooperate and comply with the UK Information Commissioner’s Office with regard to any such personal data received.
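The deadline logic described above can be sketched as a small decision function. This is purely illustrative: the two dates come from the FAQs as summarized here, while the function name and structure are our own, not part of any official guidance.

```python
from datetime import date

# Dates from the Department of Commerce FAQs (as described above):
NO_DEAL_DEADLINE = date(2019, 3, 29)      # UK exit date if no withdrawal agreement
TRANSITION_DEADLINE = date(2020, 12, 31)  # end of the Transition Period if a deal is reached


def privacy_shield_update_deadline(withdrawal_agreement_reached: bool) -> date:
    """Return the date by which a Privacy Shield participant must update its
    public commitments to keep receiving personal data from the UK."""
    if withdrawal_agreement_reached:
        return TRANSITION_DEADLINE
    return NO_DEAL_DEADLINE


print(privacy_shield_update_deadline(True))   # 2020-12-31
print(privacy_shield_update_deadline(False))  # 2019-03-29
```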
- A small engineering company is in a different position to, say, a large charity, a government department or a multinational: its complexity, information risks, information security controls and other factors vary;
- A company in a heavily-regulated industry such as healthcare, finance or defense is probably more compliance-driven, its management and workforce more comfortable with structured and systematic ways of working than, say, a retailer or farmers' cooperative;
- An organization that is 'surrounded' or owned by ISO27k-certified organizations may be under more pressure to implement than a pioneer, especially if there are commercial pressures or contractual/regulatory obligations in this area (e.g. for privacy reasons);
- A patently insecure organization that has suffered one or more serious infosec incidents, breaches, compliance failures etc. is likely to be under more intense pressure to reform and 'get secure' than one which is (or believes itself to be) relatively secure, doing OK at the moment but maybe looking into ISO27k as a strategic opportunity, supporting other initiatives and complementing other management systems maybe;
- A mature, specialized, narrowly-focused, relatively simple and stable organization (such as a steel mill) probably needs far less flexibility in its ISMS than one which is highly dynamic, growing fast, chasing different markets and proactively innovating (such as a manufacturer of IoT things).
- Documentation e.g.:
- Sets of ISO27k and possibly other standards (the core set of ISO/IEC 27000, 27001, 27002, 27003 and 27005 are almost universally recommended);
- Generic template/skeleton ISMS documentation such as scope, SoA, RTP etc.;
- Generic infosec policies and procedures etc.;
- Generic project/program plans, frameworks etc.;
- Generic, structured methods/approaches etc.;
- Tailored documentation to suit the general type/size of business, industry etc.;
- Bespoke or heavily customized documentation, competently tailored to suit a particular organization;
- ISMS-related consultancy-type services of various kinds e.g.:
- Training and awareness services for individuals, teams or the entire organization;
- Help with the program and project governance and management aspects e.g. planning, resourcing, metrics, targets, project risk management;
- Mentoring, guidance and advice for the CISO/ISM, ISMS implementation project manager/team and perhaps others e.g. senior management, risk management, IT audit, IT, Facilities, HR, Operations, Privacy ...;
- All manner of gap analyses, reviews, audits, benchmarks etc. to assess and report on the current situation and help determine future directions, priorities etc.;
- Full-time hands-on ISMS project and program management leading to permanent ISM and CISO roles;
- Part-time local and/or remote support, advice, mentoring etc. for the permanent on-site team - including perhaps assistance with the recruitment and training of such a team;
- Business development consultancy e.g. help to re-position and market the organization as an ISO27k-certified secure, trustworthy, reliable supplier or whatever;
- Systems e.g.:
- IT systems specifically supporting an ISO27k ISMS, or any kind of ISMS, or more generally information risk and security-related;
- Document Management Systems, possibly pre-loaded with [generic but hopefully customizable, relevant and suitable] ISO27k ISMS documentation;
- Learning Management Systems, possibly pre-loaded with ISO27k-related training materials, courses, tests etc.;
- Private, hybrid or public cloud-based apps;
- Structured methods, frameworks and approaches in this area, with or without IT components;
- Something else!
Longer term, I'd like to push ISO27k further into the realms of assurance and accountability, and beef up its advice on governance, information risk management, business continuity, and business for that matter. The business context and objectives for information security would be fascinating to explore and elaborate further on. One day maybe. I've learnt to pick my battles though: it takes a winning strategy to succeed in war.
“Achieving operational resiliency requires a connected view of risk to see the big picture of how risk interconnects and impacts the organization and its processes. A key aspect of this is the close relationship between operational risk management (ORM) and business continuity management (BCM). It baffles me how these two functions operate independently in most organizations when they have so much synergy.”
- Information risk, information security, information technology: the link is glaringly obvious, and yet usually the second words are emphasized, leaving the first woefully neglected;
- Risk and reward, challenge and opportunity: these are flip sides of the same coin that all parts of the business should appreciate. Management is all about both minimizing the former and maximizing the latter. Business is not a zero-sum game: it is meant to achieve objectives, typically profit and other forms of successful outcomes. And yes, that includes information security!
- Business continuity involves achieving resilience for critical business functions, activities, systems, information flows, supplies, services etc., often by mitigating risks through suitable controls. The overlap between BCM, [information] risk management and [information] security is substantial, starting with the underlying issue of what 'critical' actually means to the organization;
- Human Resources, Training, Health and Safety and Information Risk and Security are all concerned with people, as indeed is Management. People are tricky to direct and control. People have their own internal drivers and constraints, their biases and prejudices, aims and objectives. Taming the people without destroying the sparks of creativity and innovation that set us apart from the robots is a common challenge ... and, before long, taming those robots will be the next common challenge.
- People and organizations were free to do exactly as they wish without fear of anyone spotting and reacting to their activities;
- Machines operated totally autonomously, with nobody monitoring or controlling them;
- Organizations, groups and individuals acted with impunity, doing whatever they felt like without any guidance, direction or limits, nobody checking up on them or telling them what to do or not to do;
- Compliance was optional at best, and governance was conspicuously absent.
On November 9, 2018, Serbia’s National Assembly enacted a new data protection law. The Personal Data Protection Law, which becomes effective on August 21, 2019, is modeled after the EU General Data Protection Regulation (“GDPR”).
As reported by Karanovic & Partners, key features of the new Serbian law include:
- Scope – the Personal Data Protection Law applies not only to data controllers and processors in Serbia but also those outside of Serbia who process the personal data of Serbian citizens.
- Database registration – the Personal Data Protection Law eliminates the previous requirement for data controllers to register personal databases with the Serbian data protection authority (“DPA”), though they will be required to appoint a data protection officer (“DPO”) to communicate with the DPA on data protection issues.
- Data subject rights – the new law expands the rights of data subjects to access their personal data, gives subjects the right of data portability, and imposes additional burdens on data controllers when a data subject requests the deletion of their personal data.
- Consent – the Personal Data Protection Law introduces new forms of valid consent for data processing (including oral and electronic) and clarifies that the consent must be unambiguous and informed. The prior Serbian data protection law only recognized handwritten consents as valid.
- Data security – the new law requires data controllers to implement and maintain safeguards designed to ensure the security of personal data.
- Privacy by Design – the new law obligates data controllers to implement privacy by design when developing new products and services and to conduct data protection impact assessments for certain types of data processing.
- Data transfers – the Personal Data Protection Law expands the ways in which personal data may be legally transferred from Serbia. Previously, data controllers were required to obtain the approval of the Serbian DPA for any transfers of personal data to non-EU countries. The new law permits personal data transfers based on standard contractual clauses and binding corporate rules approved by the Serbian DPA. Organizations can also transfer personal data to countries deemed to provide an adequate level of data protection by the EU or the Serbian DPA or when the data subject consents to the transfer.
- Data breaches – like the GDPR, the new law requires data controllers to notify the Serbian DPA within 72 hours of a data breach and will require them to notify individuals if the data breach is likely to result in a high risk to the rights and freedoms of individuals. Data processors must also notify the relevant data controllers in the event of a data breach.
The new law also imposes penalties for noncompliance, but these are significantly lower than those contained in the GDPR. The maximum fines in the new Serbian law are only 17,000 Euros, while the maximum fines in the GDPR can reach up to 20 million Euros or 4% of an organization’s annual global turnover.
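The gap between the two fine ceilings is easy to quantify. The sketch below is illustrative only; the figures come from the comparison above, and the turnover input is a hypothetical example.

```python
# Maximum fine under the new Serbian Personal Data Protection Law (in euros):
SERBIA_MAX_FINE = 17_000


def gdpr_max_fine(annual_global_turnover: float) -> float:
    """GDPR ceiling: EUR 20 million or 4% of annual global turnover,
    whichever is higher (Article 83's upper tier)."""
    return max(20_000_000, 0.04 * annual_global_turnover)


# For a hypothetical company with EUR 1 billion in annual global turnover,
# the GDPR ceiling is 4% = EUR 40 million, dwarfing the Serbian maximum:
print(gdpr_max_fine(1_000_000_000))  # 40000000.0
print(gdpr_max_fine(1_000_000_000) / SERBIA_MAX_FINE)  # roughly 2353x larger
```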
"Public companies must assess and calibrate internal accounting controls for the risk of cyber frauds. Companies are now on notice that they must consider cyber threats when devising and maintaining a system of internal accounting controls."
"The commission made it clear that public companies subject to Section 13(b)(2)(B) of the Securities Exchange Act — the federal securities law provision covering internal controls — have an obligation to assess and calibrate internal accounting controls for the risk of cyber frauds and adjust policies and procedures accordingly."
Turns out there's more to this:
"As the report warns, companies should be proactive and take steps to consider cyber scams. Specific measures should include:
- Identify enterprise-wide cybersecurity policies and how they intersect with federal securities laws compliance
- Update risk assessments for cyber-breach scenarios
- Identify key controls designed to prevent illegitimate disbursements, or accounting errors from cyber frauds, and understand how they could be circumvented or overridden. Attention should be given to controls for payment requests, payment authorizations, and disbursements approvals — especially those for purported “time-sensitive” and foreign transactions — and to controls involving changes to vendor disbursement data.
- Evaluate the design and test the operating effectiveness of these key controls
- Implement necessary control enhancements, including training of personnel
- Monitor activities, potentially with data analytic tools, for potential illegitimate disbursements
While it’s not addressed in the report, companies could be at risk for disclosure failures after a cyber incident, and CEOs and CFOs are in the SEC’s cross-hairs due to representations in Section 302 Certifications. Therefore, companies should also consider disclosure controls for cyber-breaches."
"hello, my prey.
I write you since I attached a trojan on the web site with porn which you have visited.My malware captured all your private data and switched on your camera which recorded the act of your wank. Just after that the malware saved your contact list.I will erase the compromising video records and data if you pay me 350 EURO in bitcoin. This is wallet address for payment : [string redacted]
I give you 30h after you view my message for making the transaction.As soon as you read the message I'll know it immediately.It is not necessary to tell me that you have paid to me. This wallet address is connected to you, my system will delete everything automatically after transfer confirmation.If you need 48h just Open the calculator on your desktop and press +++If you don't pay, I'll send dirt to all your contacts. Let me remind you-I see what you're doing!You can visit the police office but anyone can't help you.
If you try to cheat me , I'll see it immediately!
I don't live in your country. So anyone can not track my location even for 9 months.Goodbye for now. Don't forget about the disgrace and to ignore, Your life can be destroyed."
- One step back from the information security controls are the information risks. The controls help address the risks by avoiding, reducing or limiting the number and severity of incidents affecting or involving information: but what information needs to be protected, and against what kinds of incident? Without knowing that, I don't see how you can decide which controls are or are not appropriate, nor evaluate the controls in place.
- Two steps back takes us to the organizational or business context for information and the associated risks. Contrast, say, a commercial airline company against a government department: some of their information is used for similar purposes (i.e. general business administration and employee comms) but some is quite different (e.g. the airline is heavily reliant on customer and engineering information that few government departments would use if at all). Risks and controls for the latter would obviously differ ... but less obviously there are probably differences even in the former - different business priorities and concerns, different vulnerabilities and threats. The risks, and hence the controls needed, depend on the situation.
- First, I find it helps to start any new role deliberately and consciously “on receive” i.e. actively listening for the first few weeks at least, making contacts with your colleagues and sources and finding out what matters to them. Try not to comment or criticize or commit to anything much at this stage, although that makes it an interesting challenge to get people to open up! Keep rough notes as things fall into place. Mind-mapping may help here.
- Explore the information risks of most obvious concern to your business. Examples:
- A manufacturing company typically cares most about its manufacturing/factory production processes, systems and data, plus its critical supplies and customers;
- A services company typically cares most about customer service, plus privacy;
- A government department typically cares most about ‘not embarrassing the minister’ i.e. compliance with laws, regs and internal policies & procedures;
- A healthcare company typically cares most about privacy, integrity and availability of patient/client data;
- Any company cares about strategy, finance, internal comms, HR, supply chains and so on – general business information – as well as compliance with laws, regs and contracts imposed on it - but which ones, specifically, and to what extent?;
- Any [sensible!] company in a highly competitive field of business cares intensely about protecting its business information from competitors, and most commercial organizations actively gather, assess and exploit information on or from competitors, suppliers, partners and customers, plus industry regulators, owners and authorities;
- Not-for-profit organizations care about their core missions, of course, plus finances and people and more (they are business-like, albeit often run on a shoestring);
- A mature organization is likely to have structured and stable processes and systems (which may or may not be secure!) whereas a new greenfield or immature organization is likely to be more fluid, less regimented (and probably insecure!);
- Keep an eye out for improvement opportunities - a polite way of saying there are information risks of concern, plus ways to increase efficiency and effectiveness – but don’t just assume that you need to fix all the security issues instantly: it’s more a matter of first figuring out your and your organization’s priorities. Being information risk-aligned suits the structured ISO27k approach. It doesn’t hurt to mention them to the relevant people and chat about them, but be clear that you are ‘just exploring options’ not ‘making plans’ at this stage: watch their reactions and body language closely and think on;
- Consider the broader historical and organizational context, as well as the specifics. For instance:
- How did things end up the way they are today? What most influenced or determined things? Are there any stand-out issues or incidents, or current and future challenges, that come up often and resonate with people?
- Where are things headed? Is there an appetite to ‘sort this mess out’ or conversely a reluctance or intense fear of doing anything that might rock the boat? Are there particular drivers or imperatives or opportunities, such as business changes or compliance obligations? Are there any ongoing initiatives that do, could or should have an infosec element to them?
- Is the organization generally resilient and strong, or fragile and weak? Look for examples of each, comparing and contrasting. A SWOT or PEST analysis generally works for me. This has a bearing on the safe or reckless acceptance of information and other risks;
- Is information risk and security an alien concept, something best left to the grunts deep within IT, or a broad business issue? Is it an imposed imperative or a business opportunity, a budget black hole (cost centre) or an investment (profit centre)? Does it support and enable the business, or constrain and prevent it?
- Notice the power and status of managers, departments and functions. Who are the movers and shakers? Who are the blockers and naysayers? Who are the best-connected, the most influential, the bright stars? Who is getting stuff done, and who isn’t? Why is that?
- How would you characterize and describe the corporate culture? What are its features, its high and low points? What elements or aspects of that might you exploit to further your objectives? What needs to change, and why? (How will come later!)
- Dig out and study any available risk, security and audit reports, metrics, reviews, consultancy engagements, post-incident reports, strategies, plans (departmental and projects/initiatives), budget requests, project outlines, corporate and departmental mission statements etc. There are lots of data here and plenty of clues that you should find useful in building up a picture of What Needs To Be Done. Competent business continuity planning, for example, is also business-risk-aligned, hence you can’t go far wrong by emphasizing information risks to the identified critical business activities. At the very least, obtaining and discussing the documentation is an excellent excuse to work your way systematically around the business, meeting knowledgeable and influential people, learning and absorbing info like a dry sponge.
- Build your team. It may seem like you’re a team of 1 but most organizations have other professionals or people with an interest in information risk and security etc. What about IT, HR, legal/compliance, sales & marketing, production/operations, research & development etc.? Risk Management, Business Continuity Management, Privacy and IT Audit pro’s generally share many of your/our objectives, at least there is substantial overlap (they have other priorities too). Look out for opportunities to help each other (give and take). Watch out also for things, people, departments, phrases or whatever to avoid, at least for now.
- Meanwhile, depending partly on your background, it may help to read up on the ISO27k and other infosec standards plus your corporate strategies, policies, procedures etc., not just infosec. Consider attending an ISO27k lead implementer and/or lead auditor training course, CISM or similar. There’s also the ISO27k FAQ, ISO27k Toolkit and other info from ISO27001security.com, plus the ISO27k Forum archive (worth searching for guidance on specific issues, or browsing for general advice). If you are to become the organization’s centre of excellence for information risk and security matters, it’s important that you are well connected externally, a knowledgeable expert in the field. ISSA, InfraGard, ISACA and other such bodies, plus infosec seminars, conferences and social media groups are all potentially useful resources, or a massive waste of time: your call.
PS For the more cynical among us, there’s always the classic three envelope approach.
On November 6, 2018, the French Data Protection Authority (the “CNIL”) published its own guidelines on data protection impact assessments (the “Guidelines”) and a list of processing operations that require a data protection impact assessment (“DPIA”). Read the guidelines and list of processing operations (in French).
The Guidelines aim to complement guidelines on DPIA adopted by the Article 29 Working Party on October 4, 2017, and endorsed by the European Data Protection Board (“EDPB”) on May 25, 2018. The CNIL crafted its own Guidelines to specify the following:
- Scope of the obligation to carry out a DPIA. The Guidelines describe the three examples of processing operations requiring a DPIA provided by Article 35(3) of the EU General Data Protection Regulation (“GDPR”). The Guidelines also list nine criteria the Article 29 Working Party identified as useful in determining whether a processing operation requires a DPIA, if that processing does not correspond to one of the three examples provided by the GDPR. In the CNIL’s view, as a general rule a processing operation meeting at least two of the nine criteria requires a DPIA. If the data controller considers that processing meeting two criteria is not likely to result in a high risk to the rights and freedoms of individuals, and therefore does not require a DPIA, the data controller should explain and document its decision for not carrying out a DPIA and include in that documentation the views of the data protection officer (“DPO”), if appointed. The Guidelines make clear that a DPIA should be carried out if the data controller is uncertain. The Guidelines also state that processing operations lawfully implemented prior to May 25, 2018 (e.g., processing operations registered with the CNIL, exempt from registration or recorded in the register held by the DPO under the previous regime) do not require a DPIA within a period of 3 years from May 25, 2018, unless there has been a substantial change in the processing since its implementation.
- Conditions in which a DPIA is to be carried out. The Guidelines state that DPIAs should be reviewed regularly—at minimum, every three years—to ensure that the level of risk to individuals’ rights and freedoms remains acceptable. This corresponds to the three-year period mentioned in the draft guidelines on DPIAs adopted by the Article 29 Working Party on April 4, 2017.
- Situations in which a DPIA must be provided to the CNIL. The Guidelines specify that data controllers may rely on the CNIL’s sectoral guidelines (“Referentials”) to determine whether the CNIL must be consulted. If the data processing complies with a Referential, the data controller may take the position that there is no high residual risk and no need to seek prior consultation for the processing from the CNIL. If the data processing does not fully comply with the Referential, the data controller should assess the level of residual risk and the need to consult the CNIL. The Guidelines note that the CNIL may request DPIAs in case of inspections.
CNIL’s List of Processing Operations Requiring a DPIA
The CNIL previously submitted a draft list of processing operations requiring a DPIA to the EDPB for its opinion. The CNIL adopted its final list on October 11, 2018, based on that opinion. The final list includes 14 types of processing operations for which a DPIA is mandatory. The CNIL provided concrete examples for each type of processing operation, including:
- processing operations for the purpose of systematically monitoring employees’ activities, such as the implementation of data loss prevention tools, CCTV systems recording employees who handle money, CCTV systems recording a warehouse stocking valuable items in which handlers work, digital tachographs installed in road freight transport vehicles, etc.;
- processing operations for the purpose of reporting professional concerns, such as the implementation of a whistleblowing hotline;
- processing operations involving profiling of individuals that may lead to their exclusion from the benefit of a contract or to the suspension or termination of the contract, such as processing to combat fraud involving (non-cash) means of payment;
- profiling that relies on data from external sources, such as combining data handled by data brokers with processing to customize online ads;
- processing of location data on a large scale, such as a mobile app that collects users’ geolocation data, etc.
The CNIL’s list is non-exhaustive and may be regularly reviewed, depending on the CNIL’s assessment of the “high risks” posed by certain processing operations.
The CNIL is expected to soon publish its list of processing operations for which a DPIA is not required.
On October 17, 2018, the French data protection authority (the “CNIL”) published a press release detailing the rules applicable to devices that compile aggregated and anonymous statistics from personal data—for example, mobile phone identifiers (i.e., media access control or “MAC” address) —for purposes such as measuring advertising audience in a given space and analyzing flow in shopping malls and other public areas. Read the press release (in French).
The CNIL observed that more and more companies use such devices. In shopping malls, these devices can (1) compile traffic statistics and determine how many individuals have visited a shopping mall over a limited time range; (2) model the routes that individuals take through the shopping mall; and/or (3) calculate the rate of repeat visitors. In public areas, they can (1) determine how many individuals walked past an audience measuring device (e.g., an advertising panel); (2) determine the routes taken by these individuals from one advertising panel to another; (3) estimate the amount of time individuals stand in line; and (4) assess the number of vehicles driving on a road, etc.
Against that background, the CNIL identified the three following scenarios:
Scenario 1 – When data is anonymized at short notice (i.e., within minutes of collecting the data)
The CNIL defines anonymization as a specific data processing operation that renders individuals no longer identifiable. (Such processing must comply with the criteria set forth in Opinion 05/2014 of the former Article 29 Working Party on anonymization techniques. According to the CNIL, this includes ensuring a high collision rate between several individuals: in the context of MAC-based audience measurement devices, for instance, the processing must allow multiple distinct MAC addresses to map to the same output value.)
In this scenario, anonymization must be performed promptly, i.e., within minutes of collecting the data. In the CNIL’s view, this reduces the risk that an individual could access identifying data. To that end, the CNIL recommends anonymizing the data within five minutes of collection. After that period, no identifying data should be retained.
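The kind of high-collision anonymization the CNIL describes can be illustrated with a short sketch. This is a hypothetical example, not a CNIL-endorsed implementation: it truncates a hash of each MAC address to a small number of bits so that any given output corresponds to many distinct devices, while still supporting aggregate counts.

```python
import hashlib

def anonymize_mac(mac: str, bucket_bits: int = 8) -> int:
    """Map a MAC address to a small bucket so that many MACs collide.

    With 8 bucket bits there are only 256 possible outputs, so any
    output value matches a large number of distinct MAC addresses,
    which is the "high collision rate" property the CNIL requires.
    """
    digest = hashlib.sha256(mac.lower().encode()).digest()
    value = int.from_bytes(digest, "big")
    # Keep only the top `bucket_bits` bits of the 256-bit digest.
    return value >> (256 - bucket_bits)

# Aggregate statistics (e.g., distinct-bucket counts) remain usable,
# but no bucket value identifies a single device.
macs = ["00:1a:2b:3c:4d:5e", "66:77:88:99:aa:bb", "de:ad:be:ef:00:01"]
buckets = {anonymize_mac(m) for m in macs}
```

Because the raw digest is discarded and only a few bits are retained, the operation is not reversible to a single MAC address; the five-minute deadline would apply to how quickly raw readings are replaced by bucket values.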
The CNIL noted that data controllers may rely on their legitimate interest as a legal basis for the processing under the EU General Data Protection Regulation (“GDPR”). The CNIL recommended, however, that data controllers provide notice to individuals, using a layered approach in accordance with the guidelines of the former Article 29 Working Party on transparency under the GDPR. The CNIL provided an example of a notice that would generally satisfy the first layer of a layered privacy notice, though it emphasized that notice should be tailored to the processing, particularly with respect to individuals’ data protection rights. Since the data is anonymized, individuals cannot exercise their rights of access, rectification and restriction of processing with respect to their personal data, so the notice does not have to mention those rights. However, individuals must be able to object to the collection of their data, and the notice should refer to that right of (prior) objection.
Scenario 2 – When data is immediately pseudonymized and then anonymized or deleted within 24 hours
In this second scenario, data controllers may rely on their legitimate interest as a legal basis for the processing provided that they:
- Provide prior notice to individuals;
- Implement mechanisms to allow individuals to object to the collection of their data (i.e., prior objection to the processing). These mechanisms should be accessible, functional, easy to use and realistic;
- Set up procedures to allow individuals to exercise their rights of access, rectification and objection after data has been collected; and
- Implement appropriate technical measures to protect the data, including a reliable process for pseudonymizing MAC addresses (with deletion of the raw data and the use of a salt or key). The pseudonymized data must be anonymized or deleted at the end of the day.
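The salted or keyed pseudonymization described above can be sketched as follows. This is an illustrative Python sketch under stated assumptions, not a CNIL-endorsed implementation: it assumes a secret key that is freshly generated each day and destroyed within 24 hours, after which the stored tokens can no longer be linked back to any MAC address.

```python
import hashlib
import hmac
import secrets

# Assumption: a fresh secret key is drawn each day and securely
# destroyed at the end of the day, rendering that day's tokens
# effectively anonymous.
daily_key = secrets.token_bytes(32)

def pseudonymize_mac(mac: str, key: bytes) -> str:
    """Keyed (salted) pseudonymization of a MAC address.

    The same MAC yields the same token within a given day, which
    supports repeat-visit statistics, but without the key the raw
    MAC cannot be recovered from the token.
    """
    return hmac.new(key, mac.lower().encode(), hashlib.sha256).hexdigest()

# The raw MAC is discarded immediately after this call; only the
# token is retained, and it must be anonymized or deleted within
# 24 hours per the CNIL's second scenario.
token = pseudonymize_mac("00:1a:2b:3c:4d:5e", daily_key)
```

Using an HMAC rather than a plain hash matters here: an unkeyed hash of a MAC address is trivially reversible by brute force over the small MAC address space, whereas recovering the input from an HMAC requires the secret key.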
Further, the CNIL recommended using multiple modalities to provide notice to individuals, such as posting a privacy notice at entry and exit points of the shopping mall, on Wi-Fi access points, on every advertising device (e.g., on every advertising panel when the processing is carried out on the street), on the website of the shopping mall, or through a specific marketing campaign.
With respect to the individuals’ data protection rights, the CNIL made it clear that individuals who pass audience measuring devices must be able to object to the collection and further processing of their personal data. Companies wishing to install such a device must implement technical solutions that allow individuals to easily exercise this right of objection both a priori and a posteriori: these solutions must not only allow individuals to obtain the deletion of the data already collected (i.e., to exercise their right of objection a posteriori) but also prevent any further collection of their personal data (prior objection). In the CNIL’s view, the right of objection can be exercised using one of the following means:
- Through a dedicated website or app on which individuals enter their MAC address to object to the processing. (The data controller is responsible for explaining to individuals how to obtain their MAC address so that they can effectively object to the processing of their data.) If an individual exercises his/her right of objection via this site or app, the data controller must delete all the data already collected and must no longer collect any data associated with that MAC address; or
- Through a dedicated Wi-Fi network that allows the automatic collection of the devices’ MAC address for the purposes of objecting to the processing. If an individual exercises his/her right of objection via this network, the data controller must delete all the data that has been already pseudonymized and must not further collect the MAC address. The CNIL recommended using a clear and explicit name for that network such as “wifi_tracking_optout”.
According to the CNIL, data controllers should not recommend that individuals turn off the Wi-Fi feature of their phone to avoid being tracked. Such a recommendation is inadequate for purposes of enabling individuals to exercise their right of objection.
Scenario 3 – All other cases
In the CNIL’s view, if the device implemented by the data controller does not strictly comply with the conditions listed in the two previous scenarios, the processing may only be implemented with the individuals’ consent. The CNIL stated that individuals must be able to withdraw consent, and that withdrawing consent should be as simple as granting consent. Individuals should also be able to exercise all the other GDPR data protection rights. In terms of notice, the CNIL recommended providing notice using multiple modalities (as in the second scenario).
Data Protection Impact Assessment and CNIL’s Authorization
The CNIL also reported that, in all the above scenarios, the processing will require a data protection impact assessment to be carried out prior to the implementation of the audience/traffic measuring devices, insofar as such devices assist in the systematic monitoring of individuals through an innovative technical solution.
Additionally, the CNIL’s prior authorization may be required in certain cases.
On October 29, 2018, the Office of the Privacy Commissioner of Canada (the “OPC”) released final guidance (“Final Guidance”) regarding how businesses may satisfy the reporting and record-keeping obligations under Canada’s new data breach reporting law. The law, effective November 1, 2018, requires organizations subject to the federal Personal Information Protection and Electronic Documents Act (“PIPEDA”) to (1) report to the OPC breaches of security safeguards involving personal information “that pose a real risk of significant harm” to individuals, (2) notify affected individuals of the breach and (3) keep records of every breach of security safeguards, regardless of whether or not there is a real risk of significant harm.
As we previously reported, the OPC had published draft guidance for which it had requested public comment. Like the draft version, the Final Guidance includes information regarding how to assess the risk of significant harm, and regarding notice, reporting and recordkeeping requirements (i.e., timing, content and form). The Final Guidance adds a requirement that a record must also include either sufficient detail for the OPC to assess whether an organization correctly applied the real risk of significant harm standard, or a brief explanation as to why the organization determined there was not a real risk of significant harm.
The Final Guidance additionally clarifies the following:
- Who is responsible for reporting and keeping records of the breach? An organization subject to PIPEDA must report breaches of security safeguards involving personal information “under its control.”
- Who is “in control” of personal information? The Final Guidance notes that in general, when an organization (the “principal”) provides personal information to a third party processor (the “processor”), the principal may reasonably be found to be in control of the personal information it has transferred to the processor, triggering the reporting and record-keeping obligations of a breach that occurs with the processor. On the other hand, if the processor uses or discloses the same personal information for other purposes, it is no longer simply processing the personal information on behalf of the principal; it is instead acting as an organization “in control” of the information, and would thereby have the obligation to notify, report, and record. The Final Guidance acknowledges that determining who has personal information “under its control” must be assessed on a case-by-case basis, taking into account any relevant contractual arrangements and “commercial realities” between organizations, such as shifting roles and evolving business models. The Final Guidance recommends that principals ensure “sufficient contractual arrangements [are] in place with the processor to address compliance” with the PIPEDA breach reporting, notification and record-keeping obligations.
- When do other entities besides affected individuals and the OPC need to be notified? If a breach triggers notification due to a real risk of significant harm, “any government institutions or organizations that the organization believes… may be able to reduce the risk of harm… or mitigate the harm” resulting from the breach must also be notified.
Though the privacy commissioner called the new law a “step in the right direction,” the commissioner also voiced concerns about the law, including that: (1) breach reports to the OPC do not contain the information that would allow for the regulator to assess the quality of an organization’s data security safeguards; (2) the lack of financial sanctions for inadequate data security safeguards misses an opportunity to incentivize organizations to prevent breaches; and (3) the government has not provided the OPC with enough resources to “analyze breach reports, provide advice and verify compliance.”
At its October monthly meeting, the Federal Energy Regulatory Commission (the “Commission”) adopted new reliability standards addressing cybersecurity risks associated with the global supply chain for Bulk Electric System (“BES”) Cyber Systems. The new standards expand the scope of the mandatory and enforceable cybersecurity standards applicable to the electric utility sector. They will require electric utilities and transmission grid operators to develop and implement plans that include security controls for supply chain management for industrial control systems, hardware, software and services.
These standards have been in development for some time. The North American Electric Reliability Corporation (“NERC”) proposed them in September 2017 in response to an earlier Commission directive which identified potential supply chain threats to the utility sector. The reliability standards focus on the following four security objectives: (1) software integrity and authenticity; (2) vendor remote access protections; (3) information system planning and (4) vendor risk management and procurement controls. The new standards will become effective on the first day of the first calendar quarter that is 18 months following the effective date of Order No. 850 (which will be 60 days after its publication in the Federal Register).
In addition to adopting NERC’s proposed standards, the Commission also directed NERC to expand them to include Electronic Access Control and Monitoring Systems (“EACMS”) associated with “medium” and “high” impact BES Cyber Systems within the scope of the supply chain risk management standards. NERC and others had opposed this expansion but were overruled by the Commission. NERC has 24 months to develop and file EACMS rules. By contrast, the Commission decided not to require NERC to develop additional rules that would apply to Physical Access Control Systems (“PACS”) or Protected Cyber Assets (“PCAs”) at this time. Instead, NERC must study the cybersecurity supply chain risks presented by PACS and PCAs and report back to the Commission as part of a broader supply chain risk study.
Recently, the French Data Protection Authority (the “CNIL”) published a statistical review of personal data breaches during the first four months of the EU General Data Protection Regulation’s (“GDPR”) entry into application. View the review (in French).
Types of breaches
Between May 25 and October 1, 2018, the CNIL received 742 notifications of personal data breaches that affected 33,727,384 individuals located in France or elsewhere. Of those, 695 notifications were related to confidentiality breaches. In the CNIL’s view, this high proportion of confidentiality breaches may be explained by several reasons:
- In many cases, personal data breaches involve a loss of confidentiality of personal data in addition to any integrity and/or availability issues.
- Organizations often have the means to retrieve data within the 72-hour time limit after an integrity or availability breach.
Business areas affected
The accommodation and food services sector is the sector in which the highest number of breaches were observed, with 185 notifications. This is due to a specific case, where a booking service provider was affected by a data breach. That service provider immediately notified all its customers of the breach and took measures to help them comply with their obligations. As part of these measures, the service provider (1) reminded its customers of the context and the breach notification obligations, (2) provided them with a list of the supervisory authorities to be contacted depending on the country of establishment of each customer, a list of the data subjects to be contacted and a template letter, and (3) implemented a dedicated hotline. According to the CNIL, these measures reflect best practices that should be implemented by a service provider when affected by a personal data breach.
Cause of the breaches
More than half of the notified breaches (421 notifications) were due to hacking via malicious software or phishing. 62 notified breaches were related to data sent to the wrong recipients, 47 notified breaches were due to lost or stolen devices, and 41 notified breaches were due to the unintentional publication of information. Most breaches were therefore the result of hacking and intentional theft attributable to a malicious third party, or employees’ unintentional mistakes. In all other cases, the causes of the breach were unknown or undetermined by the notifying data controller, or the breach was the result of internal malicious actions. The CNIL advised that businesses should think about data security at the outset of their project, regularly run security updates on operating systems, application servers, or databases, and regularly inform staff of the risks and challenges raised by data security. This will help prevent the majority of these incidents.
The CNIL also reported that it will take an aggressive approach when a data controller does not comply with its obligation to notify the breach within 72 hours of becoming aware of it. Failure to comply with that obligation may lead to a fine of up to €10 million or 2 percent of total worldwide annual revenues. Conversely, if the CNIL receives the notification in a timely manner, it will adopt an approach aimed at helping the professionals involved take all the measures necessary to limit the consequences of the breach.
When necessary, the CNIL will contact organizations for the purposes of:
- Verifying that adequate measures have been taken before or after the breach. In this respect, the CNIL may advise the data controller on any needed improvements, e.g. use of an appropriate encryption algorithm or the best way to manage passwords. The CNIL may also refer data controllers to the relevant police services or to the web platform to file a complaint.
- Assessing the necessity to notify affected data subjects. For each notification, the CNIL assesses the risks to data subjects and may recommend notifying them of the breach. Since May 25, 2018, the CNIL’s injunction power has been used only once to order a data controller to notify affected data subjects. The CNIL did so by serving formal notice on the data controller, and the latter complied with the notice served.
On October 23, 2018, the 40th International Conference of Data Protection and Privacy Commissioners (the “Conference”) released a Declaration on Ethics and Data Protection in Artificial Intelligence (the “Declaration”). In it, the Conference endorsed several guiding principles as “core values” to protect human rights as the development of artificial intelligence (“AI”) continues apace. Key principles include:
- AI and machine learning technologies should be designed, developed and used in the context of respect for fundamental human rights and in accordance with the “fairness principle,” including by considering the impact of AI on society at large.
- AI systems’ transparency and intelligibility should be improved.
- AI systems should be designed and developed responsibly, which entails proceeding from the principles of “privacy by default” and “privacy by design.”
- Unlawful biases and discrimination that may result from the use of data in AI should be reduced and mitigated.
The Conference called for the establishment of international common governance principles on AI in line with these concepts. As an initial step toward that goal, the Conference announced a permanent working group on Ethics and Data Protection in Artificial Intelligence.
The Declaration’s authors are the French Commission Nationale de l’Informatique et des Libertés, the European Data Protection Supervisor and the Italian Garante per la protezione dei dati personali. It was co-sponsored by fifteen other organizations from nations across the world.
On October 19, 2018, European Commissioner for Justice, Consumers and Gender Equality Věra Jourová and U.S. Secretary of Commerce Wilbur Ross issued a joint statement regarding the second annual review of the EU-U.S. Privacy Shield framework, taking place in Brussels beginning October 18. The statement highlights the following:
- a significant number of companies – over 4,000 – have become Privacy Shield-certified since the inception of the framework in 2016;
- the appointment of three new members to the U.S. Privacy and Civil Liberties Oversight Board (“PCLOB”), as well as the PCLOB’s declassification of its report on a presidential directive that extended certain signals intelligence privacy protections to foreign citizens;
- the regulators’ ongoing review of the functioning of the Privacy Shield Ombudsperson Mechanism, and the need for the U.S. to promptly appoint a permanent Under Secretary;
- recent privacy incidents affecting U.S. and EU residents, with both U.S. and EU regulators reaffirming the “need for strong privacy enforcement to protect our citizens and ensure trust in the digital economy;” and
- the Commerce Department’s promise to revoke the certification of companies that do not comply with the Privacy Shield’s principles.
The European Commission plans to publish a report on the functioning of the Privacy Shield by the end of 2019.
Recently, the U.S. Department of Health and Human Services’ Office for Civil Rights (“OCR”) entered into a resolution agreement and record settlement of $16 million with Anthem, Inc. (“Anthem”) following Anthem’s 2015 data breach. That breach, affecting approximately 79 million individuals, was the largest breach of protected health information (“PHI”) in history.
Three years ago, in February 2015, OCR opened a compliance review of Anthem, the nation’s second largest health insurer, following media reports that Anthem had suffered a significant cyberattack. In March 2015, Anthem submitted a breach report to OCR detailing the cyberattack, indicating that it began after at least one employee responded to a spear phishing email. Attackers were able to download malicious files to the employee’s computer and gain access to other Anthem systems that contained individuals’ names, Social Security numbers, medical identification numbers, addresses, dates of birth, email addresses and employment information.
OCR investigated Anthem and found that it may have violated the HIPAA Privacy and Security Rules by failing to:
- conduct an accurate and thorough risk analysis of the risks and vulnerabilities to the confidentiality, integrity and availability of electronic PHI (“ePHI”);
- implement procedures to regularly review records of information system activity;
- identify and respond to the security incident;
- implement sufficient technical access procedures to protect access to ePHI; and
- prevent unauthorized access to ePHI.
The resolution agreement requires Anthem to pay $16 million to OCR and enter into a Corrective Action Plan that obligates Anthem to:
- conduct a risk analysis and submit it to OCR for review and approval;
- implement a risk management plan to address and mitigate the risks and vulnerabilities identified in the risk analysis;
- revise its policies and procedures to specifically address (1) the regular review of records of information system activity and (2) technical access to ePHI, such as network or portal segmentation and the enforcement of password management requirements, such as password age;
- distribute the policies and procedures to all members of its workforce within 30 days of adoption;
- report any events of noncompliance with its HIPAA policies and procedures; and
- submit annual compliance reports for a period of two years.
In announcing the settlement with Anthem, OCR Director Roger Severino noted that the record-breaking settlement with Anthem was merited, as the company had experienced the largest health data breach in U.S. history. “Unfortunately, Anthem failed to implement appropriate measures for detecting hackers who had gained access to their system to harvest passwords and steal people’s private information.” Severino continued, “We know that large health care entities are attractive targets for hackers, which is why they are expected to have strong password policies and to monitor and respond to security incidents in a timely fashion or risk enforcement by OCR.”
The $16 million settlement with Anthem almost triples the previous record of $5.55 million, which OCR imposed in 2016 against Advocate Health Care Network. The settlement also comes two months after a U.S. District Court granted final approval of Anthem’s record $115 million class action settlement related to the breach.
On October 11, 2018, the French data protection authority (the “CNIL”) announced that it adopted two referentials (i.e., guidelines) on the certification of the data protection officer (“DPO”). View the announcement (in French). As a practical matter, both referentials are intended to apply to DPOs located in France or French-speaking DPOs. The referentials include:
- a certification referential that sets forth the conditions regarding the admissibility of DPO applications, and lists 17 qualifications that the DPO must have in order to be certified as a DPO by a certification body approved by the CNIL; and
- an accreditation referential that outlines the criteria organizations must satisfy in order to be accredited by the CNIL as certification bodies.
The French Data Protection Act, as amended on June 20, 2018 to supplement the GDPR, allows the CNIL to draft certification criteria and approve certification bodies for the purpose of certifying individuals as DPOs.
The CNIL adopted the referentials for the certification of DPOs on this basis, following a public consultation held from May 23, 2018 to June 22, 2018. The CNIL received about 200 contributions from DPOs (or prospective DPOs), data controllers and data processors in different industries, as well as certification bodies. According to the CNIL, this consultation helped it strike “the most appropriate balance” between the knowledge and skills that a DPO must have and the expectations of privacy professionals.
Certification of the DPO
The certification of a DPO based on the standards of the CNIL’s referential is not a prerequisite in order to be appointed as a DPO with the CNIL and fulfill the responsibilities of a DPO. It is a purely voluntary process to assist in demonstrating compliance with the GDPR requirements. Article 37(5) of the GDPR requires that the DPO “shall be designated on the basis of professional qualities and, in particular, expert knowledge of data protection law and practices and the ability to fulfill the [DPO] tasks.”
In the CNIL’s view, the certificate is a vote of confidence not only for the organization that has a certified DPO, but also for its clients, vendors, employees or agents, since that organization will be able to demonstrate that the DPO has the required level of expertise and skills.
The certification will only be available to individuals (and not to legal persons). The CNIL itself will not grant the certification: it will be issued by certification bodies, once the CNIL accredits the first of them in 2019.
Prerequisites to Certification and Certification Criteria
To be eligible for certification, candidates will need to fulfill one of the following conditions:
- professional experience of at least 2 years in projects, activities or tasks related to data protection and the tasks of a DPO; or
- professional experience of at least 2 years in any field, with at least 35 hours of data protection training administered by a training body.
Candidates also will need to successfully complete a written test that will consist of at least 100 multiple choice questions, 30% of which will be presented in the form of case studies. These questions aim to test skills listed in the CNIL’s DPO certification referential, which include knowledge of fundamental data protection principles, the ability to draft and implement data protection policies, and the ability to assist with data protection impact assessments, among many other skills.
Successful candidates will obtain a certification that will be valid for three years, which may be renewed provided that the DPO passes the test again at the end of this three-year term. As the test will be available in French only, this voluntary certification mechanism is intended to apply to DPOs in France or French-speaking DPOs.
On October 5, 2018, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth LLP hosted a workshop on how to implement, demonstrate and incentivize accountability under the EU General Data Protection Regulation (“GDPR”), in collaboration with AXA in Paris, France. In addition to the workshop, on October 4, 2018, CIPL hosted a Roundtable on the Role of the Data Protection Officer (“DPO”) under the GDPR at Mastercard and a pre-workshop dinner at the Chanel School of Fashion, sponsored by Nymity.
Roundtable on the Role of the DPO Under the GDPR
On October 4, 2018, CIPL hosted a Roundtable on the Role of the DPO under the GDPR. The industry-only session consisted of an open discussion among CIPL members who have firsthand experience in carrying out the role and tasks of a DPO in diverse and complex multinational organizations. Following opening remarks by CIPL president Bojana Bellamy, participants discussed practical challenges, best practices and solutions to the effective exercise of the DPO’s functions. The Roundtable addressed issues such as the position of the DPO in the organization, independence and conflict of interests and rights, duties and liability of the DPO. View the full program and discussion topics as listed in the program agenda.
CIPL Pre-Workshop Dinner at Chanel School of Fashion
On the evening of October 4, 2018, CIPL hosted a pre-workshop dinner at the Chanel School of Fashion, sponsored by Nymity. The event brought together CIPL members and data protection authorities (“DPAs”) in advance of CIPL’s all day accountability workshop. During the dinner, remarks were given by Bojana Bellamy, as well as Anna Pouliou, Head of Privacy at Chanel and Terry McQuay, Nymity president and sponsor of the event.
CIPL Workshop on How to Implement, Demonstrate and Incentivize Accountability Under the GDPR
On October 5, 2018, CIPL hosted an all-day workshop on How to Implement, Demonstrate and Incentivize Accountability Under the GDPR, in collaboration with AXA. CIPL’s two newest papers on the Central Role of Accountability in Data Protection formed the basis of the program, placing an emphasis on how accountability enables effective data protection and trust in the digital society, and on the need for DPAs to encourage and incentivize accountability. Over 100 CIPL members and invited guests attended the session, including over 10 data privacy regulators.
Following opening remarks by Emmanuel Touzeau, Group Communication and Brand Director – GDPR Sponsor at AXA, and CIPL’s Bojana Bellamy, introductory scene-setting keynotes by Peter Hustinx, Former European Data Protection Supervisor, and Patrick Rowe, Deputy General Counsel at Accenture, laid the foundation for the day’s discussions.
The first panel on “Accountability under the GDPR” featured a wide-ranging discussion by DPAs and industry experts on the important role of accountability in data protection. The meaning of accountability and its role in enabling effective privacy protections for individuals while ensuring innovation by organizations informed the discussion, along with dialogue around the key elements of accountability and how specific requirements of the GDPR map to these core elements. An important topic of discussion during this session concerned how to reconcile the need for proactive engagement between companies and DPAs with enforcement practices.
The second panel on “How to Demonstrate Accountability Internally and Externally” progressed the discussion from what constitutes accountability to how to implement and demonstrate it in practice, both within an organization and externally to DPAs. Participants also discussed whether accountability should be showcased proactively and how it can be demonstrated by participation in accountability schemes such as Binding Corporate Rules and future GDPR certifications and codes of conduct.
The final session of the day on “Best Practices: How are DPAs Incentivizing Accountability?” considered how DPAs can incentivize accountability under the GDPR. A wide range of incentives that are – or could be – used to encourage organizations to implement strong accountability measures were discussed, along with those that feature in CIPL’s paper on incentivizing accountability.
The workshop formed part of CIPL’s ongoing work around the concept of accountability in data protection and reaching consensus on its essential elements. View the full workshop agenda. CIPL’s papers on The Case for Accountability: How it Enables Effective Data Protection and Trust in Digital Society and Incentivizing Accountability: How Data Protection Authorities and Law Makers Can Encourage Accountability are the latest papers in this initiative and form the foundations for more work on accountability to follow from CIPL.
On September 26, 2018, the SEC announced a settlement with Voya Financial Advisers, Inc. (“Voya”), a registered investment advisor and broker-dealer, for violating Regulation S-ID, also known as the “Identity Theft Red Flags Rule,” as well as Regulation S-P, the “Safeguards Rule.” Together, Regulations S-ID and S-P are designed to require covered entities to help protect customers from the risk of identity theft and to safeguard confidential customer information. The settlement represents the first SEC enforcement action brought under Regulation S-ID.
I. The Identity Theft Red Flags Rule
Regulation S-ID covers SEC-registered broker-dealers, investment companies and investment advisors and mandates a written identity theft program, including policies and procedures designed to:
- identify relevant types of identity theft red flags;
- detect the occurrence of those red flags;
- respond appropriately to the detected red flags; and
- periodically update the identity theft program.
Covered entities are also required to ensure the proper administration of their preventative programs.
II. The Safeguards Rule
Rule 30(a) of Regulation S-P requires financial institutions to adopt written policies and procedures that address administrative, technical and physical safeguards to protect customer records and information. It further requires that those policies and procedures be reasonably designed to (1) ensure the security and confidentiality of customer records and information; (2) protect against anticipated threats or hazards to the security or integrity of customer records and information; and (3) protect against unauthorized access to or use of customer records or information that could result in substantial harm or inconvenience to any customer.
III. The Voya Violations
According to the SEC’s order, cyber intruders successfully impersonated Voya contractor-representatives, gaining access to a web portal that housed the personally identifiable information (“PII”) of approximately 5,600 Voya customers. Over a six-day period, intruders called Voya’s service call center and requested that three representatives’ passwords be reset; the intruders then used the temporary passwords to create new customer profiles and access customer information and documents. The order indicated that, in two of the three cases, the phone number used to call the Voya service center had previously been flagged as associated with fraudulent activity.
Three hours after the first fraudulent reset, the targeted representative allegedly notified technical support that they had not requested the reset. While Voya did take some steps in response, the order found that those steps did not include terminating the fraudulent login sessions or imposing safeguards sufficient to prevent intruders from obtaining passwords for two additional representative accounts over the next several days.
The SEC determined that Voya violated the Identity Theft Red Flags Rule because, while it had adopted an Identity Theft Prevention Program in 2009, it did not review and update this program in response to changes in the technological environment. The SEC also found that Voya failed to provide adequate training to its employees. Finally, the SEC found that Voya’s Identity Theft Program lacked reasonable policies and procedures to respond to red flags. In addition to these violations, the SEC determined that Voya violated the Safeguards Rule by failing to adopt written policies and procedures reasonably designed to safeguard customer records and information.
IV. Aftermath and Implications
While neither admitting nor denying the SEC’s findings, Voya agreed to a $1 million fine to settle the enforcement action and will engage an independent consultant to evaluate its policies and procedures for compliance with the Safeguards Rule, Identity Theft Red Flags Rule and related regulations. The SEC additionally ordered that Voya cease and desist from committing any violations of Regulations S-ID and S-P.
The Voya settlement demonstrates that the SEC is focused on protecting consumer information and on ensuring that broker-dealers, investment companies and investment advisors comply with Regulation S-ID. The settlement also signals that merely having policies and procedures designed to protect customer information may not suffice; entities subject to Regulation S-ID should frequently evaluate the adequacy of their policies and procedures designed to identify and address “red flags,” and they should ensure that all relevant employees receive comprehensive training on identity theft. Such entities must also ensure that their compliance program is frequently updated to address changes in technology and corresponding changes to the risk environment.
The U.S. Department of Commerce’s National Institute of Standards and Technology recently announced that it is seeking public comment on Draft NISTIR 8228, Considerations for Managing Internet of Things (“IoT”) Cybersecurity and Privacy Risks (the “Draft Report”). The document is to be the first in a planned series of publications that will examine specific aspects of the IoT topic.
The Draft Report is designed “to help federal agencies and other organizations better understand and manage the cybersecurity and privacy risks associated with their IoT devices throughout their lifecycles.” According to the Draft Report, “[m]any organizations are not necessarily aware they are using a large number of IoT devices. It is important that organizations understand their use of IoT because many IoT devices affect cybersecurity and privacy risks differently than conventional IT devices do.”
The Draft Report identifies three high-level considerations with respect to the management of cybersecurity and privacy risks for IoT devices as compared to conventional IT devices: (1) many IoT devices interact with the physical world in ways conventional IT devices usually do not; (2) many IoT devices cannot be accessed, managed or monitored in the same ways conventional IT devices can; and (3) the availability, efficiency and effectiveness of cybersecurity and privacy capabilities are often different for IoT devices than for conventional IT devices. The Draft Report also identifies three high-level risk mitigation goals: (1) protect device security; (2) protect data security; and (3) protect individuals’ privacy.
In order to address those considerations and risk mitigation goals, the Draft Report provides the following recommendations:
- Understand the IoT device risk considerations and the challenges they may cause to mitigating cybersecurity and privacy risks for devices in the appropriate risk mitigation areas.
- Adjust organizational policies and processes to address the cybersecurity and privacy risk mitigation challenges throughout the IoT device lifecycle.
- Implement updated mitigation practices for the organization’s IoT devices as it would any other changes to practices.
Comments are due by October 24, 2018.
Recently, the French Data Protection Authority (“CNIL”) published its initial assessment of the compatibility of blockchain technology with the EU General Data Protection Regulation (GDPR) and proposed concrete solutions for organizations wishing to use blockchain technology when implementing data processing activities.
What is a Blockchain?
A blockchain is a database in which data is stored and distributed over a large number of computers, and in which all entries into that database (called “transactions”) are visible to all users of the blockchain. It is a technology that can be used to process personal data and is not a processing activity in itself.
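The property that makes this technology distinctive (and, as discussed later, makes erasure difficult) is that each block commits to a hash of the previous block, so altering any recorded transaction invalidates every later link. A minimal Python sketch of that chaining mechanism (the block structure and transaction format are illustrative simplifications, not any particular blockchain’s format):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 fingerprint of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    """Append a block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def is_valid(chain: list) -> bool:
    """Verify that every block still points at its predecessor's hash."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain: list = []
add_block(chain, ["alice pays bob 5"])
add_block(chain, ["bob pays carol 2"])
assert is_valid(chain)

# Tampering with an earlier entry breaks every later link,
# which is why data "entered into the blockchain" cannot quietly be changed.
chain[0]["transactions"] = ["alice pays mallory 500"]
assert not is_valid(chain)
```

Because every participant can recompute these hashes, no single party can rewrite history unnoticed; the same property is what prevents rectification or erasure of data once recorded.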
Scope of the CNIL’s Assessment
The CNIL made it clear that its assessment does not apply to (1) distributed ledger technology (DLT) solutions and (2) private blockchains.
- DLT solutions are not blockchains and are too recent and rare to allow the CNIL to carry out a generic analysis.
- Private blockchains are defined by the CNIL as blockchains under the control of a party that has sole control over who can join the network and who can participate in the consensus process of the blockchain (i.e., the process for determining which blocks get added to the chain and what the current state is). These private blockchains are simply classic distributed databases. They do not raise specific GDPR compliance issues, unlike public blockchains (i.e., blockchains that anyone in the world can read or send transactions to, and expect to see included if valid, and anyone in the world can participate in the consensus process) and consortium blockchains (i.e., blockchains subject to rules that define who can participate in the consensus process or even conduct transactions).
In its assessment, the CNIL first examined the role of the actors in a blockchain network as a data controller or data processor. The CNIL then issued recommendations to minimize privacy risks to individuals (data subjects) when their personal data is processed using blockchain technology. In addition, the CNIL examined solutions to enable data subjects to exercise their data protection rights. Lastly, the CNIL discussed the security requirements that apply to blockchain.
Role of Actors in a Blockchain Network
The CNIL made a distinction between the participants who have permission to write on the chain (called “participants”) and those who validate a transaction and create blocks by applying the blockchain’s rules so that the blocks are “accepted” by the community (called “miners”). According to the CNIL, the participants, who decide to submit data for validation by miners, act as data controllers when (1) the participant is an individual and the data processing is not purely personal but is linked to a professional or commercial activity; and (2) the participant is a legal person and enters data into the blockchain.
If a group of participants decides to implement a processing activity on a blockchain for a common purpose, the participants should identify the data controller upstream, e.g., by (1) creating an entity and appointing that entity as the data controller, or (2) appointing the participant who takes the decisions for the group as the data controller. Otherwise, they could all be considered joint data controllers.
According to the CNIL, data processors within the meaning of the GDPR may be (1) smart contract developers who process personal data on behalf of the participant – the data controller, or (2) miners who validate the recording of the personal data in the blockchain. The qualification of miners as data processors may raise practical difficulties in the context of public blockchains, since that qualification requires miners to execute with the data controller a contract that contains all the elements provided for in Article 28 of the GDPR. The CNIL announced that it is conducting an in-depth analysis of this issue. In the meantime, the CNIL encouraged actors to use innovative solutions enabling them to ensure compliance with the obligations imposed on data processors by the GDPR.
How to Minimize Risks to Data Subjects
- Assessing the appropriateness of using blockchain
As part of the Privacy by Design requirements under the GDPR, data controllers must consider in advance whether blockchain technology is appropriate for implementing their data processing activities. Blockchain technology is not necessarily the most appropriate technology for all processing of personal data, and it may make it difficult for the data controller to ensure compliance with the GDPR, in particular its cross-border data transfer restrictions. In the CNIL’s view, if the blockchain’s properties are not necessary to achieve the purpose of the processing, data controllers should give priority to other solutions that allow full compliance with the GDPR.
If it is appropriate to use blockchain technology, data controllers should use a consortium blockchain that ensures better control of the governance of personal data, in particular with respect to data transfers outside of the EU. According to the CNIL, the existing data transfer mechanisms (such as Binding Corporate Rules or Standard Contractual Clauses) are fully applicable to consortium blockchains and may be implemented easily in that context, while it is more difficult to use these data transfer mechanisms in a public blockchain.
- Choosing the right format under which the data will be recorded
As part of the data minimization requirement under the GDPR, data controllers must ensure that the data is adequate, relevant and limited to what is necessary in relation to the purposes for which the data is processed.
In this respect, the CNIL recalled that the blockchain may contain two main categories of personal data, namely (1) the credentials of participants and miners and (2) additional data entered into a transaction (e.g., diploma, ownership title, etc.) that may relate to individuals other than the participants and miners.
The CNIL noted that it was not possible to further minimize the credentials of participants and miners since such credentials are essential to the proper functioning of the blockchain. According to the CNIL, the retention period of this data must necessarily correspond to the lifetime of the blockchain.
With respect to additional data, the CNIL recommended using solutions in which (1) data in cleartext form is stored outside of the blockchain and (2) only information proving the existence of the data is stored on the blockchain (i.e., cryptographic commitment, fingerprint of the data obtained by using a keyed hash function, etc.).
In situations in which none of these solutions can be implemented, when this is justified by the purpose of the processing and a data protection impact assessment has shown that the residual risks are acceptable, the data could be stored either with a non-keyed hash function or, in the absence of alternatives, “in the clear.”
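The keyed-hash approach the CNIL recommends can be sketched as follows: the cleartext record stays off-chain under the controller’s control, and only an HMAC fingerprint is written to the immutable ledger. A minimal Python illustration (the record content and key handling are illustrative assumptions, not a prescribed implementation):

```python
import hashlib
import hmac
import secrets

def fingerprint(record: bytes, key: bytes) -> str:
    """Keyed hash (HMAC-SHA256) of a record, suitable for on-chain storage."""
    return hmac.new(key, record, hashlib.sha256).hexdigest()

# Off-chain: the data controller keeps the cleartext record and the secret key.
record = b"diploma: Jane Doe, MSc, 2018"  # hypothetical "additional data"
key = secrets.token_bytes(32)

# On-chain: only the fingerprint is recorded on the immutable ledger.
on_chain = fingerprint(record, key)

# While the record and key exist, their holder can prove the data was recorded.
assert fingerprint(record, key) == on_chain

# Without the key, the fingerprint cannot be linked back to the record,
# so a different key yields an unrelated value.
assert fingerprint(record, secrets.token_bytes(32)) != on_chain
```

Deleting the off-chain record and the key, as the CNIL suggests for the verification elements, then leaves only an on-chain value that is practically meaningless, which is how deletion can approximate erasure on an append-only chain.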
How to Ensure that Data Subjects Can Effectively Exercise Their Data Protection Rights
According to the CNIL, the exercise of the right to information, the right of access and the right to data portability does not raise any particular difficulties in the context of blockchain technology (i.e., data controllers may provide notice of the data processing and may respond to data subjects’ requests for access to their personal data or for data portability).
However, the CNIL recognized that it is technically impossible for data controllers to meet data subjects’ requests for erasure of their personal data when the data is entered into the blockchain: once in the blockchain system, the data can no longer be rectified or erased.
In this respect, the CNIL pointed out that technical solutions exist to move towards compliance with the GDPR. This is the case if the data is stored on the blockchain using a cryptographic method (see above). In this case, the deletion of (1) the data stored outside of the blockchain and (2) the verification elements stored on the blockchain, would render the data almost inaccessible.
With respect to the right to rectification of personal data, the CNIL recommended that the data controller enter the updated data into a new block, since a subsequent transaction may cancel the first transaction, even though the first transaction will still appear in the chain. The same solutions as those applicable to requests for erasure could be applied to inaccurate data that must be erased.
The CNIL considered that the security requirements under the GDPR remain fully applicable in the blockchain context.
In the CNIL’s view, the challenges posed by blockchain technology call for a response at the European level. The CNIL announced that it will cooperate with other EU supervisory authorities to propose a robust and harmonized approach to blockchain technology.
On September 27, 2018, the Federal Trade Commission announced a settlement agreement with four companies – IDmission, LLC (“IDmission”), mResource LLC (doing business as Loop Works, LLC) (“mResource”), SmartStart Employment Screening, Inc. (“SmartStart”) and VenPath, Inc. (“VenPath”) – over allegations that each company had falsely claimed to have valid certifications under the EU-U.S. Privacy Shield framework. The FTC alleged that SmartStart, VenPath and mResource continued to post statements on their websites about their participation in the Privacy Shield after allowing their certifications to lapse. IDmission had applied for a Privacy Shield certification but never completed the necessary steps to be certified.
In addition, the FTC alleged that both VenPath and SmartStart failed to comply with a provision under the Privacy Shield requiring companies that cease participation in the Privacy Shield framework to affirm to the Department of Commerce that they will continue to apply the Privacy Shield protections to personal information collected while participating in the program.
As part of the proposed settlements with the FTC, each company is prohibited from misrepresenting their participation in any privacy or data security program sponsored by the government or any self-regulatory or standard-setting organization and must comply with FTC reporting requirements. Further, VenPath and SmartStart must either (1) continue to apply the Privacy Shield protections to personal information collected while participating in the Privacy Shield, (2) protect it by another means authorized by the Privacy Shield framework, or (3) return or delete the information within 10 days of the FTC’s order.
“Companies need to know that if they fail to honor their Privacy Shield commitments, or falsely claim participation in the Privacy Shield framework, we will hold them accountable,” said Andrew Smith, director of the FTC’s Bureau of Consumer Protection. “We have now brought enforcement actions against eight companies related to the Privacy Shield, and we will continue to aggressively enforce the Privacy Shield and other cross-border privacy frameworks.”
Update: On November 19, 2018, the Commission voted to give final approval to the settlements with the four companies.
On September 25, 2018, the French Data Protection Authority (the “CNIL”) published the first results of its factual assessment of the implementation of the EU General Data Protection Regulation (GDPR) in France and in Europe. When making this assessment, the CNIL first recalled the current status of the French legal framework, and provided key figures on the implementation of the GDPR from the perspective of privacy experts, private individuals and EU supervisory authorities. The CNIL then announced that it will adopt new GDPR tools in the near future. Read the full factual assessment (in French).
Upcoming Consolidation of the French Legal Framework
The French Data Protection Act (“the Act”) and its implementing Decree were amended by a law and Decree published respectively on June 21 and August 3, 2018, in order to bring French law in line with the GDPR and implement the EU Data Protection Directive for Police and Criminal Justice Authorities. However, some of the provisions of the Act still remain unchanged and are no longer applicable. In addition, the Act does not mention all new obligations imposed by the GDPR or the new rights of data subjects, and is therefore incomplete. The CNIL recalled that an ordinance is expected to be adopted by the end of this year to re-write the Act and facilitate readability of the French data protection framework.
Gradual Rolling Out of the GDPR by Privacy Experts
The CNIL noted that 24,500 organizations have appointed a data protection officer (“DPO”), which corresponds to 13,000 individual DPOs, as a single DPO may be appointed by several organizations. In comparison, only 5,000 DPOs were appointed under the previous data protection framework. Since May 25, 2018, the CNIL has also received approximately 7 data breach notifications a day, totaling more than 600 data breach notifications affecting 15 million individuals. The CNIL continues to receive a large number of authorization requests in the health sector (more than 100 requests filed since May 25, 2018, in particular for clinical trial purposes).
Individuals’ Unprecedented GDPR Awareness
Since May 25, 2018, the CNIL has received 3,767 complaints from individuals. This represents an increase of 64% compared to the number of complaints received during the same period in 2017, and can be explained by the widespread media coverage of the GDPR and cases such as Cambridge Analytica. EU supervisory authorities are currently handling more than 200 cross-border complaints under the cooperation procedure provided for by the GDPR, and the CNIL is a concerned supervisory authority in most of these cases.
Effective European Cooperation Under the GDPR
The CNIL recalled that a total of 18 GDPR guidelines have been adopted at the EU level and 7 guidelines are currently being drawn up by the European Data Protection Board (“EDPB”) (e.g., guidelines on the territorial scope of the GDPR, data transfers and video surveillance). Further, the IT platform chosen to support cooperation and consistency procedures under the GDPR has been effective since May 25, 2018. With respect to Data Protection Impact Assessments (“DPIAs”), the CNIL has submitted to the EDPB a list of processing operations requiring a DPIA. Once validated by the EDPB, this list and additional guidelines will be published by the CNIL.
In terms of the CNIL’s upcoming actions or initiatives, the CNIL announced that it will shortly propose the following new tools:
- “Referentials” (i.e., guidelines) relating to the processing of personal data for HR and customer management purposes. These referentials are intended to update the CNIL’s well-established doctrine in light of the new requirements of the GDPR. The draft referentials will be open for public consultation, and the CNIL announced its intention to promote them at the EU level once they are finalized.
- A Model Regulation regarding biometric data. According to Article 9(4) of the GDPR, EU Member States may maintain and introduce further conditions, including limitations, with regard to the processing of biometric data. France introduced such conditions by amending the French Data Protection Act in order to allow the processing of biometric data for the purposes of controlling access to a company’s premises and/or devices and apps used by staff members to perform their job duties, provided that the processing complies with the CNIL’s Model Regulation. Compliance with that Model Regulation constitutes an exception to the prohibition on processing biometric data.
- A first certification procedure. In May 2018, the CNIL launched a public consultation on the certification of the DPO, which ended on June 22, 2018. The CNIL will finalize the referentials relating to the certification of the DPO by the end of this month.
- Compliance packs. The CNIL confirmed that it will continue to adopt compliance packs (i.e., guidelines for a particular sector or industry). The CNIL also announced its intention to promote some of these compliance packs at the EU level (such as the compliance pack on connected vehicles) in order to develop a common European doctrine that could be endorsed by the EDPB.
- Codes of conduct. A dozen codes of conduct are currently being prepared, in particular codes of conduct on medical research and cloud infrastructures.
- A massive open online course. This course will help participants familiarize themselves with the fundamental principles of the GDPR.
As reported in BNA Privacy Law Watch, the Office of the Privacy Commissioner of Canada (the “OPC”) is seeking public comment on recently released guidance (the “Guidance”) intended to assist organizations with understanding their obligations under the federal breach notification mandate, which will take effect in Canada on November 1, 2018.
Breach notification in Canada has historically been governed at the provincial level, with only Alberta requiring omnibus breach notification. As we previously reported, effective November 1, organizations subject to the federal Personal Information Protection and Electronic Documents Act (“PIPEDA”) will be required to notify affected individuals and the OPC of security breaches involving personal information “that pose a real risk of significant harm to individuals.” The Guidance, which is structured in a question-and-answer format, is intended to assist companies with complying with the new reporting obligation. The Guidance describes, among other information, (1) who is responsible for reporting a breach, (2) what types of incidents must be reported, (3) how to determine whether there is a “real risk of significant harm,” (4) what information must be included in a notification to the OPC and affected individuals, and (5) an organization’s recordkeeping requirements with respect to breaches of personal information, irrespective of whether such breaches are notifiable. The Guidance also contains a proposed breach reporting form for notifying the OPC pursuant to the new notification obligation.
The OPC is accepting public comment on the Guidance, including on the proposed breach reporting form. The deadline for interested parties to submit comments is October 2, 2018.
On September 4, 2018, the Department of Commerce’s National Institute of Standards and Technology (“NIST”) announced a collaborative project to develop a voluntary privacy framework to help organizations manage privacy risk. The announcement states that the effort is motivated by innovative new technologies, such as the Internet of Things and artificial intelligence, as well as the increasing complexity of network environments and detail of user data, which make protecting individuals’ privacy more difficult. “We’ve had great success with broad adoption of the NIST Cybersecurity Framework, and we see this as providing complementary guidance for managing privacy risk,” said Under Secretary of Commerce for Standards and Technology and NIST Director Walter G. Copan.
The goals for the framework stated in the announcement include providing an enterprise-level approach that helps organizations prioritize strategies for flexible and effective privacy protection solutions and bridge gaps between privacy professionals and senior executives so that organizations can respond effectively to these challenges without stifling innovation. To kick off the effort, the NIST has scheduled a public workshop on October 16, 2018, in Austin, Texas, which will occur in conjunction with the International Association of Privacy Professionals’ “Privacy. Security. Risk. 2018” conference. The Austin workshop is the first in a series planned to collect current practices, challenges and requirements in managing privacy risks in ways that go beyond common cybersecurity practices.
In parallel with the NIST’s efforts, the Department of Commerce’s National Telecommunications and Information Administration (“NTIA”) is “developing a domestic legal and policy approach for consumer privacy.” The announcement stated that the NTIA is coordinating its efforts with the department’s International Trade Administration “to ensure consistency with international policy objectives.”
On August 31, 2018, the California State Legislature passed SB-1121, a bill that delays enforcement of the California Consumer Privacy Act of 2018 (“CCPA”) and makes other modest amendments to the law. The bill now goes to the Governor for signing. The provisions of the CCPA will become operative on January 1, 2020. As we have previously reported, the CCPA introduces key privacy requirements for businesses. The Act was passed quickly by California lawmakers in an effort to remove a ballot initiative of the same name from the November 6, 2018, statewide ballot. The CCPA’s hasty passage resulted in a number of drafting errors and inconsistencies in the law, which SB-1121 seeks to remedy. The amendments to the CCPA are primarily technical, with few substantive changes.
Key amendments to the CCPA include:
- The bill extends by six months the deadline for the California Attorney General (“AG”) to draft and adopt the law’s implementing regulations, from January 1, 2020, to July 1, 2020. (CCPA § 1798.185(a)).
- The bill delays the AG’s ability to bring enforcement actions under the CCPA until six months after publication of the implementing regulations or July 1, 2020, whichever comes first. (CCPA § 1798.185(c)).
- The bill limits the civil penalties the AG can impose to $2,500 for each violation of the CCPA or up to $7,500 per intentional violation, and states that a violating entity will be subject to an injunction. (CCPA § 1798.155(b)).
- Definition of “personal information”: The CCPA includes a number of enumerated examples of “personal information” (“PI”), including IP address, geolocation data and web browsing history. The amendment clarifies that the listed examples would constitute PI only if the data “identifies, relates to, describes, is capable of being associated with, or could be reasonably linked, directly or indirectly, with a particular consumer or household.” (CCPA § 1798.140(o)(1)).
- Private right of action:
- The amendments clarify that a consumer may bring an action under the CCPA only for a business’s alleged failure to “implement and maintain reasonable security procedures and practices” that results in a data breach. (CCPA § 1798.150(c)).
- The bill removes the requirement that a consumer notify the AG once the consumer has brought an action against a business under the CCPA, and eliminates the AG’s ability to instruct a consumer to not proceed with an action. (CCPA § 1798.150(b)).
- GLBA, DDPA, CIPA exemptions: The original text of the CCPA exempted information subject to the Gramm-Leach-Bliley Act (“GLBA”) and Driver’s Privacy Protection Act (“DPPA”), only to the extent the CCPA was “in conflict” with either statute. The bill removes the “in conflict” qualification and clarifies that data collected, processed, sold or disclosed pursuant to the GLBA, DPPA or the California Information Privacy Act is exempt from the CCPA’s requirements. The revisions also exempt such information from the CCPA’s private right of action provision. (CCPA §§ 1798.145(e), (f)).
- Health information:
- Health care providers: The bill adds an exemption for HIPAA-covered entities and providers of health care governed by the Confidentiality of Medical Information Act, “to the extent the provider or covered entity maintains patient information in the same manner as medical information or protected health information,” as described in the CCPA. (CCPA § 1798.145(c)(1)(B)).
- PHI: The bill expands the category of exempted protected health information (“PHI”) governed by HIPAA and the Health Information Technology for Economic and Clinical Health Act to include PHI collected by both covered entities and business associates. The original text did not address business associates. (CCPA § 1798.145(c)(1)(A)).
- Clinical trial data: The bill adds an exemption for “information collected as part of a clinical trial” that is subject to the Federal Policy for the Protection of Human Subjects (also known as the Common Rule) and is conducted in accordance with specified clinical practice guidelines. (CCPA § 1798.145(c)(1)(C)).
- First Amendment protection: The bill adds a provision to the CCPA, which states that the rights afforded to consumers and obligations imposed on businesses under the CCPA do not apply if they “infringe on the noncommercial activities of a person or entity” as described in Art. I, Section 2(b) of the California Constitution, which addresses activities related to the free press. This provision is designed to prevent First Amendment challenges to the law. (CCPA § 1798.150(k)).
- The bill adds to the CCPA’s preemption clause that the law will not apply in the event its application is preempted by, or in conflict with, the U.S. Constitution. The CCPA previously referenced only the California Constitution. (CCPA § 1798.196).
- Certain provisions of the CCPA supersede and preempt laws adopted by local entities regarding the collection and sale of a consumer’s PI by a business. The bill makes such provisions of the Act operative on the date the bill becomes effective.
The California State Legislature is expected to consider more substantive changes to the law when it reconvenes in January 2019.
Recently, the Department of Commerce updated its frequently asked questions (“FAQs”) on the EU-U.S. and Swiss-U.S. Privacy Shield Frameworks (collectively, the “Privacy Shield”) to provide additional clarification on a wide range of topics, including transfers of personal information to third parties, the application of the Privacy Shield Principles to data processors, and the relation of the Clarifying Lawful Overseas Use of Data Act (“CLOUD Act”) to the Privacy Shield. Certain key insights from the updated FAQs are outlined below:
- Data processors. The FAQs state that, when responding to individuals seeking to exercise their rights under the Privacy Shield Principles, a processor should respond pursuant to the instructions of the EU data controller. For example, in order to comply with the Choice Principle, a Privacy Shield-certified organization acting as a processor could, pursuant to the EU controller’s instructions, put individuals in contact with the controller that provides a choice mechanism or offer a choice mechanism directly.
- Onward transfers. The FAQs also provide additional guidance for organizations preparing to come into compliance with the Accountability for Onward Transfer Principle. For example, the FAQs state that organizations may use contracts that fully reflect the requirements of the relevant standard contractual clauses adopted by the European Commission to fulfill the Accountability for Onward Transfer Principle’s contractual requirements.
- CLOUD Act. The FAQs state that the CLOUD Act, which involves data transfers for law enforcement purposes, does not conflict with the Privacy Shield, which is unaffected by the enactment of the law.
View the full Privacy Shield FAQs.
On August 15, 2018, U.S. District Judge Lucy Koh signed an order granting final approval of the record $115 million class action settlement agreed to by Anthem Inc. in June 2017. As previously reported, Judge Koh signed an order granting preliminary approval of the settlement in August 2017.
The settlement arose out of a 2015 data breach that exposed the personal information of more than 78 million individuals, including names, dates of birth, Social Security numbers and health care ID numbers. The terms of the settlement include, among other things, the creation of a pool of funds to provide credit monitoring and reimbursement for out-of-pocket costs for customers.
“The Court finds that the Settlement is fair, adequate, and reasonable,” Judge Lucy Koh wrote in her opinion.
Under the $115 million settlement, $51 million will go to the victims. Of the $51 million, $17 million is earmarked for credit-monitoring services, $15 million will go to customers who suffered out-of-pocket costs from the data breach, and $13 million will go to customers who demonstrate that they already have credit-monitoring services. The judge awarded the plaintiffs’ attorneys $31.05 million in legal fees. Additionally, the consulting firm appointed to administer the settlement received $23 million.
The settlement also requires Anthem to make certain changes to its data security systems and cybersecurity practices, including adopting encryption protocols for sensitive data, for at least three years.
The case is In re Anthem, Inc. Data Breach Litig., N.D. Cal., No. 15-md-02617, final approval 8/15/18.
As reported in BNA Privacy Law Watch, a California legislative proposal would allocate additional resources to the California Attorney General’s office to facilitate the development of regulations required under the recently enacted California Consumer Privacy Act of 2018 (“CCPA”). The CCPA was enacted in June 2018 and takes effect January 1, 2020. It requires the California Attorney General to issue certain regulations prior to the effective date, including regulations (1) updating the categories of data that constitute “personal information” under the CCPA, and (2) governing certain compliance matters (such as how a business may verify a consumer’s request made pursuant to the CCPA). The proposal, which was presented in two budget bills, would allocate $700,000 and five staff positions to the California Attorney General’s office to aid in the development of the required regulations. The legislature is expected to pass the relevant funding measure by August 31, 2018. California Attorney General Xavier Becerra has stated that he expects his office will issue its final rules under the CCPA in June 2019.
On August 13, 2018, the Federal Trade Commission approved changes to the video game industry’s safe harbor guidelines under the Children’s Online Privacy Protection Act (“COPPA”) Rule. COPPA’s “safe harbor” provision enables industry groups to propose self-regulatory guidelines regarding COPPA compliance for FTC approval.
The Entertainment Software Rating Board’s (“ESRB”) proposed modifications were open for notice and comment between April and May of this year. The proposal elicited five comments from individuals and consumer advocates, including a request that the ESRB retain language from the existing program defining street-level geolocation information as personal information. The FTC approved the changes to the ESRB’s COPPA safe harbor program by a 5-0 vote; in announcing its decision, the FTC noted that the approved, revised guidelines include certain changes addressing issues raised by the commenters.
On August 3, 2018, California-based Unixiz Inc. (“Unixiz”) agreed to shut down its “i-Dressup” website pursuant to a consent order with the New Jersey Attorney General, which the company entered into to settle charges that it violated the Children’s Online Privacy Protection Act (“COPPA”) and the New Jersey Consumer Fraud Act. The consent order also requires Unixiz to pay a civil penalty of $98,618.
The charges stemmed from a 2016 data breach in which hackers compromised more than 2.2 million unencrypted usernames and passwords, including those associated with over 24,000 New Jersey residents’ accounts. The New Jersey Attorney General alleged that Unixiz had actual knowledge that the i-Dressup website (which allowed users to “dress, style and make-up animated characters in various outfits” and featured children’s games) had collected the personal information of over 10,000 children and failed to obtain verifiable parental consent for such collection, in violation of COPPA. The New Jersey Attorney General further alleged that the data breach resulted from Unixiz’s failure to appropriately safeguard user account information. Pursuant to the terms of the consent order, Unixiz agreed to pay a $98,618 civil penalty (suspended to $34,000 if, after two years, Unixiz undertakes certain steps to safeguard users’ personal information). Unixiz also agreed to shut down the i-Dressup website, comply with all applicable state and federal laws (including the New Jersey Consumer Fraud Act and COPPA), and implement policies and procedures to safeguard user account information.
On July 19, 2018, the French Data Protection Authority (“CNIL”) announced that it served a formal notice to two advertising startups headquartered in France, FIDZUP and TEEMO. Both companies collect personal data from mobile phones via software development kit (“SDK”) tools integrated into the code of their partners’ mobile apps—even when the apps are not in use—and process the data to conduct marketing campaigns on mobile phones.
The SDK technology enables TEEMO to collect users’ mobile advertising IDs and geolocation data every five minutes. This information is then correlated with the users’ interests as determined by TEEMO’s retail partners and used to send targeted ads to the users’ mobile phones. The SDK technology installed by FIDZUP in partners’ mobile apps collects the MAC addresses and advertising IDs of mobile phones. In parallel, FIDZUP has installed FIDZBOX devices in its partners’ points of sale, which collect data on the MAC addresses and WiFi signal strength of users’ mobile phones. FIDZUP then processes the data to send targeted, geolocated ads to users’ mobile phones whenever users walk by one of its partners’ points of sale.
A Breach of the Obligation to Obtain Users’ Consent
Despite their claims, the CNIL found that the two companies do not obtain users’ consent in accordance with French data protection law and the EU General Data Protection Regulation (“GDPR”). The inspections carried out by the CNIL on several mobile apps revealed that:
- Concerning TEEMO, users are not informed when downloading the mobile apps that an SDK that will collect their data is integrated into the apps.
- It was not possible to download the apps without the SDK technology.
- When users’ consent is sought for the processing of their geolocation data upon installing an app, that consent is limited to the use of the data by the app itself; consent is not sought for the collection of the data for marketing purposes via the SDK tools.
The CNIL therefore concluded that the data processed by TEEMO and FIDZUP for targeted marketing purposes is in fact processed without the users’ knowledge and consent in breach of French law and the GDPR.
A Breach of the Obligation to Define an Adequate Retention Period
The CNIL also found that TEEMO retains geolocation data for 13 months. In the CNIL’s view, this retention period is disproportionate to the purpose of the processing. The CNIL stressed that the use of geolocation devices is especially intrusive, as they constantly track users in real time.
The CNIL’s Requests
The CNIL ordered TEEMO and FIDZUP to obtain users’ valid consent within three months (e.g., via a pop-up containing specific information and a tick-box to signify consent). The CNIL also ordered TEEMO to define a retention period for geolocation data that is proportionate to the purpose of the processing. Failure to do so within the prescribed time limit may result in sanctions, including a fine.
On July 10, 2018, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth LLP submitted formal comments to the European Data Protection Board (the “EDPB”) on its draft guidelines on certification and identifying certification criteria in accordance with Articles 42 and 43 of the GDPR (the “Guidelines”). The Guidelines were adopted by the EDPB on May 25, 2018, for public consultation.
CIPL highlights in its comments that in order to achieve the goals of certifications under the GDPR (i.e., to use certifications as an accountability tool to demonstrate compliance and as a cross-border data transfer mechanism), certification mechanisms should be based on a harmonized EU-wide minimum GDPR certification standard or template which is adaptable to different contexts. Such a baseline standard will enable both EU-wide general GDPR certifications as well as more narrow GDPR certifications customized for specific products, services, processes, industry sectors and/or jurisdictions. CIPL recommends that the EU-wide baseline standard should be developed by the European Commission and/or the EDPB in collaboration with certification bodies and industry.
In addition to basing GDPR certifications on a harmonized EU-wide minimum GDPR certification standard, CIPL underlines in its comments that certification mechanisms should:
- permit the certifying of entire organizational privacy management programs, in addition to specific products, services and processes;
- enable interoperability as much as possible with other, similar EU accountability schemes, as well as certification schemes in other countries and regions, such as the APEC Cross-Border Privacy Rules (“CBPR”) and Privacy Recognition for Processors (“PRP”);
- be constructed on the basis of a holistic approach that enables both national or EU compliance and cross-border compliance as part of one set of certification criteria.
To read the above recommendations in more detail, along with CIPL’s other recommendations on certification and identifying certification criteria in accordance with Articles 42 and 43 of the GDPR, view the full paper.
These comments follow CIPL’s related consultation response to the Article 29 Working Party’s Draft Guidelines on the Accreditation of Certification Bodies under the GDPR.
CIPL’s comments were developed based on input by the private sector participants in CIPL’s ongoing GDPR Implementation Project, which includes more than 92 individual private sector organizations. As part of this initiative, CIPL will continue to provide formal input about other GDPR topics prioritized by the EDPB.
On July 27, 2018, the Justice BN Srikrishna committee, formed by the Indian government in August 2017 with the goal of introducing a comprehensive data protection law in India, issued a report, A Free and Fair Digital Economy: Protecting Privacy, Empowering Indians (the “Committee Report”), and a draft data protection bill called the Personal Data Protection Bill, 2018 (the “Bill”). Noting that the Indian Supreme Court has recognized the right to privacy as a fundamental right, the Committee Report summarizes the existing data protection framework in India, and recommends that the government of India adopt a comprehensive data protection law such as that proposed in the Bill.
The Bill would establish requirements for the collection and processing of personal data, including particular limitations on the processing of sensitive personal data and on the length of time for which personal data may be retained. The Bill would require organizations to appoint a Data Protection Officer and undergo annual third-party audits of their processing of personal data. Further, the Bill would require organizations to implement certain information security safeguards, including (where appropriate) de-identification and encryption, as well as safeguards to prevent misuse of, unauthorized access to, and modification, disclosure or destruction of personal data. The Bill also would require regulator notification and, in certain circumstances, individual notification in the event of a data breach. Noncompliance with the Bill would result in penalties of up to 50 million rupees (approximately USD 728,000) or two percent of global annual turnover for the preceding financial year, whichever is higher.
The Bill has been submitted for consideration to the Ministry of Electronics and Information Technology and is expected to be introduced in Parliament at a later date.
In its most recent cybersecurity newsletter, the U.S. Department of Health and Human Services’ Office for Civil Rights (“OCR”) provided guidance regarding identifying vulnerabilities and mitigating the associated risks of software used to process electronic protected health information (“ePHI”). The guidance, along with additional resources identified by OCR, is outlined below:
- Identifying software vulnerabilities. Every HIPAA-covered entity is required to perform a risk analysis that identifies risks and vulnerabilities to the confidentiality, integrity and availability of ePHI. Such entities must also implement measures to mitigate risks identified during the risk analysis. In its guidance, OCR indicated that mitigation activities could include installing available patches (where reasonable and appropriate) or, where patches are unavailable (such as in the case of obsolete or unsupported software), reasonable compensating controls, such as restricting network access.
- Patching software. Patches may be applied to software and firmware on a wide range of devices, and the installation of vendor patches is typically routine. The installation of such updates, however, may result in unexpected events due to the interconnected nature of computer programs and systems. OCR recommends that organizations install patches for identified vulnerabilities in accordance with their security management processes. In order to help ensure the protection of ePHI during patching, OCR also identifies common steps in patch management as including evaluation, patch testing, approval, deployment, verification and testing.
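The common patch-management steps OCR names can be read as an ordered lifecycle that each patch moves through in turn. The following Python sketch is purely illustrative: the stage names mirror the guidance, but the `PatchStage` enum and `advance` helper are hypothetical constructs of our own, not anything defined by OCR or the HIPAA Security Rule.

```python
from enum import Enum

class PatchStage(Enum):
    """Patch-management steps named in OCR's guidance, in order."""
    EVALUATION = 1                # assess the patch's applicability and risk
    PATCH_TESTING = 2             # test in a non-production environment first
    APPROVAL = 3                  # sign-off per the security management process
    DEPLOYMENT = 4                # roll the patch out to production systems
    VERIFICATION_AND_TESTING = 5  # confirm installation; re-test the systems

# Enum members iterate in definition order, giving the lifecycle sequence.
STAGE_ORDER = list(PatchStage)

def advance(current: PatchStage) -> PatchStage:
    """Move a patch to the next lifecycle stage; error once complete."""
    idx = STAGE_ORDER.index(current)
    if idx == len(STAGE_ORDER) - 1:
        raise ValueError("patch lifecycle already complete")
    return STAGE_ORDER[idx + 1]
```

A tracking system built on this idea would record the current stage for each patch and only allow forward transitions, which is one way to evidence that patches follow the documented security management process.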
In addition to the information contained in the guidance, OCR identified a number of additional resources, which are listed below:
On July 23, 2018, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth LLP issued two new discussion papers on the Central Role of Organizational Accountability in Data Protection. The goal of these discussion papers is to show that organizational accountability is pivotal to effective data protection and essential for the digital transformation of the economy and society, and to emphasize how its many benefits should be actively encouraged and incentivized by data protection authorities (“DPAs”), and law and policy makers around the globe.
The Case for Accountability: How it Enables Effective Data Protection and Trust in Digital Society
The first discussion paper explains how accountability provides the necessary framework and tools for scalable compliance, fosters corporate digital responsibility beyond pure legal compliance, and empowers and protects individuals. It also details the benefits of implementing accountability to individuals, regulators and organizations.
Key areas of focus in the paper include:
- the essential elements of accountability and the different approaches to implementing accountability;
- how accountability applies in similar and different ways to data controllers and processors;
- implementing and demonstrating accountability within an organization (e.g., through comprehensive internal privacy programs or verified or certified accountability mechanisms); and
- the benefits of accountability, both by stakeholder and by type.
Incentivizing Accountability: How Data Protection Authorities and Law Makers Can Encourage Accountability
The second discussion paper explains why and how accountability should be specifically incentivized, particularly by DPAs and law makers. It argues that given the many benefits of accountability for all stakeholders, DPAs and law makers should encourage and incentivize organizations to implement accountability. They should not merely rely on the threat of sanctions to ensure legally required accountability, nor should they leave the implementation of heightened accountability (i.e., accountability beyond what is legally required) to various “internal” incentives of the organizations, such as improved customer trust and competitive advantage. Instead, DPAs and law makers should proactively provide additional external incentives, including on the grounds that accountability provides broader benefits to all stakeholders beyond just the organization itself and specifically helps DPAs carry out their many regulatory tasks.
Key areas of focus in the paper include:
- why accountability measures should be incentivized;
- who should incentivize accountability, namely, DPAs and law and policy makers; and
- how accountability should be incentivized, including examples of what the incentives might be.
On July 17, 2018, the European Union and Japan successfully concluded negotiations on a reciprocal finding of an adequate level of data protection, thereby agreeing to recognize each other’s data protection systems as “equivalent.” This will allow personal data to flow safely between the EU and Japan, without being subject to any further safeguards or authorizations.
This is the first time that the EU and a third country have agreed on a reciprocal recognition of the adequate level of data protection. So far, the EU has adopted only unilateral adequacy decisions covering 12 other jurisdictions—namely, Andorra, Argentina, Canadian organizations subject to PIPEDA, the Faroe Islands, Guernsey, Israel, the Isle of Man, Jersey, New Zealand, Switzerland, Uruguay and the United States (EU-U.S. Privacy Shield)—which allow personal data to flow safely from the EU to those jurisdictions.
On January 10, 2017, the European Commission (“the Commission”) published a communication addressed to the European Parliament and European Council on Exchanging and Protecting Personal Data in a Globalized World. As announced in this communication, the Commission launched discussions on possible adequacy decisions with “key trading partners,” starting with Japan and South Korea in 2017.
The discussions with Japan were facilitated by the amendments made to the Japanese Act on the Protection of Personal Information (Act No. 57 of 2003) that came into force on May 30, 2017. These amendments have modernized Japan’s data protection legislation and increased convergence with the European data protection system.
Key parts of the adequacy finding
Once adopted, the adequacy finding will cover personal data exchanged for commercial purposes between EU and Japanese businesses, as well as personal data exchanged for law enforcement purposes between EU and Japanese authorities, ensuring that in all such exchanges a high level of data protection is applied.
This adequacy finding was decided based on a series of additional safeguards that Japan will apply to EU citizens’ personal data transferred to Japan, including the following measures:
- expanding the definition of sensitive data;
- facilitating the exercise of individuals’ rights of access to and rectification of their personal data;
- increasing the level of protection for onward data transfers of EU data from Japan to a third country; and
- establishing a complaint-handling mechanism, under the supervision of the Japanese data protection authority (the Personal Information Protection Commission), to investigate and resolve complaints from Europeans regarding access to their data by Japanese public authorities.
The EU and Japan will now launch their respective internal procedures for the adoption of the adequacy finding. The Commission plans to adopt its adequacy decision in fall 2018, following the usual procedure for adopting EU adequacy decisions. This involves (1) approval of the draft adequacy decision by the College of EU Commissioners; (2) obtaining an opinion from the EU data protection authorities within the European Data Protection Board; (3) completing a comitology procedure, which requires the European Commission to obtain the green light from a committee composed of representatives of EU Member States; and (4) updating the European Parliament Committee on Civil Liberties, Justice and Home Affairs. Once adopted, this will be the first adequacy decision under the EU General Data Protection Regulation.
This post has been updated.
As reported by Mundie e Advogados, on July 10, 2018, Brazil’s Federal Senate approved a Data Protection Bill of Law (the “Bill”). The Bill, which is inspired by the EU General Data Protection Regulation (“GDPR”), is expected to be sent to the Brazilian President in the coming days.
As reported by Mattos Filho, Veiga Filho, Marrey Jr e Quiroga Advogados, the Bill establishes a comprehensive data protection regime in Brazil and imposes detailed rules for the collection, use, processing and storage of personal data, both electronic and physical.
Key requirements of the Bill include:
- National Data Protection Authority. The Bill calls for the establishment of a national data protection authority which will be responsible for regulating data protection, supervising compliance with the Bill and enforcing sanctions.
- Data Protection Officer. The Bill requires businesses to appoint a data protection officer.
- Legal Basis for Data Processing. Similar to the GDPR, the Bill provides that the processing of personal data may only be carried out where there is a legal basis for the processing, which may include, among other bases, where the processing is (1) done with the consent of the data subject, (2) necessary for compliance with a legal or regulatory obligation, (3) necessary for the fulfillment of an agreement, or (4) necessary to meet the legitimate interest of the data controller or third parties. The legal basis for data processing must be registered and documented. Processing of sensitive data (including, among other data elements, health information, biometric information and genetic data) is subject to additional restrictions.
- Consent Requirements. Where consent of the data subject is relied upon for processing personal data, consent must be provided in advance and must be free, informed and unequivocal, and provided for a specific purpose. Data subjects may revoke consent at any time.
- Data Breach Notification. The Bill requires notification of data breaches to the data protection authority and, in some circumstances, to affected data subjects.
- Privacy by Design and Privacy Impact Assessments. The Bill requires organizations to adopt data protection measures as part of the creation of new products or technologies. The data protection authority will be empowered to require a privacy impact assessment in certain circumstances.
- Data Transfer Restrictions. The Bill places restrictions on cross-border transfers of personal data. Such transfers are allowed (1) to countries deemed by the data protection authority to provide an adequate level of data protection, and (2) where effectuated using standard contractual clauses or other mechanisms approved by the data protection authority.
Noncompliance with the Bill can result in fines of up to two percent of gross sales, limited to 50 million reais (approximately USD 12.9 million) per violation. The Bill will take effect 18 months after it is published in Brazil’s Federal Gazette.
Update: The Bill was signed into law in mid-August and is expected to take effect in early 2020.
On July 5, 2018, the European Parliament issued a nonbinding resolution (“the Resolution”) that calls on the European Commission to suspend the EU-U.S. Privacy Shield unless U.S. authorities can “fully comply” with the framework by September 1, 2018. The Resolution states that the data transfer mechanism does not provide the adequate level of protection for personal data as required by EU data protection law. The Resolution takes particular aim at potential access to EU residents’ personal data by U.S. national security agencies and law enforcement, citing the passage of the CLOUD Act as having “serious implications for the EU, as it is far-reaching and creates a potential conflict with the EU data protection laws.”
The Resolution also cites recent revelations surrounding Facebook and Cambridge Analytica, both Privacy Shield-certified companies, as “highlight[ing] the need for proactive oversight and enforcement actions,” including “systematic checks of the practical compliance of privacy policies with the Privacy Shield principles,” and calls on EU data protection authorities to suspend data transfers for non-compliant companies.
On July 2, 2018, the Federal Trade Commission announced that California company ReadyTech Corporation (“ReadyTech”) agreed to settle FTC allegations that ReadyTech misrepresented that it was in the process of being certified as compliant with the EU-U.S. Privacy Shield (“Privacy Shield”) framework for lawfully transferring consumer data from the European Union to the United States. The FTC finalized the settlement on October 17, 2018.
To join the Privacy Shield, companies must self-certify to the U.S. Department of Commerce their compliance with the Privacy Shield Principles and related requirements. The FTC’s administrative complaint against ReadyTech alleged that the company, which provides online and instructor-led training, falsely claimed on its website to be in the process of complying with the Privacy Shield. In reality, according to the FTC, ReadyTech had begun but never completed the certification process.
This is the FTC’s fourth case enforcing the Privacy Shield. ReadyTech’s settlement agreement provides, in part, that ReadyTech will not misrepresent its participation in any privacy or security program sponsored by a government or any self-regulatory or standard-setting organization.
As reported in BNA Privacy Law Watch, on June 27, 2018, Equifax entered into a consent order (the “Order”) with eight state banking regulators (the “Multi-State Regulatory Agencies”), including those in New York and California, arising from the company’s 2017 data breach that exposed the personal information of 143 million consumers.
Equifax’s key obligations under the terms of the Order include: (1) developing a written risk assessment; (2) establishing a formal and documented Internal Audit Program that is capable of effectively evaluating IT controls; (3) developing a consolidated written Information Security Program and Information Security Policy; (4) improving oversight of its critical vendors and ensuring that sufficient controls are developed to safeguard information; (5) improving standards and controls for supporting the patch management function, including reducing the number of unpatched systems; and (6) enhancing oversight of IT operations as it relates to disaster recovery and business continuity. The Order also requires Equifax to strengthen its Board of Directors’ oversight over the company’s information security program, including regular Board reviews of relevant policies and procedures.
Equifax must also submit to the Multi-State Regulatory Agencies a list of all remediation projects planned, in process or implemented in response to the 2017 data breach, as well as written reports outlining its progress toward complying with the provisions of the Order.
On June 25, 2018, the New York Department of Financial Services (“NYDFS”) issued a final regulation (the “Regulation”) requiring consumer reporting agencies with “significant operations” in New York to (1) register with NYDFS for the first time and (2) comply with the NYDFS’s cybersecurity regulation. Under the Regulation, consumer reporting agencies that reported on 1,000 or more New York consumers in the preceding year are subject to these requirements, and must register with NYDFS on or before September 1, 2018. The deadline for consumer reporting agencies to come into compliance with the cybersecurity regulation is November 1, 2018. In a statement, Governor Andrew Cuomo said, “Oversight of credit reporting agencies ensures that the personal private information of New Yorkers is less vulnerable to the threat of cyber attacks, providing them with peace of mind about their financial future.”
On June 28, 2018, the Governor of California signed AB 375, the California Consumer Privacy Act of 2018 (the “Act”). The Act introduces key privacy requirements for businesses, and was passed quickly by California lawmakers in an effort to remove a ballot initiative of the same name from the November 6, 2018, statewide ballot. We previously reported on the relevant ballot initiative. The Act will take effect January 1, 2020.
Key provisions of the Act include:
- Applicability. The Act will apply to any for-profit business that (1) “does business in the state of California”; (2) collects consumers’ personal information (or on the behalf of which such information is collected) and that alone, or jointly with others, determines the purposes and means of the processing of consumers’ personal information; and (3) satisfies one or more of the following thresholds: (a) has annual gross revenues in excess of $25 million, (b) alone or in combination annually buys, receives for the business’s commercial purposes, sells, or shares for commercial purposes, the personal information of 50,000 or more consumers, households or devices, or (c) derives 50 percent or more of its annual revenue from selling consumers’ personal information (collectively, “Covered Businesses”).
- Definition of Personal Information. Personal information is defined broadly as “information that identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.” This definition of personal information aligns more closely with the EU General Data Protection Regulation’s definition of personal data. The Act includes a list of enumerated examples of personal information, which includes, among other data elements, name, postal or email address, Social Security number, government-issued identification number, biometric data, Internet activity information and geolocation data, as well as “inferences drawn from any of the information identified” in this definition.
- Right to Know
- Upon a verifiable request from a California consumer, a Covered Business must disclose (1) the categories and specific pieces of personal information the business has collected about the consumer; (2) the categories of sources from which the personal information is collected; (3) the business or commercial purposes for collecting or selling personal information; and (4) the categories of third parties with whom the business shares personal information.
- In addition, upon verifiable request, a business that sells personal information about a California consumer, or that discloses a consumer’s personal information for a business purpose, must disclose (1) the categories of personal information that the business sold about the consumer; (2) the categories of third parties to whom the personal information was sold (by category of personal information for each third party to whom the personal information was sold); and (3) the categories of personal information that the business disclosed about the consumer for a business purpose.
- The above disclosures must be made within 45 days of receipt of the request using one of the prescribed methods specified in the Act. The disclosure must cover the 12-month period preceding the business’s receipt of the verifiable request. The 45-day time period may be extended when reasonably necessary, provided the consumer is provided notice of the extension within the first 45-day period. Importantly, the disclosures must be made in a “readily useable format that allows the consumer to transmit this information from one entity to another entity without hindrance.”
- Exemption. Covered Businesses will not be required to make the disclosures described above to the extent the Covered Business discloses personal information to another entity pursuant to a written contract with such entity, provided the contract prohibits the recipient from selling the personal information, or retaining, using or disclosing the personal information for any purpose other than performance of services under the contract. In addition, the Act provides that a business is not liable for a service provider’s violation of the Act, provided that, at the time the business disclosed personal information to the service provider, the business had neither actual knowledge nor reason to believe that the service provider intended to commit such a violation.
- Disclosures and Opt-Out. The Act will require Covered Businesses to provide notice to consumers of their rights under the Act (e.g., their right to opt out of the sale of their personal information), a list of the categories of personal information collected about consumers in the preceding 12 months, and, where applicable, that the Covered Business sells or discloses their personal information. If the Covered Business sells consumers’ personal information or discloses it to third parties for a business purpose, the notice must also include lists of the categories of personal information sold and disclosed about consumers, respectively. Covered Businesses will be required to make this disclosure in their online privacy notice. Covered Businesses must separately provide a clear and conspicuous link on their website that says, “Do Not Sell My Personal Information,” and provide consumers a mechanism to opt out of the sale of their personal information, a decision which the Covered Business must respect. Businesses also cannot discriminate against consumers who opt out of the sale of their personal information, but can offer financial incentives for the collection of personal information.
- Specific Rules for Minors. If a business has actual knowledge that a consumer is less than 16 years of age, the Act prohibits the business from selling that consumer’s personal information unless (1) the consumer is between 13 and 16 years of age and has affirmatively authorized the sale (i.e., opted in); or (2) the consumer is less than 13 years of age and the consumer’s parent or guardian has affirmatively authorized the sale.
- Right to Deletion. The Act will require a business, upon verifiable request from a California consumer, to delete specified personal information that the business has collected about the consumer and direct any service providers to delete the consumer’s personal information. However, there are several enumerated exceptions to this deletion requirement. Specifically, a business or service provider is not required to comply with the consumer’s deletion request if it is necessary to maintain the consumer’s personal information to:
- Complete the transaction for which the personal information was collected, provide a good or service requested by the consumer, or reasonably anticipated, within the context of a business’s ongoing business relationship with the consumer, or otherwise perform a contract with the consumer.
- Detect security incidents; protect against malicious, deceptive, fraudulent or illegal activity; or prosecute those responsible for that activity.
- Debug to identify and repair errors that impair existing intended functionality.
- Exercise free speech, ensure the right of another consumer to exercise his or her right of free speech, or exercise another right provided for by law.
- Comply with the California Electronic Communications Privacy Act.
- Engage in public or peer-reviewed scientific, historical or statistical research in the public interest (when deletion of the information is likely to render impossible or seriously impair the achievement of such research) if the consumer has provided informed consent.
- Enable solely internal uses that are reasonably aligned with the consumer’s expectations based on the consumer’s relationship with the business.
- Comply with a legal obligation.
- Otherwise use the consumer’s personal information, internally, in a lawful manner that is compatible with the context in which the consumer provided the information.
- The Act is enforceable by the California Attorney General and authorizes a civil penalty up to $7,500 per violation.
- The Act provides a private right of action only in connection with “certain unauthorized access and exfiltration, theft, or disclosure of a consumer’s nonencrypted or nonredacted personal information,” as defined in the state’s breach notification law, if the business failed “to implement and maintain reasonable security procedures and practices appropriate to the nature of the information to protect the personal information.”
- In this case, the consumer may bring an action to recover damages up to $750 per incident or actual damages, whichever is greater.
- The statute also directs the court to consider certain factors when assessing the amount of statutory damages, including the nature, seriousness, persistence and willfulness of the defendant’s misconduct, the number of violations, the length of time over which the misconduct occurred, and the defendant’s assets, liabilities and net worth.
Prior to initiating any action against a business for statutory damages, a consumer must provide the business with 30 days’ written notice of the consumer’s allegations. If, within the 30 days, the business cures the alleged violation and provides an express written statement that the violations have been cured, the consumer may not initiate an action for individual or class-wide statutory damages. These limitations do not apply to actions initiated solely for actual pecuniary damages suffered as a result of the alleged violation.
Recently, Louisiana amended its Database Security Breach Notification Law (the “amended law”). Notably, the amended law (1) amends the state’s data breach notification law to expand the definition of personal information and requires notice to affected Louisiana residents within 60 days, and (2) imposes data security and destruction requirements on covered entities. The amended law goes into effect on August 1, 2018.
Key breach notification provisions of the amended law include:
- Definition of Personal Information: Under the amended law, “personal information” is now defined as a resident’s first name or first initial and last name together with one or more of the following data elements, when the name or the data element is not encrypted or redacted: (1) Social Security number; (2) driver’s license number or state identification card number; (3) account number, credit or debit card number, together with any required security code, access code or password that would permit access to the individual’s financial account; (4) passport number; and (5) biometric data, such as fingerprints, voice prints, eye retina or iris, or other unique biological characteristic, that is used to authenticate the individual’s identity.
- Timing: The amended law requires that notice must be made to affected residents in the most expedient time possible and without unreasonable delay, but no later than 60 days from the discovery of a breach. This timing requirement also applies to third parties who are required to notify the owner or licensee of the personal information of a breach.
- Delays: Under the amended law, entities must provide written notification to the Louisiana Attorney General within the 60-day period if notification is delayed due to (1) the entity’s determination that “measures are necessary to determine the scope of the breach, prevent further disclosures and restore the reasonable integrity of the system” or (2) law enforcement’s determination that notification would impede a criminal investigation. The Attorney General will allow an extension after receiving a written explanation of the reasons for delay.
- Substitute Notification: The amended law lowers the bar for substitute notifications in the form of emails, postings on the website and notifications to major statewide media. Specifically, substitute notifications are permitted if (1) the cost of providing notifications would exceed $100,000 (previously the threshold was $250,000); (2) the number of affected individuals exceeds 100,000 (previously the threshold was 500,000); or (3) the entity does not have sufficient contact information.
- Harm Threshold Documentation: Notification is not required if the entity determines that there is no reasonable likelihood of harm to Louisiana residents. The amended law requires that this written determination and supporting documentation be retained for five years from the date of discovery of the breach. The Attorney General may request the documentation.
Key data security and destruction provisions of the amended law include:
- “Reasonable” Security Procedures: The amended law creates a new requirement that entities that conduct business in Louisiana or own or license computerized personal information about Louisiana residents must maintain “reasonable security procedures and practices” to protect personal information. In addition, the security procedures and practices must be “appropriate to the nature of the information.” The amended law does not describe specifically what practices would meet these standards.
- Data Destruction Requirement: The amended law creates a new requirement that, when Louisiana residents’ personal information owned or licensed by a business is “no longer to be retained,” “all reasonable steps” must be taken to destroy it. For instance, the personal information must be shredded or erased, or the personal information must be otherwise modified to “make it unreadable or undecipherable.”
Separately, on May 15, 2018, SB127 was signed by the governor and took immediate effect. The bill prohibits credit reporting agencies from charging a fee for placing, reinstating, temporarily lifting or revoking a security freeze.
On June 6, 2018, the U.S. Court of Appeals for the Eleventh Circuit vacated a 2016 Federal Trade Commission (“FTC”) order compelling LabMD to implement a “comprehensive information security program that is reasonably designed to protect the security, confidentiality, and integrity of personal information collected from or about consumers.” The Eleventh Circuit agreed with LabMD that the FTC order was unenforceable because it did not direct the company to stop any “unfair act or practice” within the meaning of Section 5(a) of the Federal Trade Commission Act (the “FTC Act”).
The case stems from allegations that LabMD, a now-defunct clinical laboratory for physicians, failed to protect the sensitive personal information (including medical information) of consumers, resulting in two specific security incidents. One such incident occurred when a third party informed LabMD that an insurance-related report, which contained personal information of approximately 9,300 LabMD clients (including names, dates of birth and Social Security numbers), was available on a peer-to-peer (“P2P”) file-sharing network.
Following an FTC appeal process, the FTC ordered LabMD to implement a comprehensive information security program that included:
- designated employees accountable for the program;
- identification of material internal and external risks to the security, confidentiality and integrity of personal information;
- reasonable safeguards to control identified risks;
- reasonable steps to select service providers capable of safeguarding personal information, and requiring them by contract to do so; and
- ongoing evaluation and adjustment of the program.
In its petition for review of the FTC order, LabMD asked the Eleventh Circuit to decide (1) whether its alleged failure to implement reasonable data security practices constituted an unfair practice within the meaning of Section 5 of the FTC Act and (2) whether the FTC’s order was enforceable if it did not direct LabMD to stop committing any specific unfair act or practice.
The Eleventh Circuit assumed, for purposes of its ruling, that LabMD’s failure to implement a reasonably designed data-security program constituted an unfair act or practice within the meaning of Section 5 of the FTC Act. However, the court held that the FTC’s cease and desist order, which was predicated on LabMD’s general negligent failure to act, was not enforceable. The court noted that the prohibitions contained in the FTC’s cease and desist orders and injunctions “must be stated with clarity and precision,” otherwise they may be unenforceable. The court found that in LabMD’s case, the cease and desist order contained no prohibitions nor instructions to the company to stop a specific act or practice. Rather, the FTC “command[ed] LabMD to overhaul and replace its data-security program to meet an indeterminable standard of reasonableness.” The court took issue with the FTC’s scheme of “micromanaging,” and concluded that the cease and desist order “mandate[d] a complete overhaul of LabMD’s data-security program and [said] precious little about how this [was] to be accomplished.” The court also noted that the FTC’s prescription was “a scheme Congress could not have envisioned.”
On May 31, 2018, the Federal Trade Commission published on its Business Blog a post addressing the easily missed data deletion requirement under the Children’s Online Privacy Protection Act (“COPPA”).
The post cautions that companies must review their data policies in order to comply with the data retention and deletion rule. Under Section 312.10 of the COPPA Rule, an online service operator may retain personal information of a child “for only as long as is reasonably necessary to fulfill the purposes for which the information was collected.” After that, the operator must delete the information, taking reasonable measures to ensure secure deletion.
The FTC explains that a thorough review of data retention policies is crucial for compliance, as the deletion requirement is triggered without an express request from parents. Companies must verify, among other items, when the data ceases to be necessary for the initial purpose for which it was collected, and what they do with the data at that point. For instance, the FTC illustrates, a subscription-based children’s app provider would want to ask what it does with the data when a parent closes an account, a subscription is not renewed or an account becomes inactive. If the information is still necessary for billing purposes, the company must determine how much longer it needs the information.
The FTC provides the following questions that companies should ask to ensure compliance:
- What types of personal information do you collect from children?
- What is your stated purpose for collecting the information?
- How long do you need to retain the information for the initial purpose?
- Does the purpose for using the information end with an account deletion, subscription cancellation or account inactivity?
- When it’s time to delete information, are you doing it securely?
On April 11, 2018, Arizona amended its data breach notification law (the “amended law”). The amended law will require persons, companies and government agencies doing business in the state to notify affected individuals within 45 days of determining that a breach has resulted in or is reasonably likely to result in substantial economic loss to affected individuals. The prior law required notification only “in the most expedient manner possible and without unreasonable delay.” The amended law also broadens the definition of personal information and requires regulatory notice and notice to the consumer reporting agencies (“CRAs”) under certain circumstances.
Key provisions of the amended law include:
- Definition of Personal Information. Under the amended law, the definition of “personal information” now includes an individual’s first name or initial and last name in combination with one or more of the following “specified data elements”: (1) Social Security number; (2) driver’s license or non-operating license number; (3) a private key that is unique to an individual and that is used to authenticate or sign an electronic record; (4) financial account number or credit or debit card number in combination with any required security code, access code or password that would allow access to the individual’s financial account; (5) health insurance identification number; (6) medical or mental health treatment information or diagnoses by a health care professional; (7) passport number; (8) taxpayer identification or identity protection personal identification number issued by the Internal Revenue Service; and (9) unique biometric data generated from a measurement or analysis of human body characteristics to authenticate an individual when the individual accesses an online account. The amended law also defines “personal information” to include “an individual’s user name or e-mail address, in combination with a password or security question and answer, which allows access to an online account.”
- Harm Threshold. Pursuant to the amended law, notification to affected individuals, the Attorney General and the CRAs is not required if the breach has not resulted in or is not reasonably likely to result in substantial economic loss to affected individuals.
- Notice to the Attorney General and Consumer Reporting Agencies. If the breach requires notification to more than 1,000 individuals, notification must also be made to the Attorney General and the three largest nationwide CRAs.
- Timing. Notifications to affected individuals, the Attorney General and the CRAs must be issued within 45 days of determining that a breach has occurred.
- Substitute Notice. Where the cost of making notifications would exceed $50,000, the number of affected individuals exceeds 100,000, or there is insufficient contact information for notice, the amended law now requires that substitute notice be made by (1) sending a written letter to the Attorney General demonstrating the facts necessary for substitute notice and (2) conspicuously posting the notice on the breached entity’s website for at least 45 days. Under the amended law, substitute notice no longer requires email notice to affected individuals or notification to major statewide media.
- Penalty Cap. The Attorney General may impose up to $500,000 in civil penalties for knowing and willful violations of the law in relation to a breach or series of related breaches. The Attorney General also is entitled to recover restitution for affected individuals.
On May 1, 2018, the Information Security Technology – Personal Information Security Specification (the “Specification”) went into effect in China. The Specification is not binding and cannot be used as a direct basis for enforcement. However, enforcement agencies in China can still use the Specification as a reference or guideline in their administration and enforcement activities. For this reason, the Specification should be taken seriously as a best practice in personal data protection in China, and should be complied with where feasible.
The Specification constitutes a best practices guide for the collection, retention, use, sharing and transfer of personal information, and for the handling of related information security incidents. It includes (without limitation) basic principles for personal information security, notice and consent requirements, security measures, rights of data subjects and requirements related to internal administration and management. The Specification establishes a definition of sensitive personal information, and provides specific requirements for its collection and use.
Read our previous blog post from January 2018 for a more detailed description of the Specification.
The Belgian Privacy Commission (the “Belgian DPA”) recently released a Recommendation (in French and Dutch) on Data Protection Impact Assessment (“DPIA”) and the prior consultation requirements under Articles 35 and 36 of the EU General Data Protection Regulation (“GDPR”) (the “Recommendation”). The Recommendation aims to provide guidance on the core elements and requirements of a DPIA, the different actors involved and specific provisions.
Key takeaways from the Recommendation are summarized below:
- Why proceed to a DPIA? The Belgian DPA states that the obligation to conduct a DPIA in certain circumstances should be understood in light of two central principles of the GDPR, namely the principle of accountability and the risk-based approach.
- When is a DPIA required? The Belgian DPA indicates that carrying out a DPIA is not mandatory for every processing operation. Instead, a DPIA is only required where a type of processing is “likely to result in a high risk to the rights and freedoms of natural persons.” The Belgian DPA refers to the Guidelines of the Article 29 Working Party (“Working Party”) for such assessment and, in particular, to the nine criteria set out in the Guidelines for determining whether the processing of personal data is likely to create a high risk to the rights and freedoms of individuals. According to the Belgian DPA, if two of these nine criteria are met, a DPIA must be conducted.
- When should a DPIA be conducted? The Belgian DPA stresses that a DPIA must be carried out before the processing of personal data begins, and serves as a tool to help make decisions concerning the processing.
- What are the essential elements of a DPIA? A DPIA must contain a systematic description of the contemplated processing and its purposes, including at a minimum a clear description of the processing, the personal data involved, the categories of recipients, the retention period of the data and the media (e.g., software, network, paper, etc.) on which the data are stored. The DPIA must also include an evaluation of the necessity and proportionality of the processing activities with regard to the purposes of the processing, taking into account several criteria. Additionally, the DPIA must include a risk assessment covering the identification, analysis and evaluation of the risks. To conduct such an assessment, companies may choose any method, as long as it leads to an objective evaluation of the risks; however, the Belgian DPA recommends favoring existing risk management methods. Finally, the DPIA must include the measures anticipated to address those risks, such as the safeguards, security measures and tools implemented to ensure the protection of the data and compliance with the GDPR.
- Prior consultation of the Supervisory Authorities (“SAs”). The Belgian DPA states that the GDPR requires a prior consultation of the SAs only when the residual risk is high. If the risks can be mitigated, then a prior consultation is not mandatory.
The Belgian DPA also makes additional recommendations, including inter alia:
- Similar or joint processing activities. A single DPIA could be used to assess multiple processing operations that are similar in terms of nature, scope, context, purpose and risks.
- Monitoring and review. The controller should, if necessary, conduct a periodic review of the processing activity to assess whether the processing is consistent with the DPIA that was performed. Such a review must at least take place where there is a modification of the risk resulting from the processing operations.
- Preexisting processing. For processing activities initiated prior to May 25, 2018, conducting a DPIA is only required if the risks change after May 25, 2018 (e.g., a new technology is used or personal data are used for another purpose). However, the Belgian DPA recommends, as a best practice, conducting DPIAs for existing processing activities as well if they are likely to result in a high risk to the rights and freedoms of individuals.
Finally, the Recommendation includes annexes:
- Annex 1: The Belgian DPA recommends some minimal characteristics for appropriate risk management.
- Annex 2: The Belgian DPA provides a draft list of processing activities requiring a DPIA. The list includes, inter alia, processing of biometric data for the purpose of identifying individuals in a public area, collecting personal data from third parties for the purpose of making decisions (including to refuse or terminate) regarding a contract to which an individual is party, large-scale processing of personal data from vulnerable individuals (e.g., children), or large-scale processing of personal data where individuals’ behavior is observed, collected, established or influenced in a systematic manner and using automated means, including for advertising purposes.
- Annex 3: The Belgian DPA provides a draft list of processing activities that are exempt from a DPIA, including, inter alia, processing activities by private entities which are necessary to meet their legal obligations, subject to conditions, the processing of personal data for payroll purposes and HR management, and the processing of personal data for client and vendor management purposes, subject to certain conditions.
As reported in the Hunton Nickel Report:
Recent press reports indicate that a cyber attack disabled the third-party platform used by oil and gas pipeline company Energy Transfer Partners to exchange documents with other customers. The attack’s effects were largely contained because no other systems were impacted, most notably industrial controls for critical infrastructure. However, the attack comes on the heels of an FBI and Department of Homeland Security (“DHS”) alert warning of Russian attempts to use tactics including spearphishing, watering hole attacks, and credential gathering to target industrial control systems throughout critical infrastructure, as well as an indictment against Iranian nationals who used similar tactics to attack private, education, and government institutions, including the Federal Energy Regulatory Commission (“FERC”). These incidents raise questions about cybersecurity across the U.S. pipeline network.
Federal oversight of pipeline safety and security is split between the Department of Transportation’s Pipeline and Hazardous Materials Safety Administration (“PHMSA”) and DHS’s Transportation Security Administration (“TSA”). PHMSA and TSA signed a Memorandum of Understanding in 2004, which has been continually updated, coordinating activity under their respective jurisdictions over pipelines. Pipeline security activities within TSA are led by the Pipeline Security Division.
Notably, the Implementing Recommendations of the 9/11 Commission Act of 2007, codified at 6 U.S.C. 1207(f), authorizes TSA to promulgate pipeline security regulations if TSA determines that doing so is necessary. To date, TSA has opted not to issue pipeline security regulations, instead preferring a collaborative approach with industry through its Pipeline Security Guidelines, which were updated just last month. Nevertheless, growing risks are leading to calls for mandatory oil and gas pipeline cybersecurity regulations. Some assert that pipelines should adopt a regime similar to electric grid regulations under Critical Infrastructure Protection (“CIP”) Standards, issued by the North American Electric Reliability Corporation (“NERC”) and approved by FERC.
Containing numerous critical infrastructure elements across a vast and far-flung network that carries a commodity critical to public welfare, the U.S. oil and gas pipeline network shares many characteristics with the electric grid. Furthermore, the growing interconnection of information systems, as well as of the economy in general, increases the potential for events in one sector, particularly energy, to have cross-sectoral impacts. For these reasons, there is legitimate concern that a cyber attack leading to an oil and gas pipeline disruption could have wide-reaching effect, especially in light of the electricity subsector’s growing reliance on natural gas for generation.
However, there are important differences between the electric grid and the pipeline system, most notably in the risk of cascading impacts, that bear on whether regulations setting a cybersecurity baseline are appropriate. Events at an individual pipeline can cascade upstream and downstream in the oil and gas production process, as well as beyond to other sectors. Yet the electricity grid, where systems are more closely tied together, is unique in that a localized event can quickly cascade into failures of other operational technology across the grid. This was evidenced by blackouts throughout the Northeast in the 1970s and 2003, which were responsible for spurring the creation of NERC and its reliability standards. NERC CIP Standards, which apply only to systems connected to the electric grid, provide assurance to electricity subsector critical infrastructure owners and operators that other connected systems must adhere to a cybersecurity “floor”: they require measures to mitigate the risk that the actions of one grid participant will cascade and damage the systems of others on the grid. The same dynamic is not present for oil and gas pipelines, where cybersecurity regulations may instead divert resources away from operational security and toward pure compliance.
Concerns about electric-gas coordination may point to issues beyond cybersecurity, and toward the electricity subsector’s growing vulnerability to common-mode disruption. As noted by NERC after its GridEx IV security exercise, meeting challenges in the electric grid’s ever-evolving threat environment may require “consider[ing] whether the diversity of fuel sources (today and into the future) presents a vulnerability to common mode failures or disruptions.”
It should be noted that certain oil and gas facilities, though not necessarily pipelines, are already subject to cybersecurity requirements under DHS’s Chemical Facility Anti-Terrorism Standards (“CFATS”). The CFATS Risk Based Performance Standard 8 outlines cybersecurity measures subject to DHS review during a CFATS inspection.
There are, of course, steps that oil and gas companies can take beyond regulatory compliance to mitigate the risk of cyber attack. In December 2016, after activists tampered with certain pipelines, PHMSA and TSA issued a joint notice discussing steps to harden SCADA systems on pipeline operational technology, including segregating the control system network from the corporate network, limiting remote access to control systems, and enhancing user access controls. In addition, all companies, including those in the oil and gas subsector and beyond, should consider engaging in an efficient and highly tactical incident response planning process that incorporates all necessary stakeholders across the enterprise, including those from both information technology and legal.
Oil and gas companies seeking further risk mitigation should also consider certifying or designating their cybersecurity programs under the Support Anti-Terrorism by Fostering Effective Technologies Act (“SAFETY Act”). Overseen by the DHS SAFETY Act Office, the act provides significant liability protections where an approved technology or service is deployed to counteract an “act of terrorism.” Such an act of terrorism is determined by DHS, need not be politically motivated, and can include a cyber attack. While not necessarily applicable to lower-scale cybersecurity incidents (by comparison, the Boston Marathon bombing was not designated an act of terrorism), SAFETY Act liability protections can mitigate the risk posed to companies by a cyber attack with catastrophic consequences.
The Federal Trade Commission has modified its 2017 settlement with Uber Technologies, Inc. (“Uber”) after learning of an additional breach that was not taken into consideration during its earlier negotiations with the company. The modifications are based on the fact that Uber failed to notify the FTC of a November 2016 breach, which took place during the time that the FTC was investigating an earlier, 2014 breach. The 2016 breach occurred when intruders used an access key that an Uber engineer had posted on GitHub to download more than 47 million user names, including related email addresses or phone numbers, as well as more than 600,000 drivers’ names and license numbers. The FTC alleged that after Uber learned of the breach, it paid the intruders a $100,000 ransom through its “bug bounty” program. The bug bounty program is intended to reward responsible disclosure of security vulnerabilities.
The revised proposed agreement goes beyond the FTC’s original settlement, which mandated that Uber implement a comprehensive privacy program. The expanded FTC order would require Uber to address:
- software design, development and testing;
- how the company reviews and responds to third-party security vulnerability reports; and
- prevention, detection and response to attacks, intrusions or systems failures.
Uber also would be required to report to the FTC any incident where the company is required to notify any U.S. government entity about unauthorized access to any consumer’s information.
Update: On October 26, 2018, the FTC gave final approval to the settlement with Uber.
On March 29, 2018, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth LLP submitted formal comments to the Article 29 Working Party (the “Working Party”) on its draft guidelines on the accreditation of certification bodies under the GDPR (the “Guidelines”). The Guidelines were adopted by the Working Party on February 6, 2018, for public consultation.
CIPL emphasized that one overarching goal for GDPR certifications must be that they are scalable to organizations of all sizes, including micro-, small- and medium-sized enterprises. This goal also must be reflected in the accreditation standards for certification bodies, so that sufficient certification bodies capable of certifying these types of enterprises will be accredited. To that end, CIPL’s comments highlight that the GDPR provides for more than one route towards an appropriate accreditation standard (i.e., accreditation that builds on an existing system of national accreditation bodies that operate under established ISO standards, and accreditation by supervisory authorities). CIPL believes both routes towards accreditation of certification bodies will have useful roles to play within their respective areas of core competency, and that the accreditations by supervisory authorities should specifically ensure flexibility, scalability and interoperability with similar certification schemes in other regions.
In its comments to the Guidelines, CIPL recommends that the Working Party make several changes or clarifications as follows:
- Clarify that when supervisory authorities accredit certification bodies under Art. 43(1)(a), they should do so under a common EU-wide accreditation standard approved by the European Data Protection Board (“EDPB”) that takes into account requirements adopted by the EU Commission (the “Commission”) in accordance with Art. 43(8) of the GDPR;
- Underline the EDPB’s responsibility to establish independent assessment criteria for reviewing supervisory authority-submitted accreditation criteria, in order to maintain comparability and consistency across the EU;
- Clarify that ISO 17065 should be viewed as instructive, but not mandatory, for supervisory authorities, the EDPB or the Commission as they develop or approve accreditation requirements for certification bodies;
- Highlight that the APEC Accountability Agent Recognition Criteria are a good model for consideration in connection with accreditation standards to be developed by supervisory authorities, the EDPB or the Commission;
- Underscore that ISO 17065 should be applied flexibly by national accreditation bodies to further the scalability goals of the GDPR with respect to micro-, small- and medium-sized enterprises and to facilitate consistency with standards developed or approved by the supervisory authorities and/or the EDPB;
- Emphasize that the additional requirements supervisory authorities develop for accreditations by national accreditation bodies under Art. 43(1)(b) should also take into account scalability and the needs of micro-, small- and medium-sized enterprises.
To read the above recommendations in more detail, along with CIPL’s other recommendations on the accreditation of certification bodies under the GDPR, view the full paper.
CIPL’s comments were developed based on input by the private sector participants in CIPL’s ongoing GDPR Implementation Project, which includes more than 90 individual private sector organizations. As part of this initiative, CIPL will continue to provide formal input about other GDPR topics prioritized by the Working Party.
On March 20, 2018, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth LLP issued a factsheet outlining relevant GDPR provisions for negotiations surrounding the proposed ePrivacy Regulation (the “Factsheet”).
The Factsheet is designed for policymakers and other stakeholders involved in the development process of the new ePrivacy Regulation who are not deeply familiar with the GDPR. As the ePrivacy Regulation aims to complement and particularize the GDPR, an understanding of the key concepts and definitions is crucial to ensure the ePrivacy Regulation’s effective development.
The Factsheet explains key GDPR terms and concepts that are pertinent to the ePrivacy Regulation, including:
- The definitions of fundamental GDPR terms, such as personal data, data processing and the role of controllers and processors;
- Information on the territorial scope and reach of the GDPR;
- An outline of the six key data protection principles and the legal bases for processing;
- A description of the various rights of data subjects under the GDPR;
- Information on the risk-based approach and data privacy impact assessments;
- An explanation of privacy by design and by default, and GDPR security requirements; and
- Information on the concept of the lead supervisory authority, remedies and sanctions.
Also included is an annex of the most relevant GDPR recitals and provisions.
To read more about the concepts outlined above and the other relevant GDPR provisions relating to the proposed ePrivacy Regulation, view the full Factsheet.
The Canadian government recently published a cabinet order setting November 1, 2018, as the effective date for the breach notification provisions of the Digital Privacy Act. As of that date, businesses that experience a “breach of security safeguards” will be required to notify affected individuals, as well as the Privacy Commissioner and any other organization or government institution that might be able to reduce the risk of harm resulting from the breach.
Canada has had mandatory breach notification regulations at the provincial level, and many companies have also voluntarily reported breaches to the federal Privacy Commissioner, so most organizations should be well-equipped to meet the November 1 compliance deadline.
On March 26, 2018, the U.S. Department of Commerce posted an update on the actions it has taken between January 2017 and March 2018 to support the EU-U.S. and Swiss-U.S. Privacy Shield Frameworks (collectively, the “Privacy Shield”). The update details measures taken in support of commercial and national security issues relating to the Privacy Shield.
With regard to the commercial aspects, the Department of Commerce has taken measures to ensure:
- An enhanced certification process through implementing more rigorous company reviews and reducing opportunities for false claims;
- Additional monitoring of companies through expanded compliance reviews, random spot checks for certified organizations and proactive checks for false claims online;
- Active complaint resolution by confirming a full list of arbitrators to ensure that EU individuals have recourse to arbitration. A call for applications for inclusion on the list of arbitrators for the arbitration mechanism for Swiss individuals is currently open until April 30, 2018;
- Strengthened enforcement through continued oversight by the Federal Trade Commission (“FTC”) and the nomination of four FTC Commissioners. The FTC announced three Privacy Shield-related false claims actions in September 2017; and
- Expanded outreach and education through consistent official reaffirmation of the U.S. commitment to the Privacy Shield, and the development of user friendly guidance on the Privacy Shield Framework for individuals, businesses and authorities.
In relation to national security, the Department of Commerce has taken measures to ensure:
- Robust limitations and safeguards regarding national security endeavors, including confirmation that Presidential Policy Directive 28 remains in place without amendment, and a reaffirmation by the Intelligence Community of its commitment to civil liberties, privacy and transparency through the updating and re-issuing of Intelligence Community Directive 107;
- Independent oversight through the nomination of three individuals to the Privacy and Civil Liberties Oversight Board with the aim of restoring the independent agency to quorum status;
- Individual redress through the creation of the Privacy Shield Ombudsperson mechanism which provides EU and Swiss individuals with an independent review channel in relation to the transfer of their data to the U.S.;
- Consideration of the Privacy Shield framework in U.S. legal developments. Congress, for example, in reauthorizing the Foreign Intelligence Surveillance Act’s Section 702, maintained all elements on which the European Commission’s Privacy Shield adequacy determination was based, and enhanced the advisory and oversight functions of the Privacy and Civil Liberties Oversight Board. The government also has informed the European Commission about material developments in the law relevant to the Privacy Shield.
On March 26, 2018, the Centre for Information Policy Leadership at Hunton & Williams LLP and AvePoint released its second Global GDPR Readiness Report (the “Report”), detailing the results of a joint global survey launched in July 2017 concerning organizational preparedness for implementing the EU General Data Protection Regulation (“GDPR”). The Report tracks the GDPR implementation efforts of over 235 multinational organizations, and builds on the findings of the first Global GDPR Readiness Report by providing insights on key changes in readiness levels from 2016 to 2017.
Key highlights of the report include:
- Over half of all respondents have committed additional budget to GDPR implementation, with increases ranging from hundreds of thousands of dollars to upwards of $50 million.
- While technology tools and software are the number one priority for GDPR-focused budget spending, continued reliance on manual methods for building and maintaining data processing inventories, as well as low usage rates of automated software to identify and tag data, indicate that much work is still to be done to assess and procure these solutions.
- Almost a quarter of organizations have not yet implemented any processes to update their controller-processor contracts or review or renegotiate existing agreements. Organizations will have to closely look at their contracts ahead of May 25, 2018, to ensure they include the new required terms introduced by the GDPR.
- Despite little information being available on new GDPR transfer mechanisms such as adequate safeguards or certifications, for the second year in a row, respondents indicated that they are likely to use these mechanisms, with almost one-fifth of organizations reporting they will rely on the latter post-GDPR.
- With regard to security, the majority of organizations have put internal reporting procedures and incident response plans in place. However, organizations still have some work to do in implementing other data breach response procedures, such as conducting dry runs and retaining PR and media consultants.
- Legitimate interest remains the area most in need of clarity under the GDPR, followed by data protection impact assessments and risk, breach notification, notice and consent, and privacy by design.
To read more about these highlights and other insights of the study, please view the full report.
As reported in BNA Privacy Law Watch, on March 21, 2018, South Dakota enacted the state’s first data breach notification law. The law will take effect on July 1, 2018, and includes several key provisions:
- Definitions of Personal Information and Protected Information. The law defines personal information as a person’s first name or first initial and last name in combination with any one or more of the following data elements: (1) Social Security Number; (2) driver’s license number or other unique identification number created or collected by a government body; (3) account, credit card or debit card number, in combination with any required security code, access code, password, routing number, PIN or any additional information that would permit access to a person’s financial account; (4) health information; and (5) an identification number assigned to a person by the person’s employer in combination with any required security code, access code, password, or biometric data generated from measurements or analysis of human body characteristics for authentication purposes. The law further defines “protected information” as (1) a username or email address in combination with a password, security question answer, or other information that permits access to an online account; and (2) account number or credit or debit card number, in combination with any required security code, access code, or password that permits access to a person’s financial account. Notably, the definition of “protected information” does not include a person’s name.
- Breach Notification Requirement. The law requires notification to affected individuals (and, in certain circumstances, the Attorney General, as explained below) in the event of unauthorized acquisition of unencrypted computerized data (or encrypted computerized data and the encryption key) by any person that materially compromises the security, confidentiality or integrity of personal information or protected information.
- Content and Method of Notice. The law does not contain content requirements for the notice. Notice may be provided (1) in writing; (2) electronically, if the notice is consistent with the provisions of E-SIGN; or (3) via substitute notice if the cost of providing notice would exceed $250,000, the number of affected individuals exceeds 500,000, or the entity does not have sufficient contact information for affected individuals. Substitute notice must consist of (1) email notice, if the entity has an email address for affected individuals; (2) conspicuous posting on the entity’s website; and (3) notification to statewide media.
- Timing. Notification to affected individuals is required within 60 days of discovery of the breach.
- Harm Threshold. The law contains a harm threshold, pursuant to which notification is not required if, following an appropriate investigation and notice to the Attorney General, the entity reasonably determines that the breach will not likely result in harm to the affected person(s).
- Notice to the Attorney General. The law requires notification to the Attorney General of any breach affecting more than 250 South Dakota residents.
- Notice to the Consumer Reporting Agencies. In the event notification to affected individuals is required, the law also requires notification to the nationwide consumer reporting agencies of the timing, distribution and content of the notice to individuals.
- Penalties for Non-Compliance. A violation of the breach notification law is considered a deceptive act under the state’s consumer protection laws. The South Dakota Attorney General noted that this violation has the effect of creating a private right of action. In addition, the Attorney General is authorized to enforce the breach notification law and may impose a fine of up to $10,000 per day per violation.
With this enactment, Alabama remains the sole U.S. state without a breach notification law.
On March 14, 2018, the Department of Justice and the Securities and Exchange Commission (“SEC”) announced insider trading charges against a former chief information officer (“CIO”) of a business unit of Equifax, Inc. According to prosecutors, the CIO exercised options and sold his shares after he learned of a cybersecurity breach and before that breach was publicly announced. Equifax has indicated that approximately 147.9 million consumers had personal information that was compromised.
Equifax’s board of directors had previously formed a special committee to investigate trades by certain senior executives that occurred after the breach. Although the timing of those trades drew significant scrutiny from the press, investors and others, the special committee concluded that the executives were not aware of the breach when they sold their shares. It does not appear that the special committee’s investigation covered the CIO’s trades.
According to the SEC’s complaint, the CIO—who was the leading candidate to be the company’s next global CIO—allegedly used confidential information entrusted to him in the course of his employment to conclude that Equifax had suffered a serious breach. The SEC’s investigation relied on a detailed analysis of the CIO’s emails and text messages, and also found that the CIO used a search engine to find information on the Internet concerning the September 2015 cybersecurity breach of Experian, another one of the major credit bureaus, and the impact that breach had on Experian’s stock price. The search terms used by the CIO included: (1) “Experian breach”, (2) “Experian stock price 9/15/2015”, and (3) “Experian breach 2015.”
The SEC alleges that shortly after running these internet searches, but before Equifax’s public disclosure of this data breach, the CIO exercised all of his vested Equifax stock options and then sold the underlying shares, receiving proceeds from the sale of over $950,000. According to the SEC, by selling before public disclosure of the Equifax data breach, the CIO also avoided more than $117,000 in losses that he would have suffered had he not sold until after the news of the breach became public.
This case comes on the heels of the SEC’s recently issued interpretive guidance on cybersecurity. In its guidance, the SEC warned that “information about a company’s cybersecurity risks and incidents may be material nonpublic information, and directors, officers, and other corporate insiders would violate the antifraud provisions if they trade the company’s securities in breach of their duty of trust or confidence while in possession of that material nonpublic information.”
These charges are also an important reminder to companies to (1) educate employees on insider trading laws, (2) implement appropriate internal controls and procedures to oversee trading by senior employees and employees who work in sensitive areas, (3) monitor the exercise of company-issued equity awards, and (4) promptly implement blackout periods covering appropriate personnel upon discovery of a cybersecurity incident.
I started my security (post-sysadmin) career heavily focused on security policy frameworks. It took me down many roads, but everything always came back to a few simple notions, such as that policies were a means of articulating security direction, that you had to prescriptively articulate desired behaviors, and that the more detail you could put into the guidance (such as in standards, baselines, and guidelines), the better off the organization would be. Except, of course, that in the real world nobody ever took time to read the more detailed documents, Ops and Dev teams really didn't like being told how to do their jobs, and, at the end of the day, I was frequently reminded that publishing a policy document didn't translate to implementation.
Subsequently, I've spent the past 10+ years thinking about better ways to tackle policies, eventually reaching the point where I believe "less is more" and that anything written and published in a place and format that isn't "work as usual" will rarely, if ever, get implemented without a lot of downward force applied. I've seen both good and bad policy frameworks within organizations. Often they cycle around between good and bad. Someone will build a nice policy framework, it'll get implemented in a number of key places, and then it will languish from neglect and inadequate upkeep until it's irrelevant and ignored. This is not a recipe for lasting success.
Thinking about it further this week, it occurred to me that part of the problem is thinking in the old "compliance" mindset. Policies are really to blame for driving us down the checkbox-compliance path. Sure, we can easily stand back and try to dictate rules, but without the adequate authority to enforce them, and without the resources needed to continually update them, they're doomed to obsolescence. Instead, we need to move to that "security as code" mentality and find ways to directly codify requirements in ways that are naturally adapted and maintained.
End Dusty Tomes and (most) Out-of-Band Guidance
The first daunting challenge of security policy framework reform is to throw away the old, broken approach with as much gusto and finality as possible. Yes, there will always be a need for certain formally documented policies, but overall an organization Does. Not. Need. large amounts of dusty tomes providing out-of-band guidance to a non-existent audience.
Now, note a couple things here. First, there is a time and a place for providing out-of-band guidance, such as via direct training programs. However, it should be the minority of guidance, and wherever possible you should seek to codify security requirements directly into systems, applications, and environments. For a significant subset of security practices, it turns out we do not need to repeatedly consider whether or not something should be done, but can instead make the decision once and then roll it out everywhere as necessary and appropriate.
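To make the "decide once, codify everywhere" idea concrete, here is a minimal sketch of a hardening requirement expressed as an automated check rather than a written standard. This is illustrative only: the two settings match OpenSSH's sshd_config, but the assumed defaults and required values are simplifications for the example, not drawn from any particular standard.

```python
# Illustrative sketch: a hardening requirement ("no remote root login,
# no password authentication over SSH") codified once as a check that
# can run everywhere, instead of living in a policy document.
# The assumed fallback defaults ("yes") are simplifications.

def check_sshd_config(config_text: str) -> list:
    """Return a list of findings; an empty list means compliant."""
    settings = {}
    for line in config_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if line:
            key, _, value = line.partition(" ")
            settings[key] = value.strip()

    findings = []
    if settings.get("PermitRootLogin", "yes") != "no":
        findings.append("PermitRootLogin should be 'no'")
    if settings.get("PasswordAuthentication", "yes") != "no":
        findings.append("PasswordAuthentication should be 'no'")
    return findings
```

A check like this can run in a CI pipeline or configuration-management tool on every change, so the requirement is enforced continuously rather than read occasionally.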
Second, we have to realize and accept that traditional policy (and related) documents serve a formal purpose, not a practical or pragmatic one. Essentially, the reason you put something into writing is because a) you're required to do so (such as by regulations), or b) you're driven to do so by ongoing infractions or the inability to directly codify requirements (for example, requirements on human behavior). Everything else, which is what this leaves you with, consists of requirements that can be directly implemented and that are thus easily measurable.
KPIs as Policies (et al.)
If the old ways aren't working, then it's time to take a step back and think about why that might be and what might be better going forward. I'm convinced the answer to this query lies in stretching the "security as code" notion a step further by focusing on security performance metrics for everything and everyone instead of security policies. Specifically, if you think of policies as requirements, then you should be able to recast those as metrics and key performance indicators (KPIs) that are easily measured, and in turn are easily integrated into dashboards. Moreover, going down this path takes us into a much healthier sense of quantitative reasoning, which can pay dividends for improved information risk awareness, measurement, and management.
Applied, this approach scales very nicely across the organization. Businesses already operate on a KPI model, and converting security requirements (née policies) into specific measurables at various levels of the organization means ditching the ineffective, out-of-band approach previously favored for directly specifying, measuring, and achieving desired performance objectives. Simply put, we no longer have to go out of our way to argue for people to conform to policies, but instead simply start measuring their performance and incentivize them to improve to meet performance objectives. It's then a short step to integrating security KPIs into all roles, even going so far as to establish departmental, if not whole-business, security performance objectives that are then factored into overall performance evaluations.
Examples of security policies-become-KPIs might include metrics around vulnerability and patch management, code defect reduction and remediation, and possibly even phishing-related metrics that are rolled up to the department or enterprise level. When creating security KPIs, think about the policy requirements as they're written and take time to truly understand the objectives they're trying to achieve. Convert those objectives into measurable items, and there you are on the path to KPIs as policies. For more thoughts on security metrics, I recommend checking out the CIS Benchmarks as a starting point.
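As a quick sketch of the conversion, a patch-management requirement such as "critical patches applied within 14 days of release" recasts naturally as a percentage KPI. The data shape and the 14-day threshold here are assumptions for illustration, not from any specific policy:

```python
from datetime import date

PATCH_SLA_DAYS = 14  # assumed policy objective, now a measurable threshold

def patch_sla_kpi(patches: list) -> float:
    """Percentage of critical patches applied within the SLA window.

    Each patch is a dict with a 'released' date and an 'applied' date
    (None if the patch is still outstanding, which counts as a miss).
    """
    if not patches:
        return 100.0
    met = sum(
        1 for p in patches
        if p["applied"] is not None
        and (p["applied"] - p["released"]).days <= PATCH_SLA_DAYS
    )
    return round(100.0 * met / len(patches), 1)
```

A number like this can be trended per team or per platform, turning the old written requirement into a dial on a dashboard.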
Better Reporting and the Path to Accountability
Converting policies into KPIs means that nearly everything is natively built for reporting, which in turn enables executives to have better insight into the security and information risk of the organization. Moreover, shifting the focus to specific measurables means that we get away from the out-of-band dusty tomes, instead moving toward achieving actual results. We can now look at how different teams, projects, applications, platforms, etc., are performing and make better-informed decisions about where to focus investments for improvements.
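To illustrate the reporting angle, here is a minimal sketch of rolling team-level KPIs up for an executive view. The team names and metric keys are invented for the example, and it assumes score-style metrics where higher is better:

```python
# Hypothetical roll-up: average each KPI across teams and flag the
# weakest performer (assumes higher scores are better).

def rollup(team_kpis: dict) -> dict:
    """Map each metric name to its cross-team average and lowest scorer."""
    metrics = sorted({m for kpis in team_kpis.values() for m in kpis})
    report = {}
    for m in metrics:
        values = {team: kpis[m] for team, kpis in team_kpis.items() if m in kpis}
        report[m] = {
            "average": round(sum(values.values()) / len(values), 1),
            "lowest": min(values, key=values.get),
        }
    return report
```

Feeding a report like this into an existing business dashboard is what lets security performance sit alongside every other KPI the organization already tracks.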
This notion also potentially sparks an interesting future for current GRC-ish products. If policies go away (mostly), then we don't really need repositories for them. Instead, GRC products can shift to being true performance monitoring dashboards, allowing those products to broaden their scope while continuing to adapt other capabilities, such as those related to the so-called "SOAR" market (Security Orchestration, Automation, and Response). If GRC products are to survive, I suspect it will be by either heading further down the information risk management path, pulling in security KPIs in lieu of traditional policies and compliance, or driving more toward SOAR+dashboards with a more tactical performance focus (or some combination of the two). Suffice to say, I think GRC as it was once known and defined is in its final days of usefulness.
There's one other potentially interesting tie-in here, and that's to overall data analytics, which I've noticed slowly creeping into organizations. A lot of the focus has been on using data lakes, mining, and analytics in lieu of traditional SIEM and log management, but I think there's a potentially interesting confluence with security KPIs, too. In fact, thinking about pulling in SOAR capabilities and other monitoring and assessment capabilities and data, it's not unreasonable to think that KPIs become the tweakable dials CISOs (and up) use to balance risk vs. reward in providing strategic guidance for addressing information risk within the enterprise. At any rate, this is all very speculative and unclear right now, but nonetheless something to watch. But I have digressed...
The bottom line here is this: traditional policy frameworks have generally outlived their usefulness. We cannot afford to continue writing and publishing security requirements in formats that aren't easily accessible as part of everyday work. In an Agile/DevOps world, "security as code" is imperative, and that includes converting security requirements into KPIs.
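As a toy illustration of what "requirements as KPIs" might look like in practice, a policy statement like "systems must be patched promptly" can be recast as a measurable number that feeds a dashboard or a CI gate. The host names, the 30-day window, and the `patch_kpi` helper below are all hypothetical, not drawn from any particular framework:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Host:
    name: str
    last_patched: date

def patch_kpi(hosts, max_age_days=30, as_of=None):
    """Return the percentage of hosts patched within the policy window."""
    as_of = as_of or date.today()
    cutoff = as_of - timedelta(days=max_age_days)
    compliant = sum(1 for h in hosts if h.last_patched >= cutoff)
    return round(100 * compliant / len(hosts), 1)

fleet = [
    Host("web-1", date(2018, 2, 20)),
    Host("web-2", date(2018, 1, 5)),   # stale: outside the 30-day window
    Host("db-1", date(2018, 2, 25)),
]
print(patch_kpi(fleet, as_of=date(2018, 3, 1)))  # 66.7
```

A number like this can be tracked per team, per platform, or per application over time, which is exactly the kind of comparison a written policy document can never support.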
On February 12, 2018, the Luxembourg data protection authority (Commission nationale pour la protection des données, “CNPD”) published on its website (in English and French) a form to be used for the purpose of compliance with data breach notification requirements applicable under the EU General Data Protection Regulation (the “GDPR”). The CNPD also published questions and answers (“Q&As”) regarding the requirements.
Pursuant to the GDPR, data controllers must notify the competent supervisory authority of a data breach within 72 hours of becoming aware of it, if the breach is likely to result in a risk to the rights and freedoms of individuals. Though breach notification is currently not required under the EU Data Protection Directive 95/46/EC, the CNPD has already published the form to assist companies with breach reporting prior to the GDPR coming into force.
For the time being, breach notifications can be sent to email@example.com. Alternative methods are currently under discussion. Notifications will be processed by the CNPD informally until the GDPR becomes directly applicable. Upon receipt, the CNPD will send an acknowledgement of receipt to the data controller, review the form, verify its authenticity and ask the controller any relevant questions, if necessary.
The form provides a series of questions for affected organizations, which are designed to incorporate the requirements of Article 33 of the GDPR. Organizations are not strictly required to use the exact form prepared by the CNPD, but must ensure that any form they do use complies with Article 33 of the GDPR.
In its Q&As, the CNPD also explains that data controllers must document any privacy breach, even those that are not reported to the CNPD. Such documentation must include the facts surrounding the breach, its impact and measures taken to mitigate them. The CNPD may request access to such documentation.
On February 21, 2018, the U.S. Securities and Exchange Commission (“SEC”) published long-awaited cybersecurity interpretive guidance (the “Guidance”). The Guidance marks the first time that the five SEC commissioners, as opposed to agency staff, have provided guidance to U.S. public companies with regard to their cybersecurity disclosure and compliance obligations.
Because the Administrative Procedure Act still requires public notice and comment for any rulemaking, the SEC cannot legally use interpretive guidance to announce new law or policy; therefore, the Guidance is evolutionary, rather than revolutionary. Still, it introduces several key topics for public companies, and builds on prior interpretive releases issued by agency staff.
First, the Guidance reiterates public companies’ obligation to disclose material information to investors, particularly when that information concerns cybersecurity risks or incidents. Public companies may be required to make such disclosures in periodic reports in the context of (1) risk factors, (2) management’s discussion and analysis of financial results, (3) the description of the company’s business, (4) material legal proceedings, (5) financial statements, and (6) with respect to board risk oversight. Next, the Guidance addresses two topics not previously addressed by agency staff: the importance of cybersecurity policies and procedures in the context of disclosure controls, and the application of insider trading prohibitions in the cybersecurity context.
The Guidance emphasizes that public companies are not expected to publicly disclose specific, technical information about their cybersecurity systems, nor are they required to disclose potential system vulnerabilities in such detail as to empower threat actors to gain unauthorized access. Nevertheless, the SEC noted that while companies may need to cooperate with law enforcement, and an ongoing investigation of a cybersecurity incident may affect the scope of disclosure regarding that incident, the mere existence of an ongoing internal or external investigation does not provide a basis for avoiding disclosure of a material cybersecurity incident. The Guidance concludes with a reminder that public companies are prohibited in many circumstances from making selective disclosure about cybersecurity matters under SEC Regulation Fair Disclosure.
The Guidance is perhaps most notable for the issues it does not address. In a statement issued coincident with the release of the new guidance, Commissioner Kara Stein expressed disappointment that the Guidance did not go further to highlight four areas where she would have liked to see the SEC seek public comment:
- rules that address improvements to the board’s risk management framework related to cyber risks and threats;
- minimum standards to protect investors’ personally identifiable information, and whether such standards should be required for key market participants, such as broker-dealers, investment advisers and transfer agents;
- rules that would require a public company to provide notice to investors (e.g., a Current Report on Form 8-K) in an appropriate time frame following a cyberattack, and to provide useful disclosure to investors without harming the company competitively; and
- rules that are more programmatic and that would require a public company to develop and implement cybersecurity-related policies and procedures beyond basic disclosure.
Given the intense public and political interest in cybersecurity disclosure by public companies, we anticipate that this latest guidance will not be the SEC’s final word on this critical issue.
On February 13, 2018, the U.S. Department of Health and Human Services’ Office for Civil Rights (“OCR”) announced that it entered into a resolution agreement with the receiver appointed to liquidate the assets of Filefax, Inc. (“Filefax”) in order to settle potential violations of HIPAA. Filefax offered medical record storage, maintenance and delivery services for covered entities, and had gone out of business during the course of OCR’s investigation.
OCR opened its investigation in February 2015, after receiving an anonymous complaint alleging that on February 6 and 9, 2015, a “dumpster diver” brought medical records obtained from Filefax to a shredding and recycling facility to exchange for cash. OCR’s investigation confirmed that an individual had left medical records containing the protected health information (“PHI”) of approximately 2,150 patients at the shredding and recycling facility. OCR’s investigation concluded that Filefax impermissibly disclosed the PHI by either (1) leaving it in an unlocked truck in the Filefax parking lot, or (2) granting permission to an unauthorized person to remove the PHI from Filefax, and leaving the PHI unsecured outside the Filefax facility.
The resolution agreement required Filefax to pay $100,000 and enter into a corrective action plan, which obligates Filefax’s receiver to properly store and dispose of the remaining medical records found at Filefax’s facility in compliance with HIPAA.
On February 12, 2018, in a settled enforcement action, the U.S. Commodity Futures Trading Commission (“CFTC”) charged a registered futures commission merchant (“FCM”) with violations of CFTC regulations relating to an ongoing data breach. Specifically, the FCM failed to diligently supervise an information technology provider’s (“IT vendor’s”) implementation of certain provisions in the FCM’s written information systems security program. Though not unprecedented, this case represents a rare CFTC enforcement action premised on a cybersecurity failure at a CFTC-registered entity.
According to the CFTC, a defect in a network-attached storage device installed by the FCM’s IT vendor left unencrypted customers’ records and other information stored on the device unprotected from cyber-exploitation. The defect left the information unprotected for nearly 10 months and led to the compromise of this data after the FCM’s network was accessed by an unauthorized, unaffiliated third party. The IT vendor failed to discover the vulnerability in subsequent network risk assessments, notwithstanding the fact that the unauthorized third party had blogged about exploiting the vulnerability at other companies. The FCM did not learn about the breach of its systems until directly contacted by the third party.
The CFTC charged the FCM under Regulation 166.3, which requires that every CFTC registrant “diligently supervise the handling [of confidential information] by its partners, officers, employees and agents,” and Regulation 160.30, which requires all FCMs to “adopt policies and procedures that address administrative, technical and physical safeguards for the protection of customer records and information.” The CFTC noted that an FCM may delegate the performance of its information systems security program’s technical provisions, including those relevant here. But in contracting with an IT vendor as its agent to perform these services, the FCM cannot abdicate its responsibilities under Regulation 166.3, and must diligently supervise the IT vendor’s handling of all activities relating to the registered entity’s business as a CFTC registrant.
To settle the case, the FCM agreed to (1) pay a $100,000 civil monetary penalty and (2) cease and desist from future violations of Regulation 166.3. The CFTC noted the FCM’s cooperation during the investigation and agreed to reduce sanctions as a result.
On February 5, 2018, the Federal Trade Commission (“FTC”) announced its most recent Children’s Online Privacy Protection Act (“COPPA”) case against Explore Talent, an online service marketed to aspiring actors and models. According to the FTC’s complaint, Explore Talent provided a free platform for consumers to find information about upcoming auditions, casting calls and other opportunities. The company also offered a monthly fee-based “pro” service that promised to provide consumers with access to specific opportunities. Users who registered online were asked to input a host of personal information including full name, email, telephone number, mailing address and photo; they also were asked to provide their eye color, hair color, body type, measurements, gender, ethnicity, age range and birth date.
The FTC alleges that Explore Talent collected the same range of personal information from users who indicated they were under age 13 as from other users, and made no attempts to provide COPPA-required notice or obtain parental consent before collecting such information. Once registered on ExploreTalent.com, all profiles, including children’s, became publicly visible, and registered adults were able to “friend” and exchange direct private messages with registered children. The FTC alleges that, between 2014 and 2016, more than 100,000 children registered on ExploreTalent.com. As part of the settlement, Explore Talent agreed to (1) pay a $500,000 civil penalty (which was suspended upon payment of $235,000), (2) comply with COPPA in the future and (3) delete the information it previously collected from children.
On February 1, 2018, the Singapore Personal Data Protection Commission (the “PDPC”) published its response to feedback collected during a public consultation process conducted during the late summer and fall of 2017 (the “Response”). During that public consultation, the PDPC circulated a proposal relating to two general topics: (1) the relevance of two new alternative bases for collecting, using and disclosing personal data (“Notification of Purpose” and “Legal or Business Purpose”), and (2) a mandatory data breach notification requirement. The PDPC invited feedback from the public on these topics.
“Notification of Purpose” as a new basis for an organization to collect, use and disclose personal data.
In its consultation, the PDPC solicited views on “Notification of Purpose” as a possible new basis for data processing. In its Response, the PDPC noted that it intends to amend its consent framework to incorporate the “Notification of Purpose” approach (also called “deemed consent by notification”), which will essentially provide for an opt-out approach.
Under that approach, organizations may collect, use and disclose personal data merely by providing (1) some form of appropriate notice of purpose in situations where there is no foreseeable adverse impact on the data subjects, and (2) a mechanism to opt out. The PDPC will issue guidelines on what would be considered “not likely to have any adverse impact.” The approach will also require organizations to undertake risk and impact assessments to determine any such possible adverse impacts. Where the risk assessments determine a likely adverse impact, the approach may not be used. Also, the “Notification of Purpose” approach may not be used for direct marketing purposes.
The PDPC will not specify how organizations will be required to notify individuals of purpose, and will leave it to organizations to determine the most appropriate method under the circumstances, which might include a general notification on a website or social media page. The notification must, however, include information on how to opt out or withdraw consent from the collection, use or disclosure. The PDPC also said it would provide further guidance on situations where opt-out would be challenging, such as where large volumes of personal data are collected by sensors, for example.
“Legitimate Interest” as a basis to collect, use or disclose personal data.
In its consultation, the PDPC also sought feedback on a proposed “Legal or Business Purpose” ground for processing personal information. In its Response, the PDPC said that based on the feedback, it intends to adopt this concept under the EU term “legitimate interest.” The PDPC will provide guidance on the legal and business purposes that come within the ambit of “legitimate interest,” such as fraud prevention. “Legitimate interest” will not cover direct marketing purposes. The intent behind this ground for processing is to enable organizations to collect, use and disclose personal data in contexts where there is a need to protect legitimate interests that will have economic, social, security or other benefits for the public or a section thereof, and the processing should not be subject to consent. The benefits to the public or a section thereof must outweigh any adverse impacts to individuals. Organizations must conduct risk assessments to determine whether they can meet this requirement. Organizations relying on “legitimate interest” must also disclose this fact and make available a document justifying the organization’s reliance on it.
Mandatory Data Breach Notification
Regarding the 72-hour breach notification requirement it proposed in the consultation, the PDPC acknowledged in its Response that the affected organization may need time to determine the veracity of a suspected data breach incident. Thus, it stated that the time frame for the breach notification obligation only commences when the affected organization has determined that a breach is eligible for reporting. This means that when an affected organization first becomes aware that an information security incident may have occurred, the organization still has time to conduct a digital forensic investigation to determine precisely what has happened, including whether any breach of personal information security has happened at all, before the clock begins to run on the 72-hour breach notification deadline. From that time, the organization must report the incident to the affected individuals and the PDPC as soon as practicable, but still within 72 hours.
The PDPC requires that the digital forensic investigation be completed within 30 days. However, it still allows that the investigation may continue for more than 30 days if the affected organization has documented reasons why the time taken to investigate was reasonable and expeditious.
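The PDPC's two timing rules (the 72-hour clock starting only once a breach is determined to be reportable, and the 30-day expectation for the investigation itself) can be sketched as simple date arithmetic. This is purely an illustration; the function names and sample timestamps below are hypothetical, not anything the PDPC publishes:

```python
from datetime import datetime, timedelta

def pdpc_notification_deadline(determined_at):
    """The 72-hour clock starts when the organization determines the
    breach is eligible for reporting, not when it first suspects an
    incident."""
    return determined_at + timedelta(hours=72)

def investigation_overdue(aware_at, determined_at):
    """Flag investigations running past the 30-day expectation
    (permitted only with documented reasons)."""
    return (determined_at - aware_at) > timedelta(days=30)

aware = datetime(2018, 3, 1, 9, 0)        # incident first suspected
determined = datetime(2018, 3, 5, 14, 0)  # breach confirmed reportable
print(pdpc_notification_deadline(determined))   # 2018-03-08 14:00:00
print(investigation_overdue(aware, determined)) # False (4+ days, under 30)
```

The key design point mirrors the Response itself: the gap between `aware` and `determined` is the forensic-investigation window, and only `determined` starts the notification countdown.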
Both the Centre for Information Policy Leadership and Hunton & Williams LLP filed public comments in the PDPC’s consultation.
Recently, the General Services Administration (“GSA”) announced its plan to upgrade its cybersecurity requirements in an effort to build upon the Department of Defense’s new cybersecurity requirements, DFAR Section 252.204-7012, that became effective on December 31, 2017.
The first proposed rule, GSAR Case 2016-G511 “Information and Information Systems Security,” will require that federal contractors “protect the confidentiality, integrity and availability of unclassified GSA information and information systems from cybersecurity vulnerabilities and threats in accordance with the Federal Information Security Modernization Act of 2014 and associated Federal cybersecurity requirements.” The proposed rule will apply to “internal contractor systems, external contractor systems, cloud systems and mobile systems.” It will mandate compliance with applicable controls and standards, such as those of the National Institute of Standards and Technology, and will update existing GSAR clauses 552.239-70 and 552.239-71, which address data security issues. Contracting officers will be required to incorporate these cybersecurity requirements into their statements of work. The proposed rule is scheduled to be released in April 2018. Thereafter, the public will have 60 days to offer comments.
The second proposed rule, GSAR Case 2016-G515 “Cyber Incident Reporting,” will “update requirements for GSA contractors to report cyber incidents that could potentially affect GSA or its customer agencies.” Specifically, contractors will be required to report any cyber incident “where the confidentiality, integrity or availability of GSA information or information systems are potentially compromised.” The proposed rule will establish a timeframe for reporting cyber incidents, detail what the report must contain and provide points of contact for filing the report. The proposed rule is intended to update the existing cyber reporting policy within GSA Order CIO-9297.2 that did not previously undergo the rulemaking process. Additionally, the proposed rule will establish requirements for contractors to preserve images of affected systems and impose training requirements for contractor employees. The proposed rule is scheduled to be released in August 2018, and the public will have 60 days to comment on the proposed rule.
Although the proposed rules have not yet been published, it is anticipated that they will share similarities with the Department of Defense’s new cybersecurity requirements, DFAR Section 252.204-7012.
On January 22, 2018, the New York Department of Financial Services (“NYDFS”) issued a press release reminding entities covered by its cybersecurity regulation that the first certification of compliance with the regulation is due on or prior to February 15, 2018. Covered entities must file the certification, which covers the 2017 calendar year, at the NYDFS online portal.
Maria T. Vullo, the Superintendent of the NYDFS, noted the critical importance of the certification of compliance and stated that “DFS’s regulation requires each entity to have an annual review and assessment of the program’s achievements, deficiencies and overall compliance with regulatory standards and the DFS cybersecurity portal will allow the safe and secure reporting of these certifications. DFS’s goal is to prevent cybersecurity attacks, and we therefore will now include cybersecurity in all DFS examinations to ensure that proper cybersecurity governance is being practiced by our regulated entities. As DFS continues to implement its landmark cybersecurity regulation, we will take proactive steps to protect our financial services industry from cyber criminals.”
Superintendent Vullo also announced that the NYDFS will incorporate cybersecurity in all of its regulatory examinations. This includes adding questions related to cybersecurity to “first day letters,” which are notices that the NYDFS issues to commence its examinations of financial services companies, including examinations of banks and insurance companies for safety and soundness and market conduct.
Read more about other key deadlines for the NYDFS cybersecurity regulation.
On January 10, 2018, the Law of 3 December 2017 creating the Data Protection Authority (the “Law”) was published in the Belgian Official Gazette. The Law was submitted in the Chamber of Representatives on August 23, 2017, and was approved by the Parliament in plenary meeting on November 16, 2017.
The EU General Data Protection Regulation (“GDPR”) provides national data protection authorities with a strengthened enforcement role. In this context, the Belgian legislator adopted the Law reforming the Belgian Privacy Commission, established by the Law of 8 December 1992 implementing Directive 95/46/EC. It replaces the Belgian Privacy Commission with the Belgian Data Protection Authority (“DPA”) (Autorité de protection des données in French and Gegevensbeschermingsautoriteit in Dutch).
The main purpose of the Law is to ensure that the DPA can fulfill its tasks under the GDPR, since the current Belgian Privacy Commission has limited prosecutorial powers and no direct sanctioning powers.
In particular, the Law changes the structure and composition of the current Belgian Privacy Commission, replacing the existing sector committees and establishing the following new bodies:
- An Executive Committee that, among other things, approves the DPA’s annual budget and strategy and management plan, and follows the technical developments that have an impact on data protection.
- A General Secretariat responsible for the daily operations of the DPA, including (1) following the social, economic and technological developments that have an impact on data protection; (2) establishing the list of processing activities that require a data protection impact assessment; (3) providing an opinion on prior consultation by a data controller; (4) approving codes of conduct and certification criteria; and (5) approving standard contractual clauses and binding corporate rules.
- A First Line Service responsible for receiving complaints and requests made to the DPA; starting mediation procedures; raising awareness about data protection, in particular among minors; providing individuals with information regarding their data protection rights; and raising awareness among data controllers and processors regarding their obligations under the GDPR.
- A Knowledge Centre responsible for providing advice, upon request or on its own initiative, on questions related to data processing and recommendations regarding social, economic or technological developments.
- An Investigation Service responsible for conducting investigations.
- A Litigation Chamber that deals with administrative proceedings.
- A Reflection Board responsible for providing non-binding advice, upon request of the Executive Committee or the Knowledge Centre or on its own initiative, on all data protection-related subjects.
National and International Cooperation
The Law also provides that the DPA will have to cooperate with national and international actors. More specifically, the DPA will have to cooperate with all other Belgian public and private actors involved in the protection of the rights and freedoms of individuals, particularly regarding the free flow of personal data and customer protection. The DPA will also have to cooperate with other national data protection authorities. Such cooperation will focus on, inter alia, the creation of centers of expertise, the exchange of information, mutual assistance for controlling measures, and the sharing of human and financial resources.
Enforcement and Investigation Powers
The DPA will have investigation and control powers, as well as various enforcement powers, including the power to (1) drop a case without action, (2) dismiss a case, (3) order that the sentence be suspended, (4) propose a transaction, (5) issue a warning, (6) order compliance with individuals’ requests, (7) inform individuals of a security incident, (8) order that the processing be frozen, limited or temporarily or permanently prohibited, (9) order the rectification or deletion of personal data and the notification thereof to individuals, (10) order that the license of a certification body be withdrawn, (11) suspend data transfers, (12) impose daily fines and administrative fines, (13) transmit a file to the Public Prosecutor, and (14) publish the decisions taken on its website.
The investigation powers of the DPA will also broaden and will include the power to (1) hear witnesses, (2) identify individuals, (3) conduct a written inquiry, (4) conduct on-site reviews, (5) access computer systems and copy all data such systems may contain, (6) access information electronically, (7) seize or seal goods or computer systems, and (8) request the identification of the subscriber or regular user of an electronic communication service or electronic communication means used. In addition, the Investigation Service will be able to take interim measures, including suspending, limiting or freezing data processing activities.
The Law also sets out the procedure to be followed in case of a claim or any other request from a natural or legal person with respect to personal data protection.
In its decision, the CNIL found that WhatsApp violated the French Data Protection Act of January 6, 1978, as amended (Loi relative à l’informatique, aux fichiers et aux libertés) by: (1) sharing data with Facebook without an appropriate legal basis, (2) not providing sufficient notice to the relevant data subjects, and (3) not cooperating with the CNIL during the investigation.
Lack of Legal Basis
While WhatsApp shares its users’ data with Facebook for both business intelligence and security purposes, the CNIL focused its analysis on the “business intelligence” purpose. WhatsApp represented that such sharing was based on consent and legitimate interest as legal grounds. In its analysis of both legal bases, the CNIL concluded that:
- WhatsApp cannot rely on consent to share users’ data with Facebook for “business intelligence” purposes on the grounds that: (1) the consent is not specific enough, and only refers to the messaging service and improving Facebook’s services, and (2) the consent is not freely given, as the only way for a user to object to such processing is to uninstall the application.
- WhatsApp cannot rely on a legitimate interest to share users’ data with Facebook for “business intelligence” purposes because the company has not implemented sufficient safeguards to preserve users’ interests or fundamental rights. There is no mechanism for the users to refuse the data sharing while continuing to use the application.
Lack of Notice to Data Subjects
The CNIL found that WhatsApp did not provide sufficient notice on the registration form to data subjects about sharing personal data with Facebook.
Lack of Cooperation with the CNIL
The CNIL found that WhatsApp failed to cooperate during the investigation, for example by refusing to provide the CNIL with data pertaining to a sample of French users on the grounds that the request conflicted with U.S. law.
The CNIL’s Requests
In its formal notice, the CNIL requires WhatsApp to, within one month:
- cease sharing users’ data with Facebook for the purpose of “business intelligence” without a legal basis;
- provide a notice to data subjects that complies with the French Data Protection Act, and informs them of the purposes for which the data is shared with Facebook and their rights as data subjects;
- provide the CNIL with all the sample personal data requested (i.e., all data shared by WhatsApp with Facebook for a sample of 1,000 French users); and
- confirm that the company has complied with all of the CNIL’s requests above within the one month deadline.
If WhatsApp fails to comply with the terms of the formal notice within one month, the CNIL may appoint an internal investigator, who may propose that the CNIL impose sanctions against the company for violations of the French Data Protection Act.
On November 7, 2017, the Standing Committee of the National People’s Congress of China published the second draft of the E-commerce Law (the “Second Draft”) and is allowing the general public an opportunity to comment through November 26, 2017.
The Second Draft applies to e-commerce activities within the territory of China. One significant change from the first draft is that the Second Draft omits the first draft’s definition of “personal information” of e-commerce users and the detailed requirements concerning the collection and use of personal information of such users. Instead, the Second Draft would require that, when collecting and using personal information of users, e-commerce operators comply with rules established under the Cybersecurity Law of China and other relevant laws and regulations.
Pursuant to the Second Draft, e-commerce operators would be required to provide users with clear methods and procedures for accessing the users’ information, making corrections or deleting the users’ information, or closing user accounts. Also, e-commerce operators would be restricted from imposing unreasonable conditions on users when they request access, correction or deletion of information, or closure of their accounts.
The Second Draft also would require operators of e-commerce platforms to adopt measures, technological and otherwise, to protect network security, and to adopt contingency plans for cybersecurity incidents. In the event of an actual cybersecurity incident, an operator of an e-commerce platform would be required to immediately put its contingency plan into action, take remedial measures and report the incident to the relevant authorities.
On October 17, 2017, the French Data Protection Authority (“CNIL”), after a consultation with multiple industry participants that was launched on March 23, 2016, published its compliance pack on connected vehicles (the “Pack”) in line with its report of October 3, 2016. The Pack applies to connected vehicles for private use only (not to Intelligent Transport Systems), and describes the main principles data controllers must adhere to under both the current French legislation and the EU General Data Protection Regulation (“GDPR”).
The CNIL distinguishes between the following three scenarios:
1. “IN -> IN” scenario
The data collected in the vehicle remains in that vehicle and is not shared with a service provider (e.g., an eco-driving solution that processes data directly in the vehicle to display eco-driving tips in real time on the vehicle’s dashboard).
2. “IN -> OUT” scenario
The data collected in the vehicle is shared outside of the vehicle for the purposes of providing a specific service to the individual (e.g., when a pay-as-you-drive contract is purchased from an insurance company).
3. “IN -> OUT -> IN” scenario
The data collected in the vehicle is shared outside of the vehicle to trigger an automatic action by the vehicle (e.g., in the context of a traffic solution that calculates a new route following a car incident).
In addition to listing the provisions already included in its report of October 3, 2016, the CNIL analyzes in detail the three scenarios described above and provides recommendations on the:
- purposes for which the data can be processed;
- legal bases controllers can rely upon;
- types of data that can be collected;
- required retention period;
- recipients of the data and use of processors;
- content of the notice to data subjects;
- applicable rights of individuals with respect to the processing;
- security measures to adopt; and
- registration obligations that may arise under current law.
Beyond being a helpful guide for data controllers to refer to when implementing such tools in vehicles, the Pack might help preview how supervisory authorities will interpret various GDPR provisions.
This week, the Securities and Exchange Commission (“SEC”) announced the creation of a new Cyber Unit that will target cyber-related threats that may impact investors. The Cyber Unit, which will be part of the SEC’s Enforcement Division, will seek to combat various types of cyber-related threats including:
- The manipulation of markets through the spread of false information;
- Hacking of material nonpublic information;
- Attacks on blockchain technology and initial coin offerings;
- Misconduct on the dark web;
- Intrusions into retail brokerage accounts; and
- Other cyber threats to trading platforms and other critical market infrastructure.
The creation of the Cyber Unit is the latest in a series of steps taken by the SEC to focus on cybersecurity issues, including the issuance of a recent Risk Alert that examines the cybersecurity policies and procedures of financial institutions it regulates. In announcing the creation of the Cyber Unit, Stephanie Avakian, Co-Director of the SEC’s Enforcement Division, noted the growing importance of combating cyber-related threats and stated that “The Cyber Unit will enhance our ability to detect and investigate cyber threats through increasing expertise in an area of critical national importance.”
On September 20, 2017, the French Data Protection Authority (CNIL) announced that it has updated two standards on privacy seals in order to take into account the requirements of the EU General Data Protection Regulation (“GDPR”).
The CNIL may issue privacy seals, which are granted with reference to a standard adopted by the CNIL (i.e., a list of requirements that the product or procedure must satisfy in order to obtain the privacy seal). So far, the CNIL has adopted four standards, namely a standard on audit procedures covering the processing of personal data, a standard on data protection training programs, a standard on “digital safe boxes” and a standard on data privacy governance procedures. The CNIL has issued 123 privacy seals since 2012 (including 30 privacy seal renewals).
The updated standards are those on data protection training programs and data privacy governance procedures. According to the CNIL, the updated standards are accountability tools that help organizations demonstrate compliance with the GDPR. In particular, the updated standard on governance procedures serves as a roadmap for ensuring and demonstrating compliance with the GDPR, while the updated standard on training programs allows providers to propose training courses on the GDPR even before the GDPR becomes applicable. The CNIL will update the other standards by the end of 2017.
Privacy seals issued under the previous version of the standards will remain valid until May 25, 2018, when the GDPR becomes applicable. Organizations must re-apply for the privacy seal they have already obtained in order to continue benefiting from it after that date. The CNIL will issue the new privacy seals for a period of three years.
On September 15, 2017, the Federal Trade Commission published the ninth blog post in its “Stick with Security” series. As previously reported, the FTC will publish an entry every Friday for the next few months focusing on each of the 10 principles outlined in its Start with Security Guide for Businesses. This week’s post, entitled Stick with Security: Make sure your service providers implement reasonable security measures, emphasizes that companies should ensure the service providers they engage implement reasonable security measures.
The FTC’s post describes three ways companies can ensure that their service providers implement appropriate security measures:
- Conduct Due Diligence: Just as a consumer wouldn’t buy a used car without inspecting it first, companies should take reasonable steps to understand how information they place in another’s control will be used and secured.
- Put It in Writing: Companies should ensure that security expectations, performance standards and monitoring methods are reduced to writing in a contract. This may include, for example, ensuring a service provider has firewalls in place, encrypts data at rest or in transit, and implements intrusion detection systems.
- Verify Compliance: Even after companies have included security-related provisions into their contracts with service providers, prudent companies will regularly monitor and verify that service providers are indeed complying with the contractual requirements.
The guidance concludes by noting that the key message for companies is that they should build their security expectations into their contracts and make sure there is a way to monitor that the service providers are meeting those expectations.
The FTC’s next blog post, to be published on Friday, September 22, will focus on putting procedures in place to keep companies’ security current and address vulnerabilities that may arise.
To read our previous posts documenting the series, see FTC Posts Eighth Blog in its “Stick with Security” Series, FTC Posts Seventh Blog in its “Stick with Security” Series, FTC Posts Sixth Blog in its “Stick with Security” Series, FTC Posts Fifth Blog in its “Stick with Security” Series, FTC Posts Fourth Blog in its “Stick with Security” Series, FTC Posts Third Blog in its “Stick with Security” Series and FTC Posts Second Blog in its “Stick with Security” Series.
On September 7, 2017, the U.S. Department of Health and Human Services’ Office for Civil Rights (“OCR”) issued an announcement containing disaster preparedness and recovery guidance in advance of Hurricane Irma. The announcement follows a bulletin issued in late August during Hurricane Harvey that addressed how protected health information (“PHI”) can be shared during emergencies. Together, these communications underscore key privacy and security issues for entities covered by HIPAA to help them protect individuals’ health information before, during and after emergency situations.
Among other things, these two pieces of guidance highlight the following considerations:
- Application of HIPAA. HIPAA applies only to covered entities (certain health plans, health care clearinghouses and health care providers) and business associates (generally, service providers that create, receive, maintain or transmit PHI for covered entities or other business associates). Entities that do not fall into these categories, by contrast, are not directly liable for complying with HIPAA. The American Red Cross, for example, is not restricted by the HIPAA Privacy Rule from sharing health information.
- Privacy and Disclosures. The HIPAA Privacy Rule always allows PHI to be shared for certain purposes, which may be relevant in emergency situations. For example, covered entities may use and disclose PHI as necessary for treatment purposes, which include “the coordination or management of health care and related services by one or more health care providers and others, consultation between providers, and the referral of patients for treatment.” OCR maintains an interactive tool to assist organizations in understanding how HIPAA applies to disclosures of PHI in emergency situations.
- Safeguards and Contingency Plans. Organizations covered by HIPAA must continue to protect PHI by implementing reasonable safeguards against impermissible uses and disclosures. This includes compliance with the Security Rule, which requires administrative, physical and technical safeguards for electronic PHI, including contingency plans. Under the Security Rule, contingency plans must include or address a number of prescribed specifications, including a data backup plan, an emergency mode operation plan and testing and revision procedures.
In addition to the above, the August bulletin covered the decision by the Secretary of HHS to issue a limited waiver for covered hospitals in Texas and Louisiana after previously declaring a public health emergency in those states. Although HIPAA is not suspended during emergencies, the Secretary exercised the authority to waive sanctions and penalties for violations of certain provisions, including the requirement to honor a request to opt out of facility directories and a patient’s right to request privacy restrictions. In addition to being limited to specific HIPAA requirements, the waiver also applies only: (1) in the emergency area and for the emergency period identified in the public health emergency declaration; (2) to hospitals that have instituted a disaster protocol; and (3) for up to 72 hours from the time the hospital implements its disaster protocol.
On August 31, 2017, the National Information Security Standardization Technical Committee of China published four draft voluntary guidelines (“Draft Guidelines”) in relation to the Cybersecurity Law of China. The Draft Guidelines are open for comment from the general public until October 13, 2017.
Information Security Technology – Guidelines for Cross-Border Transfer Security Assessment: Compared with the first draft published in May, the second Draft Guidelines add new definitions of certain terms, such as “domestic operations,” “cross-border data transfer” and “assessment by competent authorities.” According to these Draft Guidelines, a network operator that is not registered in China would still be deemed to be conducting “domestic operations” if it conducts business within the territory of China, or provides products or services within the territory of China. Even if the data collected by a network operator is not retained outside of China, there could still be a cross-border transfer of the data if overseas entities, institutions or individuals are able to access the data remotely. These Draft Guidelines provide separate assessment procedures for self-assessments and assessments by competent authorities. A security assessment would focus on the purpose of the proposed cross-border transfer, with reference to the legality, appropriateness and necessity of the transfer, and the security risks involved in the transfer.
Information Security Technology – General Security Requirements for Network Products and Services: This document provides both general security requirements and enhanced security requirements applicable to network products and services sold or provided within the territory of China. According to these Draft Guidelines, “network products” include computers, information terminals, basic software, system software and the like. “Network services” include cloud computing services, data processing and storage services, network communication services and the like. General security requirements under this draft include malware prevention, vulnerability management, security operating maintenance and protection of user information. Enhanced security requirements include identity verification, access controls, security audits, communication protection and certain security protection requirements.
Information Security Technology – Guide to Security Inspection and Evaluation of Critical Information Infrastructure: This document provides the procedures and substance of security inspections and evaluations of critical information infrastructure. According to these Draft Guidelines, the inspection and evaluation is divided into three methods: compliance inspection, technical inspection, and analysis and evaluation. The key steps in a security inspection and evaluation include preparation; implementation of compliance inspection, technical inspection and analysis and evaluation; risk control; and preparation of a report.
Information Security Technology – Systems of Indicators for the Assurance of the Security of Critical Information Infrastructure: This document establishes and defines indicators to be used as focal points in evaluating the security of critical information infrastructure. The indicators discussed under these Draft Guidelines include indicators for operational capacity, security, security monitoring and emergency response, among others.
On August 25, 2017, U.S. District Judge Lucy Koh signed an order granting preliminary approval of the record class action settlement agreed to by Anthem Inc. this past June. The settlement arose out of a 2015 data breach that exposed the personal information of more than 78 million individuals, including names, dates of birth, Social Security numbers and health care ID numbers. The terms of the settlement include, among other things, the creation of a pool of funds to provide credit monitoring and reimbursement for out-of-pocket costs for customers, as well as up to $38 million in attorneys’ fees. Anthem will also be required to make certain changes to its data security systems and cybersecurity practices for at least three years.
In granting preliminary approval of the settlement agreement, the court stated that the terms of the settlement “fall within the range of possible approval as fair, reasonable, and adequate” and also preliminarily certified the settlement class. KCC, the settlement administrator designated by the parties, will provide notice of the proposed settlement to class members by October 30. After notice is completed, class members will have the opportunity to object or opt-out of the settlement until December 29, 2017. The final approval hearing will be held on February 1, 2018.
As reported in BNA Privacy Law Watch, on August 22, 2017, the Russian privacy regulator, Roskomnadzor, announced that it had issued an order (the “Order”), effective immediately, revising notice protocols for companies that process personal data in Russia. Roskomnadzor stated that an earlier version of certain requirements for companies to notify the regulator of personal data processing was invalidated by the Russian Telecom Ministry in July.
The Order requires companies to notify Roskomnadzor in advance of personal data processing, including information on safeguards in place to prevent data breaches and whether the company intends to transfer data outside Russia (and, if so, the countries to which the data will be transferred). Companies must also confirm their compliance with Russia’s data localization law, which requires that companies processing personal data of Russian citizens store that data on servers located within Russia. In conjunction with the Order, Roskomnadzor released a new notification form that companies may use to communicate with the regulator.
On August 7, 2017, the Securities and Exchange Commission (“SEC”) Office of Compliance Inspections and Examinations (“OCIE”) issued a Risk Alert examining the cybersecurity policies and procedures of 75 broker-dealers, investment advisers and investment companies (collectively, the “firms”). The Risk Alert builds on OCIE’s 2014 Cybersecurity Initiative, a prior cybersecurity examination of the firms, and notes that while OCIE “observed increased cybersecurity preparedness” among the firms since 2014, it “also observed areas where compliance and oversight could be improved.”
Key improvements observed included:
- use of periodic risk assessments, penetration tests and vulnerability scans of critical systems to identify cybersecurity threats and vulnerabilities, as well as potential business consequences of a cybersecurity incident;
- procedures for regular system maintenance, including software patching, to address security updates;
- implementation of written policies and procedures, including response plans and defined roles and responsibilities, for addressing cybersecurity incidents; and
- vendor risk assessments conducted at the outset of an engagement with a vendor and often updated periodically throughout the business relationship.
Key issues observed included:
- failure to reasonably tailor written policies and procedures (e.g., many policies and procedures were written vaguely or broadly, with limited examples of safeguards and limited procedures for policy implementation);
- failure to adhere to or enforce written policies and procedures, or failure to ensure that such policies and procedures reflected firms’ actual practices;
- failure to timely remediate high-risk findings of penetration tests and vulnerability scans; and
- use of outdated operating systems that were no longer supported by security patches.
In addition, the Risk Alert included a list of best practices identified by OCIE as elements of robust cybersecurity programs. These included maintaining:
- an inventory of data, information and vendors;
- instructions for various aspects of cybersecurity protocols, including security monitoring, auditing and testing, as well as incident reporting;
- schedules and processes for cybersecurity testing; and
- “established and enforced” access controls to data and systems.
OCIE further noted that robust cybersecurity programs may include mandatory employee training and vetting and approval of policies and procedures by senior management. OCIE indicated in the Risk Alert that its list of cybersecurity program best practices is not intended to be exhaustive.
OCIE noted that it will continue to prioritize cybersecurity compliance and will examine firms’ procedures and controls, “including testing the implementation of those procedures and controls at firms.”
As reported in BNA Privacy Law Watch, on August 17, 2017, Delaware amended its data breach notification law, effective April 14, 2018. The Delaware law previously required companies to give notice of a breach to affected Delaware residents “as soon as possible” after determining that, as a result of the breach, “misuse of information about a Delaware resident has occurred or is reasonably likely to occur.” The prior version of the law did not require regulator notification.
The amendments include several key provisions:
- Definition of Personal Information. Under the revised law, the definition of “personal information” is expanded and now includes a Delaware resident’s first name or first initial and last name in combination with any one or more of the following data elements: (1) Social Security number; (2) driver’s license or state or federal identification card number; (3) account number, credit card number or debit card number in combination with any required security code, access code or password that would permit access to a financial account; (4) passport number; (5) a username or email address in combination with a password or security question and answer that would permit access to an online account; (6) medical history, treatment or diagnosis by a health care professional, or DNA profile; (7) health insurance identification number; (8) biometric data; and (9) an individual taxpayer identification number.
- Timing. Companies will be required to notify affected individuals within 60 days of determining that a breach has occurred.
- Notice to the Attorney General. Companies will be required to notify the Delaware Attorney General if a breach affects more than 500 Delaware residents.
- Harm Threshold. The amendments change the law’s harm threshold for notification. Under the revised law, notification to affected individuals (and the Attorney General, if applicable) is required unless, after an appropriate investigation, the company reasonably determines that the breach is unlikely to result in harm to affected individuals.
- Credit Monitoring. Companies will be required to offer credit monitoring services to affected individuals at no cost for one year if the breach includes a Delaware resident’s Social Security number. California’s breach notification law contains a similar requirement.
On August 15, 2017, the FTC announced that it had reached a settlement with Uber Technologies, Inc., over allegations that the ride-sharing company had made deceptive data privacy and security representations to its consumers. Under the terms of the settlement, Uber has agreed to implement a comprehensive privacy program and undergo regular, independent privacy audits for the next 20 years.
The FTC’s complaint alleged that Uber made false or misleading representations that the company (1) appropriately controlled employee access to consumers’ personal information and (2) provided reasonable security for consumers’ personal information.
Employee Access to Consumers’ Personal Information
The complaint cited news reports from November 2014 that accused Uber employees of improperly accessing and using consumer personal information, including the use of an internal tracking tool called “God View,” which allowed employees to access the geolocation of individual Uber riders in real time. In its response to these allegations, Uber represented that the company had a “strict policy prohibiting all employees at every level from accessing a rider or driver’s data” except for a “limited set of legitimate business purposes.” Uber also stated that employee access to riders’ personal information was “closely monitored and audited by data security specialists on an ongoing basis.” The FTC alleged that (1) these statements were false or misleading, (2) Uber failed to implement a system that effectively and continuously monitored employee access, and (3) Uber did not respond in a timely fashion when alerted to the potential misuse of consumer personal information.
Data Security Representations
The complaint further alleged that Uber made the following false or misleading representations about the security of riders’ personal information:
- Uber customer service representatives assured riders that the company:
- used “the most up to date technology and services” to protect personal information;
- was “extra vigilant in protecting all private and personal information”; and
- kept personal information “secure and encrypted to the highest security standards available.”
The FTC alleged that, in reality, Uber engaged in practices that failed to provide reasonable security to prevent unauthorized access to Uber riders’ and drivers’ personal information by, among other things:
- failing to implement appropriate administrative access controls and multi-factor authentication on the company’s third-party databases that stored personal information;
- failing to implement reasonable security training and guidance for employees;
- failing to have a written information security program in place; and
- storing sensitive personal information in a third-party storage database in clear, readable text, rather than encrypting the information.
The FTC alleged that these failures resulted in a May 2014 data breach of consumers’ personal information stored in a third-party database. The complaint alleged that the breach was caused by an intruder who used an access key that an Uber engineer had publicly posted to GitHub, a code-sharing website used by software developers.
Under the terms of the settlement agreement, Uber is:
- prohibited from misrepresenting how it monitors internal access to consumers’ personal information;
- prohibited from misrepresenting how it protects and secures that data;
- required to implement a comprehensive privacy program that addresses privacy risks related to new and existing products and services, and protects the privacy and confidentiality of personal information collected by the company; and
- required to obtain within 180 days of the settlement, and every two years after that for the next 20 years, independent, third-party audits certifying that it has a privacy program in place that meets or exceeds the requirements of the FTC order.
Uber’s settlement agreement underscores the importance of having accurate data privacy and security representations that are consistently followed by all company employees.
In the wake of China’s Cybersecurity Law going into effect on June 1, 2017, local authorities in Shantou and Chongqing have brought enforcement actions against information technology companies for violations of the Cybersecurity Law. These are, reportedly, the first enforcement actions brought pursuant to the Cybersecurity Law.
Recently, Chongqing’s municipal Public Security Bureau’s cybersecurity team identified a violation of the Cybersecurity Law during a routine inspection. A technology development company had failed to retain web logs relating to its users’ logins when providing internet data center services, as required under the Cybersecurity Law. In response, the public security authority issued a warning and an order to correct the issue within 15 days, with a follow-up inspection to take place after the rectification. In another enforcement action taken by Shantou’s municipal Public Security Bureau’s cybersecurity team in July, an information technology company in Shantou, Guangdong Province, was ordered to correct a violation of the Cybersecurity Law.
Though reportedly the first enforcement actions brought pursuant to the new Cybersecurity Law, these were only minor actions. They involved only warnings and orders to correct the issues; no fines or criminal penalties were imposed. Accordingly, these enforcement actions likely do not provide much insight into how the Cybersecurity Law will be enforced moving forward. These actions do, however, indicate that enforcement authorities, such as public security agencies and the cyberspace administration agency, have started to consider their roles in enforcing the Cybersecurity Law. More enforcement actions can be expected in the future.
Nevada is the third state to enact legislation requiring website operators to post a public privacy notice, following California (enacted in 2004) and Delaware (enacted in 2016). The scope of Nevada’s law is narrower than the laws of California and Delaware in several key respects. Namely, the Nevada law limits its jurisdictional application to entities that purposefully direct or conduct activities in Nevada, or consummate some transaction with the state or one of its residents. Additionally, the law does not apply to website operators whose revenue is derived primarily from sources other than online services and whose websites annually receive fewer than 20,000 unique visitors.
The Nevada law does not provide a private right of action, but grants the Nevada Attorney General the power to enforce compliance and provides for injunctive relief and a maximum authorized civil penalty of $5,000. The law is set to take effect on October 1, 2017.
On July 25, 2017, the U.S. Department of Health and Human Services’ Office for Civil Rights (“OCR”) announced the release of an updated web tool that highlights recent data breaches of health information.
Entities covered by the Health Insurance Portability and Accountability Act (“HIPAA”) are required to notify OCR when they experience a data breach. OCR publishes information it receives regarding data breaches affecting more than 500 individuals on its HIPAA Breach Reporting Tool (“HBRT”). OCR uses the HBRT to provide transparency to the public and HIPAA-covered entities by sharing information regarding reported data breaches, including (1) the name of the reporting entity; (2) the number of individuals affected by the data breach; (3) the type of data breach (e.g., hacking/IT incident, theft, loss, unauthorized access/disclosure); and (4) the location of the breached information (e.g., laptop, paper records, desktop computer).
In the email announcing its recent updates, OCR highlighted the following new features of the HBRT:
- enhanced functionality that highlights data breaches currently under investigation and reported within the last 24 months;
- an archive including all older data breaches;
- improved navigation to additional data breach information; and
- tips for consumers.
OCR stated that it plans to expand and improve the HBRT over time to add functionality and features based on the feedback it receives.
On July 21, 2017, New Jersey Governor Chris Christie signed a bill that places new restrictions on the collection and use of personal information by retail establishments for certain purposes. The statute, which is called the Personal Information and Privacy Protection Act, permits retail establishments in New Jersey to scan a person’s driver’s license or other state-issued identification card only for the following eight purposes:
- to verify the authenticity of the identification card or to verify the identity of the person if the person pays for goods or services with a method other than cash, returns an item or requests a refund or an exchange;
- to verify the person’s age when providing age-restricted goods or services to the person;
- to prevent fraud or other criminal activity if the person returns an item or requests a refund or an exchange and the business uses a fraud prevention service company or system;
- to prevent fraud or other criminal activity related to a credit transaction to open or manage a credit account;
- to establish or maintain a contractual relationship;
- to record, retain or transmit information as required by state or federal law;
- to transmit information to a consumer reporting agency, financial institution or debt collector to be used as permitted by the Fair Credit Reporting Act or certain other relevant federal laws; or
- to record, retain or transmit information by a covered entity pursuant to the Health Insurance Portability and Accountability Act of 1996.
In addition, the law limits the information that retail establishments may collect from the scanned identification cards to the person’s name, address, date of birth, the state issuing the identification card and the identification card number. The law also places restrictions on the retention, sale and sharing of such information and establishes security requirements for any information retained from the scanned identification cards. The law emphasizes that retailers must report security breaches of certain information collected from scanned identification cards pursuant to New Jersey’s security breach notification statute.
The law is set to take effect three months from the date of enactment.
On July 5, 2017, the FTC announced that Blue Global Media, LLC (“Blue Global”) agreed to settle charges that it misled consumers into filling out loan applications and then sold those applications, including sensitive personal information contained therein, to other entities without verifying how consumers’ information would be used or whether it would remain secure. According to the FTC’s complaint, Blue Global claimed it would connect loan applicants to lenders from its network of over 100 lenders in an effort to offer applicants the best terms. In reality, Blue Global “sold very few of the loan applications to lenders; did not match applications based on loan rates or terms; and sold the loan applications to the first buyer willing to pay for them.” The FTC alleged that, contrary to Blue Global’s representations, the company provided consumers’ sensitive information, including Social Security and bank account numbers, to buyers without consumers’ knowledge or consent. The FTC further alleged that, upon receiving complaints from consumers that their personal information was being misused, Blue Global failed to investigate or take action to prevent harm to consumers.
The terms of the settlement prohibit Blue Global from misrepresenting (1) its ability to assist consumers in obtaining loans with favorable rates and terms; (2) that it will protect and secure consumers’ personal information; and (3) the types of businesses with which Blue Global shares consumers’ personal information. The settlement further requires Blue Global to “investigate and verify the identity of businesses to which they disclose consumers’ sensitive information” and to obtain consumers’ informed consent for these disclosures. The settlement also includes a judgment for more than $104 million, suspended due to Blue Global’s inability to pay.
On June 23, 2017, Anthem Inc., the nation’s second largest health insurer, reached a record $115 million settlement in a class action lawsuit arising out of a 2015 data breach that exposed the personal information of more than 78 million people. Among other things, the settlement creates a pool of funds to provide credit monitoring and reimbursement for out-of-pocket costs for customers, as well as up to $38 million in attorneys’ fees.
Anthem announced in February 2015 that it had been the target of an external cyber attack. The personal information obtained by attackers included names, dates of birth, Social Security numbers and health care ID numbers. Following the breach, Anthem offered affected individuals two years of credit monitoring. Under the settlement agreement, plaintiffs will be offered an additional two years of credit monitoring and identity protection services. Class members who already have credit monitoring services can submit a claim for monetary compensation instead of receiving the additional services.
The settlement also requires Anthem to make certain changes to its data security systems and cybersecurity practices for at least three years. These changes include (1) implementing data retention periods, (2) imposing strict access requirements, (3) providing mandatory information security training for all associates and (4) conducting annual IT security risk assessments. During this three-year period, Anthem must engage an independent consultant to verify that it is in compliance with the terms of the settlement agreement, and must remediate 95 percent of critical findings within three years. The settlement further requires Anthem to allocate a certain amount of funds for information security and to increase that funding for every additional 5,000 users if Anthem increases its users by more than 10 percent, whether by acquisition or growth.
The U.S. District Court for the Northern District of California, San Jose Division, is scheduled to hear a motion for preliminary approval of the settlement on August 17, 2017. If approved, a third-party administrator will be appointed to manage the settlement.
As companies in the EU and the U.S. prepare for the EU General Data Protection Regulation (“GDPR”) to become applicable in May 2018, Hunton & Williams’ Global Privacy and Cybersecurity partner Aaron Simpson discusses with Forcepoint the most significant changes from the EU Directive with which companies must comply before next year. Accountability, expanded data subject rights, breach notification, sanctions and data transfer mechanisms are a few of the requirements that Simpson explores in detail. He reminds companies that, in the coming year, it will be very important to “monitor…and stay aware of the guidance being produced by regulators,” but also cautions that such guidance is not a substitute for the specific preparations that each business will need to undertake in order to comply with the GDPR.
On June 21, 2017, the Federal Trade Commission updated its guidance, Six-Step Compliance Plan for Your Business, for complying with the Children’s Online Privacy Protection Act (“COPPA”). The FTC enforces the COPPA Rule, which sets requirements regarding children’s privacy and safety online. The updated guidance adds new information on situations where COPPA applies and steps to take for compliance. The changes include:
- New products and technologies. COPPA applies to the collection of personal information through a “website or online service.” The term is defined broadly, and the updates to the guidance clarify that technologies such as connected toys and other Internet of Things devices can be covered by COPPA.
- New parental consent methods. Before collecting the personal information of children under the age of 13, COPPA requires that businesses obtain parental consent. The revised guidance addresses two newly-approved methods for obtaining parental consent: (1) answering a series of knowledge-based challenge questions that would be difficult for someone other than the parent to answer; or (2) verifying a picture of a driver’s license or other photo ID submitted by the parent and then comparing that photo to a second photo submitted by the parent, using facial recognition technology.
These updates follow a recent letter from Senator Mark Warner asking the FTC to strengthen efforts to protect children’s personal information in the face of new technologies such as Internet-connected “Smart Toys.”
The U.S. Department of Health and Human Services’ Office for Civil Rights (“OCR”) and the Health Care Industry Cybersecurity Task Force (the “Task Force”) have published important materials addressing cybersecurity in the health care industry.
The OCR checklist, entitled “My entity just experienced a cyber-attack! What do we do now?,” lists key steps that an organization must undertake in the event of a cyber attack. These steps include:
- executing response and mitigation procedures and contingency plans by immediately fixing technical and other problems to stop a cybersecurity incident, and taking steps to mitigate any impermissible disclosure of protected health information (“PHI”);
- reporting the crime to law enforcement agencies, which may include state or local law enforcement, the Federal Bureau of Investigation or the Secret Service;
- reporting all cyber threat indicators to federal agencies and information-sharing and analysis organizations (“ISAOs”), including the Department of Homeland Security, the HHS Assistant Secretary for Preparedness and Response, and private-sector cyber threat ISAOs; and
- notifying OCR of the breach as soon as possible, but no later than 60 days after the discovery of a breach affecting 500 or more individuals.
The checklist is accompanied by an infographic that lists these steps and notes that an organization must retain all documentation related to the risk assessment following a cyber attack, including any determination that a breach of PHI has not occurred.
The Task Force, which was established in 2015 by Congress, is composed of government officials and leaders in the health care industry. The Task Force’s report notes that “health care cybersecurity is a key public health concern that needs immediate and aggressive attention” and identifies six key imperatives for the health care industry. These imperatives are:
- defining and streamlining leadership, governance and expectations for health care industry cybersecurity;
- increasing the security and resilience of medical devices and health information technology;
- developing the health care workforce capacity necessary to prioritize and ensure cybersecurity awareness and technical capabilities;
- increasing health care industry readiness through improved cybersecurity awareness and education;
- identifying mechanisms to protect research and development efforts and intellectual property from attacks or exposure; and
- improving information sharing of industry threats, risks and mitigations.
The report lists recommendations and action items under each of these six imperatives. These include (1) evaluating options to migrate patient records and legacy systems to secure environments (e.g., hosted, cloud, shared computer environments), (2) developing executive education programs targeting executives and boards of directors about the importance of cybersecurity education, and (3) requiring strong authentication to improve identity and access management for health care workers, patients, medical devices and electronic health records.
The report concludes by providing a list of key resources and best practices for addressing cybersecurity threats that were gleaned from studying the financial services and energy sectors.
The publication of these cybersecurity materials follows in the wake of several notable cyberattacks, including the WannaCry ransomware attack that affected thousands of organizations in the health care industry.
On June 5, 2017, an Illinois federal court ordered satellite television provider Dish Network LLC (“Dish”) to pay a record $280 million in civil penalties for violations of the FTC’s Telemarketing Sales Rule (“TSR”), the Telephone Consumer Protection Act (“TCPA”) and state law. In its complaint, the FTC alleged that Dish initiated, or caused a telemarketer to initiate, outbound telephone calls to phone numbers listed on the Do Not Call Registry, in violation of the TSR. The complaint further alleged that Dish violated the TSR’s prohibition on abandoned calls and assisted and facilitated telemarketers when it knew or consciously avoided knowing that telemarketers were breaking the law.
The court held, in a 475-page opinion, that Dish committed over 66 million TSR violations. Judge Sue Myerscough stated that Dish “created a situation in which unscrupulous sales persons used illegal practices to sell Dish Network programming any way they could.”
Of the penalty, $168 million was awarded to the federal government, the largest civil penalty obtained to date in a Do Not Call case. The remaining amount was awarded to the states of California, Illinois, North Carolina and Ohio. The court also ordered injunctive relief, requiring Dish to:
- demonstrate that it and its primary retailers fully comply with the TSR’s Safe Harbor Provisions;
- hire a telemarketing expert to ensure compliance; and
- submit to bi-annual compliance verification by the government.
In a statement, Dish indicated its plans to appeal the judgment. Acting FTC Chairman Maureen Ohlhausen pointed to the court’s decision as evidence that “companies will pay a hefty price for violating consumers’ privacy with unwanted calls.”
On June 2, 2017, in preparation for the first annual review of the EU-U.S. Privacy Shield (“Privacy Shield”) framework, the European Commission sent questionnaires to trade associations and other groups, including the Centre for Information Policy Leadership at Hunton & Williams LLP, to seek information from their Privacy Shield-certified members on those organizations’ experiences during the first year of the Privacy Shield. The EU Commission intends to use the questionnaire responses to inform the annual review of the function, implementation, supervision and enforcement of the Privacy Shield.
Among other focus areas, the questionnaire seeks information on how Privacy Shield-certified entities have:
- implemented policies, procedures and other measures to meet their Privacy Shield obligations and each of the Privacy Shield Principles;
- modified their business and contractual arrangements with third parties to ensure that the third parties appropriately protect the personal information they receive from Privacy Shield-certified organizations;
- addressed complaints (if any) from individuals whose personal information has been transferred pursuant to the Privacy Shield; and
- addressed the requirement to select an independent dispute resolution mechanism.
The questionnaire also asks for Privacy Shield-certified organizations’ views on:
- the issuance of transparency reports, pursuant to the USA Freedom Act, regarding U.S. authorities’ national security-related access requests;
- the extent to which personal information transferred pursuant to the Privacy Shield is used for automated decision-making in connection with decisions that might significantly affect individuals’ rights or obligations;
- developments in U.S. law that might affect the EU Commission’s assessment of the Privacy Shield; and
- challenges that such organizations have encountered in meeting the Privacy Shield’s requirements.
Responses to the questionnaire are due to the EU Commission by July 5, 2017.
On June 1, 2017, the new Cybersecurity Law went into effect in China. This post takes stock of (1) which measures have been passed so far, (2) which ones go into effect on June 1 and (3) which ones are in progress but have yet to be promulgated.
A draft implementing regulation and a draft technical guidance document on the treatment of cross-border transfers of personal information have been circulated, but at this time only the Cybersecurity Law itself and a relatively specific regulation (applicable to certain products and services used in network and information systems in relation to national security) have been finalized. As such, only the provisions of the Cybersecurity Law itself and this relatively specific regulation went into effect on June 1.
On June 1, 2017, the following obligations (among others) became legally mandatory for “network operators” and “providers of network products and services”:
- personal information protection obligations, including notice and consent requirements;
- for “network operators,” obligations to implement cybersecurity practices, such as designating personnel to be responsible for cybersecurity, and adopting contingency plans for cybersecurity incidents; and
- for “providers of network products and services,” obligations to provide security maintenance for their products or services and to adopt remedial measures in case of safety defects in their products or services.
Penalties for violating the Cybersecurity Law vary according to the specific violation, but typically include (1) a warning, an order to correct the violation, confiscation of illegal proceeds and/or a fine (typically ranging up to RMB 1 million); (2) personal fines for directly responsible persons (typically ranging up to RMB 100,000); and (3) in particularly serious circumstances, suspension or shutdown of offending websites and businesses, including revocation of operating permits and business licenses.
Final versions of the draft implementing regulation and the draft technical guidance document on the treatment of cross-border transfers of personal information are forthcoming. When issued, they are expected to finalize and clarify the following prospective obligations:
- restrictions on cross-border transfers of personal information (and “important information”), including a notice and consent obligation specific to cross-border transfers; and
- procedures and standards for “security assessments,” which validate the continuation of cross-border transfers of personal information and “important information.”
The draft version of the implementing regulation on the treatment of cross-border transfers of personal information contains a grace period, under which “network operators” would not be required to comply with the cross-border transfer requirements until December 31, 2018. The final version likely will contain a similar grace period.
Recently, the Colorado Division of Securities (the “Division”) published cybersecurity regulations for broker-dealers and investment advisers regulated by the Division. Colorado’s cybersecurity regulations follow similar regulations enacted in New York that apply to certain state-regulated financial institutions.
The regulations obligate covered broker-dealers and investment advisers to establish and maintain written cybersecurity procedures designed to protect “confidential personal information,” which is defined to include a Colorado resident’s first name or first initial and last name, plus (1) Social Security number; (2) driver’s license number or identification card number; (3) account number or credit or debit card number, in combination with any required security code, access code or password that would permit access to a resident’s financial account; (4) digitized or other electronic signature; or (5) user name, unique identifier or electronic mail address in combination with a password, access code, security question or other authentication information that would permit access to an online account.
The cybersecurity procedures must include:
- an annual assessment by the firm or an agent of the firm of the potential risks and vulnerabilities to the confidentiality, integrity and availability of confidential personal information;
- the use of secure email for email containing confidential personal information, including use of encryption and digital signatures;
- authentication practices for employee access to electronic communications, databases and media;
- procedures for authenticating client instructions received via electronic communication; and
- disclosure to clients of the risks of using electronic communications.
In determining whether a firm’s cybersecurity procedures are reasonably designed, the Division may consider the firm’s size, relationships with third parties and cybersecurity policies and procedures. The Division may also consider the firm’s (1) authentication practices, (2) use of electronic communications, (3) use of automatic locking mechanisms for devices that have access to confidential personal information and (4) process for reporting lost or stolen devices.
The Colorado Secretary of State will set an effective date for the Colorado regulations after the Colorado Attorney General’s office issues an opinion on the regulations.
On May 25, 2017, Oregon Governor Kate Brown signed into law H.B. 2090, which updates Oregon’s Unlawful Trade Practices Act by holding companies liable for making misrepresentations on their websites (e.g., in privacy policies) or in their consumer agreements about how they will use, disclose, collect, maintain, delete or dispose of consumer information. Pursuant to H.B. 2090, a company engages in an unlawful trade practice if it makes assertions to consumers regarding the handling of their information that are materially inconsistent with its actual practices. Consumers can report violations to the Oregon Attorney General’s consumer complaint hotline. H.B. 2090 reinforces the significance of carefully drafting clear, accurate privacy policies and complying with those policies’ provisions.
On May 10, 2017, the U.S. Department of Health and Human Services’ Office for Civil Rights (“OCR”) announced a $2.4 million civil monetary penalty against Memorial Hermann Health System (“MHHS”) for alleged violations of the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) Privacy Rule.
The penalty followed an OCR compliance review of MHHS based on multiple media reports suggesting that MHHS had disclosed a patient’s protected health information (“PHI”) without authorization. OCR’s review focused on an incident that occurred when an MHHS patient allegedly presented fraudulent identification and was subsequently arrested. MHHS senior management approved the publishing of a press release about the incident that contained the patient’s name, an impermissible disclosure of PHI in violation of the Privacy Rule. OCR’s review further determined that MHHS failed to timely document the sanctions it issued to its personnel for disclosing the patient’s PHI. Under the terms of OCR’s resolution agreement, MHHS must update its policies and procedures on safeguarding PHI from impermissible uses and disclosures, as well as train its workforce on compliance.
“Senior management should have known that disclosing a patient’s name on the title of a press release was a clear HIPAA Privacy violation that would induce a swift OCR response,” said OCR Director Roger Severino. “This case reminds us that organizations can readily cooperate with law enforcement without violating HIPAA, but that they must nevertheless continue to protect patient privacy when making statements to the public and elsewhere.” This settlement, the eighth announced this year, signals OCR’s increased enforcement of the Privacy Rule.
On May 12, 2017, a massive ransomware attack began affecting tens of thousands of computer systems in over 100 countries. The ransomware, known as “WannaCry,” leverages a Windows vulnerability to encrypt files on infected systems and demand payment for their release. If payment is not received within a specified time frame, the ransomware automatically deletes the files. The attack has affected a wide range of organizations around the world, including businesses, hospitals, utilities and government entities.
These types of incidents can have significant legal implications for affected entities, particularly in industries for which data access and continuity are critical (health care and finance are especially vulnerable). As affected entities work to understand and respond to the threat of ransomware, below is a summary of key legal considerations:
- FTC Enforcement. In a November 2016 blog entry, the FTC noted that “a business’ failure to secure its networks from ransomware can cause significant harm to the consumers (and employees) whose personal data is hacked. And in some cases, a business’ inability to maintain its day-to-day operations during a ransomware attack could deny people critical access to services like health care in the event of an emergency.” The FTC also noted that “a company’s failure to update its systems and patch vulnerabilities known to be exploited by ransomware could violate Section 5 of the FTC Act.” In various FTC enforcement actions (including those against Wyndham Worldwide Corporation and ASUSTeK Computer, Inc.), the FTC has demonstrated its willingness to bring Section 5 enforcement actions against companies that experience data security incidents resulting from malware exploitation of vulnerabilities. In the event of a security compromise, the FTC also may consider the accuracy of promises an organization has made to consumers regarding the security of its systems. The FTC has used the unfairness and deception doctrines to pursue companies that misrepresented the security measures used to protect consumers’ personal information from access by unauthorized parties. Nearly all data security actions brought by the FTC have been settled and have resulted in comprehensive settlement agreements that typically impose obligations for up to 20 years.
- Breach Notification Laws. In the U.S., 48 states, the District of Columbia, Guam, Puerto Rico and the U.S. Virgin Islands have laws that require notification to affected individuals (and in some states, regulators) in the event of unauthorized acquisition of or access to personal information. Certain federal laws, such as the Health Insurance Portability and Accountability Act (“HIPAA”), also require notification for certain breaches of covered information, and a growing number of breach notification laws are being adopted internationally. To the extent a ransomware attack results in the unauthorized acquisition of or access to covered information, applicable breach notification laws may impose notification obligations on affected entities.
- Litigation. In the event that ransomware results in a breach of covered information, litigation is another potential risk. Despite the difficulty in bringing successful lawsuits against affected entities, plaintiffs’ lawyers continue to actively pursue newsworthy breaches, as businesses are paying significant amounts in settlements with affected individuals. Affected entities also may face lawsuits from their business partners whose data is involved in the attack, and often battle insurers over coverage of costs associated with the attack. Businesses must also be cognizant of cyber-related shareholder derivative lawsuits, which increasingly follow from catastrophic security breaches.
- Data Security Laws. A number of U.S. states have enacted laws that require organizations that maintain certain types of personal information about state residents to adhere to general information security requirements with respect to that personal information. As a general matter, these laws (such as Section 1798.81.5 of the California Civil Code) require businesses that own or license personal information about state residents to implement and maintain reasonable security procedures and practices to protect the information from unauthorized access, destruction, use, modification or disclosure. To the extent a ransomware attack results from a failure to implement reasonable safeguards, affected entities may be at risk of legal exposure under the relevant state security laws.
- Agency Guidance. Given the evolving nature of ransomware attacks, government agencies are continuously developing recommendations to help businesses respond. For example, the Department of Health and Human Services Office for Civil Rights, which enforces HIPAA, published a fact sheet advising health care entities on methods for preventing, investigating and recovering from ransomware attacks. The FBI has also developed ransomware resources directed towards Chief Information Security Officers and CEOs. This guidance should be carefully considered to help prevent and recover from ransomware attacks and to understand the potential criminal and enforcement implications of such attacks.
Ransomware is a growing concern, and while the recent global attack has been the most high-profile attack to date, it is part of an overall trend in the evolving threat landscape. Businesses and other organizations should take into account the above legal considerations in their efforts to prevent, investigate and recover from these disruptive attacks.
On April 24, 2017, the U.S. Department of Health and Human Services’ Office for Civil Rights (“OCR”) announced that it had entered into a resolution agreement with CardioNet, Inc. (“CardioNet”) stemming from gaps in policies and procedures uncovered after CardioNet reported breaches of unsecured electronic protected health information (“ePHI”). CardioNet provides patients with an ambulatory cardiac monitoring service, and the settlement is OCR’s first with a wireless health services provider.
In early 2012, CardioNet submitted two breach notifications to OCR, one of which was prompted by the theft of a laptop from an employee’s parked vehicle outside of the employee’s home. During its subsequent investigation, OCR determined that CardioNet did not have an adequate risk analysis or risk management plan in place at the time of the theft, and that certain CardioNet policies and procedures addressing HIPAA Security Rule requirements existed only in draft form, having never been implemented. Additionally, CardioNet failed to produce any final policies and procedures regarding the implementation of safeguards for ePHI.
The resolution agreement required CardioNet to pay $2.5 million and enter into a corrective action plan (the “CAP”), which obligates CardioNet to:
- conduct a risk analysis;
- develop and implement a risk management plan;
- implement secure device and media controls;
- certify that all laptops, flash drives, SD cards and other portable media devices are encrypted; and
- review and revise its training program for the Security Rule.
In addition to the above, the CAP requires CardioNet to report to OCR if it determines that a member of its workforce has failed to comply with its Security Rule policies and procedures (including corrective actions taken) and to submit reports on its compliance with the CAP to OCR.
OCR Director Roger Severino stated that “[m]obile devices in the health care sector remain particularly vulnerable to theft and loss” and that “[f]ailure to implement mobile device security by Covered Entities and Business Associates puts individuals’ sensitive health information at risk.”
Earlier this month, the New York State Department of Financial Services (“NYDFS”) published FAQs and key dates for its cybersecurity regulation for financial institutions (the “NYDFS Regulation”), which became effective on March 1, 2017.
The FAQs address topics including:
- whether a covered entity is required to give notice to consumers affected by a cybersecurity event;
- whether a covered entity may adopt portions of an affiliate’s cybersecurity program without adopting all of it;
- whether DFS-authorized New York branches, agencies and representative offices of out-of-country foreign banks are required to comply with the NYDFS Regulation;
- what constitutes “continuous monitoring” for purposes of the NYDFS Regulation;
- how a covered entity should submit Notices of Exemption, Certifications of Compliance and Notices of Cybersecurity Events; and
- whether an entity can be both a covered entity and a third-party service provider under the NYDFS Regulation.
The NYDFS also listed key dates for the NYDFS Regulation, which include:
- March 1, 2017 – the NYDFS Regulation becomes effective.
- August 28, 2017 – the 180-day transitional period ends and covered entities are required to be in compliance with requirements of the NYDFS Regulation unless otherwise specified.
- September 27, 2017 – the initial 30-day period for filing Notices of Exemption ends.
- February 15, 2018 – covered entities are required to submit the first certification under the NYDFS Regulation on or prior to this date.
- March 1, 2018 – the one-year transitional period ends. Covered entities are required to comply with certain requirements, such as those related to penetration testing, vulnerability assessments, risk assessment and cybersecurity training.
- September 3, 2018 – the eighteen-month transitional period ends. Covered entities are required to comply with audit trail, data retention and encryption requirements.
- March 1, 2019 – the two-year transitional period ends. Covered entities are required to develop a third-party service provider compliance program.
At a recent conference of the National Association of Insurance Commissioners, Maria Vullo, the NYDFS superintendent, stated that “The New York regulation is a road map with rules of the road.”
On April 12, 2017, the U.S. Department of Health and Human Services’ Office for Civil Rights (“OCR”) entered into a resolution agreement with Metro Community Provider Network (“MCPN”) that stemmed from MCPN’s lack of a risk analysis and risk management plan that addressed risks and vulnerabilities to protected health information (“PHI”).
In January 2012, MCPN submitted a breach report to OCR indicating that it had suffered a breach following a phishing incident that affected 3,200 patients. OCR investigated MCPN and found that, while MCPN had taken corrective action following the incident, it did not conduct a risk analysis until February 2012 and never implemented a risk management plan. In addition, the risk analysis MCPN eventually conducted was deemed “insufficient to meet the requirements of the Security Rule.”
The resolution agreement requires MCPN to pay $400,000 to OCR and enter into a Corrective Action Plan that obligates MCPN to:
- conduct a risk analysis and submit it to OCR for review and approval;
- implement a risk management plan to address and mitigate the risks and vulnerabilities identified in the risk analysis;
- revise its policies and procedures based on the findings of the risk analysis;
- review and revise its HIPAA training materials;
- report any events of noncompliance with its HIPAA policies and procedures; and
- submit annual compliance reports for a period of three years.
In the settlement with MCPN, OCR balanced MCPN’s HIPAA violations with its status as a federally qualified health center that provides medical care to patients who have incomes at or below the poverty level. OCR Director Roger Severino stated that “Patients seeking health care trust that their providers will safeguard and protect their health information. Compliance with the HIPAA Security Rule helps covered entities meet this important obligation to their patient communities.”
On April 6, 2017, New York Attorney General Eric T. Schneiderman announced that privacy compliance company TRUSTe, Inc., agreed to settle allegations that it failed to properly verify that customer websites aimed at children did not run third-party software to track users. According to Attorney General Schneiderman, the enforcement action taken by the NY AG is the first to target a privacy compliance company over children’s privacy.
TRUSTe was certified by the FTC to operate a Children’s Online Privacy Protection Act (“COPPA”) safe harbor program, under which companies could use its COPPA services to demonstrate compliance with the law. The NY AG alleged that TRUSTe failed to run scans of “most or all” of its 32 customers’ websites for third-party tracking technology on the children’s webpages of those websites. The NY AG further alleged that TRUSTe “failed to make a reasonable determination as to whether third-party tracking technologies present on clients’ websites violated COPPA, certifying child-directed websites despite information indicating that third parties present on those websites collected and used the personal information of users in a manner prohibited by COPPA.”
Under the terms of the settlement, TRUSTe agreed to pay $100,000 and “adopt new measures to strengthen its privacy assessments,” including (1) conducting an annual review of the information policies, practices and representations of each customer that participates in its COPPA safe harbor program; (2) requiring customers to conduct comprehensive internal assessments of their practices relating to information collection and use; and (3) providing regular training to individuals responsible for performing assessments within the COPPA safe harbor program.
On April 12, 2017, the Centre for Information Policy Leadership (“CIPL”) at Hunton & Williams LLP issued a discussion paper on Certifications, Seals and Marks under the GDPR and Their Roles as Accountability Tools and Cross-Border Data Transfer Mechanisms (the “Discussion Paper”). The Discussion Paper sets forth recommendations concerning the implementation of the EU General Data Protection Regulation’s (“GDPR’s”) provisions on the development and use of certification mechanisms. The GDPR will become effective on May 25, 2018. The EU Commission, the Article 29 Working Party, individual EU data protection authorities (“DPAs”) and other stakeholders have begun to consider the role of GDPR certifications and how to develop and implement them. CIPL’s Discussion Paper is meant as formal input to that process.
Certifications, seals and marks have the potential to play a significant role in enabling companies to achieve and demonstrate organizational accountability and GDPR compliance for some or all of their services, products or activities. The capability of certifications to provide a comprehensive GDPR compliance structure will be particularly useful for small and medium-sized enterprises. For large and multinational companies, certifications may facilitate business arrangements with business partners and service providers. In addition, certifications, seals and marks can be used as accountable, safe and efficient cross-border data transfer mechanisms under the GDPR, provided they are coupled with binding and enforceable commitments. Finally, there is potential for creating interoperability with other legal regimes, as well as with similar certifications, seals and marks in other regions. Thus, as explained in the Discussion Paper, certifications may present real benefits for all stakeholders, including individuals, organizations and DPAs.
To reap the full benefit of certifications, however, CIPL argues that certifications must be efficiently operated, properly incentivized and accompanied by clear benefits for certified organizations. Otherwise, organizations will be reluctant to invest the time and money needed to obtain and maintain GDPR certifications.
The Discussion Paper contains the following “Top Ten” messages:
- Certification should be available for a product, system, service, particular process or an entire privacy program.
- There is a preference for a common EU GDPR baseline certification for all contexts and sectors, which can be differentiated in its application by different certification bodies during the certification process.
- The EU Commission and/or the European Data Protection Board (“EDPB”), in collaboration with certification bodies and industry, should develop the minimum elements of this common EU GDPR baseline certification, which may be used directly, or to which specific other sectoral or national GDPR certifications should be mapped.
- The differentiated application of the common EU GDPR certification for specific sectors may be informed by sector-specific codes of conduct.
- Overlap and proliferation of certifications should be avoided, so as not to create consumer and stakeholder confusion or make certification less attractive to organizations seeking it.
- Certifications must be adaptable to different contexts, scalable to the size of the company and nature of the processing, and affordable.
- GDPR certifications must be consistent with, and take into account, other certification schemes, and be as interoperable with them as possible (including ISO/IEC standards, the EU-U.S. and Swiss-U.S. Privacy Shield frameworks, APEC CBPR and the Japan Privacy Mark).
- The EU Commission and/or the EDPB should prioritize developing a common EU GDPR certification for purposes of data transfers pursuant to Article 46(2)(f).
- Organizations should be able to leverage their BCR approvals to receive or streamline certification under an EU GDPR certification.
- DPAs should incentivize and publicly affirm certifications as a recognized means to demonstrate GDPR compliance, and as a mitigating factor in case of enforcement, subject to the possibility of review of specific instances of noncompliance.
The Discussion Paper was developed in the context of CIPL’s ongoing GDPR Implementation Project, a multi-year initiative involving research, workshops, webinars and white papers, supported by over 70 private sector organizations, with active engagement and participation by many EU-based data protection and governmental authorities, academics and other stakeholders.
On April 4, 2017, the Article 29 Working Party (“Working Party”) adopted its draft Guidelines on Data Protection Impact Assessment and determining whether processing is “likely to result in a high risk” for the purposes of Regulation 2016/679 (the “Guidelines”). The Guidelines aim to clarify when a data protection impact assessment (“DPIA”) is required under the EU General Data Protection Regulation (“GDPR”). The Guidelines also provide criteria to Supervisory Authorities (“SAs”) to use to establish their lists of processing operations that will be subject to the DPIA requirement.
The Guidelines further explain the DPIA requirement and provide a few recommendations:
- Scope of a DPIA. The Working Party confirms that a single DPIA may involve a single data processing operation or a set of similar processing operations (i.e., with respect to the risks they present).
- Processing operations that are subject to a DPIA. The Working Party reiterates that a DPIA is mandatory where processing is likely to result in a high risk to the rights and freedoms of individuals. The Working Party highlights several criteria for SAs to take into consideration when establishing their lists of the kinds of processing activities that require a DPIA, including (1) evaluation or scoring, including profiling and predicting; (2) automated decision-making by the data controller with legal or similarly significant effects on individuals; (3) systematic monitoring of individuals; (4) processing of personal data on a large scale; and (5) matching or combining datasets. According to the Working Party, the more of these criteria that are met, the more likely it is that the processing activities present a high risk to individuals and therefore require a DPIA. The risk assessment of certain data processing operations, however, must still be made on a case-by-case basis. The Guidelines further outline cases where a DPIA would not be required, including, for example, where the processing is not likely to result in a high risk to the rights and freedoms of individuals, or where a DPIA has already been conducted for similar data processing operations. In addition, according to the Working Party, a DPIA must be reviewed periodically, in particular when there is a change in the risks presented by the processing operations. Finally, the Working Party specifies that the DPIA requirement in the GDPR applies to processing operations initiated after the GDPR becomes applicable (i.e., as of May 25, 2018), although it recommends that data controllers anticipate the GDPR and carry out DPIAs for processing operations already underway.
- How to carry out a DPIA. Where likely high-risk processing is identified, the Working Party recommends that the DPIA be carried out prior to the processing, and as early as possible in the design of the processing operation. The data controller is responsible for ensuring that a DPIA is carried out, and must cooperate with and seek the advice of the data protection officer. In addition, where a data processor is involved in the processing, it must assist the data controller in carrying out the DPIA. Further, the Guidelines reiterate that data controllers have some flexibility in determining the structure and form of a DPIA. In this respect, Annex 2 of the Guidelines provides a list of criteria that data controllers can use to assess whether a DPIA, or a methodology for carrying out a DPIA, is sufficiently comprehensive to comply with the GDPR. Finally, the Working Party recommends that data controllers publish their DPIAs, although this is not a strict requirement under the GDPR.
- Consultation of SAs. The Working Party reiterates that data controllers must consult SAs when they cannot find sufficient measures to mitigate the risks of a processing and the residual risks are still high, as well as in specific cases where required by EU Member State law.
- Conclusion and Recommendations. Finally, the Working Party reiterates the importance of DPIAs as a GDPR compliance tool, in particular where high-risk data processing is planned or is taking place. Whenever likely high-risk processing is identified, the Working Party recommends that data controllers: (1) choose a DPIA methodology or specify and implement a systematic DPIA process, (2) provide the DPIA report to the competent SA where required, (3) consult the SA where required, (4) periodically review the DPIA and (5) document the decisions taken in the context of the DPIA.
Annex 1 of the Guidelines contains some examples of existing DPIA frameworks, including the ones published by the Spanish, French, German and UK SAs.
The Working Party will accept comments on the draft Guidelines until May 23, 2017.
On April 4, 2017, the Article 29 Working Party (the “Working Party”) adopted an Opinion on the European Commission’s proposed ePrivacy Regulation (the “Proposed ePrivacy Regulation”). The Proposed ePrivacy Regulation is intended to replace the ePrivacy Directive and to increase harmonization of ePrivacy rules in the EU. A regulation is directly applicable in all EU Member States, while a directive requires transposition into national law.
The Working Party welcomes the Proposed ePrivacy Regulation, but outlines several points of concern that it believes should be addressed during the legislative process, which is intended to be completed by May 2018, when the EU General Data Protection Regulation (“GDPR”) takes effect.
Key Aspects of the Proposed ePrivacy Regulation
- Consistency with the GDPR. The Working Party welcomes the fact that the same authority responsible for monitoring compliance with the GDPR will also be responsible for the enforcement of the Proposed ePrivacy Regulation and will be able to impose similar fines. Furthermore, the Working Party favors the removal of the existing sector-specific data breach notification rules in the ePrivacy context, consistent with the GDPR’s general data breach notification rule applicable to all sectors.
- Extended Scope. The Working Party welcomes the expansion of the Proposed ePrivacy Regulation to include Over-The-Top providers in addition to traditional telecom operators. Moreover, the Working Party favors the clarification that the Proposed ePrivacy Regulation covers machine-to-machine interaction as well as content and associated metadata. The Opinion also favors the recognition of the importance of anonymization, the broad formulation of the protection of terminal equipment and the inclusion of legal persons in the scope of the Proposed ePrivacy Regulation.
- Consent. The Working Party welcomes the clarification that Internet access and mobile telephony are essential services, and that providers of these services cannot “force” their customers to consent to any data processing that is unnecessary for the provision of those services. According to the Working Party, given people’s dependence on these essential services, consent to the processing of their communications for additional purposes (such as advertising and marketing) is not valid. In addition, the Working Party welcomes the harmonization of the consent requirement for the inclusion of natural persons’ personal data in directories. Finally, the Working Party appreciates that the prohibition on collecting information from end users’ terminal equipment does not apply to the measurement of web traffic under certain conditions.
Points of Concern
- WiFi tracking. According to the Working Party, the obligations in the Proposed ePrivacy Regulation for the tracking of the location of terminal equipment should comply with the GDPR requirements. Specifically, the Working Party notes that MAC addresses are personal data, even after security measures, such as hashing, have been implemented. Depending on the purpose of the data collection, the Working Party notes that tracking under the GDPR is likely either to be subject to consent, or may be performed if the collected personal data is anonymized (preferably immediately after collection). Finally, the Working Party invites the European Commission to promote a technical standard for mobile devices to automatically signal an objection against such tracking.
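The Working Party’s point that hashed MAC addresses remain personal data can be illustrated with a short sketch. Because the MAC address space actually in use is small and enumerable (a known 24-bit vendor prefix leaves only 2^24 device suffixes), an unsalted hash can be reversed by brute force, so hashing alone pseudonymizes rather than anonymizes. The addresses and vendor prefix below are hypothetical, and this is only an illustration of the principle, not of any specific tracking system:

```python
import hashlib

def hash_mac(mac):
    """Unsalted hash of a MAC address -- a pseudonym, not anonymization."""
    return hashlib.sha256(mac.lower().encode()).hexdigest()

# A tracker stores only the hash, assuming the MAC is thereby "anonymized".
observed_hash = hash_mac("a4:5e:60:00:00:2a")  # hypothetical device

def reverse_hash(target, oui_prefix="a4:5e:60"):
    """Brute-force the 24-bit device portion under a known vendor prefix.

    2^24 candidates is trivial work for modern hardware, so the stored
    hash can be linked back to a single device -- i.e., it is still
    personal data in the Working Party's analysis."""
    for n in range(2 ** 24):
        b = n.to_bytes(3, "big")
        candidate = f"{oui_prefix}:{b[0]:02x}:{b[1]:02x}:{b[2]:02x}"
        if hash_mac(candidate) == target:
            return candidate
    return None

recovered = reverse_hash(observed_hash)  # recovers "a4:5e:60:00:00:2a"
```

This is why the Working Party points to anonymization, preferably immediately after collection, rather than hashing, as the alternative to consent.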
- Analysis of content and metadata. The Working Party appreciates the recognition that metadata may reveal very sensitive data and that analysis of content is high-risk processing. According to the Working Party, it should be prohibited to process content and metadata of communications without the consent of both sender and recipient, except for specific purposes permitted by the Proposed ePrivacy Regulation, including security and billing purposes, as well as spam filtering purposes. To that end, the Working Party recommends that the Proposed ePrivacy Regulation also permit the processing of content and metadata of communications for purely household usage as well as for individual work-related usage. According to the Working Party, the analysis of content and metadata of communications for all other purposes, such as analytics, profiling, behavioral advertising or other commercial purposes, should require consent from all end-users.
- Tracking walls. The Working Party advocates that the Proposed ePrivacy Regulation should include an explicit prohibition of tracking walls (i.e., the practice whereby access to a website or service is denied unless individuals agree to be tracked on other websites or services). According to the Working Party, such “take it or leave it” approaches are rarely legitimate. The Working Party also recommends that access to content on websites and apps should not be made conditional on the acceptance of intrusive processing activities, such as cookies, device fingerprinting, injection of unique identifiers or other monitoring techniques.
- Privacy by default regarding terminal equipment and software. The Working Party recommends that terminal equipment and software must, by default, “offer privacy protective settings, and offer clear options to users to confirm or change these default settings during installation.” The Working Party recommends that users have the ability to provide consent through their browser settings and have the option to opt-in to Do Not Track.
The Working Party also made additional recommendations with regard to clarifying the Proposed ePrivacy Regulation’s extraterritorial scope, the conditions for obtaining granular consent through browser settings and including behavioral advertisements in the direct marketing rules. Finally, the Working Party discussed several other issues that should be clarified to ensure legal certainty, such as the conditions for the employer’s interference with company-issued devices.
Haim Ravia and Dotan Hammer of Pearl Cohen Zedek Latzer Baratz recently published an article outlining Israel’s new Protection of Privacy Regulations (“Regulations”), passed by the Knesset on March 21, 2017. The Regulations will impose mandatory comprehensive data security and breach notification requirements on anyone who owns, manages or maintains a database containing personal data in Israel.
The Regulations will become effective in late March 2018.
On March 17, 2017, retailer Neiman Marcus agreed to pay $1.6 million as part of a proposed settlement (the “Settlement”) to a consumer class action lawsuit stemming from a 2013 data breach that allegedly compromised the credit card data of approximately 350,000 customers.
The consumer plaintiffs sued Neiman Marcus in March 2014, alleging that the company failed to protect customers’ privacy and waited 28 days to inform affected customers of the breach. Neiman Marcus claimed that, rather than 350,000 customers, the breach affected only 9,200 customers. The case initially was dismissed on the grounds that the affected customers lacked standing, having been reimbursed for their losses; the Seventh Circuit reversed and remanded, finding that costs for preventative measures like credit monitoring sufficiently established standing.
Under the terms of the Settlement, each class member who submits a valid claim is entitled to receive up to $100. Each class representative will receive up to $2,500 in service awards, and class counsel will seek up to $530,000 in attorneys’ fees and costs. The Settlement also requires Neiman Marcus to maintain the data security measures it implemented in the wake of the breach, including the (1) appointment of a Chief Information Security Officer, (2) creation of an Information Security organizational unit, (3) increase in frequency and depth of cybersecurity reporting to the executive team and Board of Directors, (4) use of chip-based payment card infrastructure in stores, (5) education and training of employees on privacy and data security matters, (6) collection and analysis of logs of Neiman Marcus systems for potential security threats and (7) information sharing initiatives. The Settlement awaits preliminary approval from the United States District Court for the Northern District of Illinois.
On March 9, 2017, AllClear ID hosted a webinar with Hunton & Williams partner and chair of the Global Privacy and Cybersecurity practice Lisa J. Sotto on the new cybersecurity regulations from the New York State Department of Financial Services (“NYDFS”). The NYDFS regulations impose significant cybersecurity requirements on impacted businesses that will dictate how they plan for, respond to and recover from data security events.
Sotto and AllClear ID founder and chief executive officer, Bo Holland, discussed the key areas your business should address first in this new regulatory environment. Sotto points out that these regulations will “affect companies far and wide,” including “any vendor that touches a New York banking, insurance or financial organization.”
On March 2, 2017, the UK Information Commissioner’s Office (“ICO”) published draft guidance regarding the consent requirements of the EU General Data Protection Regulation (“GDPR”). The guidance sets forth how the ICO interprets the GDPR’s consent requirements, and its recommended approach to compliance and good practice. The ICO guidance precedes the Article 29 Working Party’s guidance on consent, which is expected in 2017.
The ICO guidance emphasizes that the GDPR sets a high standard for individuals’ consent. For organizations to be able to rely on consent as a legal basis for processing, and for that consent to be valid, it must be:
- Unbundled: Consent requests must be separate from other terms and conditions.
- Active: Consent can only result from a clear statement or affirmative action of an individual’s wishes; pre-checked opt-in boxes are invalid and, although the ICO does not completely rule out implied consent in specific circumstances, “opt-out is not consent.”
- Granular: The controller must provide granular options for obtaining consent separately for different processing operations and different purposes.
- Named: Organizations and any third parties who will be relying on consent must be named in the notice – pursuant to the guidance, even precisely defined categories of third-party organizations will not be acceptable under the GDPR.
- Documented: Controllers must keep records to demonstrate what the individual has consented to, including what they were told in privacy notices or policies existing at the time of consent, and when and how they consented.
- Easy to Withdraw: Controllers must tell individuals that they have the right to withdraw their consent at any time, and how to do this with simple and effective withdrawal mechanisms.
- No Imbalance in the Relationship: Consent cannot be freely given if there is an imbalance in the relationship between the individual and the controller. This will make consent particularly difficult for public authorities and for employers, who should look for an alternative lawful basis.
In providing guidance on the meaning of the term “unambiguous consent,” the ICO has stressed that consent must be demonstrated through a clear, affirmative act. Silence, pre-ticked boxes and inactivity do not represent consent. The affirmative act can be expressed in a written or oral statement, by electronic means, by ticking an opt-in box, by choosing a technical standard, by switching the technical standard from the default, or by another statement or act that clearly indicates acceptance. The ICO accepts that there may be implied consent in some circumstances, such as when an individual drops a business card to participate in a contest or submits an online survey. The act itself signifies consent to that specific processing of data for those limited purposes.
“Explicit consent” in the GDPR represents an even higher standard than unambiguous consent. It must be separate from any other consents and must be expressly confirmed through the use of words. Explicit consent must specifically refer to the element that requires consent to be explicit (e.g., to sensitive data that is processed or to data transferred outside the EU, along with the underlying risks of the transfer).
Through the guidance, it is clear that the ICO sees consent as a dynamic concept that evolves over time and that is best managed proactively. In addition to keeping a detailed record of consent, controllers are encouraged to ensure ongoing management of consents, choices and controls through privacy dashboards and similar preference and permission management tools. These should include mechanisms for withdrawal of consents and a general “any time opt-out.” In addition, the ICO recommends that controllers review and refresh consents, especially as processing operations and the purposes of processing evolve. In any case, controllers should automatically offer a specific opt-out every two years to individuals with whom they have contact, and send occasional reminders about the ability to withdraw consent. The ICO makes it clear that consent will be an appropriate legal basis only where (1) there is a real choice for individuals, (2) individuals have the ability to exercise actual control over data use and (3) it fulfills all of the GDPR’s requirements. If these conditions are not met, the ICO advises controllers to seek an alternative legal basis for their processing activities.
The ICO’s guidance is subject to public consultation until March 31, 2017.
On March 9, 2017, AllClear ID will host a webinar with Hunton & Williams partner and chair of the Global Privacy and Cybersecurity practice Lisa J. Sotto on the new cybersecurity regulations from the New York State Department of Financial Services (“NYDFS”). The NYDFS regulations will impose significant cybersecurity requirements on impacted businesses that will dictate how they plan for, respond to, and recover from data security events. To be compliant, businesses will need to rethink their cybersecurity programs in light of the many granular requirements in the NYDFS regulations. Join Lisa J. Sotto and AllClear ID founder and chief executive officer, Bo Holland, for a discussion on the key areas your business should address first in this new regulatory environment, including best practices for breach readiness, response and recovery.
On March 1, 2017, the Federal Communications Commission (“FCC”), under the new leadership of Chairman Ajit Pai, voted 2-1 to issue a temporary stay of the data security obligations of the FCC’s Broadband Consumer Privacy Rules (the “Rules”), which were to go into effect March 2, 2017. The temporary stay will remain in place until the FCC is able to act on pending petitions for reconsideration.
A joint press release by FCC Chairman Pai and Acting FTC Chairwoman Maureen K. Ohlhausen describes the stayed provisions as not consistent with the FTC’s privacy framework. The press release expresses the agency heads’ disagreement with the “FCC’s unilateral decision in 2015 to strip the FTC of its authority over broadband providers’ privacy and data security practices” and their belief that “jurisdiction over broadband providers’ privacy and data security practices should be returned to the FTC.” The temporary stay is described as “a step forward” in filling a consumer protection gap that was created by the FCC in 2015. The press release also announces the agencies’ plan to create a “technology-neutral privacy framework for the online world” that would do away with two distinct frameworks—one for Internet service providers and one for all other online companies.
Other elements of the Rules are still scheduled to go into effect later this year and are unaffected by the temporary stay.
China’s new Cybersecurity Law will impose new restrictions on information flows from operators of key information infrastructure, and will become effective in June 2017. Hunton & Williams LLP will host a webinar on China’s New Cybersecurity Law on March 7, 2017, at 12:00 p.m. EST.
On February 16, 2017, the U.S. Department of Health and Human Services’ Office for Civil Rights (“OCR”) entered into a resolution agreement with Memorial Healthcare System (“Memorial”) that emphasized the importance of audit controls in preventing breaches of protected health information (“PHI”). The $5.5 million settlement with Memorial is the fourth enforcement action taken by OCR in 2017, and matches the largest civil monetary penalty ever imposed against a single covered entity.
In April 2012, Memorial submitted a breach report to OCR indicating that it had suffered a breach involving impermissible access to PHI by employees. Memorial supplemented that report three months later, indicating that it had discovered additional impermissible access that resulted in a total of 115,000 affected patients. The PHI involved consisted of patients’ names, dates of birth and Social Security numbers. OCR investigated Memorial and found that the entity had committed several HIPAA violations by (1) impermissibly disclosing PHI in violation of the Privacy Rule, (2) failing to implement procedures to regularly review records of information system activity such as audit logs and (3) failing to implement policies and procedures to review and modify users’ access to PHI.
The resolution agreement requires Memorial to pay $5.5 million to OCR and enter into a Corrective Action Plan that obligates Memorial to:
- conduct a risk analysis and implement a risk management plan;
- revise its policies and procedures regarding information systems activity review and access establishment, modification and termination;
- distribute the revised policies and procedures to its workforce members;
- submit a plan to OCR to internally monitor its compliance with the Corrective Action Plan;
- select and engage an independent third-party assessor to review the entity’s compliance with the Corrective Action Plan;
- report any events of noncompliance with its HIPAA policies and procedures; and
- submit annual compliance reports for a period of three years.
In announcing the settlement with Memorial, OCR Acting Director Robinsue Frohboese stated that “organizations must implement audit controls and review audit logs regularly. As this case shows, a lack of access controls and regular review of audit logs helps hackers or malevolent insiders to cover their electronic tracks, making it difficult for covered entities and business associates to not only recover from breaches, but to prevent them before they happen.”
In connection with the Memorial settlement, OCR also linked to its recent guidance on audit trails. The guidance discusses three types of audit trails: (1) application audit trails, (2) system-level audit trails and (3) user audit trails, and encourages covered entities and business associates to “consider which audit tools may best help them with reducing non-useful information contained in audit records, as well as with extracting useful information.”
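The kind of regular audit-log review that OCR’s guidance contemplates, including its suggestion that entities reduce “non-useful information” in audit records to extract what matters, can be sketched in a few lines. The record layout, field names and threshold below are hypothetical, not drawn from OCR’s guidance; the point is simply to filter out routine entries and surface unusual access patterns such as one user viewing an abnormal number of distinct patient records:

```python
from collections import defaultdict

def flag_unusual_access(entries, threshold=3):
    """Flag users whose count of distinct patients viewed exceeds a threshold.

    A deliberately crude stand-in for the regular audit-log review OCR's
    guidance discusses; real programs would use richer baselines."""
    seen = defaultdict(set)
    for e in entries:
        if e["action"] == "view":  # drop routine system events as "non-useful"
            seen[e["user"]].add(e["patient_id"])
    return sorted(u for u, pts in seen.items() if len(pts) > threshold)

# Hypothetical application audit-trail entries.
log = [
    {"user": "nurse01", "action": "view", "patient_id": p} for p in (1, 2)
] + [
    {"user": "clerk07", "action": "view", "patient_id": p} for p in range(100, 110)
]

flagged = flag_unusual_access(log)  # ["clerk07"]
```

Even a simple filter like this would have surfaced the kind of repeated impermissible employee access at issue in the Memorial breach far sooner than an unreviewed log.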
On January 18, 2017, the U.S. Department of Health and Human Services’ Office for Civil Rights (“OCR”) entered into a resolution agreement with MAPFRE Life Insurance Company of Puerto Rico (“MAPFRE”) relating to a breach of protected health information (“PHI”) contained on a portable storage device. This is the second enforcement action taken by OCR in 2017, following the action taken against Presence Health earlier this month for failing to make timely breach notifications.
In 2011, MAPFRE, which underwrites group health insurance plans, submitted a breach report to OCR indicating that it had suffered a breach when a USB data storage device was stolen from the company’s IT Department. OCR investigated MAPFRE and found that the entity had committed several HIPAA violations by failing to (1) conduct an adequate risk analysis, (2) implement a security awareness and training program and (3) encrypt ePHI on portable devices.
The resolution agreement requires MAPFRE to pay $2,204,182 to OCR and enter into a Corrective Action Plan that obligates MAPFRE to:
- conduct a risk analysis and implement a risk management plan;
- implement a process for evaluating environmental or operational changes that affect the security of ePHI;
- modify its policies and procedures based on the risk analysis and as necessary to comply with the HIPAA Privacy and Security Rules;
- distribute the revised policies and procedures to its workforce members;
- submit its security awareness training program to OCR and provide training to all workforce members;
- report any events of noncompliance with its HIPAA policies and procedures; and
- submit annual compliance reports for a period of three years.
In announcing the settlement with MAPFRE, OCR Director Jocelyn Samuels stated that, “[c]overed entities must not only make assessments to safeguard ePHI, they must act on those assessments as well.”
On January 19, 2017, the North American Electric Reliability Corporation (“NERC”) released a draft Reliability Standard CIP-013-1 – Cyber Security – Supply Chain Risk Management (the “Proposed Standard”). The Proposed Standard addresses directives of the Federal Energy Regulatory Commission (“FERC”) in Order No. 829 to develop a new or modified reliability standard to address “supply chain risk management for industrial control system hardware, software, and computing and networking services associated with bulk electric system operations.”
The Proposed Standard requires each affected entity to develop and implement a cybersecurity risk management plan that addresses the following security objectives: (1) software integrity and authenticity, (2) vendor remote access, (3) information system planning and (4) vendor risk management and procurement controls.
NERC will host a webinar on February 2, 2017 to discuss the Proposed Standard and respond to questions from webinar participants. A formal comment period for the Proposed Standard is now open and will remain open through 8 p.m. ET on Monday, March 6, 2017. NERC must file the final version of the Proposed Standard with FERC by September 27, 2017.
To hear more about the Proposed Standard, listen to Hunton & Williams LLP’s webinar on Supply Chain Cyber Risk Management.
On January 7, 2017, the U.S. Department of Health and Human Services’ Office for Civil Rights (“OCR”) entered into a resolution agreement with Presence Health stemming from the entity’s failure to notify affected individuals, the media and OCR within 60 days of discovering a breach. This marks the first OCR settlement of 2017 and the first enforcement action relating to untimely breach reporting by a HIPAA covered entity.
Presence Health, a large health care network in Illinois with over 150 locations, submitted a breach report to OCR on January 31, 2014, indicating that it had discovered a breach on October 22, 2013 that involved missing paper-based operating room schedules. The schedules contained protected health information (“PHI”) such as patient names, medical record numbers, and dates and types of medical procedures. OCR investigated Presence Health and found that it notified affected individuals about breaches in 2015 and 2016 in an untimely manner that did not meet the 60-day notification requirement.
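The 60-day arithmetic here is straightforward to check. As a worked illustration (dates taken from the facts above), the notification deadline and the actual reporting delay can be computed as follows:

```python
from datetime import date, timedelta

# Dates from the Presence Health resolution agreement facts above.
discovery = date(2013, 10, 22)   # breach discovered
reported = date(2014, 1, 31)     # breach report submitted to OCR

# The Breach Notification Rule requires notification no later than
# 60 days after discovery of the breach.
deadline = discovery + timedelta(days=60)
elapsed = (reported - discovery).days

print(deadline)  # 2013-12-21
print(elapsed)   # 101 days, i.e., 41 days past the 60-day deadline
```

In other words, the January 31, 2014 report to OCR came 101 days after discovery, well beyond the December 21, 2013 deadline.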
The resolution agreement requires Presence Health to pay $475,000 to OCR and enter into a Corrective Action Plan that obligates Presence Health to:
- revise its policies and procedures related to complying with the Breach Notification Rule, including policies and procedures that set forth its workforce members’ roles and responsibilities with respect to (1) receiving and addressing internal and external breach reports, (2) completing risk assessments of potential breaches of unsecured PHI and (3) preparing required notifications to individuals, the media and OCR;
- modify its policies and procedures for sanctions against workforce members who fail to comply with the entity’s HIPAA procedures;
- distribute the revised policies and procedures to all Presence Health workforce members;
- submit its security awareness training program to OCR and provide training to all workforce members;
- report any events of noncompliance with its HIPAA policies and procedures; and
- submit annual compliance reports for a period of two years.
In announcing the settlement with Presence Health, OCR Director Jocelyn Samuels noted that “[c]overed entities need to have a clear policy and procedures in place to respond to the Breach Notification Rule’s timeliness requirements.” She also emphasized that “[i]ndividuals need prompt notice of a breach of their unsecured PHI so they can take action that could help mitigate any potential harm caused by the breach.”
This settlement puts covered entities on notice that they must act quickly following the discovery of a breach of unsecured PHI. It appears OCR will now more vigorously enforce the requirements to notify affected individuals, and to report breaches affecting 500 or more individuals to OCR, within 60 days of discovering a breach.
Last month, the Standing Committee of the National People’s Congress of China published a full draft of the E-commerce Law (the “Draft”) and is giving the general public an opportunity to comment on the draft through January 26, 2017.
The Draft applies to (1) e-commerce activities within China or (2) e-commerce activities involving either domestic enterprises that operate an e-commerce business or customers located in China. In particular, the Draft provides specific protections for “personal information” of e-commerce users, defined as information which can be used, separately or in combination with other information, to identify a specific user. The Draft provides specific examples, including the user’s name, identity certificate number, address, contact information, location, bank card information, transaction records and payment records, as well as express logistics records.
The Draft reiterates that enterprises operating an e-commerce business (“e-commerce enterprises”), which include e-commerce third-party platforms and e-commerce operators, are required to (1) follow the principles of legitimacy, rightfulness and necessity, (2) publish their rules on the collection, processing and use of personal information in advance and (3) obtain the consent of their users to those rules. The Draft does not provide any guidance on the form or manner in which consent must be obtained. The Draft also prohibits e-commerce enterprises from forcing users to consent to their collection, processing and use of personal information by threatening to cease the provision of services.
The processing and use of personal information by e-commerce enterprises must comply with the enterprise’s published processing rules as agreed upon with its users. When amending those rules, an e-commerce enterprise must again obtain the consent of its users, and if a user disagrees with the amendments, the enterprise is required to provide appropriate relief. Where the proposed changes concern the purpose, method or scope of the processing previously agreed upon with its users, the enterprise must inform the user and obtain the user’s express consent.
Before they may exchange and share data and information related to e-commerce, e-commerce enterprises are first required to irreversibly de-personalize the data and information in such a way that it can no longer be used to identify a specific individual (or an associated computer terminal). Additionally, in instances where the processing or use of personal information by an e-commerce enterprise might infringe upon the legitimate rights and interests of a user, the user has the right to request that the e-commerce operating entity cease the infringement. Further, upon expiration of a statutory or agreed-upon retention period, an e-commerce enterprise is required to cease its processing and use of relevant personal information, or delete or destroy such information.
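The Draft does not prescribe any particular de-personalization technique. As an illustration only, under the assumption that "irreversible" means direct identifiers are removed outright (not merely encoded, since an encoding with a retained key could be reversed), a minimal sketch might look like this; all field names are hypothetical:

```python
import secrets

# Hypothetical e-commerce transaction record; field names are illustrative.
record = {
    "name": "Li Wei",
    "id_number": "110101199003070000",
    "bank_card": "6222020200112233445",
    "order_total": 329.00,
    "product_category": "electronics",
}

# Fields that could identify a specific user, separately or in combination.
DIRECT_IDENTIFIERS = {"name", "id_number", "bank_card"}

def depersonalize(rec: dict) -> dict:
    """Drop direct identifiers entirely and tag the record with a freshly
    generated random token. Because no mapping from token to individual is
    kept, the shared record cannot be linked back to a specific user."""
    shared = {k: v for k, v in rec.items() if k not in DIRECT_IDENTIFIERS}
    shared["record_token"] = secrets.token_hex(8)
    return shared

shared_record = depersonalize(record)
print(shared_record)  # transaction data only, no name/ID/card number
```

Note that this sketch addresses only direct identifiers; genuinely preventing re-identification in combination with other data sets (as the Draft's definition of personal information contemplates) would require attention to quasi-identifiers as well.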
According to the Draft, users also have the right to access their personal information. After receiving an access request from a user, the enterprise must promptly provide the relevant information after verifying the user’s identity. Similarly, in cases where a user requests the correction of any incorrect information, the enterprise is required to make the changes in a timely manner.
The Draft also includes provisions governing data breaches. Where personal information has been, or may be, leaked, lost or damaged, the e-commerce enterprise must immediately take remedial measures, promptly notify the affected users and submit a report to the relevant authorities.
The personal information protection requirements laid out in the Draft are generally consistent with those in the new Chinese Cybersecurity Law and in a number of prior regulations, including those governing internet information service providers. This consistency may signal a pattern taking hold among rules governing the processing of personal information in China.
On January 3, 2017, Bloomberg Law: Privacy and Data Security reported that Chilean legislators are soon expected to consider a new data protection law (the “Bill”) which would impose new privacy compliance standards and certain enforcement provisions on companies doing business in Chile.
Chile’s existing data protection law, the Law on the Protection of Private Life (Law No. 19,628), was signed into law in 1999 and does not provide for a privacy regulator with enforcement authority. Analysts expect that the Bill, the details of which have not yet been made public, will modify, rather than replace, the existing law, and will provide for the establishment of a data protection authority. It is expected that the data protection authority would report to another government agency rather than operating entirely independently.
The Bill is expected to be submitted before the legislature’s annual recess in February, though experts doubt the Bill will become law before March 2018, when a new administration will take office. Deputy Finance Minister Alejandro Micco indicated that the Bill aims to address the negative effect of inadequate data protection legislation on the development of global technological services in Chile.
On January 4, 2017, the National Institute of Standards and Technology (“NIST”) announced the final release of NISTIR 8062, An Introduction to Privacy Engineering and Risk Management in Federal Systems. NISTIR 8062 describes the concept of applying systems engineering practices to privacy and sets forth a model for conducting privacy risk assessments on federal systems. According to the NIST, NISTIR 8062 “hardens the way we treat privacy, moving us one step closer to making privacy more science than art.”
The stated goals of NISTIR 8062 are to:
- lay the groundwork for future guidance on how federal agencies will be able to incorporate privacy as an attribute of trustworthy systems through the management of privacy as a collaborative, interdisciplinary engineering practice;
- introduce a set of consistent objectives for privacy engineering and a new model for assessing privacy risks in federal systems; and
- provide a roadmap for evolving these preliminary concepts into actionable guidance, complementary to existing NIST guidance for information security risk management, so that agencies may more effectively meet their obligations under applicable federal privacy requirements and policies.
In its announcement, the NIST explains that the impetus for its work on privacy risk management came, in part, from the fact that there is an abundance of guidance on information security risk management but “no comparable body of work for privacy” and no “widely accepted models for doing the actual [risk] assessment.” As the NIST points out, high-level privacy principles, such as the Fair Information Practice Principles, “aren’t written in terms that system engineers can easily understand and apply.” NISTIR 8062 seeks to begin to close the gap between high-level privacy principles and practical privacy engineering and risk management.
The NIST’s announcement emphasizes that NISTIR 8062 is only an introduction to privacy engineering and risk management concepts. The NIST plans to refine its ideas and develop further guidance in the coming months and years.
On January 3, 2017, the Office of Management and Budget (“OMB”) issued a memorandum (the “Breach Memorandum”) advising federal agencies on how to prepare for and respond to a breach of personally identifiable information (“PII”). The Breach Memorandum, which is intended for each agency’s Senior Agency Official for Privacy (“SAOP”), updates OMB’s breach notification policies and guidelines in accordance with the Federal Information Security Modernization Act of 2014 (“FISMA”).
The Breach Memorandum sets the stage by discussing the evolving threat and risk landscape, noting that there has been a 27 percent increase in the number of incidents reported by federal agencies from 2013 to 2015. The Breach Memorandum defines a “breach,” which is a type of incident, as “[t]he loss of control, compromise, unauthorized disclosure, unauthorized acquisition, or any similar occurrence where (1) a person other than an authorized user accesses or potentially accesses personally identifiable information or (2) an authorized user accesses or potentially accesses personally identifiable information for an other than authorized purpose.” This definition goes beyond the definition contained in many state breach notification laws by including incidents of “potential” access to PII.
The Breach Memorandum next notes the importance of breach response and awareness training, and emphasizes key provisions to include in agency contracts that obligate contractors to (1) encrypt PII in accordance with OMB and agency-specific guidelines, (2) report breaches to the relevant agency as soon as possible and (3) cooperate with any forensic investigation and analysis. With respect to breach reporting, the Breach Memorandum encourages each agency to set up a simple email address, such as breach@[agency].gov, to which individuals may report suspected or confirmed breaches.
The Breach Memorandum then focuses on breach response plans. It requires each SAOP to develop and implement a breach response plan that:
- establishes a Breach Response Team at each agency to be headed by the SAOP;
- identifies applicable privacy compliance documentation, such as system of records notices and privacy impact assessments;
- facilitates information sharing within the agency or between agencies for the purposes of reconciling or eliminating duplicate records, identifying potentially affected individuals or obtaining individuals’ contact information;
- analyzes reporting requirements to determine whether a specific breach requires the agency to notify law enforcement or Congress;
- assesses the risk of harm to potentially affected individuals by considering factors such as the PII at issue, the likelihood of access to and use of the information, and the relevant actors involved;
- mitigates the risk of harm to potentially affected individuals, for example by purchasing identity theft protection services on their behalf; and
- notifies individuals affected by a breach, using the most appropriate method of notification.
Following a breach, agencies must track and document the response to each breach via a standard internal reporting template and identify any lessons learned from a breach. In addition, the SAOP and the agency must annually: (1) conduct a tabletop exercise, (2) review the breach response plan and consider potential updates and (3) submit an annual FISMA report on the adequacy of the agency’s information security policies and procedures.
The Breach Memorandum contains several appendices that can be used as resources for federal agencies, including a model breach reporting template, examples of services an agency may provide to affected individuals and a list of federal laws, executive orders, memoranda and directives that address data breaches.
On December 21, 2016, the Financial Industry Regulatory Authority (“FINRA”) announced that it had fined 12 financial institutions a total of $14.4 million for improper storage of electronic broker-dealer and customer records. Federal securities law and FINRA rules require that business-related electronic records be kept in “write once, read many” (“WORM”) format, which prevents alteration or destruction. FINRA found that the 12 sanctioned firms had failed to store such records in WORM format, in many cases for extended periods of time.
According to FINRA’s press release about the sanctions, it found that “each of these 12 firms had WORM deficiencies that affected millions, and in some cases, hundreds of millions, of records pivotal to the firms’ brokerage businesses, spanning multiple systems and categories of records.” Preventing the alteration or destruction of electronic brokerage records is, as the SEC has previously stated, “the primary means of monitoring compliance with applicable securities laws.” Further, as FINRA noted, these records contain sensitive financial data that is increasingly vulnerable to “aggressive attempts to hack into electronic data repositories.”
The individual fines ranged from $500,000 to $4 million. Brad Bennett, FINRA’s Executive Vice President and Chief of Enforcement, said of the fines, “These disciplinary actions are a result of FINRA’s focus on ensuring that firms maintain accurate, complete and adequately protected electronic records.”
On December 28, 2016, the New York State Department of Financial Services (“DFS”) announced an updated version of its cybersecurity regulation for financial institutions (the “Updated Regulation”). The Updated Regulation will become effective on March 1, 2017.
Key changes from the version that was published in September 2016 include:
- providing a definition of a “Third-Party Service Provider”;
- modifying the definition of “Nonpublic Information” to make it consistent with the definition of private information under New York’s state breach notification law;
- adding “asset inventory and device management” to the list of required components of a covered entity’s cybersecurity policy;
- permitting a covered entity’s Chief Information Security Officer to be employed by an affiliate of the covered entity or by a service provider;
- limiting the requirement for a covered entity to maintain audit trails to cover only cybersecurity events “that have a reasonable likelihood of materially harming any material part of the normal operations of the Covered Entity”;
- eliminating the obligation for covered entities to require multi-factor authentication for employees accessing internal databases; and
- adding a notice of exemption form that covered entities may complete and file with DFS if they believe they are exempt from specific sections of the regulations.
In announcing the Updated Regulation, DFS Superintendent Maria T. Vullo stated that the Updated Regulation “allows an appropriate period of time for regulated entities to review the rule before it becomes final and make certain that their systems can effectively and efficiently meet the risks associated with cyber threats.”
The Updated Regulation will be finalized in January 2017, following a 30-day notice and public comment period.