Across healthcare organizations in the US, malicious actors are successfully leveraging phishing attacks to gain initial access to networks, according to findings from the 2019 HIMSS Cybersecurity Survey published by the Healthcare Information and Management Systems Society (HIMSS).
The European Commission has issued an EU-wide recall of the Safe-KID-One children’s smartwatch marketed by ENOX Group over concerns that the device leaves data such as location history, phone and serial numbers vulnerable to hacking and alteration. The watch is equipped with GPS, a microphone and speaker, and has a companion app that grants parents oversight of the child wearer. According to a February 1, 2019 alert posted on the EU’s recall and notification index for nonfood products, flaws in the product could permit malicious users to send commands to any Safe-KID-One watch, making it call any other number, and to communicate with the child wearing the device or locate the child through GPS. The European Commission concluded that, as a result, the device does not comply with the Radio Equipment Directive (2014/53/EU). This recall follows Germany’s November 2017 ban on smartwatches for children.
With new threats to data emerging every day, public key infrastructure (PKI) has become an increasingly large part of enterprises’ information security and risk management strategies. Research has found that 43% of
The post The Benefits of Correctly Deploying a PKI Solution appeared first on The Cyber Security Place.
Via the inimitable Brian Krebs, writing at Krebs On Security, comes further reportage detailing the continued authentication-flaw exploitation of the GoDaddy, Inc. (NYSE: GDDY) Hole - a seemingly irreparable flaw in their Registrar Line of Business systems, with a never-ending Exploitable Event Horizon.
Integrity is one of the three vital components of securing information held within an organisation. It is about ensuring consistency of
Despite their high-ranking positions, senior executives are reportedly the weak link in the corporate cybersecurity chain, according to a new report from The Bunker, which finds that cyber-criminals often target this known
Cryptocurrency exchange QuadrigaCX has suffered a security incident after it lost control of its customers’ assets. $137 million worth of
The US intelligence community believes that Russia will use cyber means to interfere in Ukraine’s presidential election on March 31. This was stated by Director of National Intelligence Dan Coats at hearings of the US Senate Intelligence Committee.
Coats also said that Russian hackers may launch attacks during the upcoming 2020 US elections.
The United States has said it is ready to protect Ukraine from Russian interference in the elections, as declared by President Donald Trump’s national security advisor, John Bolton, during a visit to the Ukrainian capital, Kiev, in August last year.
In turn, the Head of the Foreign Intelligence Service of Ukraine, Egor Bozhok, recently said that the Russian special services have received $350 million to interfere in the Ukrainian elections.
“The Kremlin will definitely try to interfere in the elections in Ukraine, because Russia has done this before with the United States and African countries,” said the Head of the Security Service of Ukraine, Vasily Gritsak.
The Security Service of Ukraine, the National Police and the Prosecutor General’s Office are ready to resist Russian interference and know where Moscow may strike. Moscow’s most active effort is an information attack on Ukraine through television. In addition, Russia uses propaganda, cyber provocations and financial support for candidates, and will try to seize polling stations.
In January 2019, Hunton Andrews Kurth celebrates the 10-year anniversary of our award-winning Privacy and Information Security Law Blog. Over the past decade, we have worked hard to provide timely, cutting-edge updates on the ever-evolving global privacy and cybersecurity legal landscape. Ten Years Strong: A Decade of Privacy and Cybersecurity Insights is a compilation of our blog’s top ten most read posts over the decade, and addresses some of the most transformative changes in the privacy and cybersecurity field.
European airplane maker Airbus yesterday admitted a data breach of its “Commercial Aircraft business” information systems that allowed intruders to gain access to some of its employees’ personal information. Though
The post Airbus Suffers Data Breach, Some Employees’ Data Exposed appeared first on The Cyber Security Place.
Carolyn Crandall, Chief Deception Officer at Attivo Networks, explores how deception techniques can provide not only early detection of malicious activity but also an invaluable insight into an attacker’s methods. Deception
The post How deception changes the rules of engagement in cyber security appeared first on The Cyber Security Place.
You’ll have to forgive my ignorance—but what is an appropriate gift for Data Privacy Day? Perhaps an encrypted portable drive? That might not be a bad idea, but what I have
The post Cybersecurity Experts Share Insight For Data Privacy Day 2019 appeared first on The Cyber Security Place.
On January 22, 2019, the European Data Protection Board (“EDPB”) issued a report on the Second Annual Review of the EU-U.S. Privacy Shield (the “Report”). Although not binding on EU or U.S. authorities, the Report provides guidance to regulators in both jurisdictions regarding implementation of the Privacy Shield and highlights the EDPB’s ongoing concerns with regard to the Privacy Shield. We previously blogged about the European Commission’s report on the second annual review of the Privacy Shield, and the joint statement of the European Commission and Department of Commerce regarding the second annual review.
In the Report, the EDPB praised certain actions and efforts undertaken by U.S. authorities and the European Commission to implement the Privacy Shield, including the following:
- Efforts by the Department of Commerce to adapt the initial certification process to minimize inconsistencies between the Department’s Privacy Shield List and representations made by certifying organizations (in their privacy notices) regarding their participation in the Privacy Shield;
- Enforcement actions and other oversight measures taken by the Department of Commerce and Federal Trade Commission regarding compliance with the Privacy Shield; and
- Issuance of guidance for EU individuals on exercising their rights under the Privacy Shield, and for U.S. businesses to clarify the requirements of the Privacy Shield (e.g., the Department of Commerce’s FAQs available on PrivacyShield.gov).
The Report identifies continuing concerns of the EDPB, including the following key areas:
- According to the EDPB, “a majority of companies’ compliance with the substance of the Privacy Shield’s principles remain unchecked.” The EDPB indicated that the application of the Shield principles by certifying organizations has not yet been ascertained through oversight and enforcement action by U.S. authorities.
- With respect to the onward transfer principle, the EDPB suggested that U.S. authorities more closely monitor the implementation of this principle by certified entities, suggesting, for example, that the Department of Commerce exercise “its right to ask organizations to produce the contracts they have put in place with third countries’ partners” to assess whether the contracts provide the required safeguards and whether further guidance or action by the U.S. authorities is needed in this regard.
- The EDPB indicated that the re-certification process “needs to be further refined,” noting that the Privacy Shield list contains outdated listings, leading to confusion for data subjects.
- The Report highlights the uncertainty surrounding the application of the Privacy Shield to HR data, noting that conflicting interpretations of the definition of HR data have led to uncertainty as to what protections are available.
In addition, the Report notes that the EDPB is still awaiting the appointment of a permanent independent Ombudsperson to oversee the Privacy Shield program in the U.S. Until such time as an appointment is made, the EDPB cannot determine whether the Ombudsperson “is vested with sufficient powers to remedy non-compliance” with the Privacy Shield.
The Council of Europe agreed that January 28 should be declared European Data Protection Day back in 2007; two years later the U.S. joined in with the Data Privacy Day
The post 11 Expert Takes On Data Privacy Day 2019 You Need To Read appeared first on The Cyber Security Place.
The Illinois Supreme Court ruled today that an allegation of “actual injury or adverse effect” is not required to establish standing to sue under the Illinois Biometric Information Privacy Act, 740 ILCS 14 (“BIPA”). This post discusses the importance of the ruling to current and future BIPA litigation.
The Illinois Supreme Court rendered a decision on January 25, 2019, that gives the green light to certain plaintiffs seeking redress under BIPA. BIPA provides a private right of action to Illinois residents “aggrieved” by private entities that collect their biometric data (including retina scans, fingerprints and face geometry) without complying with the statute’s notice and consent requirements. Hundreds of cases have been filed under the law, including many putative class actions, with plaintiffs enticed by per-violation statutory damages of $1,000 or more.
In the opinion, the Illinois Supreme Court unanimously found that allegations of a technical violation alone can sustain an action, and that limiting BIPA claims to those individuals who can plead and prove an actual injury would depart from the plain and unambiguous meaning of the law. The case is styled Stacy Rosenbach v. Six Flags Entertainment Corp., No. 123186 (Ill.).
BIPA currently is the most watched statute in the U.S. concerning the collection and use of biometric data because it is the only such law that provides a private right of action. The court’s decision resolves a jurisdictional issue that had derailed some prior lawsuits. Today’s decision promises to ramp up an already steady stream of litigation both in and outside of Illinois.
Use of biometric technology by businesses for employee timekeeping, customer identification, and other applications is increasing. The importance of strict compliance with BIPA for companies operating in Illinois is now unavoidably clear.
With the increase in cyber-attacks and information security breaches – 72% of large UK firms identified an information security breach in 2018, up from 68% in 2017 – the importance of protecting both
The post Implementing ISO 27001 and Avoiding Potential GDPR Consequences appeared first on The Cyber Security Place.
The Merriam-Webster dictionary defines the idiom “better the devil you know than the devil you don’t” as “it is better to deal with a difficult person or situation one knows
The post The Devil You Know – How Idioms Can Relate to Information Security appeared first on The Cyber Security Place.
Hundreds of contractors and subcontractors with connections to U.S. electric utilities and government agencies have been hacked, according to a recent report by the Wall Street Journal. The U.S. government has linked the hackers to a Russian state-sponsored group, sometimes called Dragonfly or Energetic Bear. The U.S. government alerted the public that the hacking campaign started in March 2016, if not earlier, although many of its victims were unaware of the incident until notified by the Federal Bureau of Investigation and Department of Homeland Security, the Wall Street Journal reports.
Instead of using sophisticated techniques to directly attack utilities companies, the hackers largely “exploited trusted business relationships using impersonation and trickery” to access the networks of U.S. electric utilities, such as by planting malware on sites of online publications frequently read by utility engineers and through clever spear phishing emails. According to the article, Jonathan Homer, the Department of Homeland Security’s Chief of Industrial Control Systems Group, reported in a briefing to utilities last year that the hackers could have caused temporary power outages. While the exact number of compromised utilities and vendors is unknown, the article continues, industry experts say that the hackers likely still have access to some systems.
On January 23, 2019, the European Commission announced that it has adopted its adequacy decision on Japan (the “Adequacy Decision”). According to the announcement, Japan has adopted an equivalent decision and the adequacy arrangement is applicable with immediate effect.
Prior to the adoption of the Adequacy Decision, Japan implemented a series of additional safeguards designed to ensure that data transferred from the EU to Japan will be protected in line with European standards. These include:
- A set of supplementary rules to bridge the difference between EU and Japanese standards on various issues, including sensitive data, the exercise of individual rights and onward transfer of EU data to third countries;
- Safeguards concerning Japanese public authorities’ access to EU personal data for criminal law enforcement and national security purposes; and
- A complaint-handling mechanism, administered and supervised by the Japanese Personal Information Protection Commission, to investigate and resolve complaints from Europeans regarding access to their data by Japanese public authorities.
In terms of next steps, an initial joint review will be carried out after two years to evaluate the functioning of the framework. The assessment will cover all aspects of the Adequacy Decision, including the application of the additional safeguards mentioned above. Representatives of the European Data Protection Board will participate in the portion of the review relating to access to data by Japanese public authorities for law enforcement and national security purposes. Following this initial review, periodic reviews of the framework will take place at least every four years.
Commenting on the Adequacy Decision, Věra Jourová, Commissioner for Justice, Consumers and Gender Equality, noted in the announcement that the “decision creates the world’s largest area of safe data flows” and that “this arrangement will serve as an example for future partnerships in this key area [of privacy] and help setting global standards.”
Quite likely, this subterfuge attack, utilizing one of the more clever methods to evade detection to date, is the new attaque-du-jour.
On January 21, 2019, the French Data Protection Authority (the “CNIL”) imposed a fine of €50 million on Google LLC under the EU General Data Protection Regulation (the “GDPR”) for its alleged failure to (1) provide notice in an easily accessible form, using clear and plain language, when users configure their Android mobile device and create a Google account, and (2) obtain users’ valid consent to process their personal data for ad personalization purposes. The CNIL’s enforcement action was the result of collective actions filed by two not-for-profit associations. This fine against Google is the first fine imposed by the CNIL under the GDPR and the highest fine imposed by a supervisory authority within the EU under the GDPR to date.
On June 1, 2018, the CNIL shared these two complaints with other EU data protection supervisory authorities with a view toward designating a lead supervisory authority in accordance with Article 56 of the GDPR. On September 21, 2018, the CNIL nonetheless carried out an online inspection to assess whether the processing activities carried out by Google in the context of the Android operating system complied with the French Data Protection Act and the GDPR.
CNIL’s Jurisdiction over Google LLC’s Processing Activities
Google challenged the CNIL’s jurisdiction, arguing that its Irish affiliate, Google Ireland Limited, is Google LLC’s European headquarters and its main establishment for the purposes of the GDPR’s one-stop-shop mechanism, and that the complaints should therefore have been handled by the Irish Data Protection Commissioner as Google’s lead supervisory authority.
Alleged GDPR Violations
In setting its fine at €50 million, the CNIL considered the following:
- The fact that the alleged violations relate to essential principles of the GDPR and are therefore particularly serious;
- The fact that the alleged violations are still occurring and constitute continuous breaches of the GDPR;
- The importance of the Android operating system in the French market; and
The CNIL imposed its fine upon Google LLC but addressed its decision to Google France SARL in order to enforce it. Google LLC may appeal the decision before France’s highest administrative court (Conseil d’État) within four months.
A former employee of WP MultiLingual (WPML) claimed he exploited vulnerabilities over the weekend. The ex-employee sent out mass emails to
On January 15, 2019, the UK House of Commons rejected the draft Brexit Withdrawal Agreement negotiated between the UK Prime Minister and the EU by a margin of 432-202. While the magnitude of the loss set in motion a process that could have resulted in an early general election, on January 16 a majority of British Members of Parliament rejected a vote of no confidence in Theresa May’s government.
While calls for a fresh referendum are gathering momentum, and the possibility of an exit from the EU without an agreed-upon plan continues to loom large, from a data protection perspective the UK Information Commissioner’s Office’s (“ICO”) recently published guidance for businesses regarding the consequences of a UK exit without a deal remains relevant. In this guidance, the ICO has recommended six steps for companies to take in the event of a hard Brexit, including:
- Continue to apply GDPR standards and follow current ICO guidance;
- Identify relevant data flows from the EU to the UK and ensure appropriate data transfer mechanisms are in place in respect of those transfers once the UK leaves the EU;
- Identify relevant data flows from the UK to any country outside the UK, as these data transfers will require a separate data transfer mechanism in due course;
- Review and assess the company’s operations across the EU, and assess how the UK’s exit from the EU will affect the data protection regimes that apply to the company;
- Review privacy-related documents (e.g., notices) and internal documentation to identify any details that will need to be updated once the UK leaves the EU; and
- Ensure that key individuals within the organization are aware of these key issues, involved in relevant planning activities, and kept up to date with the latest information and guidance.
In addition, the ICO has published guidance on the effects of leaving the EU without a Withdrawal Agreement, which provides detailed explanations in relation to how various aspects of the GDPR will apply in the UK in the event of a no-deal Brexit. Those areas include data transfer restrictions, the appointment of representatives, the one-stop-shop, the ICO’s participation in the European Data Protection Board, and various other matters. Finally, the ICO has published a general overview of the issues at stake in the form of frequently asked questions.
The ICO has indicated that it will provide more detailed guidance as the situation develops further. View the ICO’s guidance.
As we previously reported in February 2017, an Illinois federal judge denied a motion to dismiss two complaints brought under the Illinois Biometric Information Privacy Act, 740 ILCS 14 (“BIPA”) by individuals who alleged that Google captured, without the plaintiffs’ consent, biometric data from facial scans of images that were uploaded onto Google Photos. The cases subsequently were consolidated, and on December 29, 2018, the Northern District of Illinois dismissed the case on standing grounds, finding that despite the existence of statutory standing under BIPA, neither plaintiff had claimed any injury that would support Article III standing.
In Spokeo, Inc. v. Robins, the Supreme Court held that Article III standing requires a concrete and particularized injury even in the context of a statutory violation. The court here likewise concluded that although the plaintiffs in this case had statutory standing under BIPA, the procedural, statutory violation alone was insufficient in satisfying the standing requirement.
In asking whether either plaintiff adequately alleged such requisite injury, the court considered Google’s collection and retention of the facial scans. With respect to the retention issue, the court followed the Seventh Circuit’s ruling in Gubala v. Time Warner Cable, Inc. that the retention of individual information alone, though in violation of the Cable Communications Policy Act, did not confer Article III standing absent information disclosure or a sufficient risk of disclosure.
Regarding collection, the court considered (1) Patel v. Facebook Inc., a similar case brought in the Northern District of California that was not dismissed, involving a plaintiff who alleged that Facebook’s use of facial recognition for tagging photos violated BIPA’s notice and consent requirements; and (2) common law tort analogues. The Illinois court (1) declined to follow the California court, reasoning that there was an insufficient showing that the Illinois legislature intended to create a cause of action that would arise from the violation of BIPA’s notice and consent requirements alone; and (2) found that the two common law tort analogues bearing the closest relationship to the alleged injury, intrusion upon seclusion and misappropriation, were inapposite because the harms alleged by the plaintiffs did not align with the harms addressed by those torts. Specifically, the templates that Google created were based on faces, which are regularly publicly exposed, and were not made publicly available or used by Google for commercial purposes. As such, the court dismissed the claim, holding that neither plaintiff in this case had claimed an injury that would support Article III standing.
A number of BIPA actions remain pending in federal and state courts. It remains to be seen whether other courts will agree with the Northern District of Illinois regarding the unavailability of BIPA claims based solely on procedural violations of the act.
On January 10, 2019, Massachusetts Governor Charlie Baker signed legislation amending the state’s data breach law. The amendments take effect on April 11, 2019.
Key updates to Massachusetts’s Data Breach Notification Act include the following:
- The required notice to the Massachusetts Attorney General and the Office of Consumer Affairs and Business Regulation will need to include additional information, including the types of personal information compromised, the person responsible for the breach (if known) and whether the entity maintains a written information security program. Under Massachusetts 201 CMR § 17.03, any entity that owns or licenses personal information about a Massachusetts resident is currently obligated to develop, implement and maintain a comprehensive written information security program that incorporates the prescriptive requirements contained in the regulation.
- If individuals’ Social Security numbers are disclosed, or reasonably believed to have been disclosed, the company experiencing a breach must offer credit monitoring services at no cost for at least 18 months (42 months, if the company is a consumer reporting agency). Companies also must certify to the Massachusetts attorney general and the Director of the Office of Consumer Affairs and Business Regulation that their credit monitoring services are compliant with state law.
- The amended law explicitly prohibits a company from delaying notice to affected individuals on the basis that it has not determined the number of individuals affected. Rather, the entity must send out additional notices on a rolling basis, as necessary.
- If the company experiencing a breach is owned by a separate entity, the individual notice letter must specify “the name of the parent or affiliated corporation.”
- Companies are prohibited from asking individuals to waive their right to a private action as a condition for receiving credit monitoring services.
The U.S. Department of Health and Human Services (“HHS”) recently announced the publication of “Health Industry Cybersecurity Practices: Managing Threats and Protecting Patients” (the “Cybersecurity Practices”). The Cybersecurity Practices were developed by the Healthcare & Public Health Sector Coordinating Councils Public Private Partnership, a group comprised of over 150 cybersecurity and healthcare experts from government and private industry.
The Cybersecurity Practices are currently composed of four volumes: (1) the Main Document, (2) a Technical Volume of cybersecurity practices for small healthcare organizations, (3) a Technical Volume of cybersecurity practices for medium and large healthcare organizations, and (4) a Resources and Templates Volume. The Cybersecurity Practices also will include a Cybersecurity Practices Assessments Toolkit, but that is still under development.
The Main Document provides an overview of prominent cyber attacks against healthcare organizations and statistics on the costs of such attacks—such as that in 2017, cyber attacks cost small and medium-sized businesses an average of $2.2 million—and lists the five most common cybersecurity threats that impact the healthcare industry: (1) email phishing attacks, (2) ransomware attacks, (3) loss or theft of equipment or data, (4) insider, accidental or intentional data loss and (5) attacks against connected medical devices that may affect patient safety. The Main Document describes real world scenarios exemplifying each threat, lists “Threat Quick Tips,” analyzes the vulnerabilities that lead to such threats, discusses the impact of such threats and provides practices for healthcare organizations (and their employees) to consider to counter such threats. The Main Document concludes by noting that it is essential for healthcare organizations and government to distribute “relevant, actionable information that mitigates the risk of cyber-attacks” and argues for a “culture change and an acceptance of the importance and necessity of cybersecurity as an integrated part of patient care.”
The two Technical Volumes list the following 10 cybersecurity practices for small, and for medium and large, healthcare organizations:
- email protection systems;
- endpoint protection systems;
- access management;
- data protection and loss prevention;
- asset management;
- network management;
- vulnerability management;
- incident response;
- medical device security; and
- cybersecurity policies.
The Technical Volumes also list cybersecurity sub-practices and advice for healthcare organizations to follow, with the noted distinction that small healthcare organizations are focused on cost-effective solutions while medium and large organizations may have more “complicated ecosystems of IT assets.”
Finally, the Resources and Template Volume maps the 10 cybersecurity practices and sub-practices to the NIST Cybersecurity Framework. It also provides templates, such as a Laptop, Portable Device and Remote Use Policy and Procedure; a Security Incident Response Plan; an Access Control Procedure; and a Privacy and Security Incident Report.
In announcing the Cybersecurity Practices, the HHS Acting Chief Information Security Officer stated that cybersecurity is “the responsibility of every organization working in healthcare and public health. In all of our efforts, we must recognize and leverage the value of partnerships among government and industry stakeholders to tackle the shared problems collaboratively.”
We round up interesting research and reporting about security and privacy from around the web. This month: the security year in review, resilience on rails, incidents in depth, phishing hooks millennials, Internet of Threats, and CISOs climbing the corporate ladder.
A look back at cybercrime in 2018
It wouldn’t be a new year’s email without a retrospective on major security incidents over the previous 12 months. Credit to CSO Online for assembling a useful overview of some of last year’s most common risks and threats. To beef up this resource, it sourced external research and stats, while adding plenty of links for further reading. Some of the highlights include the massive rise in cryptocurrency mining. “Coin miners not only slow down devices but can overheat batteries and sometimes render a device useless,” it warned.
The article also advises against posting mobile numbers on the internet, because criminals are finding ways to harvest them for various scams. CSO also advises organisations about knowing the value of their data in order to protect it accordingly. Threatpost has a handy at-a-glance guide to some of the big security incidents from the past year. Meanwhile, kudos to Vice Motherboard for its excellent ‘jealousy list’ which rounds up great hacking and security stories from 2018 that first appeared in other media outlets.
Luas security derails tram website
The new year got off to a bad start for Dublin’s tram operator Luas, after an unknown attacker defaced its website in a security incident. On January 2nd, the Luas site had this message: “You are hacked… some time ago i wrote that you have serious security holes… you didn’t reply… the next time someone talks to you, press the reply button… you must pay 1 bitcoin in 5 days… otherwise I will publish all data and send emails to your users.”
The incident exposed 3,226 user records, and Luas said they belonged to customers who had subscribed to its newsletter. News of the incident spread widely, possibly due to Luas’ high profile as a victim, or because of the cryptocurrency angle.
The tram service itself was not affected, nor was the company’s online payments system. While the website was down, Luas used its Twitter feed to communicate travel updates to the public, and warned people not to visit the site. Interviewed by the Irish Times, Brian Honan said the incident showed that many organisations tend to forget website security after launch. As we’ve previously blogged, it’s worth carrying out periodic vulnerability assessments to spot gaps that an attacker could exploit. With the Luas site not fully back six days later, Brian noted on Twitter that it’s important to integrate incident response with business continuity management.
One hacked laptop and two hundred solemn faces
When an employee of a global apparel company clicked on a link in a phishing email while connected to a coffee shop wifi, they unwittingly let a cybercrime gang onto their corporate network. Once in, the attackers installed Framework POS malware on the company’s retail server to steal credit card details. It’s one real-life example from CrowdStrike’s Cyber Intrusion Casebook. The report details various incident response cases from 2018. It also gives recommendations for organisations on steps to take to protect their critical data better. In addition to coverage in online news reports, the document is available as a free PDF on CrowdStrike’s site.
Examples like these show the need for resilience, which we’ve blogged about before. No security is 100 per cent perfect. But it shouldn’t follow that one gap in the defences brings the entire wall crumbling down.
Digitally savvy, yes. Security savvy, not so much
Speaking of phishing, a new survey has found that digital natives are more than twice as likely to have fallen victim to a phishing scam as their older – sorry, we mean more experienced – colleagues. Some 17 per cent in the 23-41 age group clicked on a phishing link, compared to 6 per cent of those aged 42-53 and 7 per cent of those aged 54 and over. The findings suggest a gap between perception and reality.
Of all the age groups, digital natives were the most confident in their ability to spot a scam. Yet the 14 per cent of digital natives who weren’t so sure of their ability to spot a phish was strikingly close to the percentage in the same age bracket who had fallen for a phishing email. The survey by Censuswide for Datapac found that 14 per cent of Irish office workers – around 185,000 people – have been successfully phished at some stage.
OWASP’s IoT hit list
Is your organisation planning an Internet of Things project in 2019? Then you might want to point the project team in OWASP’s direction first. The group’s IoT project aims to improve understanding of the security issues around embedding sensors in, well, anything. To that end, the group has updated its top 10 list for IoT. The risks include old reliables like weak, guessable passwords, outdated components, insecure data transfer or storage, and lack of physical hardening. The full list is here.
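The first risk on that list, weak or guessable passwords, is also the easiest to screen for. Here is a minimal, hypothetical sketch of such a check; the list of default passwords and the function name are illustrative assumptions, not taken from OWASP:

```python
# Hypothetical check against common default credentials, the first
# risk on OWASP's IoT top 10. The password list is illustrative only;
# a real deployment would use a much larger breached-password corpus.
COMMON_DEFAULTS = {"admin", "password", "12345", "root", "guest", ""}

def is_weak_password(password: str, min_length: int = 12) -> bool:
    """Flag passwords that are guessable defaults or too short."""
    return password.lower() in COMMON_DEFAULTS or len(password) < min_length

print(is_weak_password("admin"))                          # True
print(is_weak_password("correct horse battery staple"))   # False
```

Even a simple gate like this at device-provisioning time would address the most commonly exploited weakness on the list.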
The number’s up for CISO promotions
Why do relatively few security professionals ascend to the highest levels of business? That’s the provocative question from Raj Samani, chief scientist with McAfee. In an op-ed for Infosecurity Magazine, Samani argues that security hasn’t yet communicated its value to the business in an identifiable way. As proof, he points to the fatigue or indifference over the ever-mounting number of data breaches. Unlike a physical incident such as a car accident, where the impact is instantly visible, security incidents don’t have the same obvious cause and effect.
“The inability to determine quantifiable loss means that identifying measures to reduce risk are merely estimated at best. Moreover, if the loss is rarely felt, then the value of taking active steps to protect an asset can simply be overlooked,” Samani writes. “We can either bemoan the status quo or identify an approach that allows us to articulate our business value in a quantifiable way.”
Social Engineering Can Make You a Better Person
When social engineering makes the headlines, it is generally as a negative term, where S.E. principles are used to initiate, perpetuate, or assist a large hack that exfiltrates data or distributes ransomware. With headlines like “Social engineering at the heart of critical infrastructure attack” and “Iranian phishers bypass 2fa protection offered by Yahoo Mail and Gmail,” it is easy to see how the term has developed a negative connotation. However, here at SECOM and SEORG we utilize social engineering with the goal to “leave others better for having met [us]” while employing, practicing, and curating strong social engineering skillsets. Here, we discussed whether all social engineers are bad people and, though people rarely fall cleanly into the category of “good” or “bad,” the question remains under constant debate.
Almost a year ago, I made my newsletter debut examining how SE skills could be used in everyday life. Since then, I look for opportunities to practice my craft, improve my abilities, and be a stronger SE whenever I can. After reflecting on this last year, I can absolutely say that social engineering makes me a better person, and if you choose to social engineer as a white hat, it can make you one too.
How Social Engineering Can Make You a Better Person
As social engineers, we must quickly build rapport with our targets, maintain that rapport, and accomplish our goals without being burnt. We do this via email through phishing, phone calls through vishing, and in person via impersonation. As white hat social engineers, the skills needed to accomplish these goals effectively range from utilizing Dr. Robert Cialdini’s influence principles to awareness of vocal tone, body language, and facial expressions. Let’s examine some of the positive skills social engineering can foster:
- Reciprocity – the reciprocity principle indicates that when people are given something – a gift, a favor, information – they will want to return something of equal or greater value. However, it is important to remember that the recipient determines the value of what they have received. To use this effectively, an SE must remember that the target needs to value whatever they are given. In personal life, this causes us to think more about what others value than about what we may value. It makes us more conscientious and encourages us to prioritize the other person.
- Awareness of others – in the field, SEs are constantly looking to pick up cues from their targets. What internal jargon do they use? How do they speak? What is their body language during the interaction? Do they seem like they want to get away? Are they in a rush? This has caused me, when meeting new people, to study how they speak and attempt neutrality until I understand how to communicate most effectively with the person I am speaking to. Additionally, I pay attention to how they are behaving, notice whether they seem like they need to go, and respect their boundaries. This creates a safe space for the people you interact with.
- Speaking less and listening more – as SEs, we are usually on the hunt for information. It is challenging to get information out of someone if you’re the one doing all the talking. At home, I employ reflective questioning, allow my friends and family more speaking time, and work to truly listen to the information they are sharing. People appreciate feeling heard. This will strengthen your interpersonal relationships and improve your conversation skills.
- Empathy – you never know where the other person in the conversation is coming from. They could have just gotten rough news, missed breakfast, or not had enough sleep the night before. While listening, really work to understand the perspective the individual is coming from and assume positive intent. Figuring out where a person is coming from and how they may feel connects you more closely to others.
- Patience – Jumping into an engagement too hard too fast throws your targets off. In my day-to-day life, I have a tendency to want answers RIGHT NOW. However, the value of waiting for others to get on the same page cannot be stressed enough. I am now far more inclined to lay the foundations of a conversation and then wait for the other party to address topics when they are ready.
Great resources to build social engineering and life skills
If you want to practice these skills in your daily life, as well as your career, here are some great resources to start with:
- Joe Navarro’s The Dictionary of Body Language gives many tangible examples of body language that can improve your ability to read a room.
- Dr. Paul Ekman’s micro-expressions training will help you better read others’ reactions and understand their feelings.
- Cold reading exercises like those in Ian Rowland’s book “The Full Facts Book of Cold Reading” can help strengthen conversational skills.
- Robin Dreeke’s book “It’s not all about me: The top 10 techniques for building quick rapport with anyone” provides tangible steps to fostering good rapport.
- Chris Hadnagy’s latest book, Social Engineering: The Science of Human Hacking
- The Social-Engineer Podcast hosts great guests who explain unique skill sets and tools that are used in both life and social engineering.
The intention with which you take an action can determine the quality of that action and, broadly, whether it is “good” or “bad.” Should you use your social engineering skills to exploit individuals for your own personal gain, that action is not good. However, by practicing the skillsets of strong social engineers while attempting to leave others better for having met you, you may inadvertently realize you have grown into a better version of yourself. Social engineering can make you a better person, and I challenge you to look for opportunities to practice these skills for the benefit of others in this new year. If you are curious about how to S.E. for good, check out the Social Engineering Code of Ethics. I hope you see yourself grow in the process!
Be secure and be kind,
Written By: Cat Murdock
On December 27, 2018, the French Data Protection Authority (the “CNIL”) announced that it imposed a fine of €250,000 on French telecom operator Bouygues Telecom for failing to protect the personal data of the customers of its mobile package B&YOU.
On March 2, 2018, the CNIL was informed – by a third party – of a years-long security vulnerability on Bouygues Telecom’s website bouyguestelecom.fr that made it possible for anyone, including bad actors, to access documents containing customers’ personal data from several URL addresses with a similar structure. On March 6, 2018, Bouygues Telecom notified the CNIL of the data breach. The company explained that the incident was due to human error: the computer code requiring user authentication on the company’s website had been deactivated during a test phase and not re-activated once the tests were completed. The company quickly blocked the data from improper access.
The CNIL’s Decision
The CNIL noted that the breach affected more than two million customers, and included personal data, such as the customer’s first and last name, date of birth, e-mail address, address and mobile telephone number. The CNIL further noted that the breach lasted for more than two years. The CNIL recognized that human mistake was at the root of the incident, and that the company could not completely guard against such mistakes. The CNIL found, however, that for more than two years the company failed to implement appropriate security measures that would have enabled it to discover the breach, and concluded that the company failed to comply with its obligation to protect its customers’ personal data. As the GDPR was not applicable at the time of the data breach, the CNIL decided to impose a fine of €250,000 on Bouygues Telecom.
On December 21, 2018, the Irish Data Protection Commission (the “DPC”) published preliminary guidance on data transfers to and from the UK in the event of a “no deal” Brexit (the “Guidance”). The Guidance is relevant for any Irish entities that transfer personal data to the UK, including Northern Ireland.
The Guidance notes that if the UK leaves the European Union at 00:00 CET on March 30, 2019, without a withdrawal agreement in place, the UK will be deemed a third country for the purposes of EU data transfers, and Irish-based organizations and bodies will need to implement legal safeguards in order to continue transferring data to the UK, including Northern Ireland.
The Guidance provides several examples of data transfers that may be affected and includes a list of next steps for organizations to consider in the run up to the withdrawal date. These measures include:
- Mapping the personal data the organization currently transfers to the UK and Northern Ireland;
- Determining whether such transfers will need to continue beyond March 30, 2019; and
- Assessing the different transfer mechanisms available, determining which will be most appropriate for the organization’s continued data transfers, and working to have it in place before the UK departs from the EU.
The Guidance concludes by noting that more information will be available from the DPC as the withdrawal date nears.
As we previously reported, the UK House of Commons rejected the draft Brexit withdrawal agreement on January 15, 2019, making the prospect of “no deal” Brexit still a possibility.
On December 20, 2018, the French data protection authority (the “CNIL”) announced that it levied a €400,000 fine on Uber France SAS, the French establishment of Uber B.V. and Uber Technologies Inc., for failure to implement some basic security measures that made possible the 2016 Uber data breach.
On November 21, 2017, Uber Technologies Inc. published an article on its website revealing that two external individuals had accessed the personal data of 57 million Uber riders and drivers worldwide at the end of 2016.
On November 28, 2017, Uber B.V. sent a letter to the Chairman of the Article 29 Working Party (“Working Party”) to describe the circumstances of the data breach and express its willingness to cooperate with all competent data protection authorities.
On November 29, 2017, the Working Party established a taskforce to coordinate the plethora of national investigations throughout the EU into Uber’s 2016 data breach. This taskforce is composed of representatives from the Dutch, Spanish, French, Belgian, Italian, UK and Slovakian data protection authorities (“DPAs”).
On December 22, 2017, the CNIL sent a questionnaire to Uber Technologies Inc. and Uber B.V. related to the circumstances of the data breach and the security measures implemented by these companies. Uber replied to the questionnaire, explaining that the data breach occurred in three steps: (1) two external individuals managed to gain access to credentials stored in plain text on the collaborative development platform “GitHub” used by Uber’s software engineers; (2) the hackers then used these credentials to connect to GitHub, and found an access key recorded in plain text in a source code file, enabling the hackers to remotely access a server on which Uber users’ data were stored; and (3) they downloaded personal data relating to 57 million users, including 1.4 million in France (1.2 million riders and 163,000 drivers).
The CNIL’s Decision
Against that background, the CNIL issued a decision, discussing inter alia (1) the data controllership of Uber Technologies Inc. and Uber B.V.; (2) the applicability of French data protection law; (3) Uber’s failure to implement appropriate safeguards to prevent unauthorized third parties from accessing the data; and (4) the imposition of a sanction on Uber France SAS, the French establishment of Uber Technologies Inc. and Uber B.V.
Uber Technologies Inc. and Uber B.V. as joint data controllers: The CNIL rejected Uber’s arguments that its Dutch affiliate, Uber B.V., was the sole data controller and that Uber Technologies Inc. acted as a mere data processor of Uber B.V. when (1) issuing guidelines on the handling of personal data, (2) providing training for new employees of the Uber group, (3) executing agreements with third companies, and (4) handling the consequences of the data breach.
In particular, the CNIL considered that the last point—handling the data breach fallout—is not a mere technical or organizational question that can be left to a data processor’s discretion. According to the CNIL, how a data breach is handled relates to the essential elements of the means of the data processing, and can only be determined by the data controller. In the CNIL’s view, the fact that Uber Technologies Inc. (1) drafted data protection guidelines applied by all the entities of the Uber group, (2) was responsible for training new employees of the group, and (3) executed agreements with third-party companies (including for the provision of tools necessary for the proper functioning of Uber services) also demonstrates that Uber Technologies Inc. plays a key role in the determination of the purposes and means of the data processing. As a result, the CNIL found that Uber Technologies Inc. is a joint data controller with Uber B.V.
Applicability of French data protection law: Uber has an establishment in France – Uber France SAS – that carries out marketing campaigns to promote Uber’s services and provides support to Uber riders and drivers in France. Referring to the decision of the European Court of Justice (“ECJ”) in Google v. Costeja, the CNIL considered the processing of Uber riders’ and drivers’ personal data to be carried out in the context of the activity of the French establishment of the data controllers, Uber B.V. and Uber Technologies Inc.
Failure to implement appropriate security measures: The CNIL concluded that the data breach was preventable if Uber had implemented certain basic security measures, including:
- The company should have required its engineers to connect to the “GitHub” platform with strong authentication (e.g., a username and password plus a secret code sent to the engineer’s mobile phone).
- The company should not have stored – in plain text within the source code of the “GitHub” platform – credentials that allow access to the server.
- The company should have implemented an IP filtering system restricting access to the “Amazon Web Services S3” servers containing its users’ personal data.
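The second measure is the simplest to adopt in practice: read secrets from the runtime environment instead of hard-coding them in source files that may end up on a platform like GitHub. A minimal sketch, assuming environment variable names of our own choosing (the names and the helper function are illustrative, not drawn from the CNIL decision or Uber’s codebase):

```python
# A minimal sketch: load credentials from environment variables at
# runtime rather than embedding them in source code. The variable
# names DB_USER / DB_PASSWORD are illustrative assumptions.
import os

def get_db_credentials() -> tuple[str, str]:
    """Fetch credentials from the environment; fail loudly if absent."""
    user = os.environ.get("DB_USER")
    password = os.environ.get("DB_PASSWORD")
    if user is None or password is None:
        raise RuntimeError("Database credentials not set in the environment")
    return user, password
```

Failing loudly when a variable is missing is deliberate: a deployment misconfiguration then surfaces immediately instead of silently falling back to a default or an embedded secret.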
Uber France SAS as the addressee of the CNIL’s decision: The CNIL, citing the ECJ’s Wirtschaftsakademie Schleswig-Holstein GmbH decision of June 5, 2018, rejected Uber’s arguments that the CNIL could impose a sanction only on a data controller (and not on a mere establishment of the data controller). In this decision, the ECJ found that, where a business established outside the EU has several establishments in different EU Member States, the supervisory authority of a Member State may exercise its EU Data Protection Directive-derived powers with respect to an establishment in the territory of that Member State even if, as a result of the division of tasks within the group, (1) that establishment is responsible solely for the sale of advertising space and other marketing activities in the territory of the Member State concerned and, (2) exclusive responsibility for collecting and processing personal data belongs, for the entire territory of the EU, to an establishment located in a different Member State. The CNIL therefore decided to impose a sanction on Uber France SAS. As the EU General Data Protection Regulation was not applicable at the time of the data breach, the CNIL imposed a fine of €400,000 on Uber France SAS. When setting the amount of the fine, the CNIL took into account the fact that hackers gained access to the data, thereby possibly allowing them to make further use of the data. The CNIL stressed that, although no damage suffered by affected individuals has been reported to date, evidence of a complete absence of damage cannot be invoked by Uber.
This is the third fine imposed by an EU DPA on Uber in relation to its 2016 data breach. On November 6, 2018, the Dutch DPA fined Uber €600,000 for failure to notify the breach. On November 26, 2018, the ICO also fined Uber £385,000 for failure to implement appropriate security measures.
My new writing course for cybersecurity professionals teaches how to write better reports, emails, and other content we regularly create. It captures my experience of writing in the field for over two decades and incorporates insights from other community members. It’s a course I wish I could’ve attended when I needed to improve my own security writing skills.
I titled the course The Secrets to Successful Cybersecurity Writing: Hack the Reader. Why “hack”? Because strong writers know how to find an opening to their readers’ hearts and minds. This course explains how you can break down your readers’ defenses, and capture their attention to deliver your message—even if they’re too busy or indifferent to others’ writing.
Here are several examples of such “hacking” techniques from course sections that focus on the structure and look of successful security writing:
- Headings: Use them to sneak in the gist of your message, so you can persuade your readers even if they don’t read the rest of your text.
- Lists: Rely on them to capture your readers’ attention when they skim your message for key ideas.
- Figure Captions: Include them to influence the conclusion your readers reach even if they only glance at the graphic.
This is an unusual opportunity to improve your writing skills without sitting through tedious lectures or writing irrelevant essays. Instead, you’ll make your writing remarkable by learning how to avoid common pitfalls.
For instance, this slide opens the discussion about expressing ideas clearly, concisely, and correctly:
This course is grounded in the idea that you can become a better writer by learning how to spot common problems in others’ writing. This is why the many examples are filled with delightful errors that are as much fun to find as they are to correct.
One of the practical takeaways from the course is a set of checklists you can use to eliminate issues related to your structure, look, words, tone, and information. For example:
The course will help you stand out from other cybersecurity professionals with similar technical skills. It will help you get your executives, clients, and colleagues to notice your contribution, accept your advice, and appreciate your input. You’ll benefit whether you are:
- A manager or an individual team member
- A consultant or an internally-focused employee
- A defender or an attacker
- An earthling or an alien
You have a limited opportunity to attend a beta version of the course. You will not only get an early adopter discount and bragging rights, but also shape the course for future participants. I’ll teach the 1st beta in Orlando in April 2019—registration is now open. I’ll present the 2nd beta in New York City in June 2019. Starting around September 2019 you’ll be able to take the course almost exclusively online via the SANS OnDemand platform.
If you want me to notify you when registration for the 2nd beta opens or when the OnDemand version of the course launches, just drop me a note.
On November 23, 2018, both Australia and Chinese Taipei joined the APEC Cross-Border Privacy Rules (“CBPR”) system. The system is a regional multilateral cross-border transfer mechanism and an enforceable privacy code of conduct and certification developed for businesses by the 21 APEC member economies.
The Australian Attorney-General’s Department recently announced that APEC endorsed Australia’s application to participate and that the Department plans to work with both the Office of the Australian Information Commissioner and organizations to implement the CBPR system requirements in a way that ensures long-term benefits for Australian businesses and consumers.
In Chinese Taipei, the National Development Council announced that Chinese Taipei has joined the system. According to the announcement, Chinese Taipei’s participation will spur local enterprises to seek overseas business opportunities and help shape conditions conducive to cross-border digital trade.
Australia and Chinese Taipei become the seventh and eighth countries to participate in the system, joining the U.S., Mexico, Canada, Japan, South Korea and Singapore. Both economies’ decisions to join further highlight the growing international status of the CBPR system, which implements the nine high-level APEC Privacy Principles set forth in the APEC Privacy Framework. Several other APEC economies are actively considering joining.
The Agency of Access to Public Information (Agencia de Acceso a la Información Pública) (“AAIP”) has approved a set of guidelines for binding corporate rules (“BCRs”), a mechanism that multinational companies may use for cross-border data transfers to affiliates in countries the AAIP considers to lack adequate data protection regimes.
As reported by IAPP, pursuant to Regulation No. 159/2018, published December 7, 2018, the guidelines require BCRs to bind all members of a corporate group, including employees, subcontractors and third-party beneficiaries. Members of the corporate group must be jointly liable to the data subject and the supervisory authority for any violation of the BCRs.
Other requirements include:
- restrictions on the processing of special categories of personal data and on the creation of files containing personal data relating to criminal convictions and offenses;
- protections such as providing for the right to object to the processing of personal data for the purpose of unsolicited direct marketing;
- complaint procedures for data subjects that include the ability to institute a judicial or administrative complaint using their local venue; and
- data protection training to personnel in charge of data processing activities.
BCRs also should contemplate the application of general data protection principles, especially the legal basis for processing, data quality, purpose limitation, transparency, security and confidentiality, data subjects’ rights, and the restriction on subsequent cross-border transfers to non-adequate jurisdictions. Organizations whose BCRs do not reflect the guidelines’ provisions must submit the relevant material to the AAIP for approval within 30 calendar days from the date of transfer. Approval is not required if BCRs that track the guidelines are used.
In connection with its hearings on data security, the Federal Trade Commission hosted a December 12 panel discussion on “The U.S. Approach to Consumer Data Security.” Moderated by the FTC’s Deputy Director for Economic Analysis James Cooper, the panel featured private practitioners Lisa Sotto, from Hunton Andrews Kurth, and Janis Kestenbaum, academics Daniel Solove (GW Law School) and David Thaw (University of Pittsburgh School of Law), and privacy advocate Chris Calabrese (Center for Democracy and Technology). Lisa set the stage with an overview of the U.S. data security framework, highlighting the complex web of federal and state rules and influential industry standards that result in a patchwork of overlapping mandates. Panelists debated the effect of current law and enforcement on companies’ data security programs before turning to the “optimal” framework for a U.S. data security regime. Among the details discussed were establishing a risk-based approach with a baseline set of standards and clear process requirements. While there was not uniform agreement on the specifics, the panelists all felt strongly that federal legislation was warranted, with the FTC taking on the role of principal enforcer.
On December 4, 2018, the Federal Trade Commission published a notice in the Federal Register indicating that it is seeking public comment on whether any amendments should be made to the FTC’s Identity Theft Red Flags Rule (“Red Flags Rule”) and the duties of card issuers regarding changes of address (“Card Issuers Rule”) (collectively, the “Identity Theft Rules”). The request for comment forms part of the FTC’s systematic review of all current FTC regulations and guides. These periodic reviews seek input from stakeholders on the benefits and costs of specific FTC rules and guides along with information about their regulatory and economic impacts.
The Red Flags Rule requires certain financial entities to develop and implement a written identity theft detection program that can identify and respond to the “red flags” that signal identity theft. The Card Issuers Rule requires that issuers of debit or credit cards (e.g., state credit unions, general retail merchandise stores, colleges and universities, and telecom companies) implement policies and procedures to assess the validity of address change requests if, within a short timeframe after receiving the request, the issuer receives a subsequent request for an additional or replacement card for the same account.
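The Card Issuers Rule check amounts to correlating two events on the same account within a window. A hypothetical sketch of that logic follows; the 30-day window and the function and field names are assumptions for illustration, not figures from the rule itself:

```python
# Hypothetical sketch of the Card Issuers Rule check: flag a card
# request that follows an address change on the same account within
# a short window. The 30-day window is an illustrative assumption.
from datetime import date, timedelta

FLAG_WINDOW = timedelta(days=30)

def is_red_flag(address_change: date, card_request: date) -> bool:
    """True when the card request arrives soon enough after the
    address change that the issuer should verify its validity."""
    return timedelta(0) <= (card_request - address_change) <= FLAG_WINDOW

print(is_red_flag(date(2019, 1, 2), date(2019, 1, 20)))  # True
print(is_red_flag(date(2019, 1, 2), date(2019, 4, 1)))   # False
```

A flagged pair would then trigger the issuer’s verification procedures (for example, confirming the change with the cardholder at the previous address) before the new card ships.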
The FTC is seeking comments on multiple issues, including:
- Is there a continuing need for the specific provisions of the Identity Theft Rules?
- What benefits have the Identity Theft Rules provided to consumers?
- What modifications, if any, should be made to the Identity Theft Rules to reduce any costs imposed on consumers?
- What modifications, if any, should be made to the Identity Theft Rules to increase their benefits to businesses, including small businesses?
- What evidence is available concerning the degree of industry compliance with the Identity Theft Rules?
- What modifications, if any, should be made to the Identity Theft Rules to account for changes in relevant technology or economic conditions?
The comment period is open until February 11, 2019, and instructions on how to make a submission to the FTC are included in the notice.
On November 9, 2018, Serbia’s National Assembly enacted a new data protection law. The Personal Data Protection Law, which becomes effective on August 21, 2019, is modeled after the EU General Data Protection Regulation (“GDPR”).
As reported by Karanovic & Partners, key features of the new Serbian law include:
- Scope – the Personal Data Protection Law applies not only to data controllers and processors in Serbia but also to those outside Serbia who process the personal data of Serbian citizens.
- Database registration – the Personal Data Protection Law eliminates the previous requirement for data controllers to register personal databases with the Serbian data protection authority (“DPA”), though they will be required to appoint a data protection officer (“DPO”) to communicate with the DPA on data protection issues.
- Data subject rights – the new law expands the rights of data subjects to access their personal data, gives subjects the right of data portability, and imposes additional burdens on data controllers when a data subject requests the deletion of their personal data.
- Consent – the Personal Data Protection Law introduces new forms of valid consent for data processing (including oral and electronic) and clarifies that the consent must be unambiguous and informed. The prior Serbian data protection law only recognized handwritten consents as valid.
- Data security – the new law requires data controllers to implement and maintain safeguards designed to ensure the security of personal data.
- Privacy by Design – the new law obligates data controllers to implement privacy by design when developing new products and services and to conduct data protection impact assessments for certain types of data processing.
- Data transfers – the Personal Data Protection Law expands the ways in which personal data may be legally transferred from Serbia. Previously, data controllers were required to obtain the approval of the Serbian DPA for any transfers of personal data to non-EU countries. The new law permits personal data transfers based on standard contractual clauses and binding corporate rules approved by the Serbian DPA. Organizations can also transfer personal data to countries deemed to provide an adequate level of data protection by the EU or the Serbian DPA or when the data subject consents to the transfer.
- Data breaches – like the GDPR, the new law requires data controllers to notify the Serbian DPA within 72 hours of a data breach and will require them to notify individuals if the data breach is likely to result in a high risk to the rights and freedoms of individuals. Data processors must also notify the relevant data controllers in the event of a data breach.
The new law also imposes penalties for noncompliance, but these are significantly lower than those contained in the GDPR. The maximum fine under the new Serbian law is only 17,000 Euros, while the maximum fines under the GDPR can reach up to 20 million Euros or 4% of an organization’s annual global turnover, whichever is higher.
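The difference in exposure is easy to see in numbers. A minimal sketch of the upper GDPR fine tier (the higher of EUR 20 million or 4% of annual global turnover), with an illustrative turnover figure, might look like:

```python
def gdpr_max_fine(annual_global_turnover_eur: float) -> float:
    """Upper GDPR fine tier: the higher of EUR 20 million or 4% of
    annual global turnover."""
    return max(20_000_000, 0.04 * annual_global_turnover_eur)

# Cap under the new Serbian Personal Data Protection Law
SERBIAN_MAX_FINE_EUR = 17_000

# Illustrative only: a company with EUR 1 billion annual turnover
print(gdpr_max_fine(1_000_000_000))  # 40000000.0 -- over 2,000x the Serbian cap
```

For smaller organizations the EUR 20 million floor applies instead, since 4% of turnover falls below it.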
We are introducing the Incident Handling & Response Professional (IHRP) training course on December 11, 2018. Find out more and register for an exciting preview webinar.
No matter the strength of your company’s defense strategy, security incidents are inevitable. Poor or delayed incident response has caused enormous damage and reputational harm to Yahoo, Uber and, most recently, Facebook, to name a few. For this reason, Incident Response (IR) has become a crucial component of any IT security department, and knowing how to respond to such events is an increasingly important skill.
Aspiring to switch to a career in Incident Response? Here’s how our new Incident Handling & Response Professional (IHRP) training course can help you learn the necessary skills and techniques for a successful career in this field.
Incident Handling & Response Professional (IHRP)
The Incident Handling & Response Professional course (IHRP) is an online, self-paced training course that provides all the advanced knowledge and skills necessary to:
- Professionally analyze, handle and respond to security incidents on heterogeneous networks and assets
- Understand the mechanics of modern cyber attacks and how to detect them
- Effectively use and fine-tune open source IDS, log management and SIEM solutions
- Detect and even (proactively) hunt for intrusions by analyzing traffic, flows and endpoints, as well as utilizing analytics and tactical threat intelligence
This training is the cornerstone of our blue teaming course catalog or, as we called it internally, “The PTP of Blue Team”.
Discover This Course & Get An Exclusive Offer
Take part in an exciting live demonstration and discover the complete syllabus of our latest course, Incident Handling & Response Professional (IHRP), on December 11. During this event, all the attendees will get their hands on an exclusive launch offer. Stay tuned!
Be the first to know all about this modern blue teaming training course; join us on December 11.
The Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth LLP recently submitted formal comments to the U.S. Department of Commerce’s National Telecommunications and Information Administration (“NTIA”) in response to its request for public comments on developing the administration’s approach to consumer privacy.
In its comments, CIPL commends NTIA for initiating a renewed national debate on updating the U.S. privacy framework, and notes that its approach—starting with the intended outcomes and goals of any privacy regime—is well suited to lay the foundation for a legislative proposal in the future.
Responding to the questions raised in the request for comment, CIPL makes the following observations and recommendations with regard to NTIA’s intended core outcomes and the high-level goals of any new U.S. privacy framework:
- Transparency: CIPL agrees transparency should be a key outcome of any privacy framework and must be user-centric, contextual and tailored toward the specific audience and purpose. This can be achieved by implementing companywide privacy management and accountability frameworks.
- Control: CIPL believes that control should be a component of a new privacy framework in contexts where it is appropriate, and should reference mechanisms that empower consumers beyond individual choice or consent. However, the framework’s general focus should be putting the onus on organizations to use data responsibly and accountably to protect consumers from harm regardless of their individual level of engagement.
- Reasonable Minimization: CIPL supports the inclusion of reasonable minimization as an outcome of a new data protection framework, and further agrees with NTIA’s qualification that minimization should be reasonable and appropriate to the context and risk of privacy harm. These qualifiers are very important given the enormous potential of personal data for driving economic growth and societal benefits in the digital economy.
- Security: CIPL fully agrees with the inclusion of security in the list of outcomes, and notes the importance of allowing organizations flexibility in determining security measures that are reasonable and appropriate to the context at hand. In addition, a security outcome should provide for the adoption of appropriate breach response measures (e.g., notification requirements) and should permit organizations to use personal data for the development and implementation of security tools and related legitimate purposes, such as incident prevention, detection and monitoring.
- Access and Correction: While CIPL agrees that access, correction and deletion is an important outcome, such rights cannot be absolute and should not interfere with relevant obligations of an organization, other societal goals or legal rights of consumers and other third parties. Where exercising such rights would be inappropriate or impose unreasonable burdens on organizations, part of the solution lies in providing assurances to consumers that their personal information is protected by the full range of available accountability measures and will not be used for harmful purposes.
- Risk Management: CIPL welcomes NTIA’s characterization of risk management as the “core” of its approach to privacy protection. Identifying harms and addressing them specifically has the advantage of enabling organizations to prioritize their compliance measures and focus resources on what is most important, thereby strengthening both consumer privacy and organizations’ ability to engage in legitimate and accountable uses of personal information. It also means that we do not need to establish set categories of so-called sensitive information or certain predetermined high-risk processing activities, as any actual sensitivity or high-risk character will be determined and addressed in each risk assessment process.
- Accountability: CIPL strongly agrees with including accountability in the essential outcomes of a privacy framework. It is a key building block of modern data protection and is essential for the future of the digital society where laws alone cannot deliver timely, flexible and innovative solutions. CIPL recommends that NTIA clarify and elaborate upon this important concept in line with its globally accepted meaning, including in the APEC Privacy Framework and the GDPR, as well as other relevant international privacy regimes that incorporate this concept.
- Complaint-handling and Redress: In addition to the above outcomes, CIPL recommends the additional outcome of complaint-handling and redress. Consumers should be able to expect that organizations are able to reliably, quickly and effectively respond to actionable complaints and provide redress where appropriate. As it is consumer-facing, it should be a separately stated outcome that consumers can expect from a privacy framework.
High-Level Goals for Federal Action
- Harmonization: CIPL supports the effort to harmonize the U.S. privacy framework on the federal level, including through federal legislation that preempts inconsistent state privacy laws. CIPL recommends that NTIA clarify whether the proposed framework intends to cover employees, and suggests that a new framework should be focused on privacy in the consumer and commercial context and that the precise term “consumer” be defined to avoid legal uncertainty.
- Legal Clarity and Flexibility to Innovate: Clarity and flexibility in a privacy framework can be achieved through an approach based on organizational accountability and risk assessment. With respect to risk, agreement around methodologies for privacy assessments, guidance on types of risk and the sharing of organizational best practices can also significantly contribute to legal clarity without undermining the flexibility to innovate.
- Comprehensive Application: CIPL supports a comprehensive baseline privacy law that applies to all organizations, preempts inconsistent state laws, amends or replaces inconsistent federal privacy laws where appropriate, and otherwise works with or around well-functioning existing sectoral laws.
- Risk and Outcome-based Approach: CIPL agrees with the goal of creating a risk- and outcome-based approach to privacy regulation. Employing such an approach places the burden of protecting consumers directly where it belongs: on businesses that use personal data, rather than on consumers, who in an increasing number of contexts should not, and realistically cannot, be tasked with understanding in detail and managing for themselves complex data uses or constantly making choices about them.
- Interoperability: Maximizing interoperability between different legal and privacy regimes should be a top priority goal for the United States. Any new privacy framework for the U.S. should continue to enable the free, responsible and accountable flow of data across borders.
- Incentivizing Privacy Research: CIPL fully agrees with the goal of having the U.S. government encourage and incentivize research into and development of products and services that improve privacy protections. However, this goal should be broadened and amplified along the lines of the argument for incentivizing organizational accountability generally. This enables a race to the top whereby organizations not only strive to comply with the bare minimum of what is legally required but are incentivized and rewarded for heightened levels of organizational accountability that benefit all stakeholders.
- FTC Enforcement: CIPL agrees that the Federal Trade Commission should be the principal federal agency to enforce any new comprehensive U.S. privacy legislation and should be appropriately resourced as such. Exactly how a new privacy framework and the FTC as the principal federal agency should interact with other federal functional regulators and sectoral privacy laws should be carefully considered and worked out with input from all relevant stakeholders.
- Scalability: CIPL agrees that enforcement should be proportionate to the scale and scope of the information an organization is handling and should be outcome-based. With increased responsibilities under a broader privacy law, the FTC will have to ensure that its current approach is adapted to the changes in the scope and nature of its responsibilities.
- Enabling Effective Use of Personal Information: In addition to the above goals for federal action, CIPL suggests the additional goal of enabling broad and effective uses of personal information for the benefit of economic development and societal progress, as well as for the benefit of individuals, particularly the data subjects. Due to their supervisory position, modern data protection and privacy enforcement authorities have the responsibility, in addition to protecting consumer privacy, to safeguard and facilitate the beneficial potential of such information and, therefore, the full range of responsible and accountable data uses.
Following consideration of the comments it receives, CIPL recommends that NTIA take a holistic and deliberate approach toward developing a comprehensive privacy law that accomplishes the items discussed in the request for comment. One possible next step could be to articulate the outcomes and goals in draft legislative language, providing a clearer basis for further discussion of the precise elements and articulation of each of them. CIPL recommends an iterative process between NTIA and other public and private sector stakeholders toward that goal.
On November 20, 2018, the Illinois Supreme Court heard arguments in a case that could shape future litigation under the Illinois Biometric Information Privacy Act (“BIPA”). BIPA requires companies to (i) provide prior written notice to individuals that their biometric data will be collected and the purpose for such collection, (ii) obtain a written release from individuals before collecting their biometric data and (iii) develop a publicly available policy that sets forth a retention schedule and guidelines for deletion once the biometric data is no longer used for the purpose for which it was collected (but for no more than three years after collection). BIPA also prohibits companies from selling, leasing or trading biometric data.
The plaintiff in the case, Stacy Rosenbach v. Six Flags Entertainment Corp., alleged that Six Flags Entertainment Corporation (“Six Flags”) violated BIPA by collecting her son’s fingerprint in connection with the purchase of a season pass, without first notifying her or obtaining her consent to the collection of her son’s biometric data. At the trial level, Six Flags argued that the case should be dismissed for failure to establish standing because the plaintiff did not allege that actual harm resulted from the company’s collection of her son’s fingerprint data. The case was appealed to the Second District Appellate Court, which ruled in Six Flags’ favor, holding that BIPA plaintiffs cannot rely on technical violations of the law, such as failure to obtain consent, to be “aggrieved” and have standing. The plaintiff appealed the case to the Illinois Supreme Court.
In oral arguments heard by the Illinois Supreme Court on Tuesday, Six Flags again argued that the plaintiff must allege more than just a technical violation of BIPA to establish standing. Three of the Court’s seven justices appeared to disagree with this argument, with one, Justice Robert Thomas, countering that “there seems to be at least a logical appeal” to ensuring that individuals are made aware that their biometric data will be collected, and that “the purpose [of BIPA] is so [an actual harm] won’t happen in the first place.” Justice Anne Burke joined, stating that it is “too late to wait” for a violation of the law to occur in the first place because at that point, a plaintiff “may never know [about the violation] and you can’t get your fingerprints back. It’s irreparable harm.”
The Second District Appellate Court’s ruling in favor of Six Flags diverges from a First District Appellate Court opinion in Klaudia Sekura v. Krishna Schaumburg Tan Inc., which held that plaintiffs have causes of action under BIPA even without allegations of actual harm. The Illinois Supreme Court’s ruling in Rosenbach is expected to set the standard for which plaintiffs have standing under BIPA in future litigation.
On November 9, 2018, the European Commission (“the Commission”) submitted comments to the U.S. Department of Commerce’s National Telecommunications and Information Administration (“NTIA”) in response to its request for public comments on developing the administration’s approach to consumer privacy.
In its comments, the Commission welcomes and agrees with many of the high-level goals identified by NTIA, including harmonization of the legal landscape, incentivizing privacy research, employing a risk-based approach and creating interoperability at a global level. The Commission also welcomes that the key characteristics of a modern and flexible privacy regime (i.e., an overarching law, a core set of data protection principles, enforceable individual rights and an independent supervisory authority with effective enforcement powers) are also at the core of NTIA’s proposed approach to consumer privacy. The Commission structured its specific suggestions around these key characteristics.
In particular, the Commission makes specific suggestions around:
- Harmonization: The Commission notes that overcoming regulatory fragmentation associated with an approach based on sectoral law in favor of a more harmonized approach would create a level playing field, and provide necessary certainty for organizations while ensuring consistent protection for individuals.
- Data Protection Principles: The Commission commends NTIA on the inclusion of certain core data protection principles such as reasonable minimization, security, transparency and accountability, but suggests the further explicit inclusion of other principles such as lawful data processing (i.e., the requirement to process data pursuant to a legal basis, such as consent), purpose specification, accuracy and specific protections for sensitive categories of data.
- Breach Notification: The Commission suggests the specific inclusion of a breach notification requirement to enable individuals to protect themselves from and mitigate any potential harm that might result from a data breach. While there are already state breach notification laws in place, the Commission believes organizations and individuals could benefit from the harmonization of such rules.
- Individual Rights: The Commission believes that any proposal for a privacy regime should go beyond the inclusion of only traditional individual rights, such as access and correction, and should include other rights regarding automated decision-making (e.g., the right to explanation or to request human intervention) and rights around redress (e.g., the right to lodge a complaint and have it addressed, and the right to effective judicial redress).
- Oversight and Enforcement: The Commission notes that the effective implementation of privacy rules critically depends on having robust oversight and enforcement by an independent and well-resourced authority. In this regard, the Commission recommends strengthening the FTC’s enforcement authority, the introduction of mechanisms to ensure effective resolution of individual complaints and the introduction of deterrent sanctions.
The Commission notes in its response that while this consultation only covers a first step in a process that might lead to federal action, it stands ready to provide further comments on a more developed proposal in the future.
NTIA’s request for comments closed on November 9, 2018 and NTIA will post the comments it received online shortly.
On November 8, 2018, Privacy International (“Privacy”), a non-profit organization “dedicated to defending the right to privacy around the world,” filed complaints under the GDPR against consumer marketing data brokers Acxiom and Oracle. In the complaint, Privacy specifically requests the Information Commissioner (1) conduct a “full investigation into the activities of Acxiom and Oracle,” including into whether the companies comply with the rights (i.e., right to access, right to information, etc.) and safeguards (i.e., data protection impact assessments, data protection by design, etc.) in the GDPR; and (2) “in light of the results of that investigation, [take] any necessary further [action]… that will protect individuals from wide-scale and systematic infringements of the GDPR.”
The complaint alleges that the companies’ processing of personal data neither comports with the consent and legitimate interest requirements of the GDPR, nor the GDPR’s principles of:
- transparency (specifically relating to sources, recipients and profiling);
- fairness (considering individuals’ reasonable expectations, the lack of a direct relationship, and the opaque nature of processing);
- lawfulness (including whether either company’s reliance on consent or legitimate interest is justified);
- purpose limitation;
- data minimization.
The complaint emphasizes that Acxiom and Oracle are illustrative of the “systematic” problems in the data broker and AdTech ecosystems, and that it is “imperative that the Information Commissioner not only investigate these specific companies, but also take action in respect of other relevant actors in these industries and their practices.”
In addition to the complaint against Acxiom and Oracle, Privacy submitted two separate joined complaints against credit reference data brokers Experian and Equifax, and AdTech data brokers Quantcast, Tapad and Criteo.
On November 6, 2018, the French Data Protection Authority (the “CNIL”) published its own guidelines on data protection impact assessments (the “Guidelines”) and a list of processing operations that require a data protection impact assessment (“DPIA”). Read the guidelines and list of processing operations (in French).
The Guidelines aim to complement guidelines on DPIA adopted by the Article 29 Working Party on October 4, 2017, and endorsed by the European Data Protection Board (“EDPB”) on May 25, 2018. The CNIL crafted its own Guidelines to specify the following:
- Scope of the obligation to carry out a DPIA. The Guidelines describe the three examples of processing operations requiring a DPIA provided by Article 35(3) of the EU General Data Protection Regulation (“GDPR”). The Guidelines also list nine criteria the Article 29 Working Party identified as useful in determining whether a processing operation requires a DPIA, if that processing does not correspond to one of the three examples provided by the GDPR. In the CNIL’s view, as a general rule a processing operation meeting at least two of the nine criteria requires a DPIA. If the data controller considers that processing meeting two criteria is not likely to result in a high risk to the rights and freedoms of individuals, and therefore does not require a DPIA, the data controller should explain and document its decision for not carrying out a DPIA and include in that documentation the views of the data protection officer (“DPO”), if appointed. The Guidelines make clear that a DPIA should be carried out if the data controller is uncertain. The Guidelines also state that processing operations lawfully implemented prior to May 25, 2018 (e.g., processing operations registered with the CNIL, exempt from registration or recorded in the register held by the DPO under the previous regime) do not require a DPIA within a period of 3 years from May 25, 2018, unless there has been a substantial change in the processing since its implementation.
- Conditions in which a DPIA is to be carried out. The Guidelines state that DPIAs should be reviewed regularly—at minimum, every three years—to ensure that the level of risk to individuals’ rights and freedoms remains acceptable. This corresponds to the three-year period mentioned in the draft guidelines on DPIAs adopted by the Article 29 Working Party on April 4, 2017.
- Situations in which a DPIA must be provided to the CNIL. The Guidelines specify that data controllers may rely on the CNIL’s sectoral guidelines (“Referentials”) to determine whether the CNIL must be consulted. If the data processing complies with a Referential, the data controller may take the position that there is no high residual risk and no need to seek prior consultation for the processing from the CNIL. If the data processing does not fully comply with the Referential, the data controller should assess the level of residual risk and the need to consult the CNIL. The Guidelines note that the CNIL may request DPIAs in case of inspections.
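The CNIL’s rule of thumb described above can be sketched as a simple check. This is an illustration only, not legal advice: the criteria names below paraphrase the nine WP29 criteria, and even fewer than two matches does not automatically exempt a controller, which is why the Guidelines recommend a DPIA in cases of doubt.

```python
# The nine WP29/EDPB criteria for identifying high-risk processing
# (names paraphrased for illustration).
WP29_CRITERIA = {
    "evaluation or scoring",
    "automated decision-making with legal or similar effect",
    "systematic monitoring",
    "sensitive or highly personal data",
    "large-scale processing",
    "matching or combining datasets",
    "data concerning vulnerable data subjects",
    "innovative use of new technologies",
    "processing that prevents exercise of a right or use of a service",
}

def dpia_likely_required(criteria_met: set) -> bool:
    """CNIL's general rule: a processing operation meeting at least
    two of the nine criteria requires a DPIA. Meeting fewer than two
    is not an automatic exemption; when uncertain, carry one out and
    document the reasoning (with the DPO's views, if one is appointed)."""
    return len(criteria_met & WP29_CRITERIA) >= 2

print(dpia_likely_required({"systematic monitoring", "large-scale processing"}))  # True
```

A controller deciding against a DPIA despite meeting two criteria would still need to document that decision, per the Guidelines.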
CNIL’s List of Processing Operations Requiring a DPIA
The CNIL previously submitted a draft list of processing operations requiring a DPIA to the EDPB for its opinion. The CNIL adopted its final list on October 11, 2018, based on that opinion. The final list includes 14 types of processing operations for which a DPIA is mandatory. The CNIL provided concrete examples for each type of processing operation, including:
- processing operations for the purpose of systematically monitoring employees’ activities, such as the implementation of data loss prevention tools, CCTV systems recording employees handling money, CCTV systems recording a warehouse stocking valuable items in which handlers are working, digital tachographs installed in road freight transport vehicles, etc.;
- processing operations for the purpose of reporting professional concerns, such as the implementation of a whistleblowing hotline;
- processing operations involving profiling of individuals that may lead to their exclusion from the benefit of a contract or to the suspension or termination of the contract, such as processing to combat fraud involving (non-cash) means of payment;
- profiling that involves data coming from external sources, such as a combination of data operated by data brokers and processing to customize online ads;
- processing of location data on a large scale, such as a mobile app that collects users’ geolocation data, etc.
The CNIL’s list is non-exhaustive and may be regularly reviewed, depending on the CNIL’s assessment of the “high risks” posed by certain processing operations.
The CNIL is expected to soon publish its list of processing operations for which a DPIA is not required.
On October 23, 2018, the parties in the Yahoo! Inc. (“Yahoo!”) Customer Data Security Breach Litigation pending in the Northern District of California and the parties in the related litigation pending in California state court filed a motion seeking preliminary approval of a settlement related to breaches of the company’s data. These breaches were announced from September 2016 to October 2017 and collectively impacted approximately 3 billion user accounts worldwide. In June 2017, Yahoo! and Verizon Communications Inc. had completed an asset sale transaction, pursuant to which Yahoo! became Altaba Inc. (“Altaba”) and Yahoo!’s previously operating business became Oath Holdings Inc. (“Oath”). Altaba and Oath have each agreed to be responsible for 50 percent of the settlement.
Under the terms of the agreement, Yahoo!, through its successor in interest, Oath Holdings Inc., has agreed to enhance its business practices to improve the security of its users’ personal information stored on its databases. Yahoo! will also pay for a minimum of two years of credit monitoring services to protect settlement class members from future harm, as well as establish a $50 million settlement fund to provide an alternative cash payment for those who verify they already have credit monitoring or identity protection. The settlement fund will also cover demonstrated out-of-pocket losses, including loss of time, and payments to Yahoo! users who paid for advertisement-free or premium Yahoo! Mail services and those who paid for Aabaco Small Business services, which included business email services. The motion for approval is currently before the court, which has scheduled a hearing for November 29, 2018, on the matter.
On November 1, 2018, Senator Ron Wyden (D-Ore.) released a draft bill, the Consumer Data Protection Act, that seeks to “empower consumers to control their personal information.” The draft bill imposes heavy penalties on organizations and their executives, and would require senior executives of companies with more than one billion dollars per year of revenue or data on more than 50 million consumers to file annual data reports with the Federal Trade Commission. The draft bill would subject senior company executives to imprisonment for up to 20 years or fines up to $5 million, or both, for certifying false statements on an annual data report. Additionally, like the EU General Data Protection Regulation, the draft bill proposes a maximum fine of 4% of total annual gross revenue for companies that are found to be in violation of Section 5 of the FTC Act.
The draft bill also proposes to grant the FTC authority to write and enforce privacy regulations, to establish minimum privacy and cybersecurity standards, and to create a national “Do Not Track” system that would allow consumers to prevent third-party companies from tracking internet users by sharing or selling data and targeting advertisements based on their personal information.
Senator Wyden stated, “My bill creates radical transparency for consumers, gives them new tools to control their information and backs it up with tough rules.”
Effective October 1, 2018, Connecticut law requires organizations that experience a security breach affecting Connecticut residents’ Social Security numbers (“SSNs”) to provide 24 months of credit monitoring to affected individuals. Previously, Connecticut law required entities to provide 12 months of credit monitoring for breaches affecting SSNs.
The amendment was passed as part of Public Act 18-90, An Act Concerning Security Freezes on Credit Reports, Identity Theft Prevention Services and Regulations of Credit Rating Agencies. Among other requirements, the Act also eliminates fees for placing and lifting a security freeze and requires consumer reporting agencies to (1) act on requests related to credit freezes as soon as practicable, but no later than 5 days for requests to place a security freeze or 3 days for requests to remove a security freeze, and (2) offer to notify the other consumer reporting agencies of the request for a credit freeze on behalf of the consumer.
On October 29, 2018, the Office of the Privacy Commissioner of Canada (the “OPC”) released final guidance (“Final Guidance”) regarding how businesses may satisfy the reporting and record-keeping obligations under Canada’s new data breach reporting law. The law, effective November 1, 2018, requires organizations subject to the federal Personal Information Protection and Electronic Documents Act (“PIPEDA”) to (1) report to the OPC breaches of security safeguards involving personal information “that pose a real risk of significant harm” to individuals, (2) notify affected individuals of the breach and (3) keep records of every breach of security safeguards, regardless of whether or not there is a real risk of significant harm.
As we previously reported, the OPC had published draft guidance for which it had requested public comment. Like the draft version, the Final Guidance includes information regarding how to assess the risk of significant harm, and regarding notice, reporting and recordkeeping requirements (i.e., timing, content and form). The Final Guidance adds a requirement that a record must also include either sufficient detail for the OPC to assess whether an organization correctly applied the real risk of significant harm standard, or a brief explanation as to why the organization determined there was not a real risk of significant harm.
The Final Guidance additionally clarifies the following:
- Who is responsible for reporting and keeping records of the breach? An organization subject to PIPEDA must report breaches of security safeguards involving personal information “under its control.”
- Who is “in control” of personal information? The Final Guidance notes that in general, when an organization (the “principal”) provides personal information to a third party processor (the “processor”), the principal may reasonably be found to be in control of the personal information it has transferred to the processor, triggering the reporting and record-keeping obligations of a breach that occurs with the processor. On the other hand, if the processor uses or discloses the same personal information for other purposes, it is no longer simply processing the personal information on behalf of the principal; it is instead acting as an organization “in control” of the information, and would thereby have the obligation to notify, report, and record. The Final Guidance acknowledges that determining who has personal information “under its control” must be assessed on a case-by-case basis, taking into account any relevant contractual arrangements and “commercial realities” between organizations, such as shifting roles and evolving business models. The Final Guidance recommends that principals ensure “sufficient contractual arrangements [are] in place with the processor to address compliance” with the PIPEDA breach reporting, notification and record-keeping obligations.
- When do other entities besides affected individuals and the OPC need to be notified? If a breach triggers notification due to a real risk of significant harm, “any government institutions or organizations that the organization believes… may be able to reduce the risk of harm… or mitigate the harm” resulting from the breach must also be notified.
Though the privacy commissioner called the new law a “step in the right direction,” the commissioner also voiced concerns about the law, including that: (1) breach reports to the OPC do not contain the information that would allow for the regulator to assess the quality of an organization’s data security safeguards; (2) the lack of financial sanctions for inadequate data security safeguards misses an opportunity to incentivize organizations to prevent breaches; and (3) the government has not provided the OPC with enough resources to “analyze breach reports, provide advice and verify compliance.”
Let’s talk about it
October brought Social-Engineer to the SEVillage at DerbyCon 8.0 – Evolution, SEORG’s final SEVillage for the year, and WOW, was it an AMAZING DerbyCon. Ryan and Colin arrived Tuesday to set up shop and stuff many padfolios to prepare for their OSINT class that ran over Wednesday and Thursday. The OSINT class was Social-Engineer’s largest class EVER and it sold out in TWELVE SECONDS. Yes. You read that correctly. Our largest class sold out in 12 seconds. The students loved it, and one team even finished the final hands-on challenge in just over an hour when it usually takes multiple hours. A second team slid past the finish line in the nick of time, just before class ended on Thursday.
After class, the rest of the team rolled into Louisville, KY where DerbyCon was held at the Marriott downtown, instead of the Hyatt, for the first time. Our amazing volunteers and staff gathered together to set up the village and prep for the amazing few days to come.
Vishing data and the SECTF – Friday, October 5, 2018
Friday started for SEORG at noon, when Cat Murdock and Chris Hadnagy took the Track 1 stage to present the last three years of Social-Engineer’s vishing data in their talk “IRS, HR, Microsoft and your Grandma: What they all have in common.”
Cat gets psyched about data
Did you know that Monday is the hardest day to compromise targets via vishing, by a HUGE percentage?!? On Mondays, social engineers are looking at a 29% compromise ratio, compared to a 58%-65% compromise ratio on any other day of the week. Apparently, employees hit the ground running on Mondays: they are fresh off the weekend and ready to secure their information from SEs.
Chris and Cat drop some data knowledge
That one time Cat stole Dave’s hat but everyone got iced anyway
After the speech, the SEVillage team raced back to launch the 2nd SECTF at DerbyCon. The room was PACKED, with audience members sitting on the floor and lining the walls.
A completely packed room awaited the SECTF at DerbyCon
This year, the featured targets were large energy companies, including Halliburton, Phillips 66, Devon Energy, Noble Energy, and Sunoco. While these targets were particularly challenging, and some even had systems that had to be avoided for ethical reasons, it was one of the most entertaining SECTFs to date.
DEF CON’s 2nd place winner and always amazing audience member – Rachel Tobac
All the contestants were able to get targets on the phone and elicit many flags. The competition was SO fierce that only a single flag separated the first- and second-place winners. In the end, Krittika’s amazing reporting and calls won her the first-place trophy. This means that all the winners of the SECTF prizes this year were women!!! Get it, ladies!
Our DerbyCon 1st place winner, Krittika, answering some Q&A after her calls
The first competitor started the afternoon off right! Soooo many flags!
This sweet SECTF trophy finally found its forever home!
Can you fool the Polygraph, Mission SE Impossible, and Ethics – Saturday, October 6
Saturday at Derby is always an amazing day, as it starts off with the incredibly unique “Can you fool the Polygraph” challenge. Our reigning champion from 2017 began as the first competitor in this competition.
Reigning champ defends his title!
Contestants had to answer extremely uncomfortable questions while attempting to trick the polygraph machine, which has sensors measuring reactions on the chest, fingers, and even your butt. Questions ran along the lines of, “Have you ever taken credit for a coworker’s accomplishments?” and “Do you regularly urinate in the shower?” Ultimately, our ferocious, and possibly psycho/sociopathic, competitors ended in a three-way tie!! Whaaatt….
With game faces like this, the tie was not surprising
Clearly, we couldn’t end in a tie. So, our amazing polygraph examiner created a tie breaker for us on the spot! Thanks, Jacob. The tie breaker had each contestant answer “no” to the question, “Is it <insert day of the week here>?” Each contestant was asked about five days of the week, including Saturday, the day of the competition, and had to answer “no” every time. The individual who lied the best won!
CONGRATS TO OUR WINNER SCOTT!!!
The most convincing liar of them all – Well done, Scott!
After a brief lunch break, the Village rallied for Mission SE Impossible, a staged “escape room” type competition where competitors have to shim themselves out of handcuffs and leg cuffs, pick a lock, analyze microexpressions, and traverse a laser grid produced by tiny sharks with lasers on their freakin’ heads.
No pressure or anything, but I hope he hustles with all those people watching…
Will he break free?!?! Spoiler alert – he did.
The SEVillage is family friendly, and this kid ROCKED it!
Disclaimer: No sharks were harmed in the making of MSI
Super sweet lasers in the HOUSE
Commitment to dodging those laser sharks
Our winner, squeezing through lasers on his way to victory
Ultimately, MSI ended with our winner, Rick, slamming the competition by finishing in RECORD time at 59 seconds. CONGRATULATIONS, RICK!!!!
Once MSI wrapped up, we only had one SEVillage activity remaining: a panel on Ethics in Social Engineering featuring Jamison Scheeres, Chris Silvers, Rachel Tobac, Grifter, and Chris Hadnagy. The panel was inspired by our recently released Social Engineering Code of Ethics, which quickly became a community tool and topic of discussion after its release. It was truly wonderful to see a packed house looking to discuss ethics in our work from 6-8PM on a Saturday.
Full house for the ethics panel
The discussion was amazing; every viewpoint and question was compelling and deep. Ultimately, the community is made stronger when we can have tough conversations like these, where we really dig into how the tactics we use can take an emotional toll on targets while still being a necessary precaution against malicious actors. A full recording of this panel is available here. #NotAPhish
The participants of the Ethics in Social Engineering Panel, Jamison, Chris S, Rachel, Grifter, and Chris H
Jamison dropping some deep thoughts
Wrap up – Sunday, October 7, 2018
Sunday, the team packed up the village and wearily found brunch in Louisville before heading to closing ceremonies, officially wrapping up the SEVillage at DerbyCon as well as all SEVillages for 2018. The weekend was a truly epic con, and we are always so grateful to be able to attend. We could not do it without our sponsor, Red Sky, or our amazing team. A huge thanks to Jim, Kris, Chris, Hannah, Evan, Spencer, Colin, Ryan, Cat, and Chris H – the weekend would literally not be possible without these wonderful individuals.
Colin manning that swag booth!
These are some great people!
Thank you all, and be on the lookout for the SECTF report that dives into the data from all our 2018 SECTF competitions!! The webinar discussing the report will be at 2PM ET on November 28. Sign up now and don’t forget to mark your calendars!
Recently, the U.S. Department of Health and Human Services’ Office for Civil Rights (“OCR”) entered into a resolution agreement and record settlement of $16 million with Anthem, Inc. (“Anthem”) following Anthem’s 2015 data breach. That breach, affecting approximately 79 million individuals, was the largest breach of protected health information (“PHI”) in history.
Three years ago, in February 2015, OCR opened a compliance review of Anthem, the nation’s second largest health insurer, following media reports that Anthem had suffered a significant cyberattack. In March 2015, Anthem submitted a breach report to OCR detailing the cyberattack, indicating that it began after at least one employee responded to a spear phishing email. Attackers were able to download malicious files to the employee’s computer and gain access to other Anthem systems that contained individuals’ names, Social Security numbers, medical identification numbers, addresses, dates of birth, email addresses and employment information.
OCR investigated Anthem and found that it may have violated the HIPAA Privacy and Security Rules by failing to:
- conduct an accurate and thorough risk analysis of the risks and vulnerabilities to the confidentiality, integrity and availability of electronic PHI (“ePHI”);
- implement procedures to regularly review records of information system activity;
- identify and respond to the security incident;
- implement sufficient technical access procedures to protect access to ePHI; and
- prevent unauthorized access to ePHI.
The resolution agreement requires Anthem to pay $16 million to OCR and enter into a Corrective Action Plan that obligates Anthem to:
- conduct a risk analysis and submit it to OCR for review and approval;
- implement a risk management plan to address and mitigate the risks and vulnerabilities identified in the risk analysis;
- revise its policies and procedures to specifically address (1) the regular review of records of information system activity and (2) technical access to ePHI, such as network or portal segmentation and the enforcement of password management requirements, such as password age;
- distribute the policies and procedures to all members of its workforce within 30 days of adoption;
- report any events of noncompliance with its HIPAA policies and procedures; and
- submit annual compliance reports for a period of two years.
In announcing the settlement with Anthem, OCR Director Roger Severino noted that the record-breaking settlement with Anthem was merited, as the company had experienced the largest health data breach in U.S. history. “Unfortunately, Anthem failed to implement appropriate measures for detecting hackers who had gained access to their system to harvest passwords and steal people’s private information.” Severino continued, “We know that large health care entities are attractive targets for hackers, which is why they are expected to have strong password policies and to monitor and respond to security incidents in a timely fashion or risk enforcement by OCR.”
The $16 million settlement with Anthem almost triples the previous record of $5.55 million, which OCR imposed in 2016 against Advocate Health Care Network. The settlement also comes two months after a U.S. District Court granted final approval of Anthem’s record $115 million class action settlement related to the breach.
We’re going through a similar enlightenment in the security space. To get the best results, we need to fill the trough that our Machine Learning will eat from with high-value data feeds from our existing security products (whatever happens to be growing in the area) but also (and more precisely for this discussion) from beyond what we typically consider security products to be.
In this post to the Oracle Security blog, I make the case that "we shouldn’t limit our security data to what has traditionally been in-scope for security discussions" and how understanding Application Topology (and feeding that knowledge into the security trough) can help reduce risk and improve security.
Click to read the full article: Improve Security by Thinking Beyond the Security Realm
The language of cybersecurity evolves in step with changes in attack and defense tactics. You can get a sense for such dynamics by examining the term fileless. It fascinates me not only because of its relevance to malware—which is one of my passions—but also because of its knack for agitating many security practitioners.
I traced the origins of “fileless” to 2001, when Eugene Kaspersky (of Kaspersky Lab) used it in reference to the Code Red worm’s ability to exist solely in memory. Two years later, Peter Szor defined the term in a patent for Symantec, explaining that such malware doesn’t reside in a file, but instead “appends itself to an active process in memory.”
Eugene was prophetic in predicting that fileless malware “will become one of the most widespread forms of malicious programs” due to antivirus products’ ineffectiveness against such threats. Today, when I look at the ways in which malware bypasses detection, the evasion techniques often fall under the fileless umbrella, though the term has expanded beyond its original meaning.
Fileless was synonymous with in-memory until around 2014.
The adversary’s challenge with purely in-memory malware is that it disappears once the system restarts. In 2014, Kevin Gossett’s Symantec article explained how Poweliks malware overcame this limitation by using the legitimate Windows programs rundll32.exe and powershell.exe to maintain persistence, extracting and executing malicious scripts from the registry. Kevin described this threat as “fileless” because it avoided placing code directly on the file system. Paul Rascagnères at G Data further explained that Poweliks infected systems by using a booby-trapped Microsoft Word document.
The Poweliks discussion, and similar malware that appeared afterwards, set the tone for the way fileless attacks are described today. Yes, fileless attacks strive to maintain clearly malicious code solely or mostly in memory. Also, they tend to involve malicious documents and scripts. They often misuse utilities built into the operating system and abuse various capabilities of Windows, such as the registry, to maintain persistence.
However, the growing ambiguity behind the modern use of the term fileless is making it increasingly difficult to understand what specific methods fileless malware uses for evasion. It’s time to disambiguate this word to hold fruitful conversations about our ability to defend against its underlying tactics.
Here’s my perspective on the methods that comprise modern fileless attacks:
- Malicious Documents: They can act as flexible containers for other files. Documents can also carry exploits that execute malicious code. They can execute malicious logic that begins the infection and initiates the next link in the infection chain.
- Malicious Scripts: They can interact with the OS without the restrictions that some applications, such as web browsers, might impose. Scripts are harder for anti-malware tools to detect and control than compiled executables. In addition, they offer an opportunity to split malicious logic across several processes.
- Living Off the Land: Microsoft Windows includes numerous utilities that attackers can use to execute malicious code with the help of a trusted process. These tools allow adversaries to “trampoline” from one stage of the attack to another without relying on compiled malicious executables.
- Malicious Code in Memory: Memory injection abuses features of Microsoft Windows to interact with the OS even without exploiting vulnerabilities. Attackers can wrap their malware into scripts, documents or other executables, extracting payload into memory during runtime.
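The memory-resident point above can be illustrated with a deliberately benign Python sketch. The dictionary standing in for a registry key is hypothetical; the idea it demonstrates is grounded in the Poweliks description earlier: the “payload” exists only as an encoded string that is compiled and executed entirely in memory, so a scanner that inspects only files on disk never sees a script to flag.

```python
import base64

# Benign stand-in for a registry value holding an encoded script payload.
# Poweliks-style malware stored similar blobs in the Windows registry so
# that no script file ever appeared on disk for antivirus to scan.
fake_registry = {
    r"HKCU\Software\Demo\Payload": base64.b64encode(
        b"result = sum(range(10))"
    ).decode()
}

def run_in_memory(store, key):
    """Decode a payload from the store and execute it purely in memory."""
    source = base64.b64decode(store[key]).decode()
    namespace = {}
    # The compiled code object lives only in this process's memory;
    # nothing is ever written to the file system.
    exec(compile(source, "<in-memory>", "exec"), namespace)
    return namespace["result"]

print(run_in_memory(fake_registry, r"HKCU\Software\Demo\Payload"))  # 45
```

Real attacks perform the equivalent with PowerShell reading from the actual registry, which is one reason defenders monitor the behavior of script-host processes rather than relying solely on file scanning.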
While some attacks and malware families are fileless in all aspects of their operation, most modern malware that evades detection includes at least some fileless capabilities. Such techniques allow adversaries to operate in the periphery of anti-malware software. The success of such attack methods is the reason for the continued use of the term fileless in discussions among cybersecurity professionals.
Language evolves as people adjust the way they use words and the meaning they assign to them. This certainly happened to fileless, as the industry looked for ways to discuss evasive threats that avoided the file system and misused OS features. For a deeper dive into this topic, read the following three articles upon which I based this overview:
As reported on the Blockchain Legal Resource, California Governor Jerry Brown recently signed into law Assembly Bill No. 2658 for the purpose of further studying blockchain’s application to Californians. In doing so, California joins a growing list of states officially exploring distributed ledger technology.
Specifically, the law requires the Secretary of the Government Operations Agency to convene a blockchain working group prior to July 1, 2019. Under the new law, “blockchain” means “a mathematically secured, chronological and decentralized ledger or database.” In addition to including various representatives from state government, the working group is required to include appointees from the technology industry and non-technology industries, as well as appointees with backgrounds in law, privacy and consumer protection.
Under the new law, which has a sunset date of January 1, 2022, the working group is required to evaluate:
- the uses of blockchain in state government and California-based businesses;
- the risks, including privacy risks, associated with the use of blockchain by state government and California-based businesses;
- the benefits associated with the use of blockchain by state government and California-based businesses;
- the legal implications associated with the use of blockchain by state government and California-based businesses; and
- the best practices for enabling blockchain technology to benefit the State of California, California-based businesses and California residents.
In doing so, the working group is required to seek “input from a broad range of stakeholders with a diverse range of interests affected by state policies governing emerging technologies, privacy, business, the courts, the legal community and state government.”
The working group is also tasked with delivering a report to the California Legislature by January 1, 2020, on the potential uses, risks and benefits of blockchain technology by state government and California businesses. Moreover, the report is required to include recommendations for amending relevant provisions of California law that may be impacted by the deployment of blockchain technology.
Vizio, Inc. (“Vizio”), a California-based company best known for its internet-connected televisions, agreed to a $17 million settlement that, if approved, will resolve multiple proposed consumer class actions consolidated in California federal court. The suits’ claims, which are limited to the period between February 1, 2014 and February 6, 2017, involve data-tracking software Vizio installed on its smart TVs. The software allegedly identified content displayed on Vizio TVs and enabled Vizio to determine the date, time and channel of programs, and whether a viewer watched live or recorded content. The viewing patterns were connected to viewers’ IP addresses, though never, Vizio emphasized in its press release announcing the proposed settlement, to an individual’s name, address or similar identifying information. According to Vizio, viewing data allows advertisers and programmers to develop content better aligned with consumers’ preferences and interests.
Among other claims, the suits allege that Vizio failed to adequately disclose its surveillance practices and obtain consumers’ express consent before collecting the information. The various suits, some of which were filed in 2015, were consolidated in California’s Central District in April 2016 and subsequently survived Vizio’s motion to dismiss. Vizio had argued that several of the claims were deficient, and contended that the injunctive relief claims were moot in light of a February 2017 consent decree resolving the Federal Trade Commission’s (“FTC”) complaint over Vizio’s collection and use of viewing data and other information. To settle the FTC case, Vizio agreed, among other things, to stop unauthorized tracking, to prominently disclose its TV viewing collection practices and to get consumers’ express consent before collecting and sharing viewing information.
The parties notified the district court in June that they had reached a settlement in principle. On October 4, 2018, they jointly moved for preliminary settlement approval. Counsel for the consumers argued that the deal is fair because the revenue Vizio obtained from sharing consumers’ data will be fully disgorged, and class members who submit a claim will receive between $13 and $31 each, based on an anticipated claims rate of 2 to 5 percent. Vizio also agreed to provide non-monetary relief, including revised on-screen disclosures concerning its viewing data practices and deletion of all viewing data collected prior to February 6, 2017. The relief is contingent on the court’s approval of the settlement.
The U.S. Department of Commerce’s National Institute of Standards and Technology recently announced that it is seeking public comment on Draft NISTIR 8228, Considerations for Managing Internet of Things (“IoT”) Cybersecurity and Privacy Risks (the “Draft Report”). The document is to be the first in a planned series of publications that will examine specific aspects of the IoT topic.
The Draft Report is designed “to help federal agencies and other organizations better understand and manage the cybersecurity and privacy risks associated with their IoT devices throughout their lifecycles.” According to the Draft Report, “[m]any organizations are not necessarily aware they are using a large number of IoT devices. It is important that organizations understand their use of IoT because many IoT devices affect cybersecurity and privacy risks differently than conventional IT devices do.”
The Draft Report identifies three high-level considerations with respect to the management of cybersecurity and privacy risks for IoT devices as compared to conventional IT devices: (1) many IoT devices interact with the physical world in ways conventional IT devices usually do not; (2) many IoT devices cannot be accessed, managed or monitored in the same ways conventional IT devices can; and (3) the availability, efficiency and effectiveness of cybersecurity and privacy capabilities are often different for IoT devices than conventional IT devices. The Draft Report also identifies three high-level risk mitigation goals: (1) protect device security; (2) protect data security; and (3) protect individuals’ privacy.
In order to address those considerations and risk mitigation goals, the Draft Report provides the following recommendations:
- Understand the IoT device risk considerations and the challenges they may cause to mitigating cybersecurity and privacy risks for devices in the appropriate risk mitigation areas.
- Adjust organizational policies and processes to address the cybersecurity and privacy risk mitigation challenges throughout the IoT device lifecycle.
- Implement updated mitigation practices for the organization’s IoT devices as you would any other changes to practices.
Comments are due by October 24, 2018.
Recently, the French Data Protection Authority (“CNIL”) published its initial assessment of the compatibility of blockchain technology with the EU General Data Protection Regulation (“GDPR”) and proposed concrete solutions for organizations wishing to use blockchain technology when implementing data processing activities.
What is a Blockchain?
A blockchain is a database in which data is stored and distributed across a large number of computers, and in which all entries (called “transactions”) are visible to all users of the blockchain. It is a technology that can be used to process personal data, not a processing activity in itself.
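The chained structure behind this definition can be sketched in a few lines of Python. This is an illustrative toy, not a real consensus protocol: each block commits to the hash of the previous block, which is what makes retroactive rectification or erasure of recorded entries technically impossible without breaking every subsequent link.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    """Append a block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})
    return chain

chain = []
add_block(chain, ["alice pays bob 5"])
add_block(chain, ["bob pays carol 2"])

# Tampering with an earlier block breaks the link to every later block.
original = block_hash(chain[0])
chain[0]["transactions"][0] = "alice pays bob 500"
assert block_hash(chain[0]) != original        # the block's hash changed...
assert chain[1]["prev_hash"] == original       # ...but its successor still
                                               # commits to the old hash
```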
Scope of the CNIL’s Assessment
The CNIL made it clear that its assessment does not apply to (1) distributed ledger technology (DLT) solutions and (2) private blockchains.
- DLT solutions are not blockchains and are too recent and rare to allow the CNIL to carry out a generic analysis.
- Private blockchains are defined by the CNIL as blockchains under the control of a party that has sole control over who can join the network and who can participate in the consensus process of the blockchain (i.e., the process for determining which blocks get added to the chain and what the current state is). These private blockchains are simply classic distributed databases. They do not raise specific GDPR compliance issues, unlike public blockchains (i.e., blockchains that anyone in the world can read or send transactions to, and expect to see included if valid, and anyone in the world can participate in the consensus process) and consortium blockchains (i.e., blockchains subject to rules that define who can participate in the consensus process or even conduct transactions).
In its assessment, the CNIL first examined the role of the actors in a blockchain network as a data controller or data processor. The CNIL then issued recommendations to minimize privacy risks to individuals (data subjects) when their personal data is processed using blockchain technology. In addition, the CNIL examined solutions to enable data subjects to exercise their data protection rights. Lastly, the CNIL discussed the security requirements that apply to blockchain.
Role of Actors in a Blockchain Network
The CNIL made a distinction between the participants who have permission to write on the chain (called “participants”) and those who validate a transaction and create blocks by applying the blockchain’s rules so that the blocks are “accepted” by the community (called “miners”). According to the CNIL, participants, who decide to submit data for validation by miners, act as data controllers when (1) the participant is an individual and the data processing is not purely personal but is linked to a professional or commercial activity; or (2) the participant is a legal person and enters data into the blockchain.
If a group of participants decides to implement a processing activity on a blockchain for a common purpose, the participants should identify the data controller upstream, e.g., by (1) creating an entity and appointing that entity as the data controller, or (2) appointing the participant who takes the decisions for the group as the data controller. Otherwise, they could all be considered as joint data controllers.
According to the CNIL, data processors within the meaning of the GDPR may be (1) smart contract developers who process personal data on behalf of the participant – the data controller, or (2) miners who validate the recording of the personal data in the blockchain. Qualifying miners as data processors may raise practical difficulties in the context of public blockchains, since it requires miners to execute a contract with the data controller containing all the elements provided for in Article 28 of the GDPR. The CNIL announced that it is currently studying this issue in depth. In the meantime, the CNIL encouraged actors to use innovative solutions that enable them to comply with the obligations the GDPR imposes on data processors.
How to Minimize Risks to Data Subjects
- Assessing the appropriateness of using blockchain
As part of the Privacy by Design requirements under the GDPR, data controllers must consider in advance whether blockchain technology is appropriate to implement their data processing activities. Blockchain technology is not necessarily the most appropriate technology for all processing of personal data, and may cause difficulties for the data controller to ensure compliance with the GDPR, and in particular, its cross-border data transfer restrictions. In the CNIL’s view, if the blockchain’s properties are not necessary to achieve the purpose of the processing, data controllers should give priority to other solutions that allow full compliance with the GDPR.
If it is appropriate to use blockchain technology, data controllers should use a consortium blockchain that ensures better control of the governance of personal data, in particular with respect to data transfers outside of the EU. According to the CNIL, the existing data transfer mechanisms (such as Binding Corporate Rules or Standard Contractual Clauses) are fully applicable to consortium blockchains and may be implemented easily in that context, while it is more difficult to use these data transfer mechanisms in a public blockchain.
- Choosing the right format under which the data will be recorded
As part of the data minimization requirement under the GDPR, data controllers must ensure that the data is adequate, relevant and limited to what is necessary in relation to the purposes for which the data is processed.
In this respect, the CNIL recalled that the blockchain may contain two main categories of personal data, namely (1) the credentials of participants and miners and (2) additional data entered into a transaction (e.g., diploma, ownership title, etc.) that may relate to individuals other than the participants and miners.
The CNIL noted that it was not possible to further minimize the credentials of participants and miners since such credentials are essential to the proper functioning of the blockchain. According to the CNIL, the retention period of this data must necessarily correspond to the lifetime of the blockchain.
With respect to additional data, the CNIL recommended using solutions in which (1) data in cleartext form is stored outside of the blockchain and (2) only information proving the existence of the data is stored on the blockchain (i.e., cryptographic commitment, fingerprint of the data obtained by using a keyed hash function, etc.).
In situations in which none of these solutions can be implemented, and when this is justified by the purpose of the processing and the data protection impact assessment revealed that residual risks are acceptable, the data could be stored either with a non-keyed hash function or, in the absence of alternatives, “in the clear.”
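The CNIL-recommended commitment pattern can be sketched in Python (the store and function names here are hypothetical): the personal data and an HMAC key live off-chain, only the keyed fingerprint goes on-chain, and “erasure” then amounts to destroying the off-chain key and data, leaving the immutable on-chain value practically meaningless.

```python
import hashlib
import hmac
import secrets

# Hypothetical off-chain store mapping fingerprint -> (key, personal data).
off_chain_store = {}

def commit(data: bytes) -> str:
    """Keep the data off-chain; return a keyed fingerprint for the chain."""
    key = secrets.token_bytes(32)
    fingerprint = hmac.new(key, data, hashlib.sha256).hexdigest()
    off_chain_store[fingerprint] = (key, data)
    return fingerprint

diploma = b"MSc awarded to Alice, University of Example, 2018"
on_chain = commit(diploma)  # only this hex digest is recorded on-chain

# Anyone holding the key and the data can prove the commitment matches.
key, data = off_chain_store[on_chain]
assert hmac.new(key, data, hashlib.sha256).hexdigest() == on_chain

# "Erasure": destroy the off-chain key and data. The on-chain fingerprint
# remains immutable, but without the key it can no longer be linked to,
# or used to confirm, the underlying personal data.
del off_chain_store[on_chain]
```

Using a keyed hash (rather than a plain one) matters here: without the key, an observer cannot test candidate values against the on-chain fingerprint.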
How to Ensure that Data Subjects Can Effectively Exercise Their Data Protection Rights
According to the CNIL, the exercise of the right to information, the right of access and the right to data portability does not raise any particular difficulties in the context of blockchain technology (i.e., data controllers may provide notice of the data processing and may respond to data subjects’ requests for access to their personal data or for data portability).
However, the CNIL recognized that it is technically impossible for data controllers to meet data subjects’ requests for erasure of their personal data when the data is entered into the blockchain: once in the blockchain system, the data can no longer be rectified or erased.
In this respect, the CNIL pointed out that technical solutions exist to move towards compliance with the GDPR. This is the case if the data is stored on the blockchain using a cryptographic method (see above). In this case, the deletion of (1) the data stored outside of the blockchain and (2) the verification elements stored on the blockchain, would render the data almost inaccessible.
With respect to the right to rectification of personal data, the CNIL recommended that the data controller enter the updated data into a new block since a subsequent transaction may cancel the first transaction, even if the first transaction will still appear in the chain. The same solutions as those applicable to requests for erasure could be applied to inaccurate data if that data must be erased.
The CNIL considered that the security requirements under the GDPR remain fully applicable in the blockchain context.
In the CNIL’s view, the challenges posed by blockchain technology call for a response at the European level. The CNIL announced that it will cooperate with other EU supervisory authorities to propose a robust and harmonized approach to blockchain technology.
I had not seen this interesting letter (August 27, 2018) from the House Energy and Commerce Committee to DHS about the nature of funding and support for the CVE.
This is the sort of thoughtful work that we hope and expect government departments to do, and kudos to everyone involved in thinking about how CVE should be nurtured and maintained.
On September 28, 2018, California Governor Jerry Brown signed into law two identical bills regulating Internet-connected devices sold in California. S.B. 327 and A.B. 1906 (the “Bills”), aimed at the “Internet of Things,” require that manufacturers of connected devices—devices which are “capable of connecting to the Internet, directly or indirectly,” and are assigned an Internet Protocol or Bluetooth address, such as Nest’s thermostat—outfit the products with “reasonable” security features by January 1, 2020; or, in the bills’ words: “equip [a] device with a reasonable security feature or features that are appropriate to the nature and function of the device, appropriate to the information it may collect, contain, or transmit, and designed to protect the device and any information contained therein from unauthorized access, destruction, use, modification, or disclosure[.]”
According to Bloomberg Law, the Bills’ silence on what “reasonable” features include is intentional; it is up to the manufacturers to decide what steps to take. Manufacturers argue that the Bills are egregiously vague, and that they do not apply to companies that import and resell connected devices made in other countries under their own labels.
The Bills are opposed by the Custom Electronic Design & Installation Association, Entertainment Software Association and National Electrical Manufacturers Association. They are sponsored by Common Sense Kids Action; supporters include the Consumer Federation of America, Electronic Frontier Foundation and Privacy Rights Clearinghouse.
On September 26, 2018, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth LLP submitted formal comments to the Indian Ministry of Electronics and Information Technology on the draft Indian Data Protection Bill 2018 (“Draft Bill”).
CIPL’s comments on the Draft Bill focus on several key issues that are of particular importance for any modern-day data protection law, including increased emphasis on accountability and the risk-based approach to data processing, interoperability with other data protection laws globally, the significance of having a variety of legal bases for processing and not overly relying on consent, the need for extensive and flexible data transfer mechanisms, and the importance of maximizing the effectiveness of the data protection authority.
Specifically, the comments address the following key issues:
- the Draft Bill’s extraterritorial scope;
- the standard for anonymization;
- notice requirements;
- accountability and the risk-based approach;
- legal bases for processing, including importance of the reasonable purposes ground;
- sensitive personal data;
- children’s data;
- individual rights;
- data breach notification;
- Data Protection Impact Assessments;
- record-keeping requirements and data audits;
- Data Protection Officers;
- the adverse effects of a data localization requirement;
- cross-border transfers;
- codes of practice; and
- the timeline for adoption.
These comments were formed as part of CIPL’s ongoing engagement in India. In January 2018, CIPL responded to the Indian Ministry of Electronics and Information Technology’s public consultation on the White Paper of the Committee of Experts on a Data Protection Framework for India.
As reported in BNA Privacy Law Watch, the Office of the Privacy Commissioner of Canada (the “OPC”) is seeking public comment on recently released guidance (the “Guidance”) intended to assist organizations with understanding their obligations under the federal breach notification mandate, which will take effect in Canada on November 1, 2018.
Breach notification in Canada has historically been governed at the provincial level, with only Alberta requiring omnibus breach notification. As we previously reported, effective November 1, organizations subject to the federal Personal Information Protection and Electronic Documents Act (“PIPEDA”) will be required to notify affected individuals and the OPC of security breaches involving personal information “that pose a real risk of significant harm to individuals.” The Guidance, which is structured in a question-and-answer format, is intended to assist companies with complying with the new reporting obligation. The Guidance describes, among other information, (1) who is responsible for reporting a breach, (2) what types of incidents must be reported, (3) how to determine whether there is a “real risk of significant harm,” (4) what information must be included in a notification to the OPC and affected individuals, and (5) an organization’s recordkeeping requirements with respect to breaches of personal information, irrespective of whether such breaches are notifiable. The Guidance also contains a proposed breach reporting form for notifying the OPC pursuant to the new notification obligation.
The OPC is accepting public comment on the Guidance, including on the proposed breach reporting form. The deadline for interested parties to submit comments is October 2, 2018.
Security professionals have too many overlapping products under management, which makes it challenging to get quick, complete answers across hybrid, distributed environments and to fully automate detection and response. There is too much confusion about where to get answers, not enough talent to cover the skills requirements, and significant hesitation to put the right solutions in place because so much has already been invested.
Here are a couple of excerpts:
Here’s the good news: Security solutions are evolving toward cloud, toward built-in intelligence via Machine Learning, and toward unified, integrated-by-design platforms. This approach eliminates the issues of product overlap because each component is designed to leverage the others. It reduces the burden related to maintaining skills because fewer skills are needed and the system is more autonomous. And, it promotes immediate and automated response as opposed to indecision. While there may not be a single platform to replace all 50 or 100 of your disparate security products today, platforms are emerging that can address core security functions while simplifying ownership and providing open integration points to seamlessly share security intelligence across functions.

Click to read the full article: Convergence is the Key to Future-Proofing Security
...Forward-looking security platforms will leverage hybrid cloud architecture to address hybrid cloud environments. They’re autonomous systems that operate without relying on human maintenance, patching, and monitoring. They leverage risk intelligence from across the numerous available sources. And then they rationalize that data and use Machine Learning to generate better security intelligence and feed that improved intelligence back to the decision points. And they leverage built-in integration points and orchestration functionality to automate response when appropriate.
On September 7, 2018, the New Jersey Attorney General announced a settlement with data management software developer Lightyear Dealer Technologies, LLC, doing business as DealerBuilt, resolving an investigation by the state Division of Consumer Affairs into a data breach that exposed the personal information of car dealership customers in New Jersey and across the country. The breach occurred in 2016, when a researcher exposed a gap in the company’s security and gained access to unencrypted files containing names, addresses, social security numbers, driver’s license numbers, bank account information and other data belonging to thousands of individuals, including at least 2,471 New Jersey residents.
To resolve the investigation, DealerBuilt agreed to undertake a number of changes to its security practices to help prevent similar breaches from occurring in the future, including:
- the creation of an information security program to be implemented and maintained by a chief security officer;
- the maintenance and implementation of encryption protocols for personal information stored on laptops or other portable devices or transmitted wirelessly;
- the maintenance and implementation of policies that clearly define which users have authorization to access its computer network;
- the maintenance of enforcement mechanisms to approve or disapprove access requests based on those policies; and
- the maintenance of data security assessment tools, including vulnerability scans.
In addition to the above, DealerBuilt agreed to an $80,784 settlement amount, comprised of $49,420 in civil penalties and $31,364 in reimbursement of the Division’s attorneys’ fees, investigative costs and expert fees.
Read the consent order resolving the investigation.
On September 4, 2018, the Department of Commerce’s National Institute of Standards and Technology (“NIST”) announced a collaborative project to develop a voluntary privacy framework to help organizations manage privacy risk. The announcement states that the effort is motivated by innovative new technologies, such as the Internet of Things and artificial intelligence, as well as the increasing complexity of network environments and detail of user data, which make protecting individuals’ privacy more difficult. “We’ve had great success with broad adoption of the NIST Cybersecurity Framework, and we see this as providing complementary guidance for managing privacy risk,” said Under Secretary of Commerce for Standards and Technology and NIST Director Walter G. Copan.
The goals for the framework stated in the announcement include providing an enterprise-level approach that helps organizations prioritize strategies for flexible and effective privacy protection solutions and bridge gaps between privacy professionals and senior executives so that organizations can respond effectively to these challenges without stifling innovation. To kick off the effort, the NIST has scheduled a public workshop on October 16, 2018, in Austin, Texas, which will occur in conjunction with the International Association of Privacy Professionals’ “Privacy. Security. Risk. 2018” conference. The Austin workshop is the first in a series planned to collect current practices, challenges and requirements in managing privacy risks in ways that go beyond common cybersecurity practices.
In parallel with the NIST’s efforts, the Department of Commerce’s National Telecommunications and Information Administration (“NTIA”) is “developing a domestic legal and policy approach for consumer privacy.” The announcement stated that the NTIA is coordinating its efforts with the department’s International Trade Administration “to ensure consistency with international policy objectives.”
On September 5, 2018, the Law of 30 July 2018 on the Protection of Natural Persons with regard to the Processing of Personal Data (the “Law”) was published in the Belgian Official Gazette.
This is the second step in adapting the Belgian legal framework to the EU GDPR after the Law of 3 December 2017 Creating the Data Protection Authority, which reformed the Belgian Data Protection Authority.
The Law is available in French and Dutch.
On September 5, 2018, the European Commission (the “Commission”) announced in a press release the launch of the procedure to formally adopt the Commission’s adequacy decision with respect to Japan.
The press release notes that the EU-Japan talks on personal data protection were completed in July 2018, and announces the publication of the draft adequacy decision and related documents which, among other things, set forth the additional safeguards Japan will accord EU personal data that is transferred to Japan. According to the release, Japan is undertaking a similar formal adoption process concerning the reciprocal adequacy findings between the EU and Japan.
The adequacy decision intends to ensure that Japan provides privacy protections for EU personal data that are “essentially equivalent” to the EU standard. The key elements of the agreement include:
- Specific safeguards to be applied by Japan to bridge the difference between EU and Japanese standards on issues such as sensitive data, onward transfer of EU data to third countries, and the right to access and rectification.
- Enforcement by the Japan Personal Information Protection Commission.
- Safeguards concerning access to EU personal data by Japanese public authorities for law enforcement and national security purposes.
- A complaint-handling mechanism.
The press release also notes that the adequacy decision will complement the EU-Japan Economic Partnership Agreement by supporting free data flows between the EU and Japan and providing for privileged access to 127 million Japanese consumers.
Finally, the press release also outlines the next four steps in the formal approval process:
- Opinion from the European Data Protection Board.
- Consultation of a committee composed of representatives from the EU Member States (comitology procedure).
- Update of the European Parliament Committee on Civil Liberties, Justice and Home Affairs.
- Adoption of the adequacy decision by the College of Commissioners.
On August 3, 2018, Ohio Governor John Kasich signed into law Senate Bill 220 (the “Bill”), which provides covered entities with an affirmative defense to tort claims, based on Ohio law or brought in an Ohio court, that allege or relate to the failure to implement reasonable information security controls which resulted in a data breach. According to the Bill, its purpose is “to be an incentive and to encourage businesses to achieve a higher level of cybersecurity through voluntary action.” The Bill will take effect 90 days after it is provided to the Ohio Secretary of State.
On August 6, 2018, the Federal Trade Commission published a notice seeking public comment on whether the FTC should expand its enforcement power over corporate privacy and data security practices. The notice, published in the Federal Register, follows FTC Chairman Joseph Simons’ declaration at a July 18 House subcommittee hearing that the FTC’s current authority to do so, under Section 5 of the FTC Act, is inadequate to deal with the privacy and security issues in today’s market.
The FTC asks for input by August 20, 2018. It also requests comment on whether to expand its authority in several other areas, including the intersection between privacy, big data and competition. Beginning in September 2018, the FTC will conduct a series of public hearings to consider “whether broad-based changes in the economy, evolving business practices, new technologies, or international developments might require adjustments to competition and consumer protection law, enforcement priorities, and policy.”
- Protect the OS through baseline security measures for users of modern hardware and Windows versions. This includes safeguarding the integrity of core OS components from bootkits (Windows Defender System Guard); running Microsoft’s browsers in a hypervisor-enforced sandbox (Windows Defender Application Guard); and implementing exploit mitigation (Windows Defender Exploit Guard: Exploit Protection). Providing a robust operating environment for its users has become too important to delegate such tasks to third parties.
- Motivate other vendors to innovate beyond the commodity security controls that Microsoft offers for its modern OS versions. Windows Defender Antivirus and Windows Defender Firewall with Advanced Security (WFAS) on Windows 10 are examples of such tech. Microsoft has been expanding these essential capabilities to be on par with similar features of commercial products. This not only gives Microsoft control over the security posture of its OS, but also forces other vendors to tackle the more advanced problems on the basis of specialized expertise or other strategic abilities.
- Expand the revenue stream from enterprise customers. To centrally manage Microsoft’s endpoint security layers, organizations will likely need to purchase System Center Configuration Manager (SCCM) or Microsoft Intune. Obtaining some of Microsoft’s security technologies, such as the EDR component of Windows Defender Advanced Threat Protection, requires upgrading to the high-end Windows Enterprise E5 license. By bundling such commercial offerings with other products, rather than making them available in a standalone manner, the company motivates customers to shift all aspects of their IT management to Microsoft.
- Centrally managing and overseeing these components is difficult for companies that haven’t fully embraced Microsoft for all their IT needs or that lack expertise in technologies such as Group Policy.
- Making sense of the security capabilities, interdependencies and licensing requirements is challenging, frustrating and time-consuming.
- Most of the endpoint security capabilities worth considering are only available for the latest versions of Windows 10 or Windows Server 2016. Some have hardware dependencies that make them incompatible with older hardware.
- Several capabilities have dependencies that are incompatible with other products. For instance, security features that rely on Hyper-V prevent users from using the VMware hypervisor on the endpoint.
- Some technologies are still too immature or impractical for real-world deployments. For example, after I enabled the Controlled folder access feature, using my Windows 10 system became unbearable within a few days.
- The layers fit together in an awkward manner at times. For instance, Microsoft provides two app whitelisting technologies—Windows Defender Application Control (WDAC) and AppLocker—that overlap in some functionality.
- Microsoft created the Antimalware Scan Interface (AMSI) for integrations. For instance, when a third-party product stops a threat not detected by Windows Defender Antivirus, the product can use AMSI to notify Microsoft’s tech about the event.
- The company also announced the Microsoft Intelligent Security Association. Members of this invitation-only club can share threat intel by way of Microsoft’s Intelligent Security Graph and collaborate in other unstated ways.
I’m always on the quest for real-world malware samples that help educate professionals how to analyze malicious software. As techniques and technologies change, I introduce new specimens and retire old ones from the reverse-engineering course I teach at SANS Institute. Here are some of the legacy samples that were once present in FOR610 materials. Though these malicious programs might not appear relevant anymore, aspects of their functionality are present even in modern malware.
A Backdoor with a Backdoor
To teach fundamental aspects of code-based and behavioral malware analysis, the FOR610 course at one point examined Slackbot, an IRC-based backdoor that its author “slim” distributed as a compiled Windows executable without source code.
Dated April 18, 2000, Slackbot came with a builder that allowed its user to customize the name of the IRC server and channel it would use for Command and Control (C2). Slackbot documentation explained how the remote attacker could interact with the infected system over their designated channel and included this taunting note:
“don’t bother me about this, if you can’t figure out how to use it, you probably shouldn’t be using a computer. have fun. –slim”
Those who reverse-engineered this sample discovered that it had undocumented functionality. In addition to connecting to the user-specified C2 server, the specimen also reached out to a hardcoded server, irc.slim.org.au, that “slim” controlled. The #penix channel on that server gave “slim” the ability to take over all the botnets that his or her “customers” were building for themselves.
It turned out this backdoor had a backdoor! Not surprisingly, backdoors continue to be present in today’s “hacking” tools. For example, I came across a DarkComet RAT builder that was surreptitiously bundled with a DarkComet backdoor of its own.
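The “backdoor’s backdoor” pattern is simple to illustrate. The snippet below is a hypothetical reconstruction of the logic, not Slackbot’s actual code: the server and channel names come from the analysis above, while the function and variable names are invented for illustration.

```python
# Hypothetical reconstruction of the "backdoor with a backdoor" pattern.
# The builder lets the operator configure a C2 server, but the malware
# author also hardcodes a second server/channel into every build.

USER_CONFIGURED_C2 = ("irc.operator-example.net", "#botchan")  # set via the builder
HARDCODED_C2 = ("irc.slim.org.au", "#penix")                   # hidden in every build

def c2_targets(configured: tuple) -> list:
    """Return every server/channel the bot will actually contact.

    The operator who ran the builder only knows about the first entry;
    reverse engineers find the second, hardcoded one in the binary.
    """
    return [configured, HARDCODED_C2]
```

Static analysis reveals this kind of hidden entry precisely because the second server name must be embedded somewhere in the executable, which is why string and network-behavior analysis remain core FOR610 techniques.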
You Are an Idiot
The FOR610 course used an example of a simple malevolent web page to introduce the techniques for examining potentially malicious websites. The page was a nuisance that insulted its visitors with the message that gives this section its title.
When Flash reigned supreme among banner ad technologies, the FOR610 course covered several examples of such malware. One of the Flash programs we analyzed was a malicious version of a legitimate-looking banner ad:
At one point, visitors to legitimate websites, such as MSNBC, were reporting that their clipboards appeared “hijacked” when the browser displayed this ad. The advertisement, implemented as a Flash program, was using the ActionScript setClipboard function to replace victims’ clipboard contents with a malicious URL.
The attacker must have expected the victims to blindly paste the URL into messages without looking at what they were sharing. I remembered this sample when reading about a more recent example of malware that replaced Bitcoin addresses stored in the clipboard with the attacker’s own Bitcoin address for payments.
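The clipboard-swapping trick used against Bitcoin payments boils down to a pattern match. Here is a simplified sketch of the core substitution logic; the addresses are illustrative placeholders, and real malware additionally hooks the operating system’s clipboard APIs, which is omitted here.

```python
import re

# Legacy (Base58) Bitcoin addresses start with 1 or 3 and exclude 0, O, I, l.
BTC_ADDRESS = re.compile(r"^[13][a-km-zA-HJ-NP-Z1-9]{25,34}$")

# Illustrative placeholder for the attacker's payment address.
ATTACKER_ADDRESS = "1BvBMSEYstWetqTFn5Au4m4GFg7xJaNVN2"

def swap_if_bitcoin_address(clipboard_text: str) -> str:
    """Core of the clipboard-swapping trick: if the clipboard contents
    look like a Bitcoin address, silently substitute the attacker's."""
    if BTC_ADDRESS.match(clipboard_text.strip()):
        return ATTACKER_ADDRESS
    return clipboard_text
```

Because addresses are long and opaque, a victim who copies a payment address and pastes the swapped one rarely notices the change, which is exactly what this class of malware relies on.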
As malware evolves, so do our analysis approaches, and so do the exercises we use in the FOR610 malware analysis course. It’s fun to reflect upon the samples that at some point were present in the materials. After all, I’ve been covering this topic at SANS Institute since 2001. It’s also interesting to notice that, despite the evolution of the threat landscape, many of the same objectives and tricks persist in today’s malware world.
On July 27, 2018, the Justice BN Srikrishna committee, formed by the Indian government in August 2017 with the goal of introducing a comprehensive data protection law in India, issued a report, A Free and Fair Digital Economy: Protecting Privacy, Empowering Indians (the “Committee Report”), and a draft data protection bill called the Personal Data Protection Bill, 2018 (the “Bill”). Noting that the Indian Supreme Court has recognized the right to privacy as a fundamental right, the Committee Report summarizes the existing data protection framework in India, and recommends that the government of India adopt a comprehensive data protection law such as that proposed in the Bill.
The Bill would establish requirements for the collection and processing of personal data, including particular limitations on the processing of sensitive personal data and the length of time in which personal data may be retained. The Bill would require organizations to appoint a Data Protection Officer and require annual third-party audits of the organization’s processing of personal data. Further, the Bill would require organizations to implement certain information security safeguards, including (where appropriate) de-identification and encryption, as well as safeguards to prevent misuse, unauthorized access to, modification, disclosure or destruction of personal data. The Bill also would require regulator notification and, in certain circumstances, individual notification in the event of a data breach. Noncompliance with the Bill would result in penalties up to 50 million Rupees (approximately USD $728,000), or two percent of global annual turnover of the preceding financial year, whichever is higher.
The Bill has been submitted for consideration to the Ministry of Electronics and Information Technology and is expected to be introduced in Parliament at a later date.
In its most recent cybersecurity newsletter, the U.S. Department of Health and Human Services’ Office for Civil Rights (“OCR”) provided guidance regarding identifying vulnerabilities and mitigating the associated risks of software used to process electronic protected health information (“ePHI”). The guidance, along with additional resources identified by OCR, are outlined below:
- Identifying software vulnerabilities. Every HIPAA-covered entity is required to perform a risk analysis that identifies risks and vulnerabilities to the confidentiality, integrity and availability of ePHI. Such entities must also implement measures to mitigate risks identified during the risk analysis. In its guidance, OCR indicated that mitigation activities could include installing available patches (where reasonable and appropriate) or, where patches are unavailable (such as in the case of obsolete or unsupported software), reasonable compensating controls, such as restricting network access.
- Patching software. Patches may be applied to software and firmware on a wide range of devices, and the installation of vendor patches is typically routine. The installation of such updates, however, may result in unexpected events due to the interconnected nature of computer programs and systems. OCR recommends that organizations install patches for identified vulnerabilities in accordance with their security management processes. In order to help ensure the protection of ePHI during patching, OCR also identifies common steps in patch management as including evaluation, patch testing, approval, deployment, verification and testing.
In addition to the information contained in the guidance, OCR identified a number of additional resources, which are listed below:
Scammers use a variety of social engineering tactics when persuading victims to follow the desired course of action. One example of this approach involves including in the fraudulent message personal details about the recipient to “prove” that the victim is in the miscreant’s grip. In reality, the sender probably obtained the data from one of the many breaches that provide swindlers with an almost unlimited supply of personal information.
Personalized Porn Extortion Scam
Consider the case of an extortion scam in which the sender claims to have evidence of the victim’s pornography-viewing habits. The scammer demands payment in exchange for suppressing the “compromising evidence.” A variation of this technique was documented by Stu Sjouwerman at KnowBe4 in 2017. In a modern twist, the scammer includes personal details about the recipient—beyond merely the person’s name—such as the password the victim used:
“****** is one of your password and now I will directly come to the point. You do not know anything about me but I know alot about you and you must be thinking why are you getting this e mail, correct?
I actually setup malware on porn video clips (adult porn) & guess what, you visited same adult website to experience fun (you get my drift). And when you got busy enjoying those videos, your web browser started out operating as a RDP (Remote Desktop Protocol) that has a backdoor which provided me with accessibility to your screen and your web camera controls.”
The email includes a demand for payment via cryptocurrency such as Bitcoin to ensure that “Your naughty secret remains your secret.” The sender calls this “privacy fees.” Variations on this scheme are documented in the Blackmail Email Scam thread on Reddit.
The inclusion of the password that the victim used at some point in the past lends credibility to the sender’s claim that the scammer knows a lot about the recipient. In reality, the miscreant likely obtained the password from one of many data dumps that include email addresses, passwords, and other personal information stolen from breached websites.
Data Breach Lawsuit Scam
In another scenario, the scammer uses the knowledge of the victim’s phone number to “prove” possession of sensitive data. The sender poses as an entity that’s preparing to sue the company that allegedly leaked the data:
“Your data is compromised. We are preparing a lawsuit against the company that allowed a big data leak. If you want to join and find out what data was lost, please contact us via this email. If all our clients win a case, we plan to get a large amount of compensation and all the data and photos that were stolen from the company. We have all information to win. For example, we write to your email and include part your number ****** from a large leak.”
The miscreant’s likely objective is to solicit additional personal information from the victim under the guise of preparing the lawsuit, possibly requesting the social security number, banking account details, etc. The sender might have obtained the victim’s name, email address and phone number from a breached data dump, and is phishing for other, more lucrative data.
What to Do?
If you receive a message that solicits payment or confidential data under the guise of knowing some of your personal information, be skeptical. This is probably a mass-mailed scam and your best approach is usually to ignore the message. In addition, keep an eye on the breaches that might have compromised your data using the free and trusted service Have I Been Pwned by Troy Hunt, change your passwords when this site tells you they’ve been breached, and don’t reuse passwords across websites or apps.
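As an aside, the companion Pwned Passwords service lets you check whether a password appears in known breach dumps without ever revealing the password: the client sends only the first five characters of the password’s SHA-1 hash and matches the returned suffixes locally (a k-anonymity range query). A minimal sketch of the client-side computation, with the service’s documented range endpoint left as a comment:

```python
import hashlib

def sha1_prefix_suffix(password: str) -> tuple:
    """Split the SHA-1 of a password into the 5-character prefix sent
    to the API and the suffix matched locally. The full hash, and thus
    the password, never leaves your machine."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

# The prefix would be queried against the range endpoint, e.g.:
#   https://api.pwnedpasswords.com/range/<prefix>
# The response lists matching hash suffixes with breach counts,
# which you compare against the locally computed suffix.
prefix, suffix = sha1_prefix_suffix("password")
```

This design means even the service operator cannot learn which password you checked, only that it fell within a bucket of hashes sharing the same five-character prefix.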
Sometimes an extortion note is real and warrants a closer look and potentially law enforcement involvement. Only you know your situation and can decide on the best course of action. Fortunately, every example that I’ve had a chance to examine turned out to be a social engineering trick that recipients were best advised to ignore.
To better understand the persuasion tactics employed by online scammers, take a look at my earlier articles on this topic:
- When Targeted Attacks Aren’t Targeted: The Magic of Cold Reading
- How the Scarcity Principle is Used in Online Scams and Attacks
- A Close Look at PayPal Overpayment Scams That Target Craigslist Sellers
On July 17, 2018, the European Union and Japan successfully concluded negotiations on a reciprocal finding of an adequate level of data protection, thereby agreeing to recognize each other’s data protection systems as “equivalent.” This will allow personal data to flow safely between the EU and Japan, without being subject to any further safeguards or authorizations.
This is the first time that the EU and a third country have agreed on a reciprocal recognition of the adequate level of data protection. So far, the EU has adopted only unilateral adequacy decisions, covering 12 other jurisdictions: Andorra, Argentina, Canada (commercial organizations subject to PIPEDA), the Faroe Islands, Guernsey, Israel, the Isle of Man, Jersey, New Zealand, Switzerland, Uruguay and the United States (under the EU-U.S. Privacy Shield). These decisions allow personal data to flow safely from the EU to those jurisdictions.
On January 10, 2017, the European Commission (“the Commission”) published a communication addressed to the European Parliament and European Council on Exchanging and Protecting Personal Data in a Globalized World. As announced in this communication, the Commission launched discussions on possible adequacy decisions with “key trading partners,” starting with Japan and South Korea in 2017.
The discussions with Japan were facilitated by the amendments made to the Japanese Act on the Protection of Personal Information (Act No. 57 of 2003) that came into force on May 30, 2017. These amendments have modernized Japan’s data protection legislation and increased convergence with the European data protection system.
Key parts of the adequacy finding
Once adopted, the adequacy finding will cover personal data exchanged for commercial purposes between EU and Japanese businesses, as well as personal data exchanged for law enforcement purposes between EU and Japanese authorities, ensuring that in all such exchanges a high level of data protection is applied.
This adequacy finding was decided based on a series of additional safeguards that Japan will apply to EU citizens’ personal data when transferred to their country, including the following measures:
- expanding the definition of sensitive data;
- facilitating the exercise of individuals’ rights of access to and rectification of their personal data;
- increasing the level of protection for onward data transfers of EU data from Japan to a third country; and
- establishing a complaint-handling mechanism, under the supervision of the Japanese data protection authority (the Personal Information Protection Commission), to investigate and resolve complaints from Europeans regarding access to their data by Japanese public authorities.
The EU and Japan will launch their respective internal procedures for the adoption of the adequacy finding. The Commission plans to adopt its adequacy decision in fall 2018, following the usual procedure for adopting EU adequacy decisions. This involves (1) approval of the draft adequacy decision by the College of EU Commissioners; (2) an opinion from EU Data Protection Authorities within the European Data Protection Board; (3) completion of a comitology procedure, requiring the European Commission to obtain the green light from a committee composed of representatives of EU Member States; and (4) an update to the European Parliament Committee on Civil Liberties, Justice and Home Affairs. Once adopted, this will be the first adequacy decision under the EU General Data Protection Regulation.
On July 12, 2018, two U.S. Senators sent a letter to the Federal Trade Commission asking the agency to investigate the privacy policies and practices of smart TV manufacturers. In their letter, Senators Edward Markey (D-MA) and Richard Blumenthal (D-CT) note that smart TVs can “compile detailed profiles about users’ preferences and characteristics” which can then allow companies to personalize ads to be sent to “customers’ computers, phones or any other device that shares the smart TV’s internet connection.”
The Senators cite the history of unique privacy concerns raised by companies tracking information about the content viewers watch on TV. They also noted the VIZIO case, in which the FTC settled with VIZIO for preinstalling software on its TV to track data on consumers without their consent.
The letter concludes by reemphasizing the private nature of the content consumers watch on their smart TVs, and stating that any company that collects data from consumers via their smart TVs should “comprehensively and consistently detail” what data will be collected and how it will be used. The letter also recommends that users be given the opportunity to affirmatively consent to the collection and use of their sensitive information.
On June 27, 2018, the Ministry of Public Security of the People’s Republic of China published the Draft Regulations on the Classified Protection of Cybersecurity (网络安全等级保护条例（征求意见稿）) (“Draft Regulation”) and is seeking comments from the public by July 27, 2018.
Pursuant to Article 21 of the Cybersecurity Law, the Draft Regulation establishes the classified protection of cybersecurity. The classified protection of information security scheme was previously implemented under the Administrative Measures for the Classified Protection of Information Security. The Draft Regulation extends the targets of security protection from computer systems alone to anything related to the construction, operation, maintenance and use of networks (such as cloud computing, big data, artificial intelligence, the Internet of Things, industrial control systems and the mobile Internet), except networks set up by individuals and families for personal use.
The obligations of network operators include, but are not limited to, (1) grade confirmation and filing; (2) security construction and rectification; (3) grade assessment; (4) self-inspection; (5) protection of network infrastructure, network operation, and data and information; (6) effective handling of network safety incidents; and (7) guarding against network crimes, all of which vary according to the classified level at which the network operator is graded.
Network Operator Compliance
- Classified Levels. The network operator must ascertain its security level in the planning and design phase. Networks are classified into five levels according to the degree of security protection required.
Explanation of terms such as “object” and “degree of injury” can be found in Draft Information Security Technology-Guidelines for Grading of Classified Cybersecurity Protection, which closed for public comment on March 5, 2018.
- Grading Review. The considerations for classified level grading include network functions, scope of services, types of service recipients and types of data being processed. For networks graded at Level 2 or above, the operator is required to conduct an expert review and then obtain approval from the relevant industry regulator. Networks uniformly connected across provinces or nationwide must be graded, and their review organized, by the industry regulator.
- Grading Filing. After the grading review confirms the classified level, any network graded at Level 2 or above must be filed with a public security authority at or above the county level. The filing certificate is issued after satisfactory review by the relevant public security authority. The Draft Regulation does not define the timeline for the public security authority to review such applications, which is left to the authority’s discretion.
- General Obligations of Cybersecurity Protection. Most of the general cybersecurity obligations are stated in the Cybersecurity Law, and the Draft Regulation stipulates additional obligations, such as:
- When detecting, blocking or eliminating illegal activity, network operators must prevent the illegal activity from spreading and prevent the destruction or loss of evidence of crimes.
- File network records.
- Report cybersecurity incidents to the local public security authority with jurisdiction within 24 hours. Where state secrets may be divulged, reports must simultaneously be made to the local secrecy administration with jurisdiction.
- Special Obligations of Security Protection. Networks graded at Level 3 or above are held to a higher standard, and their network operators bear both general and special obligations, including:
- designating a cybersecurity management department and forming a level-by-level examination system for any change of network, access, or operation and maintenance provider;
- reviewing the plan or strategy developed by professional technical personnel;
- conducting a background check on key cybersecurity personnel, and confirming those personnel have relevant professional certificates;
- managing the security of service providers;
- dynamically monitoring the network and establishing a connection with the public security authority at the same level;
- implementing redundancy, back-up and recovery measures for important network equipment, communications links and systems; and
- establishing a classified assessment scheme, conducting such assessments, rectifying the results, and reporting the information to relevant authorities.
- Online Testing Before Operation. Network operators at Level 2 or above must test the security of new networks before operation. Assessments must be performed at least once a year. For new networks at Level 3 or above, the classified assessment must be conducted by a cybersecurity classified assessment entity before operation and annually thereafter. Based on the results, the network operators must rectify the risks and report to the public security authority with its filing records.
- Procurement. Network products used for the “important part” of a network must be evaluated by a professional assessment entity. If a product has an impact on national security, it must be checked by the state cyberspace authorities and relevant departments of the State Council. The Draft Regulation does not clearly define what the “important part” of a network means.
- Maintenance. Maintenance of networks graded at Level 3 or above must be conducted in China. If business needs require cross-border maintenance, cybersecurity evaluations and risk control measures must take place before such cross-border maintenance is performed. Maintenance records must be kept for inspection by the public security authority.
- Protection of Data and Information Security. Network operators must protect the security of their data and information in the process of collection, storage, transmission, use, supply and destruction, and keep recovery and backup files in a different place. Personal information protection requirements in the Draft Regulation are similar to those found under the Cybersecurity Law.
- Protection of Encrypted Networks. Networks involving state secrets are subject to encryption protection requirements. Networks graded at Level 3 or above must be protected using encryption, and operators must entrust relevant entities to test the security of the cryptographic application. Upon passing the evaluation, the network may go online and must be evaluated once a year. The results of the evaluation must be filed with (1) the public security authority that holds the operator’s filing record and (2) the cryptography management authority where the operator is located.
Powers of the Competent Authorities
In addition to regular supervision and inspection, the Draft Regulation gives the competent authorities more powerful measures to handle investigations and emergencies. During an investigation, when necessary, the competent authorities may order the operator to block information transmission, shut down the network temporarily and backup relevant data. In case of an emergency, the competent authorities may order the operator to disconnect the network and shut down servers.
Penalties for Violations
The Draft Regulation applies the liability provisions of the Cybersecurity Law to violations of its security protection, technical maintenance, and data security and personal information protection requirements. The penalties include rectification orders, fines, suspension of relevant business, business closure or website shutdown pending rectification, and revocation of relevant business permits and/or licenses.
On July 11, 2018, computer manufacturer Lenovo Group Ltd. (“Lenovo”) agreed to a proposed $8.3 million settlement to resolve consumer class claims regarding pop-up ad software Lenovo pre-installed on its laptops. Lenovo issued a press release stating that, “while Lenovo disagrees with allegations contained in these complaints, we are pleased to bring this matter to a close after 2-1/2 years.”
In June of 2014, Lenovo and Superfish, a software development company, entered into a profit-sharing agreement regarding Superfish’s VisualDiscovery ad-serving software. Lenovo pre-installed VisualDiscovery on a certain group of its laptops, which it began shipping out in late summer. According to the consumer class claims, VisualDiscovery accessed sensitive consumer data and riddled the laptops with security vulnerabilities.
The proposed settlement, filed in the U.S. District Court for the Northern District of California, requires Lenovo and Superfish to pay the class $7.3 million and $1 million, respectively. It will be finalized only with Judge Haywood Gilliam’s approval.
We previously reported on the FTC’s 2017 settlement with Lenovo regarding preinstalled laptop software.
This post has been updated.
As reported by Mundie e Advogados, on July 10, 2018, Brazil’s Federal Senate approved a Data Protection Bill of Law (the “Bill”). The Bill, which is inspired by the EU General Data Protection Regulation (“GDPR”), is expected to be sent to the Brazilian President in the coming days.
As reported by Mattos Filho, Veiga Filho, Marrey Jr e Quiroga Advogados, the Bill establishes a comprehensive data protection regime in Brazil and imposes detailed rules for the collection, use, processing and storage of personal data, both electronic and physical.
Key requirements of the Bill include:
- National Data Protection Authority. The Bill calls for the establishment of a national data protection authority which will be responsible for regulating data protection, supervising compliance with the Bill and enforcing sanctions.
- Data Protection Officer. The Bill requires businesses to appoint a data protection officer.
- Legal Basis for Data Processing. Similar to the GDPR, the Bill provides that the processing of personal data may only be carried out where there is a legal basis for the processing, which may include, among other bases, where the processing is (1) done with the consent of the data subject, (2) necessary for compliance with a legal or regulatory obligation, (3) necessary for the fulfillment of an agreement, or (4) necessary to meet the legitimate interest of the data controller or third parties. The legal basis for data processing must be registered and documented. Processing of sensitive data (including, among other data elements, health information, biometric information and genetic data) is subject to additional restrictions.
- Consent Requirements. Where consent of the data subject is relied upon for processing personal data, consent must be provided in advance and must be free, informed and unequivocal, and provided for a specific purpose. Data subjects may revoke consent at any time.
- Data Breach Notification. The Bill requires notification of data breaches to the data protection authority and, in some circumstances, to affected data subjects.
- Privacy by Design and Privacy Impact Assessments. The Bill requires organizations to adopt data protection measures as part of the creation of new products or technologies. The data protection authority will be empowered to require a privacy impact assessment in certain circumstances.
- Data Transfer Restrictions. The Bill places restrictions on cross-border transfers of personal data. Such transfers are allowed (1) to countries deemed by the data protection authority to provide an adequate level of data protection, and (2) where effectuated using standard contractual clauses or other mechanisms approved by the data protection authority.
Noncompliance with the Bill can result in fines of up to two percent of gross sales, capped at 50 million reais (approximately USD 12.9 million) per violation. The Bill will take effect 18 months after it is published in Brazil’s Federal Gazette.
Update: The Bill was signed into law in mid-August and is expected to take effect in early 2020.
On July 3, 2018, a draft bill (the “Data Protection Bill”) was introduced that would establish a comprehensive data protection regime in Kenya. The Data Protection Bill would require “banks, telecommunications operators, utilities, private and public companies and individuals” to obtain data subjects’ consent before collecting and processing their personal data. The Data Protection Bill also would impose certain data security obligations related to the collection, processing and storage of data, and would place restrictions on third-party data transfers. Violations of the Data Protection Bill could result in fines up to 500,000 shillings (USD 4,960) and a five-year prison term. According to BNA Privacy Law Watch, while the Data Protection Bill is a “private member’s bill,” the Kenyan government “is working on a separate data-protection policy and bill to be published this week,” with the goal of consolidating the two proposals.
As reported in BNA Privacy Law Watch, on June 27, 2018, Equifax entered into a consent order (the “Order”) with eight state banking regulators (the “Multi-State Regulatory Agencies”), including those in New York and California, arising from the company’s 2017 data breach that exposed the personal information of 143 million consumers.
Equifax’s key obligations under the terms of the Order include: (1) developing a written risk assessment; (2) establishing a formal and documented Internal Audit Program that is capable of effectively evaluating IT controls; (3) developing a consolidated written Information Security Program and Information Security Policy; (4) improving oversight of its critical vendors and ensuring that sufficient controls are developed to safeguard information; (5) improving standards and controls for supporting the patch management function, including reducing the number of unpatched systems; and (6) enhancing oversight of IT operations as it relates to disaster recovery and business continuity. The Order also requires Equifax to strengthen its Board of Directors’ oversight over the company’s information security program, including regular Board reviews of relevant policies and procedures.
Equifax must also submit to the Multi-State Regulatory Agencies a list of all remediation projects planned, in process or implemented in response to the 2017 data breach, as well as written reports outlining its progress toward complying with the provisions of the Order.
On June 28, 2018, the Governor of California signed AB 375, the California Consumer Privacy Act of 2018 (the “Act”). The Act introduces key privacy requirements for businesses, and was passed quickly by California lawmakers in an effort to remove a ballot initiative of the same name from the November 6, 2018, statewide ballot. We previously reported on the relevant ballot initiative. The Act will take effect January 1, 2020.
Key provisions of the Act include:
- Applicability. The Act will apply to any for-profit business that (1) “does business in the state of California”; (2) collects consumers’ personal information (or on the behalf of which such information is collected) and that alone, or jointly with others, determines the purposes and means of the processing of consumers’ personal information; and (3) satisfies one or more of the following thresholds: (a) has annual gross revenues in excess of $25 million, (b) alone or in combination annually buys, receives for the business’s commercial purposes, sells, or shares for commercial purposes, the personal information of 50,000 or more consumers, households or devices, or (c) derives 50 percent or more of its annual revenue from selling consumers’ personal information (collectively, “Covered Businesses”).
- Definition of Personal Information. Personal information is defined broadly as “information that identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.” This definition of personal information aligns more closely with the EU General Data Protection Regulation’s definition of personal data. The Act includes a list of enumerated examples of personal information, which includes, among other data elements, name, postal or email address, Social Security number, government-issued identification number, biometric data, Internet activity information and geolocation data, as well as “inferences drawn from any of the information identified” in this definition.
- Right to Know
- Upon a verifiable request from a California consumer, a Covered Business must disclose (1) the categories and specific pieces of personal information the business has collected about the consumer; (2) the categories of sources from which the personal information is collected; (3) the business or commercial purposes for collecting or selling personal information; and (4) the categories of third parties with whom the business shares personal information.
- In addition, upon verifiable request, a business that sells personal information about a California consumer, or that discloses a consumer’s personal information for a business purpose, must disclose (1) the categories of personal information that the business sold about the consumer; (2) the categories of third parties to whom the personal information was sold (by category of personal information for each third party to whom the personal information was sold); and (3) the categories of personal information that the business disclosed about the consumer for a business purpose.
- The above disclosures must be made within 45 days of receipt of the request using one of the prescribed methods specified in the Act. The disclosure must cover the 12-month period preceding the business’s receipt of the verifiable request. The 45-day time period may be extended when reasonably necessary, provided the consumer receives notice of the extension within the first 45-day period. Importantly, the disclosures must be made in a “readily useable format that allows the consumer to transmit this information from one entity to another entity without hindrance.”
- Exemption. Covered Businesses will not be required to make the disclosures described above to the extent the Covered Business discloses personal information to another entity pursuant to a written contract with such entity, provided the contract prohibits the recipient from selling the personal information, or retaining, using or disclosing the personal information for any purpose other than performance of services under the contract. In addition, the Act provides that a business is not liable for a service provider’s violation of the Act, provided that, at the time the business disclosed personal information to the service provider, the business had neither actual knowledge nor reason to believe that the service provider intended to commit such a violation.
- Disclosures and Opt-Out. The Act will require Covered Businesses to provide notice to consumers of their rights under the Act (e.g., their right to opt out of the sale of their personal information), a list of the categories of personal information collected about consumers in the preceding 12 months, and, where applicable, that the Covered Business sells or discloses their personal information. If the Covered Business sells consumers’ personal information or discloses it to third parties for a business purpose, the notice must also include lists of the categories of personal information sold and disclosed about consumers, respectively. Covered Businesses will be required to make this disclosure in their online privacy notice. Covered Businesses must separately provide a clear and conspicuous link on their website that says, “Do Not Sell My Personal Information,” and provide consumers a mechanism to opt out of the sale of their personal information, a decision which the Covered Business must respect. Businesses also cannot discriminate against consumers who opt out of the sale of their personal information, but can offer financial incentives for the collection of personal information.
- Specific Rules for Minors. If a business has actual knowledge that a consumer is less than 16 years of age, the Act prohibits the business from selling that consumer’s personal information unless (1) the consumer is between 13 and 16 years of age and has affirmatively authorized the sale (i.e., opted in); or (2) the consumer is less than 13 years of age and the consumer’s parent or guardian has affirmatively authorized the sale.
- Right to Deletion. The Act will require a business, upon verifiable request from a California consumer, to delete specified personal information that the business has collected about the consumer and direct any service providers to delete the consumer’s personal information. However, there are several enumerated exceptions to this deletion requirement. Specifically, a business or service provider is not required to comply with the consumer’s deletion request if it is necessary to maintain the consumer’s personal information to:
- Complete the transaction for which the personal information was collected, provide a good or service requested by the consumer, or reasonably anticipated, within the context of a business’s ongoing business relationship with the consumer, or otherwise perform a contract with the consumer.
- Detect security incidents; protect against malicious, deceptive, fraudulent or illegal activity; or prosecute those responsible for that activity.
- Debug to identify and repair errors that impair existing intended functionality.
- Exercise free speech, ensure the right of another consumer to exercise his or her right of free speech, or exercise another right provided for by law.
- Comply with the California Electronic Communications Privacy Act.
- Engage in public or peer-reviewed scientific, historical or statistical research in the public interest (when deletion of the information is likely to render impossible or seriously impair the achievement of such research) if the consumer has provided informed consent.
- Enable solely internal uses that are reasonably aligned with the consumer’s expectations based on the consumer’s relationship with the business.
- Comply with a legal obligation.
- Otherwise use the consumer’s personal information, internally, in a lawful manner that is compatible with the context in which the consumer provided the information.
- The Act is enforceable by the California Attorney General and authorizes a civil penalty up to $7,500 per violation.
- The Act provides a private right of action only in connection with “certain unauthorized access and exfiltration, theft, or disclosure of a consumer’s nonencrypted or nonredacted personal information,” as defined in the state’s breach notification law, if the business failed “to implement and maintain reasonable security procedures and practices appropriate to the nature of the information to protect the personal information.”
- In this case, the consumer may bring an action to recover damages up to $750 per incident or actual damages, whichever is greater.
- The statute also directs the court to consider certain factors when assessing the amount of statutory damages, including the nature, seriousness, persistence and willfulness of the defendant’s misconduct, the number of violations, the length of time over which the misconduct occurred, and the defendant’s assets, liabilities and net worth.
Prior to initiating any action against a business for statutory damages, a consumer must provide the business with 30 days’ written notice of the consumer’s allegations. If, within the 30 days, the business cures the alleged violation and provides an express written statement that the violations have been cured, the consumer may not initiate an action for individual or class-wide statutory damages. These limitations do not apply to actions initiated solely for actual pecuniary damages suffered as a result of the alleged violation.
On June 21, 2018, California lawmakers introduced AB 375, the California Consumer Privacy Act of 2018 (the “Bill”). If enacted and signed by the Governor by June 28, 2018, the Bill would introduce key privacy requirements for businesses, but would also result in the removal of a ballot initiative of the same name from the November 6, 2018, statewide ballot. We previously reported on the relevant ballot initiative.
The Bill expands some of the requirements in the ballot initiative. For example, if enacted, the Bill would require businesses to disclose (e.g., in their privacy notices) the categories of personal information they collect about California consumers and the purposes for which that information is used. The Bill also would require businesses to disclose, upon a California consumer’s verifiable request, the categories and specific pieces of personal information they have collected about the consumer, as well as the business purposes for collecting or selling the information and the categories of third parties with whom it is shared. The Bill would require businesses to honor consumers’ requests to delete their data and to opt out of the sale of their personal information, and would prohibit a business from selling the personal information of a consumer under the age of 16 without explicit (i.e., opt-in) consent.
A significant difference between the Bill and the ballot initiative is that the Bill would give the California Attorney General exclusive authority to enforce most of its provisions (whereas the ballot initiative provides for a private right of action with statutory damages of up to $3,000 per violation). One exception would be that a private right of action would exist in the event of a data breach in which the California Attorney General declines to bring an action.
If enacted, the Bill would take effect January 1, 2020.
On June 22, 2018, the United States Supreme Court held in Carpenter v. United States that law enforcement agencies must obtain a warrant supported by probable cause to obtain historical cell-site location information (“CSLI”) from third-party providers. The government argued in Carpenter that it could access historical CSLI through a court order alone under the Stored Communications Act (the “SCA”). Under 18 U.S.C. § 2703(d), obtaining an SCA court order for stored records only requires the government to “offer specific and articulable facts showing that there are reasonable grounds.” However, in a split 5-4 decision, the Supreme Court held that the Fourth Amendment requires law enforcement agencies to obtain a warrant supported by probable cause to obtain historical CSLI.
In Carpenter, the FBI obtained a court order under the SCA for historical CSLI. These records were used to convict the defendant, Carpenter, of robbing a number of stores, including the cell phone provider that ultimately provided the relevant records. Carpenter argued that accessing his CSLI without a warrant constituted a Fourth Amendment violation. The government argued that historical CSLI constituted routinely collected business records protected by the Supreme Court’s third-party doctrine (established in U.S. v. Miller and Smith v. Maryland), which provided that the public did not have a reasonable expectation of privacy for certain records held by third-party service providers. Siding with Carpenter, however, the Court held, “A majority of the court has already recognized that individuals have a reasonable expectation of privacy in the whole of their physical movements…Allowing government access to cell-site records—which hold for many Americans the ‘privacies of life,’—contravenes that expectation.”
Chief Justice Roberts was joined in the majority opinion by Justices Ginsburg, Breyer, Sotomayor and Kagan. Justices Kennedy, Thomas, Alito and Gorsuch dissented, each offering separate dissenting opinions.
Recently, Iowa and Nebraska enacted information security laws applicable to personal information. Iowa’s law applies to operators of online services directed at and used by students in kindergarten through grade 12, whereas Nebraska’s law applies to all commercial entities doing business in Nebraska who own or license Nebraska residents’ personal information.
In Iowa, effective July 1, 2018, HF 2354 will impose information security requirements on operators of websites, online services, online applications or mobile applications who have actual knowledge that their sites, services or applications are designed, marketed and used primarily for kindergarten through grade 12 school purposes (“Operators”). Under the law, Operators will be required to implement and maintain information security procedures and practices consistent with industry standards and applicable state and federal laws to prevent students’ personal information from unauthorized access, destruction, use, modification or disclosure. Operators also are prohibited from selling or renting students’ information. The law does not apply to “general audience” websites, online services, online applications or mobile applications.
In Nebraska, effective July 18, 2018, LB757 requires commercial entities that conduct business in Nebraska and own, license or maintain computerized data that includes Nebraska residents’ personal information to implement and maintain reasonable security procedures and practices, including safeguards for the disposal of personal information. Under the law, commercial entities also must require, by contract, that their service providers institute and maintain reasonable security procedures and practices (the service provider provision applies to contracts entered into on or after the effective date of the law). A violation of the information security requirements under the law is subject to the penalty provisions of the state’s Consumer Protection Act, but expressly does not give rise to a private cause of action.
On November 6, 2018, California voters will consider a ballot initiative called the California Consumer Privacy Act (“the Act”). The Act is designed to give California residents (i.e., “consumers”) the right to request from businesses (see “Applicability” below) the categories of personal information the business has sold or disclosed to third parties, with some exceptions. The Act would also require businesses to disclose in their privacy notices consumers’ rights under the Act, as well as how consumers may opt out of the sale of their personal information if the business sells consumer personal information. Key provisions of the Act include:
- Definition of Personal Information. Personal information is defined broadly as “information that identifies, relates to, describes, references, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or device.” The Act includes a list of enumerated examples of personal information, which includes, among other data elements, name, postal or email address, Social Security number, government-issued identification number, biometric data, Internet activity information and geolocation data.
- Applicability. The Act would apply to any for-profit business that “does business in the state of California” and (1) has annual gross revenues in excess of $50 million; (2) annually sells, alone or in combination, the personal information of 100,000 or more consumers or devices; or (3) derives 50 percent or more of its annual revenue from selling consumers’ personal information (collectively, “Covered Businesses”).
- Right to Know. The Act would require Covered Businesses to disclose, upon a verifiable request from a California consumer, the categories of personal information the business has collected about the consumer, as well as the categories of personal information sold and/or disclosed for a business purpose to third parties. The Act would also require Covered Businesses to identify (i.e., provide the name and contact information for) the third parties to whom the Covered Business has sold or disclosed, for a business purpose, consumers’ personal information. Covered Businesses would be required to comply with such requests free of charge within 45 days of receipt, but would not be required to provide this information more than once in any 12-month period.
- Exemption. Based on a carve-out in the definition of “third party” (which is defined to exclude (1) “the business that collects personal information from consumers under this Act” or (2) “a person to whom the business discloses a consumer’s personal information for a business purpose pursuant to a written contract”), Covered Businesses would not be required to make the disclosures described above to the extent the Covered Business discloses personal information to another entity pursuant to a written contract with such entity, provided the contract prohibits the recipient from selling the personal information, or retaining, using or disclosing the personal information for any purpose other than performance of services under the contract.
- Disclosures and Right to Opt Out. The Act would require Covered Businesses to provide notice to consumers of their rights under the Act and, where applicable, of the fact that the Covered Business sells their personal information. If the Covered Business sells consumers’ personal information, the notice must disclose that fact and state that consumers have a right to opt out of the sale of their personal information. Covered Businesses would be required to make this disclosure in their online privacy notice and must separately provide a clear and conspicuous link on their website that says, “Do Not Sell My Personal Information” and provides an opt-out mechanism. If a consumer opts out, the Covered Business would be required to stop selling the consumer’s personal information unless the consumer expressly re-authorizes such sale.
- Liability for Security Breaches. Pursuant to the Act, if a Covered Business suffers a “breach of the security of the system” (as defined in California’s breach notification law), the Covered Business may be held liable for a violation of the Act if the Covered Business “failed to implement and maintain reasonable security procedures and practices, appropriate to the nature of the information, to protect personal information.”
- Enforcement. The Act would establish a private right of action and expressly provides that a violation of the Act establishes injury-in-fact without the need to show financial harm. The Act establishes maximum statutory damages of $3,000 per violation or actual damages, whichever is higher. Separately, the Act also would be enforceable by the California Attorney General and would authorize a civil penalty of up to $7,500 per violation. The Act also contains whistleblower enforcement provisions.
If passed, the Act would take effect November 7, 2018, but would “only apply to personal information collected or sold by a business on or after” August 7, 2019.
Recently, the Personal Data Collection and Protection Ordinance (“the Ordinance”) was introduced to the Chicago City Council. The Ordinance would require businesses to (1) obtain prior opt-in consent from Chicago residents to use, disclose or sell their personal information, (2) notify affected Chicago residents and the City of Chicago in the event of a data breach, (3) register with the City of Chicago if they qualify as “data brokers,” (4) provide specific notification to mobile device users for location services and (5) obtain prior express consent to use geolocation data from mobile applications.
Key provisions of the Ordinance include:
- Opt-in Consent to Use and Share Personal Information. In order to use, disclose or sell the personal information of Chicago residents, website operators and online services providers must obtain prior opt-in consent from individuals. Upon request, businesses must disclose to the individual (or their designee) the personal information they maintain about the individual.
- Security Breach Notification. The Ordinance also imposes breach notification obligations on businesses that process personal information of Chicago residents. Businesses are generally required to notify affected residents or, if they do not own the affected personal information, the data owners within 15 days of discovering the breach. Businesses must also notify the City of Chicago regarding the timing, content and distribution of the notices to individuals and number of affected individuals.
- Data Broker Registration. Data brokers, defined as commercial entities that collect, assemble and possess personal information about Chicago residents who are not their customers or employees in order to trade the information, must register with the City of Chicago. Data brokers must submit an annual report to the City, including, among other items, (1) the number of Chicago residents whose personal information the brokers collected in the previous year and (2) the name and nature of the businesses with which the brokers shared personal information.
- Mobile Devices with Location Services Functionality. Retailers that sell or lease mobile devices with location services functionality must provide notice about the functionality in the form and substance prescribed by the Ordinance.
- Location-enabled Mobile Applications. In order to collect, use, store or disclose geolocation information from a mobile application, a business must generally obtain the individual’s affirmative express consent. This requirement is subject to various exceptions, such as in certain instances to allow a parent or guardian to locate their minor child.
Depending on the requirement, the Ordinance allows for a private right of action and specifies fines to address violations.
Recently, Colorado’s governor signed into law House Bill 18-1128 “concerning strengthening protections for consumer data privacy” (the “Bill”), which takes effect September 1, 2018. Among other provisions, the Bill (1) amends the state’s data breach notification law to require notice to affected Colorado residents and the Colorado Attorney General within 30 days of determining that a security breach occurred, imposes content requirements for the notice to residents and expands the definition of personal information; (2) establishes data security requirements applicable to businesses and their third-party service providers; and (3) amends the state’s law regarding disposal of personal identifying information.
Key breach notification provisions of the Bill include:
- Definition of Personal Information: The Bill amends Colorado’s breach notification law to define “personal information” as a Colorado resident’s first name or first initial and last name in combination with one or more of the following data elements: (1) Social Security number; (2) student, military or passport identification number; (3) driver’s license number or identification card number; (4) medical information; (5) health insurance identification number; or (6) biometric data. The amended law’s definition of “personal information” also includes a Colorado resident’s (1) username or email address in combination with a password or security questions and answers that would permit access to an online account and (2) account number or credit or debit card number in combination with any required security code, access code or password that would permit access to that account.
- Attorney General Notification: If an entity must notify Colorado residents of a data breach, and reasonably believes that the breach has affected 500 or more residents, it must also provide notice to the Colorado Attorney General. Notice to the Attorney General is required even if the covered entity maintains its own procedures for security breaches as part of an information security policy or pursuant to state or federal law.
- Timing: Notice to affected Colorado residents and the Colorado Attorney General must be made within 30 days after determining that a security breach occurred.
- Content Requirements: The Bill also requires that notice to affected Colorado residents must include (1) the date, estimated date or estimated date range of the breach; (2) a description of the personal information acquired or reasonably believed to have been acquired; (3) contact information for the entity; (4) the toll-free numbers, addresses and websites for consumer reporting agencies and the FTC; and (5) a statement that the Colorado resident can obtain information from the FTC and the credit reporting agencies about fraud alerts and security freezes. If the breach involves a Colorado resident’s username or email address in combination with a password or security questions and answers that would permit access to an online account, the entity must also direct affected individuals to promptly change their password and security questions and answers, or to take other steps appropriate to protect the individual’s online account with the entity and all other online accounts for which the individual used the same or similar information.
Key data security and disposal provisions of the Bill include:
- Definition of Personal Identifying Information: The Bill defines personal identifying information as “a social security number; a personal identification number; a password; a pass code; an official state or government-issued driver’s license or identification card number; a government passport number; biometric data…; an employer, student, or military identification number; or a financial transaction device.”
- Applicability: The information security and disposal provisions of the Bill apply to “covered entities,” defined as persons that maintain, own or license personal identifying information in the course of the person’s business, vocation or occupation.
- Protection of Personal Identifying Information: The Bill requires a covered entity that maintains, owns or licenses personal identifying information to implement and maintain reasonable security procedures and practices appropriate to the nature of the personal identifying information it holds, and the nature and size of the business and its operations.
- Third-Party Service Providers: Under the Bill, a covered entity that discloses information to a third-party service provider must require the service provider to implement and maintain reasonable security procedures and practices that are (1) appropriate to the nature of the personal identifying information disclosed and (2) reasonably designed to help protect the personal identifying information from unauthorized access, use, modification, disclosure or destruction. A covered entity does not need to require a third-party service provider to do so if the covered entity agrees to provide its own security protection for the information it discloses to the provider.
- Written Disposal Policy: The Bill requires covered entities to create a written policy for the destruction or proper disposal of paper and electronic documents containing personal identifying information that requires the destruction of those documents when they are no longer needed. A covered entity is deemed in compliance with this section of the Bill if it is regulated by state or federal law and maintains procedures for disposal of personal identifying information pursuant to that law.
Recently, Vermont enacted legislation (H.764) that regulates data brokers who buy and sell personal information. Vermont is the first state in the nation to enact this type of legislation.
- Definition of Data Broker. The law defines a “data broker” broadly as “a business, or unit or units of a business, separately or together, that knowingly collects and sells or licenses to third parties the brokered personal information of a consumer with whom the business does not have a direct relationship.”
- Definition of “Brokered Personal Information.” “Brokered personal information” is defined broadly to mean one or more of the following computerized data elements about a consumer, if categorized or organized for dissemination to third parties: (1) name, (2) address, (3) date of birth, (4) place of birth, (5) mother’s maiden name, (6) unique biometric data, including fingerprints, retina or iris images, or other unique physical or digital representations of biometric data, (7) name or address of a member of the consumer’s immediate family or household, (8) Social Security number or other government-issued identification number, or (9) other information that, alone or in combination with the other information sold or licensed, would allow a reasonable person to identify the consumer with reasonable certainty.
- Registration Requirement. The law requires data brokers to register annually with the Vermont Attorney General and pay a $100 annual registration fee.
- Disclosures to State Attorney General. Data brokers must disclose annually to the State Attorney General information regarding their practices related to the collection, storage or sale of consumers’ personal information. Data brokers also must disclose annually their practices, if any, for allowing consumers to opt out of the collection, storage or sale of their personal information. Further, the law requires data brokers to report annually the number of data breaches experienced during the prior year and, if known, the total number of consumers affected by the breaches. There are additional disclosure requirements if the data broker knowingly possesses brokered personal information of minors, including a separate statement detailing the data broker’s practices for the collection, storage and sale of that information and applicable opt-out policies. Importantly, the law does not require data brokers to offer consumers the ability to opt out.
- Information Security Program. The law requires data brokers to develop, implement and maintain a written, comprehensive information security program that contains appropriate physical, technical and administrative safeguards designed to protect consumers’ personal information.
- Elimination of Fees for Security Freezes. The law eliminates fees associated with a consumer placing or lifting a security freeze. Previously, Vermont law allowed for fees of up to $10 to place, and up to $5 to lift temporarily or remove, a security freeze.
- Enforcement. A violation of the law is considered an unfair and deceptive act in commerce in violation of Vermont’s consumer protection law.
- Effective Date. The registration and data security obligations take effect January 1, 2019, while the other provisions of the law take effect immediately.
In a statement, Vermont Attorney General T.J. Donovan said, “This bill not only saves [Vermonters] money, but it gives them information and tools to help them keep their personal information secure.”
On June 6, 2018, the U.S. Court of Appeals for the Eleventh Circuit vacated a 2016 Federal Trade Commission (“FTC”) order compelling LabMD to implement a “comprehensive information security program that is reasonably designed to protect the security, confidentiality, and integrity of personal information collected from or about consumers.” The Eleventh Circuit agreed with LabMD that the FTC order was unenforceable because it did not direct the company to stop any “unfair act or practice” within the meaning of Section 5(a) of the Federal Trade Commission Act (the “FTC Act”).
The case stems from allegations that LabMD, a now-defunct clinical laboratory for physicians, failed to protect the sensitive personal information (including medical information) of consumers, resulting in two specific security incidents. One such incident occurred when a third party informed LabMD that an insurance-related report, which contained personal information of approximately 9,300 LabMD clients (including names, dates of birth and Social Security numbers), was available on a peer-to-peer (“P2P”) file-sharing network.
Following an FTC appeal process, the FTC ordered LabMD to implement a comprehensive information security program that included:
- designated employees accountable for the program;
- identification of material internal and external risks to the security, confidentiality and integrity of personal information;
- reasonable safeguards to control identified risks;
- reasonable steps to select service providers capable of safeguarding personal information, and requiring them by contract to do so; and
- ongoing evaluation and adjustment of the program.
In its petition for review of the FTC order, LabMD asked the Eleventh Circuit to decide whether (1) its alleged failure to implement reasonable data security practices constituted an unfair practice within the meaning of Section 5 of the FTC Act and (2) the FTC’s order was enforceable if it did not direct LabMD to stop committing any specific unfair act or practice.
The Eleventh Circuit assumed, for purposes of its ruling, that LabMD’s failure to implement a reasonably designed data-security program constituted an unfair act or practice within the meaning of Section 5 of the FTC Act. However, the court held that the FTC’s cease and desist order, which was predicated on LabMD’s general negligent failure to act, was not enforceable. The court noted that the prohibitions contained in the FTC’s cease and desist orders and injunctions “must be stated with clarity and precision,” otherwise they may be unenforceable. The court found that in LabMD’s case, the cease and desist order contained no prohibitions nor instructions to the company to stop a specific act or practice. Rather, the FTC “command[ed] LabMD to overhaul and replace its data-security program to meet an indeterminable standard of reasonableness.” The court took issue with the FTC’s scheme of “micromanaging,” and concluded that the cease and desist order “mandate[d] a complete overhaul of LabMD’s data-security program and [said] precious little about how this [was] to be accomplished.” The court also noted that the FTC’s prescription was “a scheme Congress could not have envisioned.”
On June 2, 2018, Oregon’s amended data breach notification law (“the amended law”) went into effect. Among other changes, the amended law broadens the applicability of breach notification requirements, prohibits fees for security freezes and related services provided to consumers in the wake of a breach and adds a specific notification timing requirement.
Key Provisions of the Amended Law Include:
- Definition of Personal Information: Oregon’s definition of personal information now includes the consumer’s first name or initial and last name combined with “any other information or combination of information that a person reasonably knows or should know would permit access to the consumer’s financial account.”
- Expanded Scope of Application: Instead of applying only to persons who “own or license” personal information that they use in the course of their business, the amended law now also applies to any person who “otherwise possesses” such information and uses it in the course of their business. It also requires notice when an organization receives a notice of breach from another person that “maintains or otherwise possesses personal information on the person’s behalf.” Persons who maintain or otherwise possess information on behalf of another must “notify the other person as soon as is practicable after discovering a breach of security.”
- Notice Requirements: The amended law adds a new notice deadline. Notice of a breach of security must be given in the “most expeditious manner possible, without unreasonable delay,” and not later than 45 days after discovering or being notified of the security breach. Also, while the amended law exempts entities that are required to provide breach notification under certain other requirements (e.g., federal laws such as HIPAA), such entities are now required to provide the Attorney General with any notice sent to consumers or regulators in compliance with such other requirements.
- Providing Credit Monitoring Services: If organizations offer consumers credit monitoring services or identity theft prevention or mitigation services in connection with their notice of a breach, they cannot make those services contingent on the consumer providing a credit or debit card number, or accepting another service that the person offers to provide for a fee. The terms and conditions of any contract for the provision of these services must embody these requirements.
- Prohibiting Fees for Security Freezes: Under the amended law, consumer reporting agencies are prohibited from charging a consumer a fee for “placing, temporarily lifting or removing a security freeze on the consumer’s report,” creating or deleting protective records, placing or removing security freezes on protected records, or replacing identification numbers, passwords or similar devices that the agency previously provided.
On May 31, 2018, the Federal Trade Commission published on its Business Blog a post addressing the easily missed data deletion requirement under the Children’s Online Privacy Protection Act (“COPPA”).
The post cautions that companies must review their data retention policies in order to comply with the data retention and deletion rule. Under Section 312.10 of the COPPA Rule, an online service operator may retain personal information of a child “for only as long as is reasonably necessary to fulfill the purposes for which the information was collected.” After that, the operator must delete the information, using reasonable measures to ensure its secure deletion.
The FTC explains that a thorough review of data retention policies is crucial for compliance, as the deletion requirement is triggered without an express request from parents. Companies must verify, among other items, when the data ceases to be necessary for the initial purpose for which it was collected, and what they do with the data at that point. For instance, the FTC illustrates, a subscription-based children’s app provider would want to ask what it does with the data when a parent closes an account, a subscription is not renewed or an account becomes inactive. If the information is still necessary for billing purposes, the company must determine how much longer it needs the information.
The FTC provides the following questions that companies should ask to ensure compliance:
- What types of personal information do you collect from children?
- What is your stated purpose for collecting the information?
- How long do you need to retain the information for the initial purpose?
- Does the purpose for using the information end with an account deletion, subscription cancellation or account inactivity?
- When it’s time to delete information, are you doing it securely?
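The questions above amount to a retention test an operator could apply to its own records. The following is a minimal sketch of such a check; the record fields (`account_closed`, `last_active`, `needed_for_billing`) and the 90-day inactivity window are hypothetical illustrations, not anything prescribed by the COPPA Rule, which ties retention to the stated purpose of collection.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention window; a real operator would derive this from
# its stated collection purpose, not from a fixed constant.
RETENTION_AFTER_INACTIVITY = timedelta(days=90)

def records_due_for_deletion(records, now=None):
    """Return child records whose stated purpose appears to have lapsed.

    Each record is a dict with illustrative keys:
      'account_closed' (bool), 'last_active' (tz-aware datetime),
      'needed_for_billing' (bool).
    """
    now = now or datetime.now(timezone.utc)
    due = []
    for rec in records:
        inactive = now - rec["last_active"] > RETENTION_AFTER_INACTIVITY
        # Purpose ends on closure or prolonged inactivity, unless the
        # data is still needed for a remaining purpose (e.g., billing).
        if (rec["account_closed"] or inactive) and not rec["needed_for_billing"]:
            due.append(rec)
    return due
```

Flagged records would then be deleted with whatever secure-deletion measures the operator has adopted; the sketch only identifies candidates, it does not delete anything.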
If you’re in the business of safeguarding data and the systems that process it, what do you call your profession? Are you in cybersecurity? Information security? Computer security, perhaps? The words we use, and the way in which the meaning we assign to them evolves, reflects the reality behind our language. If we examine the factors that influence our desire to use one security title over the other, we’ll better understand the nature of the industry and its driving forces.
Until recently, I’ve had no doubts about describing my calling as an information security professional. Yet, the term cybersecurity is growing in popularity. This might be because the industry continues to embrace the lexicon used in government and military circles, where cyber reigns supreme. It might also be due to non-experts’ familiarity with the word cyber.
When I asked on Twitter about people’s opinions on these terms, I received several responses, including the following:
- Danny Akacki was surprised to discover, after some research, that the origin of cyber goes deeper than the marketing buzzword that many industry professionals believe it to be.
- Paul Melson and Loren Dealy Mahler viewed cybersecurity as a subset of information security. Loren suggested that cyber focuses on technology, while Paul considered cyber as a set of practices related to interfacing with adversaries.
- Maggie O’Reilly mentioned Gartner’s model that, in contrast, used cybersecurity as the overarching discipline that encompasses information security and other components.
- Rik Ferguson also advocated for cybersecurity over information security, viewing cyber as a term that encompasses multiple components: people, systems, as well as information.
- Jessica Barker explained that “people outside of our industry relate more to cyber,” proposing that if we want them to engage with us, “we would benefit from embracing the term.”
In line with Danny’s initial negative reaction to the word cyber, I’ve perceived cybersecurity as a term associated with heavy-handed marketing practices. Also, like Paul, Loren, Maggie and Rik, I have a sense that cybersecurity and information security are interrelated and somehow overlap. Jessica’s point regarding laypersons relating to cyber piqued my interest and, ultimately, changed my opinion of this term.
There is a way to dig into cybersecurity and information security to define them as distinct terms. For instance, NIST defines cybersecurity as:
“Prevention of damage to, protection of, and restoration of computers, electronic communications systems, electronic communications services, wire communication, and electronic communication, including information contained therein, to ensure its availability, integrity, authentication, confidentiality, and nonrepudiation.”
Compare that description to NIST’s definition of information security:
“The protection of information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction in order to provide confidentiality, integrity, and availability.”
From NIST’s perspective, cybersecurity is about safeguarding electronic communications, while information security is about protecting information in all forms. This implies that, at least according to NIST, cybersecurity is a subset of information security. While this nuance might be important in some contexts, such as regulations, the distinction probably won’t remain relevant for long, because of the points Jessica Barker raised.
Jessica’s insightful post on the topic highlights the need for security professionals to use language that our non-specialist stakeholders and people at large understand. She outlines a brief history that lends credence to the word cyber. She also explains that while most practitioners seem to prefer information security, this term is least understood by the public, where cybersecurity is much more popular. She explains that:
“The media have embraced cyber. The board has embraced cyber. The public have embraced cyber. Far from being meaningless, it resonates far more effectively than ‘information’ or ‘data’. So, for me, the use of cyber comes down to one question: what is our goal? If our goal is to engage with and educate as broad a range of people as possible, using ‘cyber’ will help us do that. A bridge has been built, and I suggest we use it.”
Technology and the role it plays in our lives continues to change. Our language evolves with it. I’m convinced that the distinction between cybersecurity and information security will soon become purely academic and ultimately irrelevant even among industry insiders. If the world has embraced cyber, security professionals will end up doing so as well. While I’m unlikely to wean myself off information security right away, I’m starting to gradually transition toward cybersecurity.
On May 24, 2018, the Federal Trade Commission granted final approval to a settlement (the “Final Settlement”) with PayPal, Inc., to resolve charges that PayPal’s peer-to-peer payment service, Venmo, misled consumers regarding certain restrictions on the use of its service, as well as the privacy of transactions. The proposed settlement was announced on February 27, 2018. In its complaint, the FTC alleged that Venmo misrepresented its information security practices by stating that it “uses bank-grade security systems and data encryption to protect your financial information.” Instead, the FTC alleged that Venmo violated the Gramm-Leach-Bliley Act’s (“GLBA’s”) Safeguards Rule by failing to (1) have a written information security program; (2) assess the risks to the security, confidentiality and integrity of customer information; and (3) implement basic safeguards such as providing security notifications to users that their passwords were changed. The complaint also alleged that Venmo (1) misled consumers about their ability to transfer funds to external bank accounts, and (2) misrepresented the extent to which consumers could control the privacy of their transactions, in violation of the GLBA Privacy Rule.
The Final Settlement prohibits Venmo from misrepresenting “any material restrictions on the use of its service, the extent of control provided by any privacy settings, and the extent to which Venmo implements or adheres to a particular level of security.” Venmo also must make certain transaction- and privacy-related disclosures to consumers and refrain from violating the Privacy Rule and Safeguards Rule. Venmo is required to obtain biennial third-party assessments of its compliance with the Rules for 10 years, which, according to the FTC, is “[c]onsistent with past cases involving violations of Gramm-Leach-Bliley Act Rules.”
On May 8, 2018, Senator Ron Wyden (D–OR) demanded that the Federal Communications Commission investigate the alleged unauthorized tracking of Americans’ locations by Securus Technologies, a company that provides phone services to prisons, jails and other correctional facilities. Securus allegedly purchases real-time location data from a third-party location aggregator and provides the data to law enforcement without obtaining judicial authorization for the disclosure of the data. In turn, the third-party location aggregator obtains the data from wireless carriers. Federal law restricts how and when wireless carriers can share certain customer information with third parties, including law enforcement. Wireless carriers are prohibited from sharing certain customer information, including location data, unless the carrier has obtained the customer’s consent or the sharing is otherwise required by law.
To access real-time location data from Securus, Senator Wyden’s letter alleges, correctional officers can enter any U.S. wireless phone number and upload a document purporting to be an “official document giving permission” to obtain real-time location data about the wireless customer. According to the letter, Securus does not take any steps to verify that the documents actually provide judicial authorization for the real-time location surveillance. The letter requests that the FCC investigate Securus’ practices and the wireless carriers’ failure to maintain exclusive control over law enforcement access to their customers’ location data. The letter also calls for a broader investigation into the customer consent that each wireless carrier requires from other companies before sharing customer location information and other data. Separately, Senator Wyden also sent a letter to the major wireless carriers requesting an investigation into the safeguards in place to prevent the unauthorized sharing of wireless customer information.
In response, the FCC confirmed that it has opened an investigation into LocationSmart, reportedly the third-party vendor that sold the location data to Securus. Senator Wyden provided comment to the website Ars Technica that the “location aggregation industry” has functioned with “essentially no oversight,” and urged the FCC to “expand the scope of this investigation and to more broadly probe the practice of third parties buying real-time location data on Americans.”
When cybersecurity professionals communicate with regular, non-technical people about IT and security, they often use language that virtually guarantees that the message will be ignored or misunderstood. This is often a problem for information security and privacy policies, which are written by subject-matter experts for people who lack the expertise. If you’re creating security documents, take extra care to avoid jargon, wordiness and other issues that plague technical texts.
To strengthen your ability to communicate geeky concepts in plain English, consider the following exercise: Take a boring paragraph from a security assessment report or an information security policy and translate it into a sentence that’s no longer than 15 words without using industry terminology. I’m not suggesting that the resulting statement should replace the original text; instead, I suspect this exercise will train you to write more plainly and succinctly.
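If you want a mechanical nudge while practicing, the 15-word constraint is easy to check automatically. Here is a minimal Python sketch; the `JARGON` list and the `plain_english_check` name are illustrative inventions, not part of the exercise itself:

```python
# A tiny, illustrative set of words that often signal industry jargon.
JARGON = {"leverage", "utilize", "paradigm", "operationalize", "stakeholder"}

def plain_english_check(sentence: str, max_words: int = 15) -> dict:
    """Flag a sentence that exceeds the word budget or contains jargon."""
    words = sentence.split()
    cleaned = [w.strip(".,;:!?").lower() for w in words]
    return {
        "word_count": len(words),
        "within_limit": len(words) <= max_words,
        "jargon": [w for w in cleaned if w in JARGON],
    }
```

For instance, feeding it a 16-word, jargon-laden sentence reports both problems, while a short plain sentence passes cleanly.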
For example, I extracted and slightly modified a few paragraphs from the Princeton University Information Security Policy, so that I could experiment with a public document written in legalese. I then attempted to relay the idea behind each paragraph in the form of a three-line haiku (5, 7 and 5 syllables per line):
This Policy applies to all Company employees, contractors and other entities acting on behalf of Company. This policy also applies to other individuals and entities granted use of Company information, including, but not limited to, contractors, temporary employees, and volunteers.
If you can read this,
you must follow the rules that
are explained below.
When disclosing Confidential information, the proposed recipient must agree (i) to take appropriate measures to safeguard the confidentiality of the information; (ii) not to disclose the information to any other party for any purpose absent the Company’s prior written consent.
Don’t share without a
contract any information
All entities granted use of Company Information are expected to: (i) understand the information classification levels defined in the Information Security Policy; (ii) access information only as needed to meet legitimate business needs.
Know your duties for
safeguarding company info.
Use it properly.
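For fun, the 5-7-5 structure can be checked mechanically too. The sketch below uses a crude vowel-group heuristic; English syllabification is irregular, so it misfires on words like "safeguarding," and both function names are my own invention:

```python
import re

def syllables(word: str) -> int:
    """Crude syllable estimate: drop a common silent ending,
    then count runs of vowels. Expect occasional misses."""
    w = re.sub(r"(?:es|ed|e)$", "", word.lower())
    return max(len(re.findall(r"[aeiouy]+", w)), 1)

def is_haiku(lines) -> tuple:
    """Return (matches_5_7_5, per_line_counts) for three lines of text."""
    counts = [sum(syllables(w) for w in re.findall(r"[A-Za-z']+", line))
              for line in lines]
    return counts == [5, 7, 5], counts
```

Running it on the first haiku above yields counts of [5, 7, 5], within the limits of the heuristic.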
By challenging yourself to shorten a complex concept into a single sentence, you motivate yourself to determine the most important aspect of the text, so you can better communicate it to others. This approach might be especially useful for fine-tuning executive summaries, which often warrant careful attention and wordsmithing. This is just one of the ways in which you can improve your writing skills with deliberate practice.
On April 27, 2018, the Federal Trade Commission issued two warning letters to foreign marketers of geolocation tracking devices for violations of the U.S. Children’s Online Privacy Protection Act (“COPPA”). The first letter was directed to a Chinese company, Gator Group, Ltd., that sold the “Kids GPS Gator Watch” (marketed as a child’s first cellphone); the second was sent to a Swedish company, Tinitell, Inc., marketing a child-based app that works with a mobile phone worn like a watch. Both products collect a child’s precise geolocation data, and the Gator Watch includes geofencing “safe zones.”
Importantly, in commenting on its ability to reach foreign companies that target U.S. children, the FTC stated that “[t]he COPPA Rule applies to foreign-based websites and online services that are involved in commerce in the United States. This would include, among others, foreign-based sites or services that are directed to children in the United States, or that knowingly collect personal information from children in the United States.”
In both letters, the FTC warned that it had specifically reviewed the foreign operators’ online services and had identified potential COPPA violations (i.e., a failure to provide direct notice or obtain parental consent prior to collecting geolocation data). The FTC stated that it expected the companies to come into compliance with COPPA, including in the case of Tinitell, which had stopped marketing the watch in an effort to adhere to COPPA’s ongoing obligation to keep children’s data secure.
On May 4, 2018, St. Kitts and Nevis’ legislators passed the Data Protection Bill 2018 (the “Bill”). The Bill was passed to promote the protection of personal data processed by public and private bodies.
Attorney General the Honourable Vincent Byron explained that the Bill is largely derived from the Organization of Eastern Caribbean States model and “seeks to ensure that personal information in the custody or control of an organization, whether it be a public group like the government, or private organization, shall not be disclosed, processed or used other than the purpose for which it was collected, except with the consent of the individual or where exemptions are clearly defined.”
Read more about the Bill.
On May 1, 2018, the Information Security Technology – Personal Information Security Specification (the “Specification”) went into effect in China. The Specification is not binding and cannot be used as a direct basis for enforcement. However, enforcement agencies in China can still use the Specification as a reference or guideline in their administration and enforcement activities. For this reason, the Specification should be taken seriously as a best practice in personal data protection in China, and should be complied with where feasible.
The Specification constitutes a best practices guide for the collection, retention, use, sharing and transfer of personal information, and for the handling of related information security incidents. It includes (without limitation) basic principles for personal information security, notice and consent requirements, security measures, rights of data subjects and requirements related to internal administration and management. The Specification establishes a definition of sensitive personal information, and provides specific requirements for its collection and use.
Read our previous blog post from January 2018 for a more detailed description of the Specification.
On April 30, 2018, the Federal Trade Commission announced that BLU Products, Inc. (“BLU”), a mobile phone manufacturer, agreed to settle charges that the company allowed ADUPS Technology Co. Ltd. (“ADUPS”), a third-party service provider based in China, to collect consumers’ personal information without their knowledge or consent, notwithstanding the company’s promises that it would keep the relevant information secure and private. The relevant personal information allegedly included, among other information, text message content and real-time location information. On September 6, 2018, the FTC gave final approval to the settlement in a unanimous 5-0 vote.
The FTC’s complaint alleged that BLU falsely claimed that the company (1) limited third-party collection of data from users’ devices to information needed to perform requested services, and (2) implemented appropriate physical, technical and administrative safeguards to protect consumers’ personal information. The FTC alleged that BLU in fact failed to implement appropriate security procedures to oversee the security practices of its service providers, including ADUPS, and that as a result, ADUPS was able to (and did in fact) collect sensitive personal information from BLU devices without consumers’ knowledge or consent. ADUPS allegedly collected text message contents, call and text logs with full telephone numbers, contact lists, real-time location data, and information about applications used and installed on consumers’ BLU devices. The FTC alleged that BLU’s lack of oversight allowed ADUPS to collect this information notwithstanding the fact that ADUPS did not need this information to perform the relevant services for BLU. The FTC further alleged that preinstalled ADUPS software on BLU devices “contained common security vulnerabilities that could enable attackers to gain full access to the devices.”
The terms of the settlement prohibit BLU from misrepresenting the extent to which it protects the privacy and security of personal information and require the company to implement and maintain a comprehensive security program. The company also must undergo biennial third-party assessments of its security program for 20 years and is subject to certain recordkeeping and compliance monitoring requirements.
Is it better to perform product management of information security solutions at a large company or at a startup? Picking the setting that’s right for you isn’t as simple as craving the exuberant energy of a young firm or coveting the resources and brand of an organization that’s been around for a while. Each environment has its challenges and advantages for product managers. The type of innovation, nature of collaboration, sales dynamics, and cultural nuances are among the factors to consider when deciding which setting is best for you.
The perspective below is based on my product management experiences in the field of information security, though I suspect it’s applicable to product managers in other high-tech environments.
Product Management at a Large Firm
In the world of information security, industry incumbents are usually large organizations. This is in part because growing in a way that satisfies investors generally requires the financial might, brand and customer access that are hard for small cybersecurity companies to achieve. Moreover, customers who are not early adopters often find it easier to focus their purchasing on a single provider of unified infosec solutions. These dynamics set the context for the product manager’s role at large firms.
Access to Customers
Though the specifics differ across organizations, product management often involves defining capabilities and driving adoption. The product manager’s most significant advantage at a large company is probably access to customers. This is due to the size of the firm’s sales and marketing organization, as well as the large number of companies that have already purchased some of the company’s products.
Such access helps with understanding requirements for new products, improving existing technologies, and finding new customers. For example, you could bring your product to a new geography by using the sales force present in that area without having to hire a dedicated team. Also, it’s easier to upsell a complementary solution than build a new customer relationship from scratch.
Access to Expertise
Another benefit of a large organization is access to funds and expertise that’s sometimes hard to obtain in a young, small company. Instead of hiring a full-time specialist for a particular task, you might be able to draw upon the skills and experience of someone who supports multiple products and teams. In addition, assuming your efforts receive the necessary funding, you might find it easier to pursue product objectives and enter new markets in a way that could be hard for a startup to accomplish. Securing that funding isn’t always easy, though: budgetary planning in large companies can be more onerous than raising money from venture capitalists.
Working in any capacity at an established firm requires that you understand and follow the often-changing bureaucratic processes inherent to any large entity. Depending on the organization’s structure, product managers in such environments might lack direct control over the teams vital to the success of their product. Therefore, the product manager needs to excel at forming cross-functional relationships and influencing indirectly. (Coincidentally, this is also a key skill set for many Chief Information Security Officers.)
Sometimes even understanding all of your own objectives and success criteria in such environments can be challenging. It can be even harder to stay abreast of the responsibilities of others in the corporate structure. On the other hand, one of the upsides of a large organization is the room to grow one’s responsibilities vertically and horizontally without switching organizations. This is often impractical in small companies.
What It’s Like at a Large Firm
In a nutshell, these are the characteristics inherent to product management roles at large companies:
- An established sales organization, which provides access to customers
- Potentially conflicting priorities and incentives among groups and individuals within the organization
- Rigid organizational structure and bureaucracy
- Potentially easier access to funding for sophisticated projects and complex products
- Possibly easier access to the needed expertise
- Well-defined career development roadmap
I loved working as a security product manager at a large company. I was able to oversee a range of in-house software products and managed services that focused on data security. One of my solutions involved custom-developed hardware with integrated home-grown and third-party software, serviced by a team of help desk and in-the-field technicians. A fun challenge!
I also appreciated the chance to develop expertise in the industries that my employer serviced, so I could position infosec benefits in the context relevant to those customers. I enjoyed staying abreast of the social dynamics and politics of a siloed, matrixed organization. After a while I decided to leave, because I was starting to feel a bit too comfortable. I also developed an appetite for risk and began craving the energy inherent to startups.
Product Management in a Startup
One of the most liberating, yet scary, aspects of product management at a startup is that you’re starting the product from a clean slate. While product managers at established companies often need to account for legacy requirements and internal dependencies, a young firm is generally free of such entanglements, at least at the outset of its journey.
What markets are we targeting? How will we reach customers? What comprises the minimum viable product? Though product managers ask such questions in all types of companies, startups are less likely to survive erroneous answers in the long term. Fortunately, short-term experiments are easier to perform to validate ideas before making strategic commitments.
Experimenting With Capabilities
Working in a small, nimble company allows the product manager to quickly experiment with ideas, get them implemented, introduce them into the field, and gather feedback. In the world of infosec, rapidly iterating through defensive capabilities of the product is useful for multiple reasons, including the ability to assess—based on real-world feedback—whether the approach works against threats.
Have an idea that is so crazy, it just might work? In a startup, you’re more likely to have a chance to try some aspect of your approach, so you can rapidly determine whether it’s worth pursuing further. Moreover, given the mindshare that the industry’s incumbents have with customers, fast iterations help the startup understand which of its product capabilities customers will truly value.
In all companies, almost every individual has a certain role for which they’ve been hired. Yet, the specific responsibilities assigned to that role in a young firm often benefit from the person’s interpretation, and are based on the person’s strengths and the company’s need at a given moment. A security product manager working at a startup might need to assist with pre-sales activities, take a part in marketing projects, perform threat research and potentially develop proof-of-concept code, depending on what expertise the person possesses and what the company requires.
People in a small company are less likely to have the “it’s not my job” attitude than those in highly structured, large organizations. A startup generally has fewer silos, making it easier to engage in activities that interest the person even if they are outside his or her direct responsibilities. This can be stressful and draining at times. On the other hand, it makes it difficult to get bored, and also gives the product manager an opportunity to acquire skills in areas tangential to product management. (For additional details regarding this, see my article What’s It Like to Join a Startup’s Executive Team?)
A product manager’s access to customers and prospects at a startup tends to be more immediate and direct than at a large corporation. This is in part because of the many hats that the product manager needs to wear, sometimes acting as a sales engineer and at times helping with support duties. These tasks give the person the opportunity to hear unfiltered feedback from current and potential users of the product.
However, until it builds up steam, a young company simply lacks a sales force with the scale to reach many customers. (See Access to Customers above.) This means that the product manager might need to help identify prospects, which can be outside the comfort zone of individuals who haven’t participated in sales efforts in this capacity.
What It’s Like at a Startup
Here are the key aspects of performing product management at a startup:
- Ability and need to iterate faster to get feedback
- Willingness and need to take higher risks
- Lower bureaucratic burden and red tape
- Much harder to reach customers
- Often fewer resources to deliver on the roadmap
- Fluid designation of responsibilities
I’m presently responsible for product management at Minerva Labs, a young endpoint security company. I’m loving the make-or-break feeling of the startup. For the first time, I’m overseeing the direction of a core product that’s built in-house, rather than managing a solution built upon third-party technology. It’s gratifying to be involved in the creation of new technology in such a direct way.
There are lots of challenges, of course, but every day feels like an adventure as we fight for a seat at the big kids’ table, grow the customer base and break new ground with innovative anti-malware approaches. It’s a risky environment with high highs and low lows, but it feels like the right place for me right now.
Which Setting is Best for You?
Numerous differences between startups and large companies affect the experience of working in these firms. The distinction is especially pronounced for product managers, who oversee the creation of the solutions these companies sell. You need to understand these differences before deciding which environment is best for you, but that’s just a start. Next, understand what is best for you, given where you are in life and your professional development. Sometimes the capabilities you will have as a product manager in an established firm will be just right; at other times, you will thrive in a startup. Work in the environment that appeals to you, but also know when (or whether) it’s time to make a change.
The Canadian government recently published a cabinet order stating that the effective date for breach notification provisions in the Digital Privacy Act would be November 1, 2018. At that time, businesses that experience a “breach of security safeguards” would be required to notify affected individuals, as well as the Privacy Commissioner and any other organization or government institution that might be able to reduce the risk of harm resulting from the breach.
Canada has had mandatory breach notification regulations at the provincial level, and many companies have also voluntarily reported breaches to the federal Privacy Commissioner, so most organizations should be well-equipped to meet the November 1 compliance deadline.
On March 6, 2018, Singapore’s Ministry of Communications and Information announced that Singapore has joined the APEC Cross-Border Privacy Rules (“CBPR”) and Privacy Recognition for Processors (“PRP”) systems. As we previously reported, Singapore submitted its intent to join both systems in July 2017.
Singapore becomes the sixth APEC economy to join the CBPR system, joining the U.S., Mexico, Canada, Japan and South Korea, and the second APEC economy to join the PRP system, after the U.S. Once the CBPR system is fully operationalized in Singapore, through a local Accountability Agent that will certify companies, Singapore-based organizations will be able to certify to the CBPR and rely on them as a cross-border data transfer mechanism. Other APEC economies actively working on joining the CBPR and PRP systems include Australia, Chinese Taipei and the Philippines.
The APEC CBPR system is a regional, multilateral cross-border data transfer mechanism and an enforceable privacy code of conduct developed for businesses by the 21 APEC member economies. The CBPR system implements the nine high-level APEC Privacy Principles set forth in the APEC Privacy Framework.
As we previously reported, the APEC PRP system allows information processors to demonstrate their ability to effectively implement an information controller’s privacy obligations related to the processing of personal information. The PRP also enables information controllers to identify qualified and accountable processors, as well as to assist small- or medium-sized processors that are not widely known to gain visibility and credibility.
On February 28, 2018, the Federal Trade Commission issued a report, titled Mobile Security Updates: Understanding the Issues (the “Report”), that analyzes the process by which mobile devices sold in the U.S. receive security updates and provides recommendations for improvement. The Report is based on information the FTC obtained from eight mobile device manufacturers, and from information the Federal Communications Commission collected from six wireless carriers.
The Report raises a number of issues concerning the frequency and length of time that mobile devices are patched for security vulnerabilities, including:
- The complexity of the mobile ecosystem leads to a lag time between discovery of vulnerabilities and the issuance of patches.
- Formal support periods and update schedules are rare, and vary widely in application.
- Many device manufacturers fail to maintain regular records about update support decisions, patch development time, carrier testing time, deployment time or uptake rate.
- Manufacturers provide little information to the public about support period, update frequency or end of update support.
While the Commission commends device manufacturers, carriers and operating system developers that have contributed to providing effective security updates, it also makes several recommendations to improve the security update process:
- Consumer Education: Government, industry and advocacy groups should work together to educate consumers about the significance of security update support and consumers’ role in the operating system update process.
- Length of Security Updates: Device manufacturers, operating system developers and wireless carriers should ensure that all mobile devices receive operating system security updates for a period of time that is consistent with consumers’ reasonable expectations.
- Keep and Share Support Data: Companies involved in the security update process should consider keeping and consulting records about support length, update frequency, customized patch development time, testing time and uptake rate; they also should consider sharing this information with partners to fashion appropriate policies and practices.
- Security-only Updates: Industry should continue to streamline the security update process, including by patching vulnerabilities through security-only updates, when the benefits of more immediate action outweigh the convenience of a bundled security-functionality update.
- Minimum Guaranteed Support Periods: Device manufacturers should consider adopting and disclosing minimum guaranteed security support periods (and update frequency) for their devices; they also should consider giving device owners prompt notice when security support is about to end (and when it has ended), so that consumers can make informed decisions about device replacement or post-support use.
On February 26, 2018, the United States Court of Appeals for the Ninth Circuit ruled in an en banc decision that the “common carrier” exception in the Federal Trade Commission Act is “activity-based,” and therefore applies only to the extent a common carrier is engaging in common carrier services. The decision has implications for FTC authority over Internet service providers, indicating that the FTC has authority to bring consumer protection actions against such providers to the extent they are engaging in non-common carrier activities. The Federal Communications Commission (“FCC”) has previously ruled that Internet access service is not a common carrier service subject to that agency’s jurisdiction.
The Ninth Circuit’s decision arose from a case brought by the FTC against AT&T Mobility, LLC (“AT&T”), regarding AT&T’s “data-throttling practice,” by which “the company reduced customer broadband data speed without regard to actual network congestion” when a customer’s mobile data usage exceeded a specified limit. The FTC brought an action under Section 5 of the FTC Act, alleging that the practice was unfair and deceptive. AT&T moved to dismiss the action, arguing that it was exempt from the FTC’s Section 5 authority on the basis of the “common carrier exception,” in which “common carriers subject to the Acts to regulate commerce” are exempt from Section 5 enforcement authority. The court held that the common carrier exception is activity based, not “status-based,” and applies only to the extent an entity is engaging in common carrier activities. Accordingly, AT&T could not claim Section 5 exemption based on the argument that its overall status was that of a common carrier, and the Ninth Circuit denied its motion to dismiss. The Chairman of the FCC and Acting Chair of the FTC both expressed approval of the court’s decision.
On February 22, 2018, the Federal Trade Commission (“FTC”) published a blog post that provides tips on how consumers can use Virtual Private Network (“VPN”) apps to protect their information while in transit over public networks. The FTC notes that some consumers are finding VPN apps helpful in protecting their mobile device traffic over Wi-Fi networks at coffee shops, airports and other locations. Through a VPN app, a user can browse websites and use apps on a mobile device while shielding the traffic from prying eyes as it travels over public networks.
On February 12, 2018, in a settled enforcement action, the U.S. Commodity Futures Trading Commission (“CFTC”) charged a registered futures commission merchant (“FCM”) with violations of CFTC regulations relating to an ongoing data breach. Specifically, the FCM failed to diligently supervise an information technology provider’s (“IT vendor’s”) implementation of certain provisions in the FCM’s written information systems security program. Though not unprecedented, this case represents a rare CFTC enforcement action premised on a cybersecurity failure at a CFTC-registered entity.
According to the CFTC, a defect in a network-attached storage device installed by the FCM’s IT vendor left unencrypted customers’ records and other information stored on the device unprotected from cyber-exploitation. The defect left the information unprotected for nearly 10 months and led to the compromise of this data after the FCM’s network was accessed by an unauthorized, unaffiliated third party. The IT vendor failed to discover the vulnerability in subsequent network risk assessments, notwithstanding the fact that the unauthorized third party had blogged about exploiting the vulnerability at other companies. The FCM did not learn about the breach of its systems until directly contacted by the third party.
The CFTC charged the FCM under Regulation 166.3, which requires that every CFTC registrant “diligently supervise the handling [of confidential information] by its partners, officers, employees and agents,” and Regulation 160.30, which requires all FCMs to “adopt policies and procedures that address administrative, technical and physical safeguards for the protection of customer records and information.” The CFTC noted that an FCM may delegate the performance of its information systems security program’s technical provisions, including those relevant here. But in contracting with an IT vendor as its agent to perform these services, the FCM cannot abdicate its responsibilities under Regulation 166.3, and must diligently supervise the IT vendor’s handling of all activities relating to the registered entity’s business as a CFTC registrant.
To settle the case, the FCM agreed to (1) pay a $100,000 civil monetary penalty and (2) cease and desist from future violations of Regulation 166.3. The CFTC noted the FCM’s cooperation during the investigation and agreed to reduce sanctions as a result.
On February 6, 2018, the Federal Trade Commission (“FTC”) released its agenda for PrivacyCon 2018, which will take place on February 28. Following recent FTC trends, PrivacyCon 2018 will focus on privacy and data security considerations associated with emerging technologies, including the Internet of Things, artificial intelligence and virtual reality. The event will feature four panels with presentations by over 20 researchers, covering (1) collection, exfiltration and leakage of private information; (2) consumer preferences, expectations and behaviors; (3) economics, markets and experiments; and (4) tools and ratings for privacy management. The FTC’s press release emphasizes the event’s focus on the economics of privacy, including “how to quantify the harms that result when companies fail to secure consumer information, and how to balance the costs and benefits of privacy-protective technologies and practices.”
PrivacyCon 2018, which is free and open to the public, will take place at the Constitution Center conference facility in Washington, D.C. The event will also be webcast on the FTC website and live tweeted using the hashtag #PrivacyCon18.
On February 5, 2018, the Federal Trade Commission (“FTC”) announced its most recent Children’s Online Privacy Protection Act (“COPPA”) case against Explore Talent, an online service marketed to aspiring actors and models. According to the FTC’s complaint, Explore Talent provided a free platform for consumers to find information about upcoming auditions, casting calls and other opportunities. The company also offered a monthly fee-based “pro” service that promised to provide consumers with access to specific opportunities. Users who registered online were asked to input a host of personal information including full name, email, telephone number, mailing address and photo; they also were asked to provide their eye color, hair color, body type, measurements, gender, ethnicity, age range and birth date.
The FTC alleges that Explore Talent collected the same range of personal information from users who indicated they were under age 13 as from other users, and made no attempts to provide COPPA-required notice or obtain parental consent before collecting such information. Once registered on ExploreTalent.com, all profiles, including children’s, became publicly visible, and registered adults were able to “friend” and exchange direct private messages with registered children. The FTC alleges that, between 2014 and 2016, more than 100,000 children registered on ExploreTalent.com. As part of the settlement, Explore Talent agreed to (1) pay a $500,000 civil penalty (which was suspended upon payment of $235,000), (2) comply with COPPA in the future and (3) delete the information it previously collected from children.
On February 1, 2018, the Singapore Personal Data Protection Commission (the “PDPC”) published its response to feedback collected during a public consultation process conducted during the late summer and fall of 2017 (the “Response”). During that public consultation, the PDPC circulated a proposal relating to two general topics: (1) the relevance of two new alternative bases for collecting, using and disclosing personal data (“Notification of Purpose” and “Legal or Business Purpose”), and (2) a mandatory data breach notification requirement. The PDPC invited feedback from the public on these topics.
“Notification of Purpose” as a new basis for an organization to collect, use and disclose personal data.
In its consultation, the PDPC solicited views on “Notification of Purpose” as a possible new basis for data processing. In its Response, the PDPC noted that it intends to amend its consent framework to incorporate the “Notification of Purpose” approach (also called “deemed consent by notification”), which will essentially provide for an opt-out approach.
Under that approach, organizations may collect, use and disclose personal data merely by providing (1) some form of appropriate notice of purpose in situations where there is no foreseeable adverse impact on the data subjects, and (2) a mechanism to opt out. The PDPC will issue guidelines on what would be considered “not likely to have any adverse impact.” The approach will also require organizations to undertake risk and impact assessments to determine any such possible adverse impacts. Where the risk assessments determine a likely adverse impact, the approach may not be used. Also, the “Notification of Purpose” approach may not be used for direct marketing purposes.
The PDPC will not specify how organizations will be required to notify individuals of purpose, and will leave it to organizations to determine the most appropriate method under the circumstances, which might include a general notification on a website or social media page. The notification must, however, include information on how to opt out or withdraw consent from the collection, use or disclosure. The PDPC also said it would provide further guidance on situations where opt-out would be challenging, such as where large volumes of personal data are collected by sensors, for example.
“Legitimate Interest” as a basis to collect, use or disclose personal data.
In its consultation, the PDPC also sought feedback on a proposed “Legal or Business Purpose” ground for processing personal information. In its Response, the PDPC said that based on the feedback, it intends to adopt this concept under the EU term “legitimate interest.” The PDPC will provide guidance on the legal and business purposes that come within the ambit of “legitimate interest,” such as fraud prevention. “Legitimate interest” will not cover direct marketing purposes. The intent behind this ground for processing is to enable organizations to collect, use and disclose personal data in contexts where there is a need to protect legitimate interests that will have economic, social, security or other benefits for the public or a section thereof, and the processing should not be subject to consent. The benefits to the public or a section thereof must outweigh any adverse impacts to individuals. Organizations must conduct risk assessments to determine whether they can meet this requirement. Organizations relying on “legitimate interest” must also disclose this fact and make available a document justifying the organization’s reliance on it.
Mandatory Data Breach Notification
Regarding the 72-hour breach notification requirement proposed in the consultation, the PDPC acknowledged in its Response that an affected organization may need time to determine the veracity of a suspected data breach incident. It therefore stated that the time frame for the breach notification obligation commences only when the affected organization has determined that a breach is eligible for reporting. In practice, when an organization first becomes aware that an information security incident may have occurred, it may conduct a digital forensic investigation to determine precisely what happened, including whether any breach of personal information security occurred at all, before the clock begins to run on the 72-hour notification deadline. From that point, the organization must report the incident to the affected individuals and the PDPC as soon as practicable, and in any case within 72 hours.
The PDPC requires that the digital forensic investigation be completed within 30 days. However, it still allows that the investigation may continue for more than 30 days if the affected organization has documented reasons why the time taken to investigate was reasonable and expeditious.
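The two time limits described above (a 72-hour notification window that starts at the eligibility determination, and a 30-day expectation for the investigation itself) can be sketched in code. This is an illustrative model only; the function names and the choice of timestamps are assumptions, not anything prescribed by the PDPC.

```python
from datetime import datetime, timedelta

NOTIFICATION_WINDOW = timedelta(hours=72)   # runs from the eligibility determination
MAX_INVESTIGATION = timedelta(days=30)      # longer requires documented justification

def notification_deadline(determined_at: datetime) -> datetime:
    """The 72-hour clock starts when the organization determines the
    breach is eligible for reporting, not at first suspicion."""
    return determined_at + NOTIFICATION_WINDOW

def investigation_overdue(detected_at: datetime, now: datetime) -> bool:
    """Investigations should normally conclude within 30 days of detection."""
    return now - detected_at > MAX_INVESTIGATION

# An incident detected on March 1 and determined reportable on March 3
# must be notified by March 6, regardless of when it was first detected.
deadline = notification_deadline(datetime(2018, 3, 3, 14, 0))
```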
Both the Centre for Information Policy and Leadership and Hunton & Williams LLP filed public comments in the PDPC’s consultation.
On January 18, 2018, the Centre for Information Policy Leadership (“CIPL”) at Hunton & Williams LLP submitted formal comments to the Article 29 Working Party (the “Working Party”) on its updated Working Documents, which include a table with the elements and principles found in Binding Corporate Rules (“BCRs”) and Processor Binding Corporate Rules (the “Working Documents”). The Working Documents were adopted by the Working Party on October 3, 2017, for public consultation.
In its comments, CIPL recommends several changes or clarifications the Working Party should incorporate in its final Working Documents.
Comments Applicable to Both Controller and Processor BCRs
- The Working Documents should clarify that, with respect to the BCR application, providing confirmation of assets to pay for damages resulting from a BCR-breach by members outside of the EU does not extend to fines under the GDPR. Additionally, the Working Party should clarify that access to sufficient assets, such as a guarantee from the parent company, is sufficient to provide valid confirmation.
- The Working Document should confirm that bringing existing BCRs in line with the GDPR requires updating the BCRs in line with the Working Documents and sending the updated BCRs to the respective supervisory authority.
- The Working Party should clarify that companies currently in the process of BCR approval through a national mutual recognition procedure should be treated the same as fully approved BCRs, and must simply update the BCRs in line with the GDPR.
Comments Applicable to BCR Controllers (“BCR-C”) Only
- The Working Party should clarify that companies with approved BCR-C do not have to implement additional controller-processor contracts reiterating the processors’ obligations under Article 28(3) of the GDPR with respect to internal transfers between controllers and processors within the same group of companies.
- The Working Party should also clarify that BCRs only need to include the requirement that individuals benefitting from third-party beneficiary rights be provided with the information as required by Article 13 and 14 of the GDPR. The BCRs do not need to restate the actual elements of these provisions.
Comments Applicable to BCR Processors Only
- The Working Documents should emphasize that an individual’s authority to enforce the duty of a processor to cooperate with the controller is limited to situations where cooperation is required to allow the individual to exercise their rights or to make a complaint.
- The Working Party should remove the requirement that processors must open their facilities for audit, and clarify that the completion of questionnaires or the provision of independent audit reports are sufficient to meet the requirements of Article 28(3)(h). Furthermore, the Working Documents should make clear that certifications can be used in accordance with Article 28(5) to demonstrate compliance with Article 28(3)(h).
General BCR Recommendations
- The Working Party should clarify that BCR-approved companies are deemed adequate and transfers between two BCR-approved companies (either controllers or processors) or transfers from any controller (not BCR-approved) to a BCR-approved controller are permitted.
- The status for existing and UK-approved BCRs post-Brexit should be clarified, along with the future role of the UK ICO with regard to BCRs and the situation for new BCR applications post-Brexit.
- The Working Party should highlight the importance of BCR interoperability with other transfer mechanisms, and propose that the EU Commission consider and promote such interoperability through appropriate means and processes.
- The Working Party should recommend the EU Commission consider third-party BCR approval by approved certification bodies or “Accountability Agents” and/or a self-certified system for BCRs, which would streamline the BCR approval process and facilitate faster processing times.
To read the above recommendations in more detail, along with all of CIPL’s other recommendations on BCRs, view the full paper.
CIPL’s comments were developed based on input by the private sector participants in CIPL’s ongoing GDPR Implementation Project, which includes more than 90 individual private sector organizations. As part of this initiative, CIPL will continue to provide formal input about other GDPR topics the Working Party prioritizes.
On January 29, 2018, the Centre for Information Policy Leadership (“CIPL”) at Hunton & Williams LLP submitted formal comments to the Article 29 Working Party (the “Working Party”) on its Guidelines on Transparency (the “Guidelines”). The Guidelines were adopted by the Working Party on November 28, 2017, for public consultation.
CIPL acknowledges and appreciates the Working Party’s emphasis on user-centric transparency and the use of layered notices to achieve full disclosure, along with its statements on the use of visualization tools and the importance of avoiding overly technical or legalistic language in providing transparency. However, CIPL also identified several areas in the Guidelines that would benefit from further clarification or adjustment.
In its comments to the Guidelines, CIPL recommends several changes or clarifications the Working Party should incorporate in its final guidelines relating to elements of transparency under the EU GDPR, information to be provided to the data subject, information related to further processing, exertion of data subjects’ rights, and exceptions to the obligation to provide information.
Some key recommendations include:
- Clear and Concise yet Comprehensive Disclosure: The Guidelines should more clearly acknowledge the tension between asking for clear and concise notices and including all of the information required by the GDPR and recommended by the Working Party. CIPL believes Articles 13 and 14 of the GDPR already require sufficient information, and the risk-based approach gives organizations the opportunity to prioritize which information should be provided.
- Consequences of Processing: The Working Party should amend its “best practice” recommendation that controllers “spell out” what the most important consequences of the processing will be. The Working Party should clarify that in providing information beyond what is required under the GDPR, controllers must be able to exercise their judgment on whether and how to provide such information.
- Use of Certain Qualifiers: CIPL recommends removing the Working Party’s statement that qualifiers such as “may,” “might,” “some,” “often” and “possible” be avoided in privacy statements. In some cases these qualifiers are more accurate than definitive language. For instance, saying certain processing “will occur” is less accurate than “may occur” when it is not certain whether the processing will in fact occur.
- Proving Identity Orally: The Guidelines state that information may be provided orally to a data subject on request, provided that their identity is proven by other non-oral means. CIPL believes the Working Party should revise this statement, as voice recognition or verbal identity confirming questions and answers are valid mechanisms of proving one’s identity orally.
- Updates to Privacy Notices: The Working Party should remove its suggestion that any changes to an existing privacy statement or notice must be notified to individuals. CIPL believes communications to individuals should be required only for changes having a significant impact.
- Reminder Notices: The Working Party should remove the recommendation that the controller send reminder notices to individuals when processing occurs on an ongoing basis, even when they have already received the information. This is not required by the GDPR and individuals may feel overwhelmed or frustrated by such constant reminders. Individuals should, however, be able to easily pull such information from an accessible location.
- New Purposes of Processing: The Guidelines should amend the statement and example suggesting that, in addition to providing individuals new information in connection with a new purpose of processing, the controller, as a matter of best practice, should re-provide all of the information previously received under the notice requirement. CIPL believes this could distract individuals from any new key information, which could undermine transparency; it should be up to the data controller to determine whether re-providing the information would be useful.
- Active Steps: The Working Party should clarify its statement that individuals should not have to take “active steps” to obtain information covered by Articles 13 and 14 of the GDPR, to the effect that clicking links to access notices would not constitute taking an “active step.”
- Compatibility Analyses: The Working Party states that in connection with processing for compatibility purposes, organizations should provide individuals with “further information on the compatibility analysis carried out under Article 6(4).” CIPL believes such a requirement undermines transparency, as the information would provide little benefit to an individual’s understanding of the organization’s data processing, and would burden organizations, which would have to reform, redact, compose and deliver such information.
- Disproportionate Efforts: The Guidelines should acknowledge that the disproportionate efforts clause (Article 14(5)(b)) can be relied upon by controllers for purposes other than archiving in the public interest, scientific or historical research purposes or for statistical purposes (e.g., confirming identity or preventing fraud). The Working Party should also revise its statement that controllers who rely on Article 14(5)(b) should have to carry out a balancing exercise to assess the effort of the controller to provide the information versus the impact on the individual if not provided with the information. The GDPR does not require this and the disproportionality at issue refers to the disproportionality between the effort associated with the provision of such information and the intended data use.
To read the above recommendations in more detail along with all of CIPL’s other recommendations on transparency, view the full paper.
CIPL’s comments were developed based on input by the private sector participants in CIPL’s ongoing GDPR Implementation Project, which includes more than 90 individual private sector organizations. As part of this initiative, CIPL will continue to provide formal input about other GDPR topics the Working Party prioritizes.
On January 28, 2018, Facebook published its privacy principles and announced that it will centralize its privacy settings in a single place. The principles were announced in a newsroom post by Facebook’s Chief Privacy Officer and include:
- “We give you control of your privacy.”
- “We help people understand how their data is used.”
- “We design privacy into our products from the outset.”
- “We work hard to keep your information secure.”
- “You own and can delete your information.”
- “Improvement is constant.”
- “We are accountable.”
In conjunction with the publication of the privacy principles, Facebook also announced the creation of a new privacy center and an educational video campaign for its users that focuses on advertising, reviewing and deleting old posts, and deleting accounts. The videos will appear in users’ news feeds and will be refreshed throughout the year.
Recently, the General Services Administration (“GSA”) announced its plan to upgrade its cybersecurity requirements in an effort to build upon the Department of Defense’s new cybersecurity requirements, DFARS Section 252.204-7012, which became effective on December 31, 2017.
The first proposed rule, GSAR Case 2016-G511 “Information and Information Systems Security,” will require that federal contractors “protect the confidentiality, integrity and availability of unclassified GSA information and information systems from cybersecurity vulnerabilities and threats in accordance with the Federal Information Security Modernization Act of 2014 and associated Federal cybersecurity requirements.” The proposed rule will apply to “internal contractor systems, external contractor systems, cloud systems and mobile systems.” It will mandate compliance with applicable controls and standards, such as those of the National Institute of Standards and Technology, and will update existing GSAR clauses 552.239-70 and 552.239-71, which address data security issues. Contracting officers will be required to include these cybersecurity requirements into their statements of work. The proposed rule is scheduled to be released in April 2018. Thereafter, the public will have 60 days to offer comments.
The second proposed rule, GSAR Case 2016-G515 “Cyber Incident Reporting,” will “update requirements for GSA contractors to report cyber incidents that could potentially affect GSA or its customer agencies.” Specifically, contractors will be required to report any cyber incident “where the confidentiality, integrity or availability of GSA information or information systems are potentially compromised.” The proposed rule will establish a timeframe for reporting cyber incidents, detail what the report must contain and provide points of contact for filing the report. The proposed rule is intended to update the existing cyber reporting policy within GSA Order CIO-9297.2 that did not previously undergo the rulemaking process. Additionally, the proposed rule will establish requirements for contractors to preserve images of affected systems and impose training requirements for contractor employees. The proposed rule is scheduled to be released in August 2018, and the public will have 60 days to comment on the proposed rule.
Although the proposed rules have not yet been published, it is anticipated that they will share similarities with the Department of Defense’s new cybersecurity requirements, DFARS Section 252.204-7012.
On January 30, 2018, the UK Court of Appeal ruled that the Data Retention and Investigatory Powers Act (“DRIPA”) was inconsistent with EU law. The judgment, pertaining to the now-expired act, is relevant to current UK surveillance practices and is likely to result in major amendments to the Investigatory Powers Act (“IP Act”), the successor of DRIPA.
In the instant case, the Court of Appeal ruled that DRIPA was inconsistent with EU law as it permitted access to communications data when the objective was not restricted solely to fighting serious crime. Additionally, the Court held that DRIPA lacked adequate safeguards since it permitted access to communications data without subjecting such access to a prior review by a court or independent administrative authority. The ruling follows the judgment of the Court of Justice of the European Union (“CJEU”), to which the Court of Appeal referred questions regarding the instant case in 2015.
The IP Act, which came into force in 2017, largely replicates and further expands upon the powers contained in DRIPA. Though the present judgment does not change the way UK law enforcement agencies can currently access communications data for the detection and disruption of crime under the IP Act, the UK government is currently facing a separate case challenging the IP Act in the High Court, due to be heard in February 2018.
Reacting to the 2016 ruling of the CJEU, the UK government in late 2017 published a consultation document and proposed amendments to the IP Act which aimed to address the judgment of the CJEU. The proposed changes were deemed to fall short of the CJEU ruling by Liberty, the UK human rights organization bringing the proceedings against the IP Act in the High Court.
The present case and the future ruling of the High Court on the IP Act could impact the UK significantly when Brexit negotiations turn to discussions on adequacy and data sharing between the UK and the EU. UK surveillance legislation that is incompatible with EU data protection law could bring a halt to data flows between EU and UK law enforcement agencies and organizations.
On January 23, 2018, the New York Attorney General announced that Aetna Inc. (“Aetna”) agreed to pay $1.15 million and enhance its privacy practices following an investigation alleging it risked revealing the HIV status of 2,460 New York residents by mailing them information in transparent window envelopes. In July 2017, Aetna sent HIV patients information on how to fill their prescriptions using envelopes with large clear plastic windows, through which patient names, addresses, claims numbers and medication instructions were visible. Through this, the HIV status of some patients was visible to third parties. The letters were sent to notify members of a class action lawsuit that, pursuant to that suit’s resolution, they could purchase HIV medications at physical pharmacy locations, rather than via mail order delivery.
In addition to the monetary penalty, the settlement also requires Aetna to change its standard mailing practices and hire an independent consultant to oversee its compliance with the terms of the settlement. A spokesperson for Aetna said that the company is “implementing measures designed to ensure something like this does not happen again as part of our commitment to best practices in protecting sensitive health information.”
As we move toward that new world, our approach to security must adapt. Humans chasing down anomalies by searching through logs is an approach that will not scale and will not suffice. I included a reference in the article to a book called Afterlife. In it, the protagonist, FBI Agent Will Brody says "If you never change tactics, you lose the moment the enemy changes theirs." It's a fitting quote. Not only must we adapt to survive, we need to deploy IT on a platform that's designed for constant change, for massive scale, for deep analytics, and for autonomous security. New World, New Rules.
Here are a few excerpts:
Our environment is transforming rapidly. The assets we're protecting today look very different than they did just a few years ago. In addition to owned data centers, our workloads are being spread across multiple cloud platforms and services. Users are more mobile than ever. And we don’t have control over the networks, devices, or applications where our data is being accessed. It’s a vastly distributed environment where there’s no single, connected, and controlled network. Line-of-Business managers purchase compute power and SaaS applications with minimal initial investment and no oversight. And end-users access company data via consumer-oriented services from their personal devices. It's grown increasingly difficult to tell where company data resides, who is using it, and ultimately where new risks are emerging. This transformation is on-going and the threats we’re facing are morphing and evolving to take advantage of the inherent lack of visibility.
Here's the good news: The technologies that have exacerbated the problem can also be used to address it. On-premises SIEM solutions based on appliance technology may not have the reach required to address today's IT landscape. But, an integrated SIEM+UEBA designed from the ground up to run as a cloud service and to address the massively distributed hybrid cloud environment can leverage technologies like machine learning and threat intelligence to provide the visibility and intelligence that is so urgently needed.
Machine Learning (ML) mitigates the complexity of understanding what's actually happening and of sifting through massive amounts of activity that may otherwise appear to humans as normal. Modern attacks leverage distributed compute power and ML-based intelligence. So, countering those attacks requires a security solution with equal amounts of intelligence and compute power. As Larry Ellison recently said, "It can't be our people versus their computers. We're going to lose that war. It's got to be our computers versus their computers."
Click to read the full article: New World, New Rules: Securing the Future State.
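To make the point about machines sifting through activity concrete, here is a deliberately minimal anomaly-scoring sketch. It is a toy illustration of the general idea (flagging deviations from a baseline), not a depiction of how any particular SIEM or UEBA product works; the login-count data and the 3-sigma threshold are assumptions for the example.

```python
import statistics

def zscore(baseline, value):
    """Score a new observation against a historical baseline.
    Values many standard deviations from the mean warrant a closer look."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return abs(value - mean) / stdev

# Hypothetical daily login counts for one account over the past week.
baseline = [12, 14, 11, 13, 12]

suspicious = zscore(baseline, 95) > 3   # a spike far outside the usual range
normal = zscore(baseline, 13) > 3       # well within ordinary variation
```

Real deployments replace this single statistic with models trained across users, entities and time, but the underlying shift is the same: the machine, not the analyst, does the first pass over the logs.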
On January 25, 2018, the Standardization Administration of China published the full text of the Information Security Technology – Personal Information Security Specification (the “Specification”). The Specification will come into effect on May 1, 2018. The Specification is voluntary, but could become influential within China because it establishes benchmarks for the processing of personal information by a wide variety of entities and organizations. In effect, the Specification constitutes a best practices guide for the collection, retention, use, sharing and transfer of personal information, and for the handling of related information security incidents.
The Specification divides personal information into two categories: personal information and sensitive personal information. “Sensitive personal information” includes personal information such as financial information, identifying information (such as an ID card, social insurance card, passport or driver’s license) and biological identifying information. The Specification provides specific requirements for the collection and use of sensitive personal information, as well as a sample functional interface with a data subject which could be incorporated by an enterprise in its products or services for the collection of sensitive personal information. The sample functional interface is a template for an interactive web page or software that is designed in accordance with the Specification, shows information such as the purpose, scope and transfer of personal information, and contains a checkbox to obtain consent.
According to the Specification, personal information must be retained for only the minimum extent necessary, and must be deleted or anonymized after the expiration of the retention period. Encryption measures must be adopted whenever sensitive personal information is retained. When a personal information controller ceases to provide a product or service, it must inform the relevant data subjects and must delete or anonymize all personal information retained in relation to the data subjects.
When an enterprise uses personal information, it must adopt controls on access and restrictions on the display of the information. The use of personal information must not go beyond the purpose stated when collecting it. Personal data subjects have the right to request correction, deletion and copies of personal information that pertains to them, as well as the right to withdraw their consent to the collection and use of the personal information. An enterprise must respond to the request of a data subject for correction, deletion or copying once it has verified his or her identity.
When an enterprise engages a third party to process personal information, it must conduct a security assessment to ensure that the processor possesses sufficient security capabilities. The enterprise must also require the third party to safeguard the personal information, and must also supervise the third party’s processing of the personal information. If an enterprise needs to share or transfer personal information, it must conduct a security assessment and adopt security measures, inform the data subjects of the purpose of the sharing or transfer and of the categories of recipients, and obtain the consent of the data subjects.
An enterprise must formulate a contingency plan for security incidents that involve personal information and conduct emergency drills at least once a year. In the event of an actual data breach incident, the enterprise must inform the affected data subjects by email, letter, telephone or other reasonable and efficient method. The notice must include information such as the substance of the incident and its impact, remedial measures that have been taken or will be taken, suggestions for the data subjects on how to reduce risks, remedial measures made available to data subjects, and the responsible person and his or her contact information.
The Specification requires entities to clarify which of their departments and staff would be responsible for the protection of personal information, and to establish a system to evaluate impacts on the security of personal information. Enterprises must also implement staff training and audit the security measures which they have adopted to protect personal information.
This cheat sheet offers advice for product managers of new IT solutions at startups and enterprises. To print it, use the one-page PDF version; you can also edit the Word version to customize it for your own needs.
Responsibilities of a Product Manager
- Determine what to build, not how to build it.
- Envision the future of the product domain.
- Align product roadmap to business strategy.
- Define specifications for solution capabilities.
- Prioritize feature requirements, defect correction, technical debt work and other development efforts.
- Help drive product adoption by communicating with customers, partners, peers and internal colleagues.
- Participate in the handling of issue escalations.
- Sometimes take on revenue or P&L responsibilities.
Defining Product Capabilities
- Understand gaps in the existing products within the domain and how customers address them today.
- Understand your firm’s strengths and weaknesses.
- Research the strengths and weaknesses of your current and potential competitors.
- Define the smallest set of requirements for the initial (or next) release (minimum viable product).
- When defining product requirements, balance long-term strategic needs with short-term tactical ones.
- Understand your solution’s key benefits and unique value proposition.
Strategic Market Segmentation
- Market segmentation often accounts for geography, customer size or industry verticals.
- Devise a way of grouping customers based on the similarities and differences of their needs.
- Also account for the similarities in your capabilities, such as channel reach or support abilities.
- Determine which market segments you’re targeting.
- Understand similarities and differences between the segments in terms of needs and business dynamics.
- Consider how you’ll reach prospective customers in each market segment.
Engagement with the Sales Team
- Understand the nature and size of the sales force aligned with your product.
- Explore the applicability and nature of a reseller channel or OEM partnerships for product growth.
- Understand sales incentives pertaining to your product and, if applicable, attempt to adjust them.
- Look for misalignments, such as recurring SaaS product pricing vs. traditional quarterly sales goals.
- Assess what other products are “competing” for the sales team’s attention, if applicable.
- Determine the nature of support you can offer the sales team to train or otherwise support their efforts.
- Gather the sales team’s positive and negative feedback regarding the product.
- Understand which market segments and use-cases have gained the most traction in the product’s sales.
The Pricing Model
- Understand the value that customers in various segments place on your product.
- Determine your initial costs (software, hardware, personnel, etc.) related to deploying the product.
- Compute your ongoing costs related to maintaining the product and supporting its users.
- Decide whether you will charge customers recurring or one-time (plus maintenance) fees for the product.
- Understand the nature of customers’ budgets, including any CapEx vs. OpEx preferences.
- Define the approach to offering volume pricing discounts, if applicable.
- Define the model for compensating the sales team, including resellers, if applicable.
- Establish the pricing schedule, setting the price based on perceived value.
- Account for the minimum desired profit margin.
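The cost and margin bullets above can be sketched as a simple price-floor calculation. This is a minimal illustration with hypothetical figures and a hypothetical helper name, not a prescribed pricing method: it assumes one-time deployment costs are recovered over the expected customer lifetime and that a minimum profit margin applies to total annual cost.

```python
# Sketch of a price-floor check for a recurring-fee product.
# All figures and the function name are illustrative assumptions.

def minimum_annual_price(deploy_cost, annual_run_cost, term_years, min_margin):
    """Lowest annual recurring price that still meets the desired margin."""
    # Spread one-time deployment cost over the expected term, then add
    # the ongoing cost of maintaining and supporting the product.
    annual_cost = deploy_cost / term_years + annual_run_cost
    # A margin of, say, 0.40 means cost may be at most 60% of the price.
    return annual_cost / (1 - min_margin)

# Hypothetical inputs: $30,000 to deploy, $12,000/year to operate,
# a 3-year expected customer lifetime, and a 40% minimum margin.
floor = minimum_annual_price(30_000, 12_000, 3, 0.40)
```

Whatever price perceived value supports, it should not fall below this floor; if it does, the segment, the cost structure, or the delivery model needs rethinking.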
Product Delivery and Operations
- Understand the intricacies of deploying the solution.
- Determine the effort required to operate, maintain and support the product on an ongoing basis.
- Determine the technical steps, personnel, tools, support requirements and the associated costs.
- Document the expectations and channels of communication between you and the customer.
- Establish the necessary vendor relationships for product delivery, if applicable.
- Clarify which party in the relationship has which responsibilities for monitoring, upgrades, etc.
- Allocate the necessary support, R&D, QA, security and other staff to maintain and evolve the product.
- Obtain the appropriate audits and certifications.
Product Management at Startups
- Ability and need to iterate faster to get feedback
- Willingness and need to take higher risks
- Lower bureaucratic burden and red tape
- Much harder to reach customers
- Often fewer resources to deliver on the roadmap
- Fluid designation of responsibilities
Product Management at Large Firms
- An established sales organization, which provides access to customers
- Potentially conflicting priorities and incentives among groups and individuals within the organization
- Rigid organizational structure and bureaucracy
- Potentially easier access to funding for sophisticated projects and complex products
- Possibly easier access to the needed expertise
- Well-defined career development roadmap
Authored by Lenny Zeltser, who has been responsible for product management of information security solutions at companies large and small. This cheat sheet, version 1.0, is released under the Creative Commons v3 “Attribution” License.
On January 9, 2018, the FTC issued a paper recapping the key takeaways from the FTC’s and National Highway Traffic Safety Administration’s June 2017 workshop on privacy and security issues involving connected cars. The workshop featured representatives from consumer groups, industry, government and academia.
Below are some of the key takeaways from the FTC’s paper:
- Many companies throughout the connected car ecosystem will collect data from vehicles for various purposes, including (1) car manufacturers (such as geolocation data in the event of a crash); (2) manufacturers of infotainment systems (such as data about consumers to allow them to use apps or connect to the Internet); and (3) third-party dongle providers (such as information about driving habits and diagnostic information). The types of data collected could include aggregate data, non-sensitive personal data and sensitive personal data.
- Consumers may be concerned about secondary, unexpected uses of their data, including the potential selling of their data to third parties, or the use of their data (such as vehicle app usage data) for targeted advertising purposes.
- Workshop participants indicated that addressing consumer privacy concerns is critical to consumer acceptance and adoption of the emerging technologies behind connected cars. Participants suggested that different approaches, such as with respect to consumer choice (i.e., the ability to opt out), may be needed based on whether the collected consumer data is safety-critical or not.
- Connected cars pose cybersecurity risks that potentially can be exploited by hackers, and there are various motivations for such attacks (such as monetary gain and nation-state crime). Participants suggested some best practices to help mitigate these risks, including with respect to information sharing, networking design, risk assessment and standard setting.
On January 8, 2018, the FTC announced an agreement with electronic toy manufacturer VTech Electronics Limited and its U.S. subsidiary, settling charges that VTech violated the Children’s Online Privacy Protection Act (“COPPA”) by collecting personal information from hundreds of thousands of children without providing direct notice or obtaining their parents’ consent, and by failing to take reasonable steps to secure the data it collected. Under the agreement, VTech will (1) pay a $650,000 civil penalty; (2) implement a comprehensive data security program, subject to independent audits for 20 years; and (3) comply with COPPA. This is the FTC’s first COPPA case involving connected toys and the Internet of Things.
What were the hottest privacy and cybersecurity topics for 2017? Our posts on the EU General Data Protection Regulation (“GDPR”), EU-U.S. Privacy Shield, and the U.S. executive order on cybersecurity led the way in 2017. Read our top 10 posts of the year.
Article 29 Working Party Releases GDPR Action Plan for 2017
On January 16, 2017, the Article 29 Working Party (“Working Party”) published further information about its Action Plan for 2017, which sets forth the Working Party’s priorities and objectives in the context of implementation of the GDPR for the year ahead. The Action Plan closely follows earlier GDPR guidance relating to Data Portability, the appointment of Data Protection Officers and the concept of the Lead Supervisory Authority, which were published together by the Working Party on December 13, 2016. Continue reading…
Privacy Shield: Impact of Trump’s Executive Order
On January 25, 2017, President Trump issued an Executive Order entitled “Enhancing Public Safety in the Interior of the United States.” While the Order is primarily focused on the enforcement of immigration laws in the U.S., Section 14 declares that “Agencies shall, to the extent consistent with applicable law, ensure that their privacy policies exclude persons who are not United States citizens or lawful permanent residents from the protections of the Privacy Act regarding personally identifiable information.” This provision has sparked a firestorm of controversy in the international privacy community, raising questions regarding the Order’s impact on the Privacy Shield framework, which facilitates lawful transfers of personal data from the EU to the U.S. While political ramifications are certainly plausible from an EU-U.S. perspective, absent further action from the Trump Administration, Section 14 of the Order should not impact the legal viability of the Privacy Shield framework. Continue reading…
CNIL Publishes Six Step Methodology and Tools to Prepare for GDPR
On March 15, 2017, the French data protection authority (the “CNIL”) published a six step methodology and tools for businesses to prepare for the GDPR that will become applicable on May 25, 2018. Continue reading…
German DPA Publishes English Translation of Standard Data Protection Model
On April 13, 2017, the North Rhine-Westphalia State Commissioner for Data Protection and Freedom of Information published an English translation of the draft Standard Data Protection Model. The SDM was adopted in November 2016 at the Conference of the Federal and State Data Protection Commissioners. Continue reading…
President Trump Signs Executive Order on Cybersecurity
On May 11, 2017, President Trump signed an executive order (the “Order”) that seeks to improve the federal government’s cybersecurity posture and better protect the nation’s critical infrastructure from cyber attacks. The Order also seeks to establish policies for preventing foreign nations from using cyber attacks to target American citizens. Read the full text of the Order.
Bavarian DPA Tests GDPR Implementation of 150 Companies
Article 29 Working Party Releases Opinion on Data Processing at Work
The Working Party recently issued its Opinion on data processing at work (the “Opinion”). The Opinion, which complements the Working Party’s previous Opinion 08/2001 on the processing of personal data in the employment context and Working document on the surveillance of electronic communications in the workplace, seeks to provide guidance on balancing employee privacy expectations in the workplace with employers’ legitimate interests in processing employee data. The Opinion is applicable to all types of employees and not just those under an employment contract (e.g., freelancers). Continue reading…
New Data Protection Enforcement Provisions Take Effect in Russia
As reported in BNA Privacy Law Watch, on July 1, 2017, a new law took effect in Russia allowing for administrative enforcement actions and higher fines for violations of Russia’s data protection law. The law, which was enacted in February 2017, imposes higher fines on businesses and corporate executives accused of data protection violations, such as unlawful processing of personal data, processing personal data without consent, and failure of data controllers to meet data protection requirements. Whereas previously fines were limited to 300 to 10,000 rubles ($5 to $169 USD), under the new law, available fines for data protection violations range from 15,000 to 75,000 rubles ($254 to $1,269 USD) for businesses and 3,000 to 20,000 rubles ($51 to $338 USD) for corporate executives. Continue reading…
CNIL Publishes GDPR Guidance for Data Processors
On September 29, 2017, the French Data Protection Authority published a guide for data processors to implement the new obligations set by the GDPR. Continue reading…
Article 29 Working Party Releases Guidelines on Automated Individual Decision-Making and Profiling
On October 17, 2017, the Working Party issued Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 (the “Guidelines”). The Guidelines aim to clarify the GDPR’s provisions that address the risks arising from profiling and automated decision-making. Continue reading…
In its decision, the CNIL found that WhatsApp violated the French Data Protection Act of January 6, 1978, as amended (Loi relative à l’informatique, aux fichiers et aux libertés) by: (1) sharing data with Facebook without an appropriate legal basis, (2) not providing sufficient notice to the relevant data subjects, and (3) not cooperating with the CNIL during the investigation.
Lack of Legal Basis
While WhatsApp shares its users’ data with Facebook for both business intelligence and security purposes, the CNIL focused its analysis on the “business intelligence” purpose. WhatsApp represented that such sharing was based on consent and legitimate interest as legal grounds. In its analysis of both legal bases, the CNIL concluded that:
- WhatsApp cannot rely on consent to share users’ data with Facebook for “business intelligence” purposes on the grounds that: (1) the consent is not specific enough, and only refers to the messaging service and improving Facebook’s services, and (2) the consent is not freely given, as the only way for a user to object to such processing is to uninstall the application.
- WhatsApp cannot rely on a legitimate interest to share users’ data with Facebook for “business intelligence” purposes because the company has not implemented sufficient safeguards to preserve users’ interests or fundamental rights. There is no mechanism for the users to refuse the data sharing while continuing to use the application.
Lack of Notice to Data Subjects
The CNIL found that WhatsApp did not provide sufficient notice on the registration form to data subjects about sharing personal data with Facebook.
Lack of Cooperation with the CNIL
The CNIL found that WhatsApp did not provide the necessary cooperation during the investigation, for example by refusing to provide the CNIL with data pertaining to a sample of French users on the basis that such a request conflicts with U.S. law.
The CNIL’s Requests
In its formal notice, the CNIL requires WhatsApp to, within one month:
- cease sharing users’ data with Facebook for the purpose of “business intelligence” without a legal basis;
- provide a notice to data subjects that complies with the French Data Protection Act, and informs them of the purposes for which the data is shared with Facebook and their rights as data subjects;
- provide the CNIL with all the sample personal data requested (i.e., all data shared by WhatsApp with Facebook for a sample of 1,000 French users); and
- confirm that the company has complied with all of the CNIL’s requests above within the one month deadline.
If WhatsApp fails to comply with the terms of the formal notice within one month, the CNIL may appoint an internal investigator, who may propose that the CNIL impose sanctions against the company for violations of the French Data Protection Act.
On December 12, 2017, the Article 29 Working Party (“Working Party”) published its guidelines on transparency under Regulation 2016/679 (the “Guidelines”). The Guidelines aim to provide practical guidance and clarification on the transparency obligations introduced by the EU General Data Protection Regulation (“GDPR”). The transparency obligations require controllers to provide certain information to data subjects regarding the processing of their personal data. Key takeaways from the Guidelines include:
- “Clear and plain language” must be used: Information should be provided in a manner that is easy to understand and avoids complex sentences and language structures. Language must also be unambiguous, and avoid abstract terminology or equivocal language (e.g., conditional tenses and qualifying terms, such as “may,” “might” or “some”). In particular, where information is provided to children or other vulnerable people, the vocabulary, style and tone of the language must be adapted appropriately.
- Information must be “in writing or by other means”: Where a controller maintains a website, the Working Party recommends using electronically layered privacy notices. Other electronic means can be used to provide information to data subjects, including “just-in-time” contextual pop-up notices, 3D touch or “hover-over” notices and privacy dashboards. The chosen method must be appropriate for the circumstances.
- Information “may be provided orally”: Controllers may provide information orally if the identity of the data subject is clear. This does not apply to the provision of general privacy information to prospective customers or users whose identity currently cannot be verified. Oral information may be provided on a person-by-person basis or by automated means. Where automated means are adopted, the Working Party recommends the implementation of measures that allow data subjects to re-listen to the information, for example, through pre-recorded messages that can be replayed. In this context, controllers must maintain records and be able to demonstrate that (1) the data subject requested that information be provided orally, (2) where necessary, the identity of the data subject was verified, and (3) the information was in fact provided to the data subject.
- Information must be provided free of charge: Controllers are prohibited from charging fees for the provision of processing information to data subjects. The provision of information also cannot be made conditional upon entry into a financial transaction.
- Content of the notice: With respect to the content of information to be provided to data subjects, the Guidelines refer to Articles 13 and 14 of the GDPR and the Annex to the Guidelines, which list the categories of information that must be included in the notices. The Working Party also clarifies that all categories of information to be provided pursuant to Articles 13 and 14 of the GDPR are of equal importance. The Working Party recommends that controllers provide data subjects with an overview of the consequences of the processing as it affects them, in addition to the information prescribed by the Articles.
- Changes to the notice: The Guidelines emphasize that the transparency requirements apply throughout the processing lifecycle. Any subsequent changes to a privacy notice must be communicated to data subjects. In this respect, the Guidelines recommend that controllers explain to data subjects any likely impact that the changes may have on them. Where processing occurs on an ongoing basis, controllers are recommended to inform and periodically remind data subjects of the scope of the data processing.
- Timing: Information must be provided to data subjects at the commencement phase of the processing cycle when personal data is obtained and, in the case of personal data that is obtained indirectly, within a reasonable period (and no later than one month) following the receipt of the personal data. Where personal data is obtained indirectly and is to be used for communications with data subjects, information must be provided, at the latest, at the time of the first communication, but in any event within one month of receipt.
- Exceptions to the obligation to provide information: The Guidelines explain that exceptions to the obligation to provide information to data subjects about the processing of their personal data must be interpreted and applied narrowly. In addition, the Guidelines stress the importance of accountability for controllers. Where controllers seek to rely on exceptions, then as a general rule they must be able to demonstrate the circumstances or reasons that justify reliance on those exemptions (e.g., demonstrate the reasons why providing the information would prove impossible or involve disproportionate efforts).
The Guidelines state that controllers must review all information provided to data subjects regarding the processing of their personal data prior to May 25, 2018. The Working Party is accepting comments on the Guidelines until January 23, 2018.
As reported in BNA Privacy Law Watch, on December 6, 2017, health care provider 21st Century Oncology agreed to pay $2.3 million to settle charges by the Department of Health and Human Services’ (“HHS”) Office for Civil Rights (“OCR”) that its security practices led to a data breach involving patient information. The settlement was made public in the company’s December 6, 2017, bankruptcy filing. The HHS charges stemmed from a 2015 data breach involving the compromise of Social Security numbers, medical diagnoses and health insurance information of at least 2.2 million patients. OCR found that 21st Century Oncology failed to perform risk assessments on its systems or implement effective security protocols to protect patient information. As part of the settlement, 21st Century Oncology did not admit liability but did agree, in addition to the $2.3 million payment, to undertake a revision of its information security policies and procedures and to implement certain information security measures, including risk assessments.
On December 12, 2017, the Federal Trade Commission hosted a workshop on informational injury in Washington, D.C., where industry experts, policymakers, researchers and legal professionals considered how to best characterize and measure potential injuries and resulting harms to consumers when information about them is misused or inappropriately protected.
Acting FTC Chairwoman Maureen Ohlhausen delivered opening remarks at the commencement of the day-long workshop and noted the key goals of the meeting were to (1) better identify different types of privacy injury, (2) explore frameworks for quantitatively measuring and estimating the risk of harm, and (3) better understand how consumers and businesses weigh the risks of increased exposure to privacy injuries against the benefits of personal information use. Another stated goal was to determine when FTC intervention may be warranted.
The four panel workshop began with a discussion of types of informational injuries that can and do occur in the marketplace, followed by a discussion of potential factors to consider in assessing consumer injury. Later in the afternoon, the discussion turned to business and consumer perspectives on the benefits, costs and risks of collecting and sharing data. The workshop concluded with a panel on different methods for and challenges in measuring injury.
- Injuries 101: The first panel discussed negative outcomes that arise from unauthorized access to and misuse of consumers’ personal data. The discussion included an examination of the broad range of injuries that can occur. This was not limited to common informational injuries, such as financial harms resulting from identity theft, but also included lesser known harms such as medical and biometric identity theft, doxing (which is the public release of documents people wish to keep private), stalkerware apps, algorithmic decision making, discrimination based on knowledge of sensitive data points, predictive policing and the personalization of services.
Panelists called on the FTC to take a number of measures to further study these informational risks and injuries, including studying different types of identity theft distinctly and not limiting this to one general topic, and writing reports on substantive harms that have meaningful impacts on people’s lives and the potential solutions.
More generally, panelists called for efforts to understand harms to come up with the appropriate measures and to take a multifactorial approach, considering different expertise and different victims and stakeholders. Such measures should include the creation of a clear set of societal norms for tech platforms and the development of ethical frameworks to guide information use.
- Potential Factors in Assessing Injury: The second panel discussed potential factors in assessing consumer injury, including types of injury, magnitude and the sensitivity of consumer data. Consideration was given to whether the same factors apply in both the privacy and security contexts, the risk of potential injury versus realized injury and when government intervention is warranted.
Panelists were presented with two consumer harm and injury hypotheticals (one in a privacy context, based on retail tracking and marketing, and one in a security context, based on unauthorized access to company consumer data) and asked to assess at which stage of the hypothetical they believed consumer injury was taking place. Responses varied with some noting that, in the retail tracking hypothetical, until actual harm is realized, no consumer injury has taken place, while others stated that retail tracking to determine aggregate consumer interest in a product could be enough to cause injury. Panelists were then asked at which stage of the hypotheticals they believed government intervention should occur. Some panelists stated it should occur if the information is sensitive, while others noted over-enforcement can be a deterrent to new technologies.
With respect to the data security hypothetical, panelists were asked the same question of which stage they believed injury occurred. Responses varied again, largely on similar logic, with some noting that unless actual harm is realized through the use of breached data, no injury occurs, and others taking the line that unauthorized access to consumer data alone is enough to constitute injury.
With respect to enforcement, one panelist noted that the FTC can look at these issues in a broader way than the court system. For instance, it can look at social harms in ways that courts cannot. Further, the unfairness doctrine under Section 5 of the FTC Act was mentioned as having the potential to facilitate the FTC in exploring how to assess risk and harm.
Panelists also discussed (1) the role of consumer expectations in determining whether there was injury; (2) whether there should be a distinction between the collection of information and the use of information (whereby use, but not collection, may result in injury); (3) the risks associated with the use or failure to use sensitive data; (4) the role of countervailing benefits in assessing net injury; (5) whether quantifiability of harm is an effective or sufficient criterion for cognizable injury in the privacy context; and (6) the role of the market in mediating the issue of acceptable privacy risks.
- Business and Consumer Perspectives: The third panel examined how businesses and consumers perceive and evaluate the benefits, costs and risks of data collection and sharing in light of potential benefits and injuries. The panel also discussed considerations businesses take into account when choosing privacy and data security practices, and consumer decision making regarding sharing their information.
With respect to the business perspective, one panelist noted that when businesses try to assess risk they start by looking at the benefits, and most businesses go through privacy impact assessments to mitigate risks to an acceptable level in light of benefits. Another panelist took the view that businesses overestimate the benefits of data uses and are not internalizing the risks. A third panelist noted that business perspectives vary from sector to sector.
With respect to the consumer perspective, panelists noted that consumers view data as one aspect of the transaction and are willing to pay with information rather than money. They may not, however, be aware of what disclosing their information means and consumer education efforts to date have largely been ineffective. One panelist noted that default options are extremely important because people usually do not make choices if they do not fully understand them. Too many choices, however, can lead to complexity and can overburden consumers.
The session concluded with one panelist recommending that the FTC pursue other methods than the traditional approaches of transparency, notice, choice and consent, noting that these have been tried in the past and do not work. The data economy is too complex and a constantly moving target. In addition, it has to be considered that other areas of law and regulation (e.g., environment, nutrition, conflict resolution and arbitration, etc.) make similar demands on consumer attention through transparency, thereby adding to the burden on consumers. Panelists also suggested looking at what people do rather than what they say about privacy. One panelist stated that watching the big industry players and understanding their responsible data practices is an effective path forward. It was also suggested that consumers have only so much time to make choices and that responsible and ethical information use by companies is the way forward in protecting consumers.
- Measuring Injury: The final panel examined methods for and challenges in assessing informational injuries. Discussion points included how to quantify injury and the risk of injury, as well as how consumer choice and stated preferences can be accounted for.
Panelists noted that most work in measuring injury has been conducted through surveys. A key issue raised in this regard is the privacy paradox. In a survey, most people will state they care about privacy but do not act accordingly. Actual, rather than reported, preferences may be more insightful, but one panelist cautioned that this issue is complex and that one cannot generalize that “revealed action” is a better indicator than “stated preferences.” There may be other explanations for why people act the way they do other than for privacy-related reasons. Building on this point, one panelist noted the cyber insurance market shows what customers are willing to pay for privacy, but acknowledged the limitations and rarity of personal cyber insurance coverage.
Panelists agreed that further research is needed to gain an understanding of baseline risk and that, to measure causal links, we need a better understanding of what causes injury to happen. One panelist called for more research on what prevents harm from happening. For the FTC and other government agencies going forward, panelists asked for thought to be given to new risks hitting consumers more directly, such as ransomware, and to consider appropriate remedies, taking into account the costs to the consumer. Another suggestion was to identify occasions of injury where there is no effect on individuals.
Andrew Stivers, Deputy Director for Consumer Protection in the Bureau of Economics of the FTC, delivered closing remarks and emphasized the importance to the FTC of continued work on informational injury.
The FTC will accept public comments on the workshop until January 26, 2018. Details regarding submissions can be found in the detailed public notice about the workshop.
Recently, the EU’s Article 29 Working Party (the “Working Party”) adopted guidelines (the “Guidance”) on the meaning of consent under the EU General Data Protection Regulation (“GDPR”). In this Guidance, the Working Party has confirmed that consent should be a reversible decision where a degree of control must remain with the data subject. The Guidance provides further detail on what is necessary to ensure that consent satisfies the requirements of the GDPR:
- Freely given. Consent is not valid where there is an imbalance of power or where it is conditioned on the performance of a contract. In addition, consent must be granular and given separately for each data processing operation, and there should be no detriment to the data subject if the data subject elects to withdraw his or her consent.
- Specific. Consent must be given for the processing of personal data for a specific purpose.
- Informed. To be fully informed, the following information must be provided to the data subject before consent is given: (1) the identity of the data controller; (2) the purpose of each of the processing operations for which consent is sought; (3) the personal data that will be collected based on consent; (4) the existence of the right to withdraw consent; (5) information about the use of the personal data for decisions based solely on automated processing, including profiling; and (6) if the consent relates to transfers of personal data outside the EEA, information about the possible risks of personal data transfers to third-party countries in the absence of an adequacy decision and appropriate safeguards.
- Clear affirmative action. Consent must be an unambiguous indication of the data subject’s wishes and accordingly, must be given by a statement or by a clear affirmative action which signifies agreement to the processing of personal data relating to the data subject.
Meaning of Explicit Consent
The Guidance also provides further information on the meaning of “explicit” consent, which is obtained for the processing of special categories of data, the transfer of personal data outside the EEA, or for automated individual decision-making. The Guidance states that for consent to be “explicit,” the data subject must give an express statement of his or her consent, for example, by expressly confirming his or her consent in an explicit statement. In the electronic context, an express statement of consent could be given by the data subject by filling in an electronic form, sending an email, uploading a scanned document or using an electronic signature.
The Working Party indicates that data controllers are free to develop methods to demonstrate that consent has been validly obtained in a way that is fitting with their daily operations, and the GDPR is not prescriptive in this regard. Nevertheless, to demonstrate that consent was validly given, the data controller must be able to prove, in each individual case, that a data subject has given consent. In addition, the Guidance indicates that data controllers should retain records of consent only for so long as necessary for compliance with legal obligations to which it is subject, or for the establishment, exercise or defense of legal claims. The information retained should not go beyond what is necessary to demonstrate that valid consent has been obtained.
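Since the GDPR is not prescriptive about how consent is evidenced, the record-keeping points above can be illustrated with a hypothetical consent-record structure. The field names and the `is_demonstrable` check below are purely illustrative, not mandated by the GDPR or the Guidance:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional, Tuple


@dataclass
class ConsentRecord:
    """Hypothetical record a controller might keep to demonstrate valid consent."""
    data_subject_id: str
    controller_identity: str            # who sought the consent
    processing_purpose: str             # the specific purpose consented to
    data_categories: Tuple[str, ...]    # personal data collected on this basis
    information_provided: Tuple[str, ...]  # disclosures shown before consent
    affirmative_action: str             # e.g. "ticked an unchecked box", "signed form"
    obtained_at: datetime = field(default_factory=datetime.utcnow)
    withdrawn_at: Optional[datetime] = None

    def is_demonstrable(self) -> bool:
        # A record can only evidence consent if it captures the specific
        # purpose, the affirmative action taken, and when consent was given.
        return bool(self.processing_purpose and self.affirmative_action
                    and self.obtained_at)


record = ConsentRecord(
    data_subject_id="subject-001",
    controller_identity="Example Controller Ltd",
    processing_purpose="email newsletter",
    data_categories=("email address",),
    information_provided=("controller identity", "purpose", "right to withdraw"),
    affirmative_action="ticked an unchecked box",
)
print(record.is_demonstrable())  # True
```

Consistent with the Guidance, such records should be retained only as long as needed to demonstrate compliance or to establish, exercise or defend legal claims.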
The GDPR requires parental consent in relation to the processing of children’s personal data in the context of information society services (e.g., a website or video streaming service) offered directly to children. The GDPR does not, however, specify the means that should be used to verify whether a user is a child or to obtain the consent of the child’s parents. The Guidance suggests that data controllers should adopt a proportionate approach based on the inherent risk associated with the processing and the available technology solutions. For example, the Working Party suggests that in low-risk scenarios, verification of parental responsibility by email may be sufficient, but in higher risk scenarios, more rigorous methods may be used, such as requiring the parent to make a £/$/€ 0.01 payment to the controller via a bank transaction. The Working Party recognizes, however, that verification may be challenging in a number of circumstances, and this will be taken into account when deciding whether the controller has taken “reasonable” efforts to ensure that parental consent has been obtained.
The Guidance indicates that consent which has been obtained prior to the GDPR will continue to be valid under the GDPR, provided it meets the conditions for consent required by the GDPR. The Working Party notes, in this regard, that existing consents must meet all GDPR requirements if they are to be valid, including the requirement that the data controller is able to demonstrate that consent was validly obtained. Thus, the Working Party is of the view that any consents which are presumed to be valid, but of which no record is kept, will not be valid under the GDPR. Similarly, existing consents that do not meet the “clear affirmative action” requirement under the GDPR, for example, because they were obtained by means of a pre-checked box, also will not be valid under the GDPR.
For processing operations in relation to which existing consent will no longer be valid, the Working Party recommends that data controllers (1) seek to obtain new consent in a way that complies with the GDPR, or (2) rely on a different legal basis for carrying out the processing in question. If a data controller is unable to do either of those things then the processing activities concerned should cease.
Recently, the FTC and FCC announced their intent to enter into a Memorandum of Understanding (“MOU”) under which the agencies would coordinate their efforts following the adoption of the Restoring Internet Freedom Order (the “Order”). As we previously reported, if adopted, the Order would repeal the rules put in place by the FCC in 2015 that prohibit high-speed internet service providers (“ISPs”) from stopping or slowing down the delivery of websites and from charging customers extra fees for high-quality streaming and other services.
The MOU identifies a number of ways in which the FTC and FCC will facilitate their joint and common goals, including:
- The FCC will review informal consumer complaints concerning the compliance of ISPs with the disclosure obligations set forth in the new transparency rule (and take enforcement actions against ISPs as appropriate). Those obligations include publicly providing information concerning an ISP’s practices with respect to blocking, throttling, paid prioritization and congestion management.
- The FTC will investigate and take enforcement actions against ISPs for unfair, deceptive or otherwise unlawful acts or practices, including those pertaining to the accuracy of disclosures made by ISPs pursuant to Order requirements.
- The agencies may coordinate and cooperate to develop guidance for consumers to assist in their understanding of ISP practices.
- The FCC and FTC will share consumer complaints pertaining to the Order’s requirements to the extent feasible and subject to the agencies’ requirements and policies governing, among other things, the protection of confidential, personally identifiable or non-public information.
- The FCC and FTC will share relevant investigative techniques and tools, intelligence, technical and legal expertise, and best practices in response to reasonable requests for such assistance from either agency.
The FCC is scheduled to vote on the draft Order on December 14, 2017.
On December 1, 2017, the Centre for Information Policy Leadership (“CIPL”) at Hunton & Williams LLP submitted formal comments to the Article 29 Working Party (the “Working Party”) on its Guidelines on Personal Data Breach Notification (the “Guidelines”). The Guidelines were adopted by the Working Party on October 3, 2017, for public consultation.
The EU General Data Protection Regulation (“GDPR”) introduces specific breach notification obligations for data controllers and processors. CIPL’s comments on the Guidelines commend the Working Party for drawing lessons from the experiences of other jurisdictions where breach notification has been a longstanding requirement. Additionally, CIPL’s comments welcome the discussions surrounding at what point a data controller is deemed to be aware of a personal data breach, as well as the recognition of the need to allow a phased notification of the supervisory authority in some circumstances.
CIPL’s comments, however, also emphasize several key issues that it believes need further clarification. The key recommendations for improving the Guidelines include the following:
- The definition of an “availability breach” used in the Guidelines does not fit the GDPR’s Article 4(12) definition of a “personal data breach.” The Working Party should revise the definition of an “availability breach” in the Guidelines to refer only to a breach in which there is an accidental or unlawful loss or destruction of personal data.
- The Working Party should, when discussing the term “data breach,” distinguish between a personal data breach per Article 4(12) of the GDPR and a “notifiable” personal data breach per Articles 33 and 34.
- Some of the examples discussed in the section on Risk Assessment fail to include an analysis of both the severity and the likelihood of a breach resulting in a risk to individuals’ rights and freedoms. Several of the breach examples in Annex B should be amended to reflect both aspects of the risk assessment.
- Data controllers should not be required to continuously reassess the risk posed by a past data breach in light of future technological developments long after the breach occurred. The Guidelines should clarify that such a reassessment need only be undertaken if a major breakthrough occurs immediately or within a short time period after the breach.
Criteria to Consider in Assessing Breach Risk
- To help supervisory authorities manage the number of breach notifications they receive and to enable them to deal with those reports effectively, a threshold of breach size for internal administrative purposes might be established. The threshold size (e.g., between 250 and 500 individuals) should be consistent across all jurisdictions.
- The Working Party should also consider setting a threshold for the number of individuals affected by a breach that would trigger the requirement to notify the supervisory authorities, except where the breach poses a high risk to individual rights and freedoms.
- The imputation that any data breach involving a large number of individuals or special categories of personal data should automatically be deemed to have a likelihood of risk to individuals’ rights and freedoms should be eliminated. The likelihood and severity of the risks should be considered regardless of the number affected or the type of personal data involved.
Timing of Notification
- The Guidelines should clarify that the 72-hour deadline for notification does not begin until after the data controller has completed an investigation that results in awareness that the incident involved personal data and is likely to result in a risk to individuals’ rights and freedoms.
- An organization’s decision to hire a forensics firm or engage in a technical investigation does not automatically mean the organization is aware of a notifiable breach.
- The Working Party should make clear that as part of a phased notification, a data controller may avail itself of a mechanism for keeping reported information confidential until its investigation is complete.
- The description of a data processor’s timeline to notify a data controller about a breach should be changed from “immediate” to “prompt.” Immediate notification is an unclear and unrealistic expectation that could imply that data processors should notify data controllers of any and every security incident, without any prior investigation.
- The Working Party should clarify that joint data controllers can designate responsibility for notification, or jointly notify the supervisory authority and jointly communicate with affected individuals.
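The timing distinction the comments draw, between detecting an incident and becoming “aware” of a notifiable breach, can be sketched as a simple deadline calculation. The helper function and the dates are hypothetical illustrations, not part of the Guidelines:

```python
from datetime import datetime, timedelta


def notification_deadline(awareness_time: datetime) -> datetime:
    """Article 33 GDPR: notify the supervisory authority without undue
    delay and, where feasible, within 72 hours of becoming aware of a
    notifiable personal data breach."""
    return awareness_time + timedelta(hours=72)


# Detection of an incident is not the same as awareness of a notifiable breach:
incident_detected = datetime(2017, 12, 1, 9, 0)
# ...a forensic investigation confirms two days later that personal data was
# involved and that the breach is likely to result in a risk to individuals...
aware_of_breach = datetime(2017, 12, 3, 9, 0)

deadline = notification_deadline(aware_of_breach)
print(deadline)  # 2017-12-06 09:00:00
```

On CIPL’s reading, the clock runs from `aware_of_breach`, not from `incident_detected`, since hiring a forensics firm does not by itself establish awareness.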
Supervisory Authority to Notify
- The Guidelines should clarify which supervisory authority should be notified by a data controller that does not have an establishment in the EU, and which authority should be notified by a data controller when a breach affects only individuals not located in the jurisdiction of the data controller’s lead authority.
Methods of Communication to Individuals
- The potential drawbacks of email and SMS as a sole communication method for notifying individuals about a personal data breach should be highlighted, as these communication channels are fraud-prone.
CIPL’s comments were developed based on input by the private sector participants in CIPL’s ongoing GDPR Implementation Project, which includes more than 85 individual private sector organizations. As part of this initiative, CIPL will continue to provide formal input about other GDPR topics the Working Party prioritizes.
Recently, FCC Chairman Ajit Pai released a draft of the Restoring Internet Freedom Order (the “Order”). If adopted, the Order would repeal the rules put in place by the FCC in 2015 that prohibit high-speed internet service providers (“ISPs”) from stopping or slowing down the delivery of websites and from charging customers extra fees for high-quality streaming and other services.
The Order would reverse the FCC’s 2015 landmark decision to classify broadband Internet access as a “telecommunications service” subject to the regulatory obligations under Title II of the Communications Act of 1934 (the “Act”). By reclassifying broadband Internet as an “information service” under Title I of the Act, the Order asserts that this “light-touch” approach will “promote investment and innovation” by removing the application of “laws of a bygone era” to broadband Internet service. Additionally, the Order would require ISPs to disclose their network management practices, performance and commercial terms of service to consumers, to provide individual consumers the ability to decide what broadband service best meets their needs.
The Order also would return jurisdiction to regulate broadband privacy and data security practices to the FTC, which, according to the Order, would “enable the FTC to apply its extensive privacy and data security expertise to provide the uniform online privacy protections that consumers expect and deserve.” In 2016, the FCC adopted Broadband Consumer Privacy Rules that required ISPs to provide consumers with increased choice, transparency and security over their personal information. Earlier this year, Congress voted to nullify the Privacy Rules, prohibiting the FCC from adopting anything “substantially similar.” Acting FTC Chairman Maureen K. Ohlhausen issued a statement in response to the draft Order, stating that the FTC “stands ready to protect broadband subscribers from anticompetitive, unfair, or deceptive acts and practices just as we protect consumers in the rest of the Internet ecosystem.”
The FCC is scheduled to vote on the draft Order on December 14, 2017.
Recently, the Federal Trade Commission released the final agenda for a workshop being held on December 12, 2017, that will address the various consumer injuries that result from the unauthorized access to or misuse of consumers’ personal information.
Following opening remarks by Acting FTC Chairman Maureen K. Ohlhausen, the workshop will include four panel discussions on (1) the various types of injuries that result from information security incidents; (2) the factors used to assess such “informational injuries,” including the type and magnitude of the injury and the sensitivity of personal information involved in the incident; (3) how businesses and consumers evaluate the costs and benefits of sharing their information in light of potential benefits and injuries; and (4) the different methods used to assess and quantify informational injury and the challenges associated with such methods.
The workshop is open to the public and will be webcast live on the FTC’s website. Additional information on the workshop is available on the FTC’s website.
On November 23, 2017, the Australian Attorney-General’s Department announced that it will move forward with an application to participate in the APEC Cross Border Privacy Rules (“CBPR”) system. The announcement follows comments received from a July 2017 consultation by the Australian Government regarding the implications of Australia’s possible participation in the system. Over the next months, the Attorney-General’s Department will work with the Office of the Australian Information Commissioner and businesses to implement the CBPR system requirements.
Australia’s announcement marks the third major development for the CBPR system in 2017. South Korea officially joined in June and Singapore submitted its notice of intent to join the APEC CBPR and the APEC Privacy Recognition for Processors System in July.
The APEC CBPR system is a regional, multilateral, cross-border data transfer mechanism and enforceable privacy code of conduct developed for businesses by the 21 APEC member economies. The CBPRs implement the nine high-level APEC Privacy Principles set forth in the APEC Privacy Framework.
CrowdStrike acquired Payload Security, the company behind the automated malware analysis sandbox technology Hybrid Analysis, in November 2017. Jan Miller founded Payload Security approximately 3 years earlier. The interview I conducted with Jan in early 2015 captured his mindset at the onset of the journey that led to this milestone. I briefly spoke with Jan again, a few days after the acquisition. He reflected upon his progress over the three years of leading Payload Security so far and his plans for Hybrid Analysis as part of CrowdStrike.
Jan, why did you and your team decide to join CrowdStrike?
Developing a malware analysis product requires a constant stream of improvements to the technology, not only to keep up with the pace of malware authors’ attempts to evade automated analysis but also innovate and embrace the community. The team has accomplished a lot thus far, but joining CrowdStrike gives us the ability to access a lot more resources and grow the team to rapidly improve Hybrid Analysis in the competitive space that we live in. We will have the ability to bring more people into the team and also enhance and grow the infrastructure and integrations behind the free Hybrid Analysis community platform.
What role did the free version of your product, available at hybrid-analysis.com, play in the company’s evolution?
A lot of people in the community have been using the free version of Hybrid Analysis to analyze their own malware samples, share them with friends or to look up existing analysis reports and extract intelligence. Today, the site has approximately 44,000 active users and around 1 million sessions per month. One of the reasons the site took off is the simplicity and quality of the reports, focusing on what matters and enabling effective incident response.
The success of Hybrid Analysis was, to a large extent, due to the engagement from the community. The samples we have been receiving allowed us to constantly field-test the system against the latest malware, stay on top of the game and also to embrace feedback from security professionals. This allowed us to keep improving at rapid pace in a competitive space, successfully.
What will happen to the free version of Hybrid Analysis? I saw on Twitter that your team pinky-promised to continue making it available for free to the community, but I was hoping you could comment further on this.
I’m personally committed to ensuring that the community platform will stay not only free, but grow even more useful and offer new capabilities shortly. Hybrid Analysis deserves to be the place for professionals to get a reasoned opinion about any binary they’ve encountered. We plan to open up the API, add more integrations and other free capabilities in the near future.
What stands out in your mind as you reflect upon your Hybrid Analysis journey so far? What’s motivating you to move forward?
Starting out without any noteworthy funding, co-founders or advisors, in a saturated, fast-paced high-tech market full of money, it seemed impossible to succeed on paper. But the reality is: if you are offering a product or service that solves a real-world problem considerably better than the market leaders do, you always have a chance. My hope is that people considering entrepreneurship will be encouraged to pursue their ideas. Be prepared to work 80 hours a week, but with the right technology, feedback from the community, amazing team members and insightful advisors, you can make it happen.
In fact, it’s because of the value Hybrid Analysis has been adding to the community that I was able to attract the highly talented individuals that are currently on the team. It has always been important for me to make a difference, to contribute something and have a true impact on people’s lives. It all boils down to bringing more light than darkness into the world, as cheesy as that might sound.
On November 8, 2017, the FTC announced a settlement with Georgia-based online tax preparation service, TaxSlayer, LLC (“TaxSlayer”), regarding allegations that the company violated federal rules on financial privacy and data security. According to the FTC’s complaint, malicious hackers were able to gain full access to nearly 9,000 TaxSlayer user accounts between October 2015 and December 2015. The hackers allegedly used the personal information contained in the users’ accounts, including contact information, Social Security numbers and financial information, to engage in tax identity theft and obtain tax refunds through filing fraudulent tax returns. The FTC charged TaxSlayer with violating the Gramm-Leach-Bliley Act’s Safeguards Rule and Privacy Rule.
As part of the settlement, TaxSlayer is prohibited from violating the Safeguards Rule and the Privacy Rule for 20 years, and for 10 years must obtain biennial third-party assessments of its compliance with these rules.
On November 7, 2017, the Standing Committee of the National People’s Congress of China published the second draft of the E-commerce Law (the “Second Draft”) and is allowing the general public an opportunity to comment through November 26, 2017.
The Second Draft applies to e-commerce activities within the territory of China. One significant change from the first draft is that the Second Draft omits the first draft’s definition of “personal information” of e-commerce users and the detailed requirements concerning the collection and use of personal information of such users. Instead, the Second Draft would require that, when collecting and using personal information of users, e-commerce operators comply with rules established under the Cybersecurity Law of China and other relevant laws and regulations.
Pursuant to the Second Draft, e-commerce operators would be required to provide users with clear methods and procedures for accessing the users’ information, making corrections or deleting the users’ information, or closing user accounts. Also, e-commerce operators would be restricted from imposing unreasonable conditions on users when they request access, correction or deletion of information, or closure of their accounts.
The Second Draft also would require operators of e-commerce platforms to adopt measures, technological and otherwise, to protect network security, and to adopt contingency plans for cybersecurity incidents. In the event of an actual cybersecurity incident, an operator of an e-commerce platform would be required to immediately put its contingency plan into action, take remedial measures and report the incident to the relevant authorities.
Recently, the Office of the Privacy Commissioner of Canada (“OPC”) issued its 2017 Global Privacy Enforcement Network Sweep results (the “Report”), which focused on certain privacy practices of online educational tools and services targeted at classrooms. The OPC examined the privacy practices of two dozen educational websites and apps used by K-12 students. The “sweep” sought to replicate the consumer experience by interacting with the websites and apps, and recording the privacy practices and controls in place. The overarching theme of the Report is “user controls over personal information,” which the OPC further refined into four subthemes: (1) transparency, (2) consent, (3) age-appropriate collection and disclosure, and (4) deletion of personal information.
- Transparency. The OPC found that, although all of the websites and applications had privacy statements, only 78 percent were readily accessible at registration. The OPC underscored the importance of having clear and thorough descriptions of the organization’s privacy practices. The Report recommends as a best practice the “layered” approach, whereby organizations list short privacy statements that link to more detailed descriptions of how the organization processes personal information. The Report also recommends that organizations provide parents with printouts explaining their privacy practices.
- Consent. The Report highlights the importance of obtaining meaningful, age-appropriate consent from students or parents for the processing of students’ personal information, in accordance with the Personal Information and Electronic Documents Act (“PIPEDA”). Pursuant to PIPEDA, consent is valid only if it is reasonable to expect that the individual whose personal information is collected would understand the nature, purpose and consequences of the collection, use or disclosure to which they are consenting. Accordingly, to obtain meaningful consent of children under the age of 13, PIPEDA requires the consent of a parent or guardian. For children aged 13 to the provincial age of majority, PIPEDA requires that the consent process be adapted to the child’s level of maturity. The Report found that many of the apps and websites reviewed did not have different consent mechanisms for younger and older students. The OPC highlighted as a best practice a mechanism that would send an email to parents with instructions for how to sign their under-13 child up for the service, and kid-friendly explanations of consent mechanisms for children over the age of 13. Additionally, the OPC found that more than a third of the apps and websites reviewed obtained only the consent of the teachers, and not the students or parents, in violation of PIPEDA.
- Deletion. The final area the Report focused on was the ability for users to request to have their personal information collected by the website or app deleted. Pursuant to PIPEDA, organizations must delete or anonymize personal information that is no longer required for the purpose for which it was collected. Over a third of the apps and websites reviewed by OPC did not have procedures in place to allow students or parents to delete students’ personal information. The Report recommends that websites and apps provide students and parents with a straightforward procedure to delete students’ personal information and implement and enforce data retention schedules.
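The age-tiered consent approach the Report describes can be sketched as a routing function. The tiers follow the Report’s description of PIPEDA; the function itself and its return strings are purely illustrative:

```python
def required_consent_mechanism(age: int, age_of_majority: int = 18) -> str:
    """Route a student to a consent mechanism following the OPC's reading
    of PIPEDA: parental consent under 13, age-adapted consent from 13 up
    to the provincial age of majority, standard consent thereafter."""
    if age < 13:
        return "parental consent (e.g. email instructions to the parent)"
    if age < age_of_majority:
        return "age-adapted consent (kid-friendly explanation)"
    return "standard consent"


print(required_consent_mechanism(10))  # parental consent (e.g. email instructions to the parent)
print(required_consent_mechanism(15))  # age-adapted consent (kid-friendly explanation)
```

A mechanism that consults only the teacher, as more than a third of the reviewed services did, would bypass both branches and, per the Report, violate PIPEDA.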
On October 17, 2017, the French Data Protection Authority (“CNIL”), after a consultation with multiple industry participants that was launched on March 23, 2016, published its compliance pack on connected vehicles (the “Pack”) in line with its report of October 3, 2016. The Pack applies to connected vehicles for private use only (not to Intelligent Transport Systems), and describes the main principles data controllers must adhere to under both the current French legislation and the EU General Data Protection Regulation (“GDPR”).
The CNIL distinguishes between the following three scenarios:
1. “IN -> IN” scenario
The data collected in the vehicle remains in that vehicle and is not shared with a service provider (e.g., an eco-driving solution that processes data directly in the vehicle to display eco-driving tips in real time on the vehicle’s dashboard).
2. “IN -> OUT” scenario
The data collected in the vehicle is shared outside of the vehicle for the purposes of providing a specific service to the individual (e.g., when a pay-as-you-drive contract is purchased from an insurance company).
3. “IN -> OUT -> IN” scenario
The data collected in the vehicle is shared outside of the vehicle to trigger an automatic action by the vehicle (e.g., in the context of a traffic solution that calculates a new route following a car incident).
In addition to listing the provisions already included in its report of October 3, 2016, the CNIL analyzes in detail the three scenarios described above and provides recommendations on the:
- purposes for which the data can be processed;
- legal bases controllers can rely upon;
- types of data that can be collected;
- required retention period;
- recipients of the data and use of processors;
- content of the notice to data subjects;
- applicable rights of individuals with respect to the processing;
- security measures to adopt; and
- registration obligations that may arise under current law.
Beyond being a helpful guide for data controllers to refer to when implementing such tools in vehicles, the Pack might help preview how supervisory authorities will interpret various GDPR provisions.
On October 24, 2017, an opinion issued by the EU’s Advocate General Bot (“Bot”) rejected Facebook’s assertion that its EU data processing activities fall solely under the jurisdiction of the Irish Data Protection Commissioner. The non-binding opinion was issued in relation to the CJEU case C-210/16, under which the German courts sought to clarify whether the data protection authority (“DPA”) in the German state of Schleswig-Holstein could take action against Facebook with respect to its use of web tracking technologies on a German education provider’s fan page without first providing notice.
Although Facebook’s EU data processing activities are handled jointly by Facebook, Inc. in the U.S. and Facebook Ireland, its European headquarters, Facebook has a number of subsidiaries in other EU Member States that promote and sell advertising space on the social network. In line with Directive 95/46/EC and the Google Spain decision, Bot opined that the processing of personal data via cookies, which Facebook used to improve its targeting of advertisements, had to be considered as being in the context of the activities of the German establishment. It therefore followed that Facebook fell under the jurisdiction of the German DPA and other DPAs in whose Member States its subsidiaries engaged in the promotion and sale of advertising space.
The opinion is non-binding and Facebook awaits the CJEU’s verdict. It should be noted, however, that the CJEU usually follows the prior opinions of its Advocates General. Also, this situation may be interpreted differently under the EU’s General Data Protection Regulation (“GDPR”), which replaces existing EU Member State data protection laws based on Directive 95/46/EC when it becomes applicable on May 25, 2018. Under the GDPR, the One-Stop-Shop mechanism will see the DPA of an organization’s main EU establishment take the role of lead authority. In other EU Member States where the organization has establishments, DPAs will be regarded as ‘concerned authorities,’ but any regulatory action will be driven by the lead authority, which in Facebook’s case likely is the Irish Data Protection Commissioner.
In our final two segments of the series, industry leaders Lisa Sotto, partner and chair of Hunton & Williams’ Privacy and Cybersecurity practice; Steve Haas, M&A partner at Hunton & Williams; Allen Goolsby, special counsel at Hunton & Williams; and Eric Friedberg, co-president of Stroz Friedberg, along with moderator Lee Pacchia of Mimesis Law, continue their discussion on privacy and cybersecurity in M&A transactions and what companies can do to minimize risks before, during and after a deal closes. They discuss due diligence, deal documents and best practices in privacy and data security. The discussion wraps up with lessons learned in the rapidly changing area of data protection in M&A transactions, and predictions for what lies ahead.
Watch the full videos: Segment 3 – Before, During and After a Deal and Segment 4 – Lessons Learned and Outlook for the Future.