Monthly Archives: December 2014

Episode #180: Open for the Holidays!

Not-so-Tiny Tim checks in with the ghost of Christmas present:

I know many of you have been sitting on Santa's lap wishing for more Command Line Kung Fu. Well, we've heard your pleas and are pushing one last Episode out before the New Year!

We come bearing a solution for a problem we've all encountered. Ever try to delete or modify a file and receive an error message that the file is in use? Of course you have! The real problem is trying to track down the user and/or process that has the file locked.

I have a solution for you on Windows: "openfiles". Well, sorta. This time of year I can't risk getting on Santa's bad side, so let me add the disclaimer that it is only a partial solution. Here's what I mean; let's look for open files:

C:\> openfiles

INFO: The system global flag 'maintain objects list' needs
to be enabled to see local opened files.
See Openfiles /? for more information.

Files opened remotely via local share points:

INFO: No shared open files found.

By default, when we run this command it gives us an error saying that we haven't enabled the feature. Wouldn't it be nice if we could simply turn it on and then look at the open files? Yes, it would be nice...but no. You have to reboot. This present is starting to look a lot like a lump of coal. So you need to know that you will encounter the problem before it happens, so you can be ready for it. Bah-humbug!

To enable "openfiles", run this command:

C:\> openfiles /local on

SUCCESS: The system global flag 'maintain objects list' is enabled.
This will take effect after the system is restarted.

...then reboot.

Of course, now that we've rebooted, the file will be unlocked, but we are prepared for next time. When it happens again, we can run this command to see the results (note: if you don't specify a switch, /query is implied):

C:\> openfiles /query

Files Opened Locally:

ID Process Name Open File (Path\executable)
===== ==================== ==================================================
8 taskhostex.exe C:\Windows\System32
224 taskhostex.exe C:\Windows\System32\en-US\taskhostex.exe.mui
296 taskhostex.exe C:\Windows\Registration\R00000000000d.clb
324 taskhostex.exe C:\Windows\System32\en-US\MsCtfMonitor.dll.mui
752 taskhostex.exe C:\Windows\System32\en-US\winmm.dll.mui
784 taskhostex.exe C:\..\Local\Microsoft\Windows\WebCache\V01tmp.log
812 taskhostex.exe C:\Windows\System32\en-US\wdmaud.drv.mui

Of course, this is quite a long list. You can use "find" or "findstr" to filter the results, but be aware that long file names are truncated (see ID 784). You can get a full list by changing the format with "/fo LIST". However, the file name will then be on a separate line from the owning process, and neither "find" nor "findstr" supports context.
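If you capture the "/fo LIST" output to a file and happen to have a Unix toolbox on the box (WSL, Cygwin, or Git Bash), awk can re-join each multi-line record onto a single line so ordinary filtering works again. This is just a sketch using invented sample data; the "ID:"/"Process Name:"/"Open File" field labels are my assumption about the LIST format, so check them against your own output:

```shell
# Sample of the one-field-per-line records that "openfiles /query /fo LIST"
# produces (labels and paths here are invented for illustration):
cat > openfiles-list.txt <<'EOF'
ID: 784
Process Name: taskhostex.exe
Open File (Path\executable): C:\Temp\WebCache\V01tmp.log
ID: 8
Process Name: taskhostex.exe
Open File (Path\executable): C:\Windows\System32
EOF

# Re-join each record onto one line, then filter like any other output.
# sub() strips the field label and returns true on a match, so each rule
# fires only on its own field; the "Open File" rule emits the whole record.
awk '
    sub(/^ID: */, "")             { id = $0 }
    sub(/^Process Name: */, "")   { proc = $0 }
    sub(/^Open File[^:]*: */, "") { print id, proc, $0 }
' openfiles-list.txt | grep -i webcache
```

Once the records are one per line, all the usual filters (grep, sort, and friends) work as expected.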

Another oddity is that there seem to be duplicate IDs.

C:\> openfiles /query | find "888"
888 chrome.exe C:\Windows\Fonts\consola.ttf
888 Lenovo Transition.ex C:\..\Lenovo\Lenovo Transition\Gui\yo_btn_g3.png
888 vprintproxy.exe C:\Windows\Registration\R00000000000d.clb

Different processes with different files, all with the same ID. This means that when you disconnect an open file, you had better be careful.

Speaking of disconnecting files, we can do just that with the /disconnect switch. We can disconnect by ID (ill-advised) with the /id switch. We can also disconnect all the files based on the user:

C:\> openfiles /disconnect /a jacobmarley

Or the file name:

C:\> openfiles /disconnect /op "C:\Users\tm\Desktop\wishlist.txt" /a *

Or even the directory:

C:\> openfiles /disconnect /op "C:\Users\tm\Desktop\" /a *

We can even run this against a remote system with the /s SERVERNAME option.

This command is far from perfect, but it is pretty cool.

Sadly, there is no built-in capability in PowerShell to do this same thing. With PowerShell v4 we get Get-SmbOpenFile and Close-SmbOpenFile, but they only work on files opened over the network, not on files opened locally.

Now it is time for Mr. Scrooge Pomeranz to ruin my day by using some really useful, built-in, and ENABLED features of Linux.

It's a Happy Holiday for Hal:

Awww, Tim got me the nicest present of all-- a super-easy Command-Line Kung Fu Episode to write!

This one's easy because Linux comes with lsof, a magical tool surely made by elves at the North Pole. I've talked about lsof in several other Episodes already, but so far I've focused more on network and process-related queries than checking objects in the file system.

The simplest usage of lsof is checking which processes are using a single file:

# lsof /var/log/messages
rsyslogd 1250 root 1w REG 8,3 13779999 3146461 /var/log/messages
abrt-dump 5293 root 4r REG 8,3 13779999 3146461 /var/log/messages

Here we've got two processes that have /var/log/messages open-- rsyslogd for writing (see the "1w" in the "FD" column, where the "w" means writing), and abrt-dump for reading ("4r", "r" for read-only).
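The reverse query works too: give lsof a PID with "-p" and it lists everything that process has open. A quick sketch (assumes lsof is installed; "$$" is just the current shell's PID, so substitute whatever process you're chasing):

```shell
# Everything the current shell has open. Besides numbered descriptors
# like "1w", the FD column uses special codes such as cwd (current
# working directory) and txt (the program text, i.e., the executable).
lsof -p $$
```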

You can use "lsof +d" to see all open files in a given directory:

# lsof +d /var/log
rsyslogd 1250 root 1w REG 8,3 14324534 3146461 /var/log/messages
rsyslogd 1250 root 2w REG 8,3 175427 3146036 /var/log/cron
rsyslogd 1250 root 5w REG 8,3 1644575 3146432 /var/log/maillog
rsyslogd 1250 root 6w REG 8,3 2663 3146478 /var/log/secure
abrt-dump 5293 root 4r REG 8,3 14324534 3146461 /var/log/messages

The funny thing about "lsof +d" is that it only shows you open files in the top-level directory, but not in any sub-directories. You have to use "lsof +D" for that:

# lsof +D /var/log
rsyslogd 1250 root 1w REG 8,3 14324534 3146461 /var/log/messages
rsyslogd 1250 root 2w REG 8,3 175427 3146036 /var/log/cron
rsyslogd 1250 root 5w REG 8,3 1644575 3146432 /var/log/maillog
rsyslogd 1250 root 6w REG 8,3 2663 3146478 /var/log/secure
httpd 3081 apache 2w REG 8,3 586 3146430 /var/log/httpd/error_log
httpd 3081 apache 14w REG 8,3 0 3147331 /var/log/httpd/access_log
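You can stage a quick sandbox to watch the +d / +D difference for yourself. This is only a sketch (assumes lsof is installed; the scratch paths are made up):

```shell
dir=$(mktemp -d)          # scratch directory with one subdirectory
mkdir "$dir/sub"

tail -f /dev/null > "$dir/top.log" &        # holds top.log open for writing
top_pid=$!
tail -f /dev/null > "$dir/sub/deep.log" &   # holds a file one level down
deep_pid=$!
sleep 1

lsof +d "$dir"    # should show only top.log
lsof +D "$dir"    # should show top.log AND sub/deep.log

kill "$top_pid" "$deep_pid"
rm -rf "$dir"
```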

Unix-like operating systems track open files on a per-partition basis. This leads to an interesting corner-case with lsof: if you run lsof on a partition boundary, you get a list of all open files under that partition:

# lsof /
init 1 root cwd DIR 8,3 4096 2 /
init 1 root rtd DIR 8,3 4096 2 /
init 1 root txt REG 8,3 150352 12845094 /sbin/init
init 1 root DEL REG 8,3 7340061 /lib64/
init 1 root DEL REG 8,3 7340104 /lib64/
# lsof / | wc -l
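That firehose pipes nicely into the usual text tools. For instance, a quick "which processes hold the most open files on /" report (a sketch; assumes lsof is on the PATH, and you'll see far more of the system as root):

```shell
# Tally open-file entries per process name (column 1), skipping the
# header line, then show the top five offenders.
lsof / 2>/dev/null |
    awk 'NR > 1 { count[$1]++ } END { for (p in count) print count[p], p }' |
    sort -rn | head -5
```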

Unlike Windows, Linux doesn't really have a notion of disconnecting processes from individual files. If you want a process to release a file, you kill the process. lsof has a "-t" flag for terse output. In this mode, it only outputs the PIDs of the matching processes. This was designed to allow you to easily substitute the output of lsof as the arguments to the kill command. Here's the little trick I showed back in Episode 22 for forcibly unmounting a file system:

# umount /home
umount: /home: device is busy
# kill $(lsof -t /home)
# umount /home

Here we're exploiting the fact that /home is a partition mount point so lsof will list all processes with files open anywhere in the file system. Kill all those processes and Santa might leave coal in your stocking next year, but you'll be able to unmount the file system!
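If killing everything sight unseen feels too risky, a slightly gentler variant of the same trick looks before it leaps. A sketch (assumes lsof is installed; /home here stands in for whatever mount point is busy):

```shell
mnt=/home    # substitute the busy mount point

# First, see exactly who you are about to kill.
for pid in $(lsof -t "$mnt" 2>/dev/null); do
    ps -o pid,user,comm -p "$pid"
done

# Ask nicely with SIGTERM and give everyone a moment to exit...
pids=$(lsof -t "$mnt" 2>/dev/null || true)
if [ -n "$pids" ]; then kill $pids; fi
sleep 2

# ...then force anything still hanging on, and try the unmount again.
pids=$(lsof -t "$mnt" 2>/dev/null || true)
if [ -n "$pids" ]; then kill -9 $pids; fi
umount "$mnt" || echo "still busy; something new must be holding files open"
```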

German DPA Imposes 1.3 Million EUR Fine on Insurance Group for Violation of Data Protection Law

On December 29, 2014, the Commissioner for Data Protection and Freedom of Information of the German state Rhineland-Palatinate issued a press release stating that it imposed a fine of €1,300,000 on the insurance group Debeka. According to the Commissioner, Debeka was fined due to its lack of internal controls and its violations of data protection law. Debeka sales representatives allegedly bribed public sector employees during the eighties and nineties to obtain address data of employees who were on track to become civil servants. Debeka purportedly wanted this address data to market insurance contracts to these employees. The Commissioner asserted that the action against Debeka is intended to emphasize that companies must handle personal data in a compliant manner. Debeka accepted the fine to avoid lengthy court proceedings.

In addition to the monetary fine, the Commissioner imposed obligations on Debeka with respect to its data protection processes and procedures, including a requirement that Debeka’s employees obtain written consent from customers when they disclose their addresses. The insurance group also has appointed 26 data protection officers. The public prosecutor has initiated criminal proceedings against representatives of Debeka in this matter and those proceedings are ongoing.

FTC Warns Foreign-Based App Developer of Potential COPPA Violations

On December 22, 2014, the Federal Trade Commission announced that it notified China-based BabyBus (Fujian) Network Technology Co., Ltd., (“BabyBus”) that several of the company’s mobile applications (“apps”) appear to be in violation of the Children’s Online Privacy Protection Rule (the “COPPA Rule”). In a letter dated December 17, 2014, the FTC warned BabyBus of potential COPPA violations stemming from allegations that the company has failed to obtain verifiable parental consent prior to its apps collecting and disclosing the precise geolocation information of users under the age of 13.

BabyBus offers more than 60 free mobile apps marketed to children between the ages of one and six on popular app marketplaces in the U.S. In its letter, the FTC alleges that BabyBus apps are directed to children under the age of 13 in the U.S., and therefore, the foreign-based company is required to comply with the COPPA Rule by obtaining verifiable parental consent before collecting, using or disclosing the precise geolocation information of its users who are under the age of 13 in the U.S.

The letter recommends that BabyBus review all of its apps in order to ensure that the company lawfully collects personal information from children in accordance with the COPPA Rule’s legal requirements. Furthermore, the letter indicates that the FTC plans to review BabyBus apps again next month.

FTC Announces Settlement with T-Mobile in Mobile Cramming Case

On December 19, 2014, the Federal Trade Commission announced a settlement of at least $90 million with mobile phone carrier T-Mobile USA, Inc. (“T-Mobile”) stemming from allegations related to mobile cramming. This settlement amount will primarily be used to provide refunds to affected customers who were charged by T-Mobile for unauthorized third party charges. As part of the settlement, T-Mobile also will pay $18 million in fines and penalties to the attorneys general of all 50 states and the District of Columbia, and $4.5 million to the Federal Communications Commission.

Similar to another recent enforcement action, the FTC alleged in its complaint that T-Mobile billed its customers for certain services offered by third parties, including subscription services for “ringtones, wallpaper, and text messages providing horoscopes, flirting tips, celebrity gossip, and other similar information.” The FTC stated that customers did “not order or authorize” these services in many cases, and T-Mobile continued to bill its customers for these charges even after receiving a large number of customers’ complaints about unauthorized charges, as well as “industry auditor alerts, law enforcement and other legal actions, and news articles” indicating that third parties had not obtained valid authorization from customers for these charges. The FTC asserted that T-Mobile retained a percentage of the third party fees charged to customers, typically around 35%. The FTC also claimed that the third party charges were not “conspicuous” on customers’ bills, but were lumped together with other charges, such as charges for texting, and often buried toward the end of the bill.

The settlement requires T-Mobile to:

  • notify relevant customers of their right to receive refunds for unauthorized third party charges;
  • establish and implement a system to ensure that T-Mobile obtains prior express informed consent before billing customers for third party products or services;
  • send customers purchase confirmations for third party charges separate and apart from the customers’ bills for phone services;
  • provide customers with informational materials regarding third party charges, including blocking options;
  • adequately train its relevant personnel and appropriately respond to customers who contact T-Mobile to inquire about a third party charge;
  • provide specified records to a settlement administrator;
  • submit periodic compliance reports to the FTC; and
  • maintain records associated with third party charges.

The proposed settlement is subject to approval by the U.S. District Court for the Western District of Washington before becoming effective. The FTC has recently brought several cases related to mobile cramming, including against online marketing and advertising companies and AT&T Mobility, LLC.

Industry, Privacy Advocates Join Microsoft to Protect Customer Emails in Foreign Servers

On December 15, 2014, Microsoft reported the filing of 10 amicus briefs in the 2nd Circuit Court of Appeals, signed by 28 leading technology and media companies, 35 leading computer scientists, and 23 trade associations and advocacy organizations, in support of Microsoft’s litigation to resist a U.S. Government search warrant purporting to compel the production of Microsoft customer emails that are stored in Ireland. In opposing the Government’s assertion of extraterritorial jurisdiction in this case, Microsoft and its supporters have argued that their stance seeks to promote privacy and trust in cross-border commerce and advance a “broad policy issue” that is “fundamental to the future of global technology.”

The Government issued a domestic search warrant to Microsoft under the Stored Communications Act, demanding that Microsoft hand over emails that it maintains and controls in a Microsoft data center in Dublin. Microsoft challenged the warrant but the U.S. District Court confirmed the Government’s right to obtain these emails. Microsoft then appealed to the Second Circuit on December 8, 2014.

According to Microsoft, the company stores private communications in data centers close to their customers for legitimate business reasons, in this case in its Irish datacenter so that European customers can retrieve their information more quickly and securely. Microsoft’s position in this litigation, now officially supported by leading stakeholders and experts, is that “the U.S. Government’s unilateral use of a search warrant to reach email in another country puts both fundamental privacy rights and cordial international relations at risk.”

Specifically, Microsoft and the amici are making the following key points:

  • The U.S. Government should more appropriately use treaties to obtain the information it needs from other countries, which will help ensure the application of the relevant legal protections available in those countries.
  • Allowing the U.S. Government to access emails in foreign jurisdictions would have a negative impact on foreign customers’ trust in American companies and undermine the customers’ privacy rights.
  • The U.S. Government’s policy on extraterritorial jurisdiction also would have a negative impact on U.S. customers if foreign countries adopted the same approach towards emails held in U.S. datacenters.
  • The policy would undermine the efficiencies of cloud computing.
  • The policy would undermine legal protections for reporters’ email that are housed in foreign jurisdictions.

Microsoft also called on the Obama Administration and the U.S. Congress to “engage in a holistic debate on the solutions to these issues and find a better way forward.”

FinCEN Assesses Penalty Against Former MoneyGram Compliance Officer

On December 18, 2014, the Financial Crimes Enforcement Network (“FinCEN”) issued a $1 million civil penalty against Thomas E. Haider, the former Chief Compliance Officer of MoneyGram International, Inc. (“MoneyGram”). In a press release announcing the assessment, FinCEN alleged that during Haider’s oversight of compliance for MoneyGram, he failed to adequately respond to thousands of customer complaints regarding schemes that utilized MoneyGram to defraud consumers. In coordination with FinCEN, the U.S. Attorney’s office in the Southern District of New York filed a civil complaint on the same day, seeking a $1 million civil judgment against Haider to collect on the assessment and requesting injunctive relief barring him from participating in the affairs of any financial institution located or conducting business in the United States.

According to the complaint, Haider was the Chief Compliance Officer of MoneyGram from 2003 to 2008 and privy to complaints received by the company’s fraud department regarding numerous fraud schemes that allegedly utilized MoneyGram to induce transfers of funds from victims. The complaint outlines claims that Haider was personally responsible for MoneyGram’s failure to meet its legal obligations under the Bank Secrecy Act (“BSA”); namely, to implement and maintain an effective anti-money laundering (“AML”) compliance program, and to timely file Suspicious Activity Reports (“SARs”). The judgment sought is based on provisions of the BSA and implementing regulations that authorize a $25,000 per day penalty for willful failures to maintain AML compliance programs and file SARs.

The case against Haider follows a series of statements from FinCEN Director Jennifer Shasky Calvery and other regulators stressing individual accountability. The use of civil enforcement tools to hold compliance officers and senior management individually liable for BSA deficiencies is a noted enforcement trend that will likely continue in 2015.


Wow, it's been a while since I've written anything new here...
So, to answer many questions: no, I'm not dead, and I will try to be a bit more active again next year.

I'm not writing this because of requests for an explanation or because people were worried (even though I was asked many times to write something), but more because I'm motivated to write again.
As I've said many times in reply to the recurring e-mails I receive and continue to receive (even after 7 months of inactivity!),
I've made a lot of changes in my life, and during that time I had better things to do than write in a blog.
Mainly, I had many personal issues to resolve.
It's also not the first time I've repeated that I have a life and that I've always run this blog for fun and nonprofit, like my other services such as
And sooner or later I get bored and take a break, although I've continued to update CCT so as not to leave people with nothing.

I also changed jobs, shifting into the energy sector.
I wanted a job that combines my passion for mechanics and electronics.
Now I'm winding turbo-alternators for nuclear/hydraulic power plants around the world and for governmental organisations. (Pretty cool, huh?)
Obviously I can't give you details, due to confidentiality clauses on this critical work, but building those huge machines/projects is quite awesome and the job is very meticulous.

I've also joined the administration of my local hackerspace, and I now hold the position of treasurer.
I'm also running various workshops, mostly electronics/borderline related, which take time to prepare and organize.
In parallel I experiment a lot myself; those who follow my YouTube/Twitter activity probably know what I mean: I received hydrofluoric acid two days ago.

2014 started a bit badly for me, as I had a car crash on Christmas Day and broke my clavicle. Overall, though, it was a nice year, and outside of my blog I've met a lot of people, like Horgh and many others.
Sadly I wasn't able to go to BotConf or DahuCon this year because of my job... so maybe next year!

I've also worked a bit with Hackerstrip, and recently released some code for DarK-CodeZ #6; nothing fancy, but it was fun to participate. Thanks, guys.
So that's all. See you in 2015 for throwing cobblestones and breaking bones!

EU and U.S. Privacy Experts Meet to Develop Transatlantic Privacy “Bridges”

On December 14, 2014, the University of Amsterdam and the Massachusetts Institute of Technology issued a press release about two recent meetings of the EU-U.S. Privacy Bridges Project in Washington, D.C. (held September 22-23, 2014) and Brussels (held December 9-10, 2014). The Privacy Bridges Project is a group of approximately 20 privacy experts from the EU and U.S. convened by Jacob Kohnstamm, Chairman of the Dutch Data Protection Authority and former Chairman of the Article 29 Working Party, to develop practical solutions for bridging the gap between EU and U.S. privacy regimes and legal systems. Bojana Bellamy, President of the Centre for Information Policy Leadership at Hunton & Williams (the “Centre”), and Fred Cate, the Centre’s Senior Policy Advisor are members of this group.

During these meetings, government officials, academics and representatives from the private sector and civil society provided input on the group’s work in developing a practical framework for bridging the different EU and U.S. approaches to data privacy.

Over the next nine months, the group will outline a menu of privacy “bridges” in a consensus report that will be presented at the 2015 International Data Protection and Privacy Commissioners Conference on October 28-29, 2015, in Amsterdam. This conference will be dedicated to the Privacy Bridges topic.

The Privacy Bridges meetings are jointly organized by Daniel J. Weitzner of the Massachusetts Institute of Technology Information Policy Project and Nico van Eijk of the Institute for Information Law at the University of Amsterdam.

NLRB Reverses Register Guard; Grants Workers Right to Use Employer Email System for Section 7 Purposes

As reported in the Hunton Employment & Labor Perspectives Blog:

In Purple Communications, Inc., a divided National Labor Relations Board (“NLRB”) held that employees have the right to use their employers’ email systems for statutorily protected communications, including self-organization and other terms and conditions of employment, during non-working time. In making this determination, the NLRB reversed its divided 2007 decision in Register Guard, which held that employees have no statutory right to use their employer’s email systems for Section 7 purposes.

The NLRB reasoned that the Register Guard decision was “clearly incorrect” and focused “too much on employers’ property rights and too little on the importance of email as a means of workplace communication.” The NLRB, however, claims to have limited its decision by 1) applying it only to employees who have already been granted access to the employer’s email system in the course of their work; 2) permitting employers to justify a total ban on non-work use of email, including Section 7 use on non-working time, by demonstrating that special circumstances make the ban necessary to maintain production or discipline; and 3) permitting employers, absent justification of a total ban, to apply uniform and consistently enforced controls over its email system to the extent such controls are necessary to maintain production and discipline. Moreover, the decision did not address the issues of email access by third parties or any other type of electronic communication systems.

Employers, particularly those with “business only” restrictions on company email use, potentially face new exposure to unfair labor practice charges. As such, employers are now pressed to reconsider their existing email communication policies, possibly through modification or repeal depending on the restrictions in place. We have covered labor-related developments regarding email and social media communications in previous entries.

Centre for Information Policy Leadership Discusses Privacy Risk Management at the OECD Working Party on Security and Privacy in the Digital Economy

Former UK Information Commissioner and Centre for Information Policy Leadership (the “Centre”) Global Strategy Advisor Richard Thomas was invited to make a presentation at a roundtable on Privacy Risk Management and Next Steps at the Organization for Economic Cooperation and Development’s (“OECD’s”) 37th meeting of the Working Party on Security and Privacy in the Digital Economy (“Working Party”). The meeting was attended by governmental and regulatory officials from most OECD member countries, with various other participants and observers.

The event focused on several new references to “risk” in the 2013 revised OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data, and on follow-up work to be completed in time for the main 2016 Ministerial meeting. In light of these new references, Thomas outlined the Centre’s Privacy Risk Framework project, including references to the Paris and Brussels workshops and the second white paper of the project, and explained the link between risk and accountability. In discussing the paper’s main themes, he emphasized the need for consensus around risk management models, technical standards, best practices and risk assessment tools. Thomas also stressed the key role that the OECD could play in developing and building a multinational consensus around a taxonomy of data protection harms and benefits, and a framework for assessing them.

Another speaker provided some small and medium-sized enterprises’ (“SMEs”) perspective on “risk.” He stressed that IT start-ups are mainly run by innovative, risk-taking entrepreneurs and engineers who focus on product development, sales and investment, but often are unaware of privacy issues until it is too late. He thought that a privacy risk framework template would be extremely useful for SMEs to raise their awareness of data privacy and to enable them to address basic privacy issues and manage privacy risks proactively and early on.

During the discussion that followed, the participants made the following key points:

  • Privacy risk management is a wider (and more challenging) concept than security risk management, but could learn a lot from that field;
  • The insurance and venture capital industries know a lot about risk and potentially have a great deal to offer on this topic;
  • Both organizations and regulators must set priorities and a risk-based approach is the most promising way to do that;
  • Risk assessment is one part of risk management – which is an umbrella for risk assessment, risk mitigation and residual risk management;
  • It is vital to see risk management as a balancing test, factoring in both benefits and competing fundamental rights.

In response to fears expressed by civil society that a risk-based approach could weaken fundamental rights, Thomas reminded delegates that risk management does not alter rights or obligations, nor does it take away organizational accountability. Instead, looking at the likelihood and severity of harms from the individual’s  perspective should strengthen privacy protection in the real world, according to Thomas.

At the end of the discussion, the Working Party agreed that more work on privacy risk should be done by the OECD Secretariat within its 2015-16 work program.

Centre for Information Policy Leadership Points to New ISO Cloud Privacy Standard as “Accountability” Milestone

In an article entitled The Rise of Accountability from Policy to Practice and Into the Cloud, published by the International Association of Privacy Professionals, Bojana Bellamy, President of the Centre for Information Policy Leadership at Hunton & Williams (the “Centre”), outlines the rapid global uptake of “accountability” as a cornerstone of effective data protection and points to the recent ISO 27018 data privacy cloud standard as one of the latest examples.

According to Bellamy, accountable organizations are now routinely expected to implement corporate privacy programs that (1) deliver effective privacy compliance and protections and (2) can be demonstrated internally to corporate boards and externally to business partners and regulators.

She places the new ISO cloud standard in a long line of other accountability schemes, noting that the ISO standard “is a significant development both for cloud computing and accountability.” It is an example of practical accountability in a new context, and of the key roles that external verification and certification play in generating both corporate and consumer trust.

Bellamy also emphasizes that to the extent accountability schemes are based on mutually agreed or commonly accepted privacy standards, they enable “interoperability” and bridge-building between jurisdictional and legal divides.

When the Press Aids the Enemy

Let's start with this- Freedom of the press is a critical part of any free society, and more importantly, a democratically governed society.

But that being said, I can't help but think there are times when the actions of the media aid the enemy. This is a touchy subject so I'll keep it concise and just make a few points that stick in my mind.

First, it's pretty hard to deny that the media look for ever-more sensational headlines, truth be damned, to get clicks and drive traffic to their publications. Whether it's digital or actual ink-on-paper, sensationalism sells; there's no arguing with that.

What troubles me is that, as in the war on terrorism, the enemy succeeds in their mission when the media creates hysteria and fear. This much should be clear. The media tend to feed into this pretty regularly, and we see it in some of the most sensational headlines from stories that should be told in fact, not fantasy.

So when I came across this article on BuzzFeed called "The Messy Media Ethics Behind the Sony Hacks," it suddenly hit me: the media may very well be playing perfectly into the enemy's hands. The "Guardians of Peace" (GOP), in their quest to ruin Sony Pictures Entertainment, have stolen an unfathomable amount of information. As Steve Ragan, who has written repeatedly about this and many other breaches, tweeted, that's 200 GB, or 287,000 documents. That's mind-blowing.

This cache of data has turned out to include yet-unreleased movies, marketing presentations, email exchanges between executives and attorneys, financial plans, employees' medical records and so much more. The GOP have made it clear that their aim is to "punish" Sony Pictures Entertainment, and while we don't really have any insight into the true motivations here, I think it's clear that releasing all this data is meant to do severe damage to the business.

What has followed in the days since the announcement of the hack is a never-ending stream of "news" articles that I struggle to understand. There were articles like this one providing commentary and analysis on internal marketing department presentations. There were articles analyzing the internal and privileged (as far as I know, but I'm not a lawyer) communications between corporate legal counsel and Sony Pictures executives. There were articles talking about the release of SPE employee medical records. The hit-parade goes on and on... and I'm not linking over to any more of the trash because it embarrasses me.

Clearly, clearly, the mainstream media (and hell even the not-so-mainstream) have long lost their ethics. Some would claim that it's the "freedom of the press" that allows them to re-publish and discuss sensitive, internal documents. Others argue that since it's already in the public domain (available on BitTorrent) then it's fair game. Note: This was discussed during the Snowden release - and it was clear that classified information released to the public domain does not suddenly lose its classified status. I'm fairly certain this easily applies to the not-national-security type of assets as well. To be honest, this argument makes me question the intellectual integrity of some of the people who make it.

Anyway, back to my point. If the GOP wanted to destroy Sony Pictures Entertainment, then hacking in and releasing secret information and intellectual property was only half the battle. The second half, unfortunately, is being picked up and executed by the media, bloggers, and talking heads putting out "analysis" of all this data. Publishing links to the hacked data, analyzing its contents, and hunting for further embarrassing and ugly things to publish: the media should be ashamed of itself.

The hack alone wasn't going to damage SPE's image to where it has fallen now - the media is clearly complicit in this, and it's a shame. I'm not an attorney, so I question whether publishing and discussing confidential communications between an attorney and executive is ethical. Forget that - is it even legal? Journalists and bloggers continue to hide behind the "freedom of the press," and some folks have even taken to blasting me for daring to question the absolute rights of the press. Except - the freedom of the press isn't absolute, as far as I know.

But whether or not it's legal, there are clearly ethical problems here. If you're in the media and you're poring over the confidential email communications stolen from Sony Pictures Entertainment systems - I emphasize stolen - and you're commenting on this, to what end? Arguing that the media is releasing this information because (a) it's already in the public domain and (b) it's "for the public good" is ludicrous.

Remember, while you're reveling in someone else's misery, that you too may be a coincidental victim one day. Then it'll be your turn to have your private information released and analyzed and attacked as part of the next breach. Your recourse? None... Glass houses, journalists. Glass houses.

Sony Pictures – Lessons From a Real Worst-Case Scenario

There is a lot of junk floating around on the Internet and in the media regarding the Sony Pictures breach. Who did it? What were the motives? These are all being violently discussed in the Twitter-sphere and elsewhere, and if you happen to read the articles and blogs being churned out by the media your head is probably spinning right now.
While I don't think we (the public) generally know enough to be able to talk about the breach with any certainty yet - and perhaps we never will - there is a critical point here that I think is being missed.

What is the lesson the public should take away from the breach, and subsequent consequences?

While nearly everyone has focused on the circus surrounding the breach itself - including the celebrity dirty laundry going public, unreleased movies being leaked to BitTorrent download sites, and the truckload of everything you never want to get out that's been dumped to the Internet - very little focus is being given to the thing (or things) that we should all be taking away from this breach.

By now everyone should agree breaches are inevitable, and continuing to pour money into the black hole that is prevention is ridiculous. Let me be clear, I'm not saying to spend nothing on prevention, I'm simply pointing out the continuing folly of pouring ever more money and resources into prevention which we know will fail. So this can't be the lesson.

We also all know that segregation of duties, data and processes should be a key point in every security program. We've been learning this lesson for almost 20 years now - and I can't help but feel that the push to ever-faster delivery of IT services has made segmentation and segregation a near impossibility in many large enterprises. I've watched CISOs try to leverage tools, network architectures, system re-designs and even cloud services - largely in vain, as the result is that data, processes and duties of all levels of risk end up in a big free-for-all. So, again, this isn't the lesson to learn.

Should the lesson be that we must not poke the bear? I mean, let's face it, if you look at this objectively outside the limited American viewpoint, Sony Pictures did antagonize North Korea quite a bit. Then again, recent information made public by the Federal Bureau of Investigation (FBI) has indicated that North Korea was in fact not the perpetrator of this breach. So maybe poking the bear isn't the problem - and anyway, this is a lesson we as humans should learn in kindergarten, not in the corporate world.

So if you're still reading, then like me you may be searching for a "so what?" moment. And to be honest, I am struggling to provide one. So maybe it's not one thing that we need to learn but a much bigger set of things taken together. Maybe it's a lesson in humility, communications, planning, execution, operational efficiency, and crisis response all rolled into a heaping pile pushed down the hill and lit on fire. Maybe the bigger lesson we need to learn is that it's not one thing that we need to get right - rather, all of them have to work well together, and be planned, practiced and tuned.

I seriously doubt anyone out there is planning and practicing for the kind of disaster Sony Pictures is facing right now. If every single piece of intellectual and secret property (including employee records, confidential communications, financials of all kinds, and more) you have were made public - where would you start to recover? Getting your IT systems back online is a good start, but that doesn't mean you can recover your business when your employees, partners, vendors, and customers are banging on your door demanding answers and action.

Maybe that's it then - maybe the lesson is that you can't always package up a lesson learned neatly with a bow based on someone else's catastrophic incident. I think it's clear that any of us can be set ablaze in this manner; if it's not clear, it should be. So the question I pose to you is this: what's your take-away from the Sony Pictures catastrophe?

As a side note, many people and articles have taken to calling this an "unprecedented" breach. I am inclined to agree, but not for the technical reasons that are being rattled off. It's not because the method of attack was novel, or that there was likely an insider, or even the quantity and quality of the assets that were stolen - or heck, even that everything is being made public in an embarrassment to the company. No, I think this is unprecedented because we're seeing company executives apologizing to political leaders, civil rights activists fanning race-war flames with some of the email content published, and, as one article put it, "Sony is a pariah in Hollywood" right now. Folks - that's not good. This is a meltdown of a brutal nature, the likes of which I don't believe we've seen before. This is a PR catastrophe.

As always, I'm interested in your thoughts... leave a comment, or hit me on Twitter.

In a Surprising Move, Congress Passes Four Cybersecurity Bills

In a flurry of activity on cybersecurity in the waning days of the 113th Congress, Congress unexpectedly approved, largely without debate and by voice vote, four cybersecurity bills that: (1) clarify the role of the Department of Homeland Security (“DHS”) in private-sector information sharing, (2) codify the National Institute of Standards and Technology’s (“NIST”) cybersecurity framework, (3) reform oversight of federal information systems, and (4) enhance the cybersecurity workforce. The President is expected to sign all four bills. The approved legislation is somewhat limited as it largely codifies agency activity already underway. With many observers expecting little legislative activity on cybersecurity before the end of the year, however, that Congress has passed and sent major cybersecurity legislation to the White House for the first time in 12 years may signal Congress’ intent to address systems protection issues more thoroughly in the next Congress.

On December 11, the House passed Senate legislation codifying DHS’s National Cybersecurity and Communications Integration Center (“NCCIC”) making it the central hub for public-private information sharing. That bill, the National Cybersecurity and Critical Infrastructure Protection Act of 2014 (“NCCIPA”), is the Senate version of similar legislation passed by the House this past summer. The NCCIPA now heading to the President is a pared-down version of the original House bill, leaving out a number of industry-desired provisions that would have eased cybersecurity information sharing with the NCCIC. Notably, industry has been calling for legal protections for companies engaged in sharing information with the government. Nevertheless, the version of the bill headed to the President lacks an extensive legal safe harbor for information-sharing. As well, this version of NCCIPA lacks language from the original House bill that explicitly gave SAFETY Act protections to cybersecurity products. Thus, while passage of NCCIPA is an important and largely unexpected step forward on cybersecurity policy, liability concerns will continue to hamper cybersecurity information sharing.

Later in the evening on December 11, the House and Senate passed the Cybersecurity Enhancement Act of 2014, which authorizes NIST to facilitate and support the development of voluntary, industry-led cyber standards and best practices for critical infrastructure. The bill essentially codifies the ongoing process begun earlier this year through which the NIST Cybersecurity Framework was developed. That process remains voluntary under the bill, with no new regulatory authority added to the Framework. The bill also authorizes the federal government to support research, raise public awareness of cyber risks, and improve the nation’s cybersecurity workforce.

Earlier in the week, on December 8, the Senate passed by voice vote and without debate the Federal Information Security Modernization Act of 2014, which overhauls the 12-year-old Federal Information Security Management Act (“FISMA”). This legislation replaces FISMA’s current requirement that agencies file annual checklists showing the steps they have taken to secure their IT systems, and puts the Department of Homeland Security (“DHS”) in charge of “compiling and analyzing data on agency information security” and helping agencies install tools “to continuously diagnose and mitigate against cyber threats and vulnerabilities, with or without reimbursement.” DHS has been increasingly performing this role already, and similar legislation passed the House of Representatives in April 2013. That bill, however, was subject to jurisdictional disagreements between the House Homeland Security and Oversight and Government Reform Committees. Surprisingly, Oversight and Government Reform Chairman Rep. Darrell Issa (R-CA) dropped objections to the Senate’s FISMA reform bill and the House passed it on Wednesday evening by voice vote. The House also passed the Senate’s Homeland Security Cybersecurity Workforce Assessment Act as a rider to the Border Patrol Agent Pay Reform Act.

This spate of cybersecurity legislation is more limited in scope than the measures that have been sought by the private sector. Indeed, rather than provide new cybersecurity tools, the bills approved by Congress largely make pre-existing actions official. Still, with the 113th Congress effectively ending this week, passage of any cybersecurity bills is very surprising. Legislative activity on cybersecurity this week indicates a seriousness by policymakers to confront issues vital to information systems protection. In its waning days, the Senate may be attempting to set its mark on future cybersecurity policy. For its part, the House’s sudden action on Senate cybersecurity bills may point to a willingness by House committees to overcome internal jurisdictional disagreements that have hampered similar legislation in the past. The significance here is the recognition by Congress that legislative success now builds momentum for systems-protection policies in the next Congress, such as information-sharing liability protection or data breach legislation. How the 114th Congress confronts those issues is important to businesses seeking to enter public-private partnerships and information-sharing agreements.

CJEU Adopts a Strict Approach to the Use of CCTV

On December 11, 2014, in response to a request for a preliminary ruling from the Supreme Administrative Court of the Czech Republic, the Court of Justice of the European Union (“CJEU”) ruled that the use of CCTV in the EU should be strictly limited, and that the exemption for “personal or household activity” does not permit the use of a home CCTV camera that also films any public space.

The facts relate to a Czech national named František Ryneš. In 2007, following a series of attacks against his property and damage to his windows, Ryneš installed a CCTV system that recorded the entrance to his home, the public footpath and the entrance to the house opposite. The system recorded video (but not audio) on a continuous loop. Only Ryneš had direct access to the system and the footage it recorded. The Czech court accepted that his only reason for installing the system was “to protect the property, health and life of his family and himself.”

The system recorded footage of a subsequent attack on his property in which a window was broken by a shot from a catapult. Two suspects were identifiable from the footage, and the footage was provided to the police. One of the suspects argued that the CCTV system was unlawful. The Czech Data Protection Authority agreed, on the basis that:

  • Ryneš had filmed people on the street or entering the house opposite without their consent;
  • He had not provided sufficient notice of the CCTV system to those persons; and
  • As a data controller, he had failed to register with the Czech Data Protection Authority.

Ryneš appealed the decision. The CJEU was asked by the Czech court to clarify whether the use of CCTV for the purposes of protecting a private home fell within the “household” exemption in Article 3(2) of EU Data Protection Directive 95/46/EC. Pointing to its earlier decision in Costeja, the CJEU noted that EU data protection law is designed to ensure a high level of protection of the fundamental right of individuals to privacy. It concluded that Ryneš’ system did not fall within the “household” exemption because it filmed a public space, and the exemption is limited to “purely” personal or household activities.

The decision has significant implications for the use of CCTV in the EU. In particular, it is likely to require data protection authorities to revise their guidance on the use of CCTV. It also is possible that the use of CCTV evidence captured in public spaces will increasingly be challenged by defendants in criminal cases, on the basis that the evidence may not have been lawfully obtained.

HHS Reaches Settlement with Health Care Company Over Malware Breach

The Department of Health and Human Services (“HHS”) recently announced a resolution agreement and $150,000 settlement with Anchorage Community Mental Health Services, Inc. (“ACMHS”) in connection with a data breach caused by malware. ACMHS, which provides nonprofit behavioral health care services in Alaska, experienced a breach in March 2012 that affected the electronic protected health information (“ePHI”) of 2,743 individuals. After ACMHS reported the breach to the HHS Office for Civil Rights (“OCR”), OCR investigated ACMHS and found several HIPAA Security Rule violations, including that ACMHS had failed to:

  • conduct a risk assessment;
  • develop and implement policies and procedures to sufficiently reduce risks to ePHI to a reasonable and appropriate level; and
  • ensure that firewalls were in place and that its information technology resources were regularly updated with available patches.

In the resolution agreement, ACMHS agreed to pay a $150,000 settlement to HHS and to enter into a Corrective Action Plan that requires ACMHS to:

  • submit its HIPAA Security Rule policies and procedures to OCR for review and approval;
  • officially adopt and distribute the HIPAA Security Rule policies and procedures after OCR has approved them;
  • obtain a signed compliance certification from members of its workforce that they will abide by the HIPAA Security Rule policies and procedures;
  • provide security awareness training for its workforce;
  • conduct an annual risk assessment that evaluates the risks to ePHI;
  • report any events of noncompliance with its HIPAA Security Rule policies and procedures; and
  • submit annual compliance reports to HHS for a period of two years.

In the Bulletin accompanying the resolution agreement, OCR Director Jocelyn Samuels stated that HIPAA compliance “requires a common sense approach” and should encompass “reviewing systems for unpatched vulnerabilities and unsupported software that can leave patient information susceptible to malware and other risks.”

View the resolution agreement.

Privacy Authorities Call on App Marketplaces to Require Privacy Policies

On December 9, 2014, a coalition of 23 global privacy authorities sent a letter to the operators of mobile application (“app”) marketplaces urging them to require privacy policies for all apps that collect personal information. Although the letter was addressed to seven specific app marketplaces, the letter notes that it is intended to apply to all companies that operate app marketplaces.

According to the letter, the 2014 Global Privacy Enforcement Network (“GPEN”) enforcement sweep, which was conducted in coordination with 26 privacy enforcement authorities, examined the types of permissions sought by more than 1,200 apps and the extent to which consumers were notified about each app’s privacy practices. The sweep found that numerous apps did not have a privacy policy, and that the practice of linking to a privacy policy was applied inconsistently across the apps. The letter indicates that, although most marketplaces allow app developers to link to a privacy policy, this practice does not appear to be mandatory.

According to the letter, mobile operating system developers and other app marketplace operators play an important role in consumers’ interactions with apps. The marketplace acts as a landing spot where individuals can search for apps, read reviews, and access technical information about a particular app before downloading it, which enables individuals to make informed decisions about products in that marketplace.

The letter indicates that, “[g]iven the wide-range and potential sensitivity of the data stored in mobile devices, we firmly believe that privacy practice information (for example, privacy policy links) should be required (and not optional) for apps that collect data in and through mobile devices within an app marketplace store.” The letter adds that the relevant privacy enforcement authorities “expect a marketplace operator would put in practice, if it has not already, this advice, and implement the necessary protections, to ensure the privacy practice transparency of apps offered in their stores.”

We previously reported on the results of the GPEN enforcement sweep carried out in May 2014 to assess mobile app compliance with data protection laws.

View the letter.

New York Banking Regulator Announces New Cybersecurity Assessment Process

On December 10, 2014, the New York State Department of Financial Services (the “Department”) announced that it issued an industry guidance letter to all Department-regulated banking institutions that formally introduces the Department’s new cybersecurity preparedness assessment process. The letter announces the Department’s plans to expand its information technology examination procedures to increase focus on cybersecurity, which will become a regular, ongoing part of the Department’s bank examination process.

The guidance letter provides a list of topics that will be addressed in the Department’s cybersecurity examination process. The topics include:

  • Corporate governance issues related to cybersecurity;
  • Management of cybersecurity issues;
  • Resources devoted to information security and overall risk management;
  • The risks posed by shared infrastructure;
  • Protections against intrusion;
  • Information security testing and monitoring;
  • Incident detection and response processes;
  • Training of information security personnel;
  • Management of third party service providers;
  • Integration of information security into business continuity and disaster recovery policies and procedures; and
  • Cybersecurity insurance coverage and other third party protections.

The letter encourages all Department-regulated banks to view cybersecurity as an integral aspect of their overall risk management strategy. According to the Superintendent of Financial Services, Benjamin Lawsky, “[i]t is [the Department’s] hope that integrating a targeted cyber security assessment directly into [its] examination process will help encourage a laser-like focus on this issue by both banks and regulators…It is imperative that we move quickly to work together to shore up our lines of defense against these serious risks.”

The Department plans to schedule the cybersecurity examinations based on a comprehensive risk assessment of each New York State-chartered or licensed banking institution. In connection with this assessment, the Department will be sending a series of questions to banks requesting information on their current cybersecurity practices and management.

Article 29 Working Party Issues Joint Statement on European Values and Actions for an Ethical European Data Protection Framework

On December 8, 2014, the Article 29 Working Party (the “Working Party”) and the French Data Protection Authority (the “CNIL”) organized the European Data Governance Forum, an international conference centered around the theme of privacy, innovation and surveillance in Europe. The conference concluded with the presentation of a Joint Statement adopted by the Working Party during its plenary meeting on November 25, 2014.

In developing the Joint Statement, the independent EU data protection authorities (“DPAs”) assembled in the Working Party deliver key messages on how to create an ethical European framework that “enables private companies and other relevant bodies to innovate and offer goods and services that meet consumer demand or public needs, whilst allowing national intelligence services to perform their missions within the applicable law but avoiding a surveillance society.” The Joint Statement is intended to remind all relevant stakeholders (private and public) of their joint responsibility in designing and applying such a framework to the collection and use of personal data. It defines the essential principles to be included in this framework, as well as key actions that all relevant stakeholders must undertake when ensuring compliance with EU data protection law. The principles and actions include the following:

Data Protection as a Fundamental Right
The Joint Statement recalls that personal data includes metadata and must not be treated solely as an economic asset.

Need to Balance Data Protection Rights with Other Fundamental Rights and the Need for Security
The Joint Statement acknowledges that data protection must be balanced with other fundamental rights (such as non-discrimination and freedom of expression) but also with the need to ensure public security.

Need to Strengthen Public Awareness and Individual Empowerment to Help Individuals Limit Their Exposure to Excessive Surveillance
According to the Joint Statement, key measures include privacy education and opening collective judicial actions to individuals in order to facilitate the reporting of widespread EU data protection violations.

No Secret, Massive and Indiscriminate Surveillance
The Joint Statement recalls that such surveillance, whether by public or private actors in the EU or elsewhere, is neither lawful nor ethically acceptable. According to the Working Party, none of the legal data transfer mechanisms (whether Safe Harbor, Binding Corporate Rules or the European Commission’s Standard Contractual Clauses) provide a legal basis for transferring personal data to a non-EU public authority for the purpose of massive and indiscriminate surveillance.

Limits on the Retention, Access and Use of Personal Data by National Competent Authorities
The Joint Statement further recalls that unrestricted bulk retention of personal data for security purposes is not acceptable.

No Unrestricted Direct Access of Foreign Law Enforcement Authorities to the Data of Individuals Processed in the EU
The Joint Statement suggests that such access should be possible only under limited conditions, e.g., with the prior authorization of a public authority in the EU or in the context of a mutual legal assistance treaty. The Joint Statement makes clear that foreign requests must not be served directly to companies under EU jurisdiction.

Storage of Data in the EU as an Effective Way to Ensure Control by an Independent Authority
The Joint Statement emphasizes that public or private parties should store data in such a way that an independent authority can effectively control their compliance with the EU data protection requirements when collecting massive amounts of data that provides very precise information on an individual’s private life. According to the Joint Statement, the storage of the relevant data on EU territory is an effective way to facilitate the exercise of such control.

Adoption of the Proposed EU Data Protection Regulation in 2015
The Joint Statement advises that the Proposed EU General Data Protection Regulation should be adopted in 2015.

Mandatory Nature of EU Data Protection Rules under Public and Private International Law
The Joint Statement emphasizes that foreign laws or international agreements cannot override EU data protection rules nor can organizations derogate from them by contract.

In terms of next steps, the Working Party welcomes comments on its Joint Statement from all interested stakeholders, public and private. The Working Party announced that it will take these comments into account in its activities over the course of 2015.

Article 29 Working Party Publishes Working Document on Surveillance

On December 5, 2014, the Article 29 Working Party (the “Working Party”) published a Working Document on surveillance, electronic communications and national security. The Working Party (which is comprised of the national data protection authorities (“DPAs”) of each of the 28 EU Member States) regularly publishes guidance on the application and interpretation of EU data protection law. Although its views are not legally binding, they are strongly indicative of the way in which EU data protection law is likely to be enforced.

The Working Document is specifically intended to address data protection issues arising out of the Snowden revelations that began in 2013 and the bulk data collection activities of various intelligence and security agencies. The Working Document is, in part, a follow up to the Working Party’s previous Opinion on surveillance, which was published earlier this year.

The Working Document examines the boundaries between the concepts of privacy and national security, and emphasizes the importance of privacy as a fundamental right in the EU. The Working Document concludes that the activities of intelligence and security agencies should not always fall within the scope of the national security exemption under EU data protection law, and that where the meaning of the term “national security” is unclear, the exemption should be construed narrowly.

The Working Party points out that, under a literal interpretation of the law, the national security of a non-EU country cannot be invoked as an exemption to the application of EU data protection law. However, the Working Party also acknowledges that where the national security interests of a non-EU country align with the national security interests of an EU Member State (e.g., the shared interests of EU Member States and the U.S. in combatting terrorism) the exemption may apply.

Notably, in the view of the Working Party, none of the existing EU cross-border data transfer mechanisms (i.e., Model Clauses, Binding Corporate Rules or Safe Harbor) can be used to justify the transfer of personal data out of the European Economic Area for mass surveillance purposes. Instead, the Working Party considers activities involving “massive, indiscriminate, secret and structural surveillance of personal data” as prima facie non-compliant with the principles of EU data protection law.

In addition to publishing the Working Document, on December 8, 2014, the Working Party published a Joint Statement outlining its views on surveillance of electronic communications and shared European data protection values in the context of an increasingly digital world. The Joint Statement shares many of the themes noted above and provides some insight into the views of EU DPAs on the future of data protection enforcement in this area.

A Breakdown and Analysis of the December, 2014 Sony Hack

Another incredibly far-reaching, in-depth compromise of Sony Pictures has happened, this time by a group known as the Guardians of Peace (GOP). The new compromise has all of the excitement of the old events and more, as blaming North Korea for the attack - in retaliation for a movie being released by Sony Pictures - is all the rage. Risk Based Security has been keeping an updated timeline of the breach, analyzing the leaked documents, and providing links to additional information.

If you are looking for a comprehensive resource on the Sony Hack then please visit the following page:

NIST Releases Update on Implementation of Cybersecurity Framework

On December 5, 2014, the National Institute of Standards and Technology (“NIST”) released an update on the implementation of the Framework for Improving Critical Infrastructure Cybersecurity (“Framework”). NIST issued the Framework earlier this year in February 2014 at the direction of President Obama’s February 2013 Critical Infrastructure Executive Order. The update is based on feedback NIST received in October at the 6th Cybersecurity Framework Workshop as well as from responses to an August Request for Information.

The December 5 update reviews a number of issues related to Framework implementation. Most notably, the update reports that there is general awareness of the Framework among critical infrastructure sectors, though that awareness could be improved among small and medium-sized businesses. Stakeholders also indicated that the Framework, particularly the common practices outlined in the Framework’s Core, is providing a means to communicate expectations within and among companies and other entities in a sector. NIST found that although some stakeholders are using the Framework as a benchmark for operations, others are explicitly avoiding using it that way. In that regard, NIST reports that among the Framework’s three components – the Core, Profile and Implementation Tiers – the Implementation Tiers “appear to be the least-used part of the Framework.” In other words, although the Framework is being adopted as a common means to examine cybersecurity systems, stakeholders are less likely to use the Framework to judge implementation of those systems. Many stakeholders requested guidance on “real world” use of the Implementation Tiers. Others, though, continue to express reservations that the Framework could be used as a regulatory device.

NIST states that it is still too early to update the Framework, as more time is needed to understand the current version. NIST indicates, however, that it will focus on providing guidance in the coming months on using the Implementation Tiers. In addition, NIST noted calls from stakeholders for regulatory agencies to promote the use of the Framework “by clear statements about the voluntary nature of the document.” While NIST currently does not have any formal opportunities to comment on the Framework, it continues to accept feedback.

Centre for Information Policy Leadership Publishes White Paper on “The Role of Risk in Data Protection”

The Centre for Information Policy Leadership at Hunton & Williams (the “Centre”) has published a second white paper in its multi-year Privacy Risk Framework Project entitled The Role of Risk in Data Protection. This paper follows the earlier white paper from June 2014 entitled A Risk-based Approach to Privacy: Improving Effectiveness in Practice.

The Centre’s Privacy Risk Framework Project is a continuation of the Centre’s earlier work on organizational accountability, and focuses specifically on risk assessments as an essential element of accountability. The project aims to develop a coherent methodology for identifying and evaluating privacy risks and their impact on individuals, as well as the benefits associated with an organization’s proposed data processing. The methodology is also intended to help organizations devise appropriate mitigations and controls. By linking privacy controls and mitigations more specifically to the actual risk of harm and to the benefits of processing, the risk-based approach will enable organizations to apply legal principles and obligations more effectively in practice, improving both their legal compliance and their general accountability beyond compliance.

The new white paper on The Role of Risk in Data Protection discusses how privacy risk assessments and risk management techniques are already incorporated into many existing legal and regulatory regimes, interpreted by privacy regulators, and put into practice by responsible organizations. It also stresses that the risk-based approach neither changes existing legal requirements nor negates individuals’ data protection rights. Instead, it facilitates effective compliance with them. Risk assessment is an essential element of organizational accountability and helps deliver accountability on the ground.

The paper also discusses in detail some of the key considerations in risk assessment and management, including:

  • its proper role in the context of privacy protection, both where there are existing data privacy laws and in the absence of such laws;
  • the interaction between core elements of risk assessments such as harms, benefits and individual rights and interests;
  • the importance of determining both the likelihood and severity of harm associated with data processing;
  • the nature of the harms or impacts that must be considered;
  • the need for making risk assessment tools efficient, scalable and flexible; and
  • applying risk assessments to the entire lifecycle of data processing, from collection to disposal.
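The paper stops short of prescribing a concrete scoring methodology (that work is still under way), but the likelihood-and-severity weighing it describes can be illustrated with a toy risk matrix. This is a sketch only; the scales, labels and thresholds below are invented for illustration and are not taken from the Centre's white paper:

```python
# Toy privacy risk matrix: risk = likelihood x severity.
# All scales and thresholds are illustrative, not the Centre's methodology.
LIKELIHOOD = {"remote": 1, "possible": 2, "likely": 3}
SEVERITY = {"minimal": 1, "significant": 2, "severe": 3}

def risk_score(likelihood: str, severity: str) -> int:
    """Combine likelihood and severity of harm into a single score."""
    return LIKELIHOOD[likelihood] * SEVERITY[severity]

def risk_band(score: int) -> str:
    """Map a score to an indicative level of mitigation effort."""
    if score >= 6:
        return "high: strong mitigations and controls required"
    if score >= 3:
        return "medium: targeted mitigations"
    return "low: standard controls"

print(risk_band(risk_score("likely", "severe")))
```

A real assessment would, as the paper notes, also weigh the benefits of the processing and the rights of the individuals concerned, not just the two harm dimensions modeled here.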

Finally, the paper also points out a number of issues that will have to be explored in greater detail in the future, such as:

  • a need to create consensus and a generally accepted “taxonomy” of relevant harms;
  • specific risk management models and technical standards;
  • integrating and aligning privacy risk assessment models with those used in other areas;
  • further clarification of concepts and terminology;
  • a better understanding of proportionality between risks, benefits and appropriate controls; and
  • risk assessments as an “interoperability tool” by enabling compliance with divergent national and sectoral legal requirements.

Article 29 Working Party Issues Working Document Proposing Cooperation Procedure for Issuing Common Opinions on Contractual Clauses

On November 26, 2014, the Article 29 Working Party (the “Working Party”) released a Working Document providing a cooperation procedure for issuing common opinions on whether “contractual clauses” comply with the European Commission’s Model Clauses (the “Working Document”).

The Working Document creates a cooperation procedure that enables companies that use contractual clauses, or amended European Commission-approved Model Clauses (the “Company’s Clauses”), in different EU Member States to obtain the opinion of the competent data protection authorities (“DPAs”) regarding the compliance of the Company’s Clauses with local law. Article 26(2) of EU Data Protection Directive 95/46/EC (the “Directive”) allows DPAs to authorize contractual clauses as a means of offering adequate safeguards for the international transfer of personal data from the EU to non-EU jurisdictions. The European Commission previously issued decisions (and templates) authorizing three sets of contractual clauses (known as the “Model Clauses”). Two of these decisions regulate transfers from data controllers to data controllers, and the third regulates transfers from data controllers to data processors. Controllers also are permitted to prepare their own ad hoc clauses, or to amend the Model Clauses, provided the resulting clauses are approved by the relevant DPAs.

In addition to the approval of ad hoc clauses, in many EU Member States, authorizations are required for Model Clauses, regardless of whether they have been amended. Jurisdictions that require such authorization include Austria, Bulgaria, Cyprus, Denmark, Estonia, France, Lithuania, Luxembourg, Malta, Poland, Romania, Slovenia and Spain, although an amendment to the law in Poland is scheduled to take effect on January 1, 2015.

In an attempt to streamline this approval process, the Working Document proposes the appointment of a Lead DPA to opine on whether the proposed contractual clauses conform to the Model Clauses. If the opinion of the Lead DPA is favorable, other DPAs will take the Lead DPA’s opinion into account when granting authorizations as required by their respective local laws. The process of mutual recognition will be similar to the system by which DPAs approve Binding Corporate Rules.

Selection of a Lead DPA

The Lead DPA for the cooperation procedure must be in an EU Member State from which the transfers will take place and the company must justify why it selected a particular DPA as the Lead DPA. The Working Document indicates that the following factors may be relevant:

  • the location from which the Company’s Clauses are decided and elaborated;
  • the place where most decisions in terms of the purposes and the means of the processing are taken;
  • the best location (in terms of management functions, administrative burden, etc.) for the handling of the application and the enforcement of the Company’s Clauses;
  • the Member States within the EU from which most transfers outside the EEA will take place; and
  • the location of the group’s European headquarters or the location of the company within the group with delegated data protection responsibilities.

Procedure for Obtaining DPA Opinion and Mutual Recognition

The procedure for obtaining an opinion on the compliance of the Company’s Clauses is as follows:

  • The Company selects a Lead DPA and reviewers, and submits its application and the Company’s Clauses. The company must list the EEA countries from which data will be transferred.
  • Within two weeks of receipt of the application, the Lead DPA will indicate whether it is willing to act as Lead DPA. If so, it will simultaneously forward the information to all competent DPAs (i.e., the DPAs in all countries from which transfers will take place) and identify the proposed reviewer DPA(s).
  • The proposed reviewer DPA(s) will indicate whether they agree to act as reviewers. The Lead DPA will be responsible for analyzing whether the Company’s Clauses conform to the Model Clauses.
  • Once the Lead DPA considers that the proposed contractual clauses are in conformity with the Model Clauses, it will provide a draft letter, the Company’s Clauses and its analysis to the co-reviewer(s), who should review the draft letter within a month.
  • The analysis and the Company’s Clauses will be communicated to the other competent DPAs.
  • DPAs that are part of the mutual recognition process will acknowledge receipt of the documentation and, where the draft letter indicates that the Company’s Clauses are in conformity with the Model Clauses, will accept this opinion as a sufficient basis for providing their own national permit or authorization for the proposed clauses (as required by local law).
  • DPAs that are not part of the mutual recognition process will have one month to comment on the draft letter. If there is no answer from those DPAs within the given timeframe, they will be deemed to have agreed to the draft letter.

Asia Pacific Privacy Authorities Hold 42nd Forum in Vancouver to Discuss Hot Global Privacy Topics

On December 2-4, 2014, Asia Pacific Privacy Authority (“APPA”) members and invited observers and guest speakers from government, the private sector, academia and civil society met in Vancouver, Canada, to discuss privacy laws and policy issues. At the end of the open session (or “broader session”) on day two, APPA issued its customary communiqué (“Communiqué”) containing the highlights of the discussions during both the closed session on day one and the open session on day two. A side event on Big Data will be held on the morning of day three (December 4).

According to the Communiqué, during the closed session on the first day, APPA member authorities and invited observers discussed a wide range of issues, including recent privacy developments in their respective jurisdictions, the theme for next year’s APPA Privacy Awareness Week, and the “right to be forgotten” and its applicability in member economies, which the APPA members will continue to examine. APPA members also welcomed Singapore’s Personal Data Protection Commission as APPA’s 17th member.

During the broader session on day two, APPA members were joined by invited speakers and observers to discuss issues such as the relationship between regulators and civil society, the risk-based approach to privacy compliance and accountability, wearable technology, the APEC Cross-Border Privacy Rules system, the right to be forgotten and organizational accountability.

The discussion on the risk-based approach to privacy compliance and accountability was led by the Centre for Information Policy Leadership at Hunton & Williams and three of its member companies. The Centre introduced the APPA group to its ongoing Privacy Risk Framework Project and discussed the role of risk assessments in enabling organizations to devise more effective privacy protections. The speakers clarified that a risk-based approach to privacy does not replace or change legal obligations, but facilitates better practical implementation of those obligations. It also improves an organization’s ability to create and maintain accountability beyond compliance requirements.

APPA is the principal forum for privacy authorities in the Asia-Pacific Region. APPA members meet twice a year to discuss recent developments, issues of common interest and cooperation. The Vancouver meeting was hosted by the Office of the Information and Privacy Commissioner of British Columbia and by the Office of the Privacy Commissioner of Canada.

Article 29 Working Party Issues Opinion on the Implementation of the CJEU Ruling in Costeja

On November 26, 2014, the Article 29 Working Party (the “Working Party”) published an Opinion (the “Opinion”) on the Guidelines on the Implementation of the Court of Justice of the European Union Judgment on “Google Spain and Google Inc. v. Agencia Española de Protección de Datos (AEPD) and Mario Costeja González” C-131/12 (the “Judgment” or “Costeja”). The Opinion constitutes guidance from the Working Party on the implementation of Costeja for search engine operators.

The Opinion consists of two parts: (1) the Working Party’s interpretation of the findings of Costeja with respect to search engines, and (2) a list of common criteria established by European data protection authorities (“DPAs”) for handling complaints concerning a search engine’s refusal to de-list certain links to information.

Part I: Interpretation of the Court of Justice of the European Union (“CJEU”) Judgment

Part I of the Opinion sets out the Working Party’s interpretation, much of which reiterates the Judgment.

General Scope of the Ruling

  • The right only affects the results obtained from searches made on the basis of a person’s name. The term “name,” however, should be interpreted to include different versions of a name or different spellings.
  • As a general rule, the rights of data subjects will prevail over the economic interests of search engine operators and that of Internet users to have access to personal information through the search engine. A balance has to be struck, however, and the outcome will depend on the nature and sensitivity of the processed data and on the interest of the public in having access to that particular information. The interest of the public will be significantly greater if the data subject plays a role in public life.
  • The impact of de-listing on individuals’ rights to freedom of expression and access to information will prove to be limited because (1) Costeja applies only to searches made on the basis of the data subject’s name (and accordingly the relevant information could be found with the use of other appropriate search terms), and (2) the de-listed information will remain available through direct access at the original source.
  • The ruling does not apply to “internal” search engines that have a restricted field of action, for example those on newspaper websites.

Data Subject Rights

  • Data subjects are not obligated to contact the original website in order to exercise their rights toward search engines.
  • In order for search engines to make the required assessment, data subjects must identify the specific URLs at issue, explain why they request de-listing, and indicate whether they fulfill a role in public life.
  • Most national data protection laws provide for flexibility in how data subjects may exercise their rights. While the development of specific notification methods by search engines, such as online procedures and forms, may have advantages, they must not be an exclusive way for data subjects to exercise their rights. Search engine operators must provide the opportunity for data subjects to submit requests in any way permitted by the national law of the data subject’s jurisdiction.
  • Where a removal request is refused, the search engine operator should provide a sufficient explanation to the data subject as to the reasons for the refusal.
  • The effective application of the Judgment requires that affected data subjects should be able to exercise their rights with the national subsidiaries of search engine operators in their EU Member States of residence.

Territorial Effect of De-listing

  • Limiting de-listing to EU domains on the grounds that users tend to access search engines via their national domains will not be sufficient to achieve complete compliance. Accordingly, any de-listing should be effective across all relevant domains, including .com.
  • In practice, DPAs will focus on claims where there is a clear link between the data subject and the EU.

Communication with Affected Parties Including Webmasters

  • The practice of informing search engine users that some results to searches based on a person’s name have been de-listed could undermine the Judgment. This practice is acceptable only if the information is presented in such a manner that a user cannot determine if a specific individual has asked for the de-listing of results concerning him or her.
  • No provision of the Directive requires search engines to communicate to the original webmasters that results relating to their content have been de-listed, and there will not be a legal basis for making such notification routinely under the Directive.
  • However, search engines will often have a legitimate interest in contacting original publishers prior to making any de-listing decision, in particular where this is necessary to get a fuller understanding of the circumstances.

Part II: List of Common Criteria for Handling Complaints by DPAs

Part II of the Opinion sets out a list of common criteria (and associated commentary) to be used by DPAs in determining whether a search engine provider’s refusal to de-list a search result complies with data protection laws. The Opinion emphasizes that the list is flexible and that each of the criteria needs to be weighed in a balancing exercise; every complaint must be assessed on a case-by-case basis.

The criteria are:

  • Does the search result relate to a natural person?
  • Does the search result appear in a search made on the basis of the data subject’s name?
  • Does the data subject play a role in public life? Is the data subject a public figure?
  • Is the data subject a minor?
  • Is the data accurate?
  • Is the data relevant and not excessive?
    • Does the data relate to the working life of the data subject?
    • Does the search result link to information which allegedly constitutes hate speech/slander/libel or similar offences in the area of expression against the complainant?
    • Is it clear that the data reflects an individual’s personal opinion, or does it appear to be verified fact?
  • Is the information sensitive within the meaning of Article 8 of the Directive?
  • Is the data up-to-date? Is the data being made available for longer than is necessary for the purpose of the processing?
  • Is the data processing causing prejudice to the data subject? Does the data have a disproportionately negative impact on the data subject?
  • Does the search result link to information that puts the data subject at risk?
  • In what context was the information published?
    • Was the content voluntarily made public by the data subject?
    • Was the content intended to be made public? Could the data subject have reasonably known that the content would be made public?
  • Was the original content published in the context of journalistic purposes?
  • Does the publisher of the data have a legal right or obligation to make the personal data publicly available?
  • Does the data relate to a criminal offense?

Although the criteria are aimed at DPAs, they will serve as a useful starting point for search engine providers in determining their own criteria and processes for assessing de-listing requests.

Centre Discusses the Risk-Based Approach to Privacy and APEC-EU Interoperability at IAPP Brussels

At the International Association of Privacy Professionals’ (“IAPP’s”) recent Europe Data Protection Congress in Brussels, the Centre for Information Policy Leadership at Hunton & Williams (the “Centre”) led two panels on the risk-based approach to privacy as a tool for implementing existing privacy principles more effectively and on codes of conduct as a means for creating interoperability between different privacy regimes.


Bojana Bellamy, the Centre’s President, led a panel entitled Privacy Risk Framework and Risk-based Approach: Delivering Effective Data Protection in Practice. Together with Mikko Niva of Nokia Corporation and JoAnn Stonier of MasterCard, the panelists discussed the emergence of a new way of implementing effective privacy protections and calibrating privacy programs based on understanding the actual risks to individuals and benefits that are associated with processing personal data.

The “risk-based approach” to privacy, which currently is the focus of a Centre project to develop a widely accepted and coherent methodology for understanding risk, is already reflected in various forms in certain existing laws (e.g., the EU Data Protection Directive, the FTC Act), international privacy principles and guidelines (e.g., APEC and OECD), and privacy impact assessment guidance. Indeed, according to the panelists, risk assessments are not only required by various laws but are also an integral component of organizational accountability regardless of any legal requirements. Thus, the panelists discussed how their organizations currently employ risk assessments to implement such accountability and to ensure both compliance with the law and effective privacy protections. They also discussed how the risk-based approach focuses not only on the risk to organizations, but also on the risk to individuals. Finally, the panelists gave an overview of how the Council of the European Union, which is working on the proposed EU General Data Protection Regulation, has incorporated the risk-based approach as a general obligation and in specific provisions.

To avoid any misconception, the panelists stressed that the risk-based approach does not change or replace any applicable legal requirements and does not alter the rights of individuals under data protection laws. It simply is an additional tool that improves privacy compliance programs by linking privacy controls and mitigations to the likelihood and severity of the harms associated with processing personal data.


Markus Heyder, the Centre’s Vice President and Senior Policy Counselor, moderated a panel on EU BCRs and APEC CBPRs: Cornerstones of Future Interoperability. He and his co-panelists Florence Raynal of the French Data Protection Authority (“CNIL”), Christina Peters of IBM Corporation, Hilary Wandell of Merck & Co. and Daniel Pradelles of Hewlett-Packard, introduced the basics of the APEC Cross-Border Privacy Rules (“CBPRs”) to a mainly European audience. They also discussed the work that is currently being done by a joint working group of the Article 29 Working Party and the APEC Data Privacy Subgroup on the development of tools that help companies become certified and approved under both the EU Binding Corporate Rules (“BCRs”) and the CBPRs.

Florence Raynal explained the joint working group’s “Referential” from last March, which mapped the respective substantive requirements of the two systems to each other, identifying substantial overlap as well as some differences. She also explained the ongoing follow-up work to the Referential, whereby the working group is conducting case studies with the help of several companies that have or are seeking dual certification/approval under the CBPR and BCRs. The purpose of this exercise is to test the usefulness of the Referential and to consider what additional practical tools might be developed to enable companies to leverage compliance with one system into more efficient approval under the other.

The three company representatives told the audience how certification under one system, such as the CBPRs or BCRs, has helped facilitate effective internal accountability and compliance programs and positioned them well for achieving compliance and approval under the other system. The case studies are ongoing and will be formally discussed by the joint working group at the upcoming APEC Data Privacy Subgroup meetings in the Philippines in late January 2015.

Is Bigger Budget an Adequate Measure of Security Efficacy?

Bigger budgets - the envy of security professionals and the scourge of CISOs the world over. While we'd all like bigger budgets to make security better within our organizations, getting more money to spend isn't necessarily a harbinger of goodness to come.

Earlier, a fantastic conversation broke out on Twitter (where else?), started by a tweet from Tony Vargas that Adrian Sanabria retweeted.

The conversation got a little snarky about how throwing money at a problem clearly doesn't indicate that it'll get any more attention or be any closer to being solved. I then made a comment about the American budget and how spending more isn't really helping there - OK that's a stretch but the parallels are clear, I think.

Stephen Coplan made an interesting point which I've seen made many, many times - but I believe it to be false.
*Point of clarification: Stephen pointed out that he's not implying more money equals more efficacy, and I don't intend to represent his comments as such.

I personally do not believe a bigger budget means anything by itself, so to equate a higher budget with more relevance - I believe that to be false. I have personally witnessed first-hand how organizations take budget increases, spend wildly on widgets, and then fail to operationalize them. Security isn't about spending more; it never has been. In fact, a rapid increase in spending generally means that something went publicly wrong and the budget-holders are trying to make a public display of their sensitivity to fix the issues. Unfortunately, all too often these are simply that - public displays with little follow-through.

I believe that rather than focusing on how much more money an organization spends as a measure of how seriously it addresses security issues, we should be focusing on resources. You see, "resources" includes everything necessary: the critical people aspect as well as the widgets and gadgets that come in 1U rack-mountable formats. Better security comes from better training of existing resources, more executive backing, better communications, and more operational support. Better security comes from a shift in culture: a willingness by security professionals to reach out to the business side and align better with its goals and needs, and a concerted, serious effort by the business folks to understand that security issues and breaches aren't just website defacements anymore.

Security (or rather the criminal side of the game) is big business, with highly industrialized and specialized trades and vertical markets. Addressing security as purely a technology problem will lead to more breaches and more lost revenue, productivity, shareholder value and trade secrets, to name a few of the obvious. Security isn't a "their problem" anymore; in fact, it never has been.

If you're at all paying attention to the absolute worst-case scenario that Sony Pictures is living through right now (Steve Ragan at CSO is churning out an excellent series on the matter, I highly recommend you give it a read) you are becoming painfully aware that we're past business disruption, web site defacements and DDoS. We're into business destruction of the kind that has the potential to cost a company hundreds of millions of dollars not just today, but for years to come.

What will it take for companies to take security seriously, and how will we measure that jump? I don't think the upward delta in budget size is the only indicator here. I believe we need to look at the overall resource allocation to understand whether security is being addressed as a cultural issue in the company, or whether we're just given more capital to buy shiny widgets with.

In the end, Casey John Ellis had the tweet that made the point eloquently. I think he said it best on whether a CISO's ability to "buy more stuff" makes a positive program-level impact on the organization...

...and this, my friends, about sums up my feelings on the matter.

Article 29 Working Party Issues Opinion on Device Fingerprinting

On November 25, 2014, the Article 29 Working Party (the “Working Party”) adopted Opinion 9/2014 (the “Opinion”) on device fingerprinting. The Opinion addresses the applicability of the consent requirement in Article 5.3 of the e-Privacy Directive 2002/58/EC (as amended by Directive 2009/136/EC) to device fingerprinting. As more and more website providers turn to device fingerprinting instead of cookies for analytics or tracking purposes, the Working Party clarifies how the rules on user consent to cookies apply to device fingerprinting. The Opinion thus expands on Opinion 04/2012 on the Cookie Consent Exemption.

The Working Party concludes that Article 5.3 of the e-Privacy Directive applies to device fingerprinting, and thus indicates that third parties may process device fingerprints and gain access to or store information on the user’s terminal device only with the valid consent of the user (unless an exemption applies).

According to Article 5.3 of the e-Privacy Directive, EU Member States must ensure that the storing of information, or the gaining of access to information already stored, in the terminal equipment of a subscriber or user is allowed only if the subscriber or user has provided consent, having been given clear and comprehensive information in accordance with EU Data Protection Directive 95/46/EC. The Opinion defines “fingerprint” broadly as a set of information that can be used to single out, link or infer a user, user agent or device over time. According to the Working Party, this includes, but is not limited to, data derived from (1) the configuration of a user agent or device, or (2) data exposed by the use of network communications protocols. The Opinion also notes that, due to design choices made when the Internet was developed, devices necessarily transmit certain information elements, and when a number of these elements are combined, the combination may provide a unique fingerprint for the device or application. Because a user may be associated with the device, he or she may be identifiable via the device fingerprint. In addition, the Working Party considers unique identifiers to be personal data. Therefore, if a fingerprint is generated through the storage of or access to information on a user’s terminal device, the e-Privacy Directive applies and user consent is required.
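The mechanics the Working Party describes can be sketched simply: individually unremarkable attributes exposed by a device are concatenated and hashed into a stable identifier. The Python sketch below is an illustration only (it is not from the Opinion), and all attribute names and values are hypothetical:

```python
import hashlib

def device_fingerprint(attributes: dict) -> str:
    """Hash a canonical serialization of device attributes into a stable ID."""
    canonical = "|".join(f"{key}={attributes[key]}" for key in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical attribute values; each is innocuous on its own, but the
# combination can be unique enough to single out a device over time.
fingerprint = device_fingerprint({
    "user_agent": "Mozilla/5.0 (Windows NT 6.1; rv:34.0) Gecko/20100101 Firefox/34.0",
    "screen": "1920x1080x24",
    "timezone": "UTC+01:00",
    "accept_language": "en-US,en;q=0.5",
    "fonts": "Arial,Calibri,Consolas,Verdana",
})
print(fingerprint)  # 64-character hex digest, stable across visits
```

Because the same attribute set always hashes to the same value, no cookie needs to be stored: the identifier can be recomputed on every visit, which is precisely why the Working Party treats fingerprinting as functionally equivalent to cookies for consent purposes.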

The Opinion also addresses the possible exemptions from the consent requirement under Article 5.3 of the e-Privacy Directive, as described in Opinion 04/2012. These rules exempt processing from the consent requirement if the technical storage or access is (1) “for the sole purpose of carrying out the transmission of a communication over an electronic communications network,” or (2) “strictly necessary in order for the provider of an information society service explicitly requested by the subscriber or user to provide the service.” The Opinion also offers practical guidance through six example scenarios, indicating in each case whether the processing is exempt from the consent requirement. The six examples are:

  • First-party website analytics,
  • Tracking for online behavioral advertising,
  • Network provision,
  • User access and control,
  • User-centric security, and
  • Adapting the user interface to the device.

Poland Amends Its Personal Data Protection Act

On November 24, 2014, the Polish President Bronisław Komorowski signed into law a bill that was passed by the Polish Parliament on November 7, 2014, which amends, among other laws, certain provisions of the Personal Data Protection Act 1997. As a result of the amendments, data controllers will be able to transfer personal data to jurisdictions that do not provide an “adequate level” of data protection without obtaining the prior approval of the Polish Data Protection Authority (Generalny Inspektor Ochrony Danych Osobowych or “GIODO”), provided that they meet certain requirements specified under the bill. In addition, the bill amends Polish law so that it is no longer mandatory to appoint an administrator of information security (administrator bezpieczeństwa informacji or “ABI”). An ABI is similar to a data protection officer, but has narrower responsibilities that predominantly concern data security.

Currently, Poland is one of the EU Member States that requires obtaining the (1) prior written consent of every data subject or (2) prior consent of the GIODO to transfer personal data to third countries that do not provide an “adequate level” of data protection. As a result of the recent amendments, data controllers will be able to transfer personal data to third countries that do not provide an “adequate level” of data protection without the GIODO’s prior approval if they (1) execute standard contractual clauses approved by the European Commission, or (2) have implemented Binding Corporate Rules approved by the GIODO.

Under the new regime, the appointment of an ABI will be optional. The bill does impose duties on ABIs, however, so if a data controller does not appoint an ABI, the data controller itself will have to assume the newly created duties (except the duty to prepare a report for the data controller). If a data controller appoints an ABI, the appointment and removal of the ABI must be registered with the GIODO within 30 days. The amendments also specify the role of an ABI and the qualifications an ABI must have, including a university education and sufficient knowledge of the provisions of data protection law. In addition, if the data controller appoints an ABI and notifies the GIODO of the appointment, the data controller will be released from the obligation to register its data filing system, unless it processes sensitive data.

The law becomes effective on January 1, 2015.

AVbytes Multirogue 2015

This chameleon-like fake antivirus detects the OS version (XP, Vista, Seven) and changes its name and skin accordingly: AVbytes Win 7 Antivirus 2015, AVbytes Win 8 Antivirus 2015, AVbytes Vista Antivirus 2015, (...). It reports fake infections and displays alert messages to scare users. It belongs to the Braviax/FakeRean family.

European Parliament Announces New European Data Protection Supervisor

On November 27, 2014, the European Parliament announced that it will appoint Giovanni Buttarelli as the new European Data Protection Supervisor (“EDPS”), and Wojciech Wiewiórowski as the Assistant Supervisor. The announcement has been expected since the Parliament’s Committee on Civil Liberties, Justice and Home Affairs voted on October 20, 2014 for Buttarelli and Wiewiórowski to be the Parliament’s leading candidates for the two positions. The final step of the process is for the Parliament and the Council of the European Union to jointly sign a nomination decision, after which Buttarelli and Wiewiórowski will formally take up their new roles.

Both are well-known figures in European data protection circles. Buttarelli is the current Assistant Supervisor, a position he has held since 2009. He will replace Peter Hustinx, who has been the EDPS since 2004. Prior to joining the EDPS, Buttarelli spent 12 years with the Italian Data Protection Authority. Wiewiórowski has been the Inspector General of the Polish Data Protection Authority since June 2010. Earlier this year, he was appointed Vice-Chair of the EU’s Article 29 Working Party, a committee of data protection authorities from each of the 28 EU Member States.

The EDPS is an independent regulatory body whose main goal is to ensure that the EU institutions and bodies abide by the principles of EU data protection law when they process personal data and develop new policies. The EDPS’s authority stems from Regulation 45/2001, which governs the processing of personal data by EU institutions. The EDPS also advises the European Commission, the Parliament and the Council of the European Union on proposals for new privacy legislation.

When Your Marquee Client Gets Hacked

There are people who will tell you that all PR is good PR. In my years in security, I have seen both sides of that debate proven true. Lately, though, particularly for security companies selling into the enterprise, this can be a double-edged sword that cuts deep.

Look at any reputable (and some not-so-reputable) security vendor's website and you'll notice there's always a page showing the logos of the companies that use their products. More often than not, the vendor pays dearly for that privilege, through deep discounts or other concessions, just to be able to use the reference. Generally this works to the vendor's advantage, because seeing Vendor X used by your peers suggests that perhaps it's worth giving them a look.

Except, maybe, when those peers are getting hammered for being a data breach victim.

This has happened a few times recently: a vendor touts a big name as a marquee client, and then the marquee client suffers a massive data breach. Interestingly enough, some salespeople still use the fact that the client had the product running in their environment to push the sales agenda, but I don't think that's the approach they want.

Think about it.

Your big client gets hit while they're being hailed as using your product or service. Are you sure you want to claim victory? Most of these aren't little incidents, but rather the kinds of breaches that make lawyers cry.

There are two ways this presents itself.

First, your product or service supports the defense against, detection of, response to, or recovery from the attack and subsequent breach. This bodes well, generally. If the organization made the investment in your product or service and you helped reduce the pain that they and their customers had to go through - you win.

Second, your product was a bystander - neither helping nor hurting. This is where things get a little sketchy. Maybe the client was sold the "SQL Injection Prevent-o-Matic," but their big e-commerce site was thoroughly ransacked using SQL injection. There are two sub-plots you can follow...

If your product or service could have prevented, detected, or helped respond to or recover from the attack, but no one operationalized it - you're in trouble.

Alternatively, if your product or service completely missed the attack and didn't provide value - you're in trouble.
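To make the "Prevent-o-Matic" scenario concrete, here is a minimal sketch of the flaw class itself, in Python with sqlite3 (the table, data, and input string are invented for illustration): splicing attacker input into SQL text versus binding it as a parameter.

```python
import sqlite3

# Toy database standing in for the e-commerce site's user store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

# Classic injection payload: closes the string and adds a tautology.
evil = "' OR '1'='1"

# Vulnerable: attacker input becomes part of the SQL statement itself,
# so the WHERE clause matches every row.
rows_vuln = conn.execute(
    "SELECT * FROM users WHERE name = '" + evil + "'"
).fetchall()

# Safe: the driver binds the input as a value, never as SQL, so the
# payload is treated as a (nonexistent) literal username.
rows_safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (evil,)
).fetchall()

print(len(rows_vuln), len(rows_safe))  # prints "1 0"
```

The point of the essay stands either way: a product that claims to block this class of attack only earns the claim if the client actually has it deployed, tuned, and watched.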

I've watched companies present marquee customers all the time with little regard for what that means to their corporate brand. "This company just got hacked, true, but our product was right there telling them that they were getting hacked! If only they had listened to our amazing product!" is perhaps the worst marketing pitch ever. You know why? Because you're demonstrating that even though your product could do amazing things for your clients, your failure to teach them how to operationalize it and be effective with it at best makes the whole thing a bad investment. At worst, it makes your product or service crap.

This is why I marvel when I hear that claim made - "They bought our stuff, if only they had used it properly...". It makes me crazy because you're taking a backhanded swipe at your client all while making a clear statement that you were part of the failure.

Folks, security kit isn't magic. You don't claim victory by having it dropped off at your dock, or even by having it in-line and blinking in your racks. Heck, you don't even get credit if the console is up on someone's screen. Only when it's fully operationalized do you get to claim credit, in a positive way.

Repeat after me - fully operationalized is how we claim success. I can't stress this enough. It's baffling that vendors and enterprises alike haven't widely internalized this. Owning a Formula 1 car doesn't make a winning Formula 1 team. A good pit crew, managers, mechanics, a driver, good telemetry, and lots of practice are just the start of it. Once you get all of the parts together, you have to work out the bugs until the whole thing is near-perfect. Then you push harder. That's how you operationalize security - otherwise you've failed.

Was the past better than now?

Here we go again — another article arguing about whether the past was better or not (this one says “better”). These articles are tiresome, rehashing the debate over whether technology is enabling or isolating and dehumanizing. But I’m interested in a different line of technology criticism: which parts of technology are a regression, and what to do about that.

From the first stone tools, technology has both reflected us and changed us. When we became farmers, we became less portable and more vulnerable to robbers, and it became possible to measure capital for the first time via a land’s quality and location.

When evaluating today’s technology, I think it’s important to keep a flexible point of view and not be limited by a linear view of history. For example, what would digital cash look like today if we had adopted a 10-year land ownership rotation back then? A linear progression from good to bad (or bad to good) ignores a more nuanced view that weighs both the good and the bad, leading to an understanding of what we can do about it.

Even though I work on developing new technology every day, I’m reluctant to adopt it until I have the time or motivation to review it thoroughly. There are two main reasons:

  1. Advances in technology often come with critical regressions
  2. What you use changes yourself, your way of thinking, and what you believe to be possible

The microwave oven was a huge advance in heating speed, but it gave up the key aspect of temperature control. It is still difficult to find one that lets you heat food to a particular temperature; instead, you have to guess at the combination of watts and time. Software is even more plastic. You could be using code written by a 20-year-old JavaScript newbie to review the intricacies of your personal genome. Calling this entire technology a step forward or back is much too simplistic, and it lets said programmer off the hook for not knowing their own history.

Computer history should be a mandatory part of the curriculum. I don’t mean dry facts like the date the transistor was invented or which CPU first implemented pipelining. I mean criticism of historical choices in software or system design, and an analysis of how they could have been done differently.

Here are some example topics to get you started:

  1. Compare the Mac OS X sandboxing architecture to the Rainbow Series. Which is more usable? Compare and contrast the feature sets of each. Create an alternate history where modern Unix systems had thrown out UIDs and built on a data-centric privilege model.
  2. In terms of installation and removal, how do users expect iOS and Android devices to treat mobile apps? How does this compare to Windows programs or Linux packages? What are the potential side effects (in terms of system or filesystem changes, network activity, etc.) of installing a program? Running it? Removing it?
  3. Some developers have advocated “curl | sh” as an acceptable installation method as a replacement for native packages. They argue that there is no loss of security compared to downloading and installing a native package from an uncertain origin. Compare the functionality and risks of “curl | sh” to both a common package system (e.g., Debian dpkg) and an innovative system (e.g., Solaris IPS), focusing on operations like installing a package for the first time, upgrading it, installing programs with conflicting dependencies, etc. What is truly being lost, if anything?
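As a starting point for the third exercise, here is a deliberately simplified sketch of the bare minimum a package system adds over a pipe-to-shell: an integrity check against a digest published out-of-band. The `install.sh` payload is a local stand-in for what `curl URL | sh` would fetch, so the sketch needs no network access.

```shell
# Hypothetical installer payload, standing in for the script that
# "curl URL | sh" would download.
cat > install.sh <<'EOF'
echo "installed"
EOF

# The pipe-to-shell approach: execute whatever arrived, no questions asked.
sh install.sh

# What a package system adds at minimum: verify the payload against a
# digest the vendor publishes out-of-band. Here we compute it once to
# stand in for that published value.
published_sha256=$(sha256sum install.sh | cut -d' ' -f1)

actual_sha256=$(sha256sum install.sh | cut -d' ' -f1)
if [ "$actual_sha256" = "$published_sha256" ]; then
    sh install.sh
else
    echo "checksum mismatch; refusing to run installer" >&2
    exit 1
fi
```

Real package systems layer signatures, dependency resolution, upgrade tracking, and clean removal on top of this; the exercise is to work out how much of that “curl | sh” genuinely forfeits.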

Good design and engineering involve knowing what has come before, so we can move forward with as little loss as possible and avoid repeating past mistakes. The past wasn’t better than the present, but ignoring it makes us all worse off than we could have been.