Monthly Archives: February 2014

SSL Bugs Likely to Have Insurance Coverage Implications

Hunton & Williams Insurance Litigation & Counseling partner Lon Berk reports:

The recently publicized Secure Sockets Layer (“SSL”) bug affecting Apple Inc. products raises a question regarding insurance coverage that is likely to become increasingly relevant as “The Internet of Things” expands. Specifically, on certain devices, the code used to establish SSL connections contains an extra line that causes the program to skip a critical verification step. Consequently, unless a security patch is downloaded, when these devices are used on shared wireless networks they are subject to so-called “man-in-the-middle” security attacks and other serious security risks. Assuming that sellers of such devices may be held liable for damages, there may be questions about insurance to cover the risks.

Traditionally, products liability coverage is found in general liability policies. These policies, however, often contain exclusions cited by insurers to deny coverage for injuries relating to coding errors. One such exclusion bars coverage for damage to “impaired property” – essentially, property that has not sustained physical damage, but has been harmed by the insured’s work. Although at least one court has held that this exclusion precludes coverage for products that fail to function as intended due to coding errors, another court found the exclusion unintelligible and refused to enforce it.

A second exclusion often cited to restrict coverage is the “professional services” exclusion. Insurers may take the position that software engineering constitutes a “professional service” and, accordingly, liability caused by coding errors is not covered by their policies. Certain courts have accepted this interpretation notwithstanding the fact that it effectively renders products liability coverage illusory.

As The Internet of Things expands, an increasing number of everyday products will feature software components that may be susceptible to errors similar to the latest SSL bug. Accordingly, manufacturers should work with their insurance consultants to ensure that they are protected against all liabilities, including those arising out of coding errors in the devices and products they are developing.

Chairman of French Data Protection Authority Elected Chair of Article 29 Working Party

On February 27, 2014, Chairwoman of the French Data Protection Authority (the “CNIL”) Isabelle Falque-Pierrotin was elected Chairwoman of the Article 29 Working Party effective immediately. Ms. Falque-Pierrotin succeeds Jacob Kohnstamm, Chairman of the Dutch Data Protection Authority, who chaired the Article 29 Working Party for four years. The Working Party also elected two new Vice-Chairs: Wojciech Rafal Wiewiórowski of the Polish Data Protection Authority, and Gérard Lommel of the Luxembourg Data Protection Authority.

The Article 29 Working Party is composed of representatives from the EU Member States’ data protection authorities, the European Data Protection Supervisor and the European Commission. The Working Party elects its Chairs and Vice-Chairs for two-year terms of office, which are renewable.

The CNIL announced that, under the presidency of Ms. Falque-Pierrotin, the Working Party will face two key challenges: (1) preparing the transition to the new governance provided for by the proposed General Data Protection Regulation, and (2) developing cooperation between data protection authorities on an international level.

Episode #175: More Time! We Need More Time!

Tim leaps in

Every four years (or so) we get an extra day in February: leap year. When I was a kid the term confused me. Frogs leap; they leap over things. A leap year should be shorter! Obviously, I was wrong.

This extra day can give us extra time to complete tasks (e.g., write a blog post), so we are going to use our shells to check whether the current year is a leap year.

PS C:\> [DateTime]::IsLeapYear(2014)
False

Sadly, this year we do not have extra time. Let's confirm that this command does indeed work by checking a few other years.

PS C:\> [DateTime]::IsLeapYear(2012)
True
PS C:\> [DateTime]::IsLeapYear(2000)
True
PS C:\> [DateTime]::IsLeapYear(1900)
False

Wait a second! Something is wrong. The year 1900 is a multiple of 4, so why is it not a leap year?

The Earth does not take exactly 365.25 days to get around the sun; it actually takes 365.242199 days. This means that if we always leaped every four years we would slowly get off course, so every 100 years we skip the leap year.

Now you are probably wondering why 2000 had a leap year. That is because it is actually the exception to the exception. Every 400 years we skip skipping the leap year. What a cool bit of trivia, huh?

Hal, how jump is your shell?

Hal jumps back

I should have insisted Tim do this one in CMD.EXE. Isn't it nice that PowerShell has an IsLeapYear() built-in? Back in my day, we didn't even have zeroes! We had to bend two ones together to make zeroes! Up hill! Both ways! In the snow!

Enough reminiscing. Let's make our own IsLeapYear function in the shell:


function IsLeapYear {
year=${1:-$(date +%Y)};
[[ $(($year % 400)) -eq 0 || ( $(($year % 4)) -eq 0 && $(($year % 100)) -ne 0 ) ]]
}

There's some fun stuff in this function. First we check to see if the function is called with an argument ("${1:-..."). If so, that's the year we'll check. Otherwise we check the current year, which is the value returned by "$(date +%Y)".

The other line of the function is the standard algorithm for figuring leap years. It's a leap year if the year is evenly divisible by 400, or divisible by 4 and not divisible by 100. Since shell functions return the value of the last command or expression executed, our function returns whether or not it's a leap year. Nice and easy, huh?

Now we can run some tests using our IsLeapYear function, just like Tim did:


$ IsLeapYear && echo Leaping lizards! || echo Arf, no
Arf, no
$ IsLeapYear 2012 && echo Leaping lizards! || echo Arf, no
Leaping lizards!
$ IsLeapYear 2000 && echo Leaping lizards! || echo Arf, no
Leaping lizards!
$ IsLeapYear 1900 && echo Leaping lizards! || echo Arf, no
Arf, no

Assuming the current year is not a Leap Year, we could even wrap a loop around IsLeapYear to figure out the next leap year:


$ y=$(date +%Y); while :; do IsLeapYear $((++y)) && break; done; echo $y
2016

We begin by initializing $y to the current year. Then we go into an infinite loop ("while :; do..."). Inside the loop we add one to $y and call IsLeapYear. If IsLeapYear returns true, then we "break" out of the loop. When the loop is all done, we simply echo the last value of $y.

Stick that in your PowerShell pipe and smoke it, Tim!

Hack Naked TV 14-15

FTP Passwords!! They are everywhere!! http://tinyurl.com/HNTV-FTP-Creds

Chargeware.. It is legal, but it can still get you shot. http://tinyurl.com/HNTV-EULA

Target breach and the state of phishing: http://tinyurl.com/HNTV-Target-Email

SANS 560 Orlando April 7th - 12th

http://tinyurl.com/SANS-560-Orlando

Please note the link and the dates in the video are wrong for SANS Orlando.

Deep Data Governance

One of the first things to catch my eye this week at RSA was a press release by STEALTHbits on their latest Data Governance release. They're a long-time player in DG and, as a former employee, I know them fairly well. And where they're taking DG is pretty interesting.

The company has recently merged its enterprise Data (files/folders) Access Governance technology with its DLP-like ability to locate sensitive information. The combined solution enables you to locate servers, identify file shares, assess share and folder permissions, lock down access, review file content to identify sensitive information, monitor for suspicious activity, and provide an audit trail of access to high-risk content.

The STEALTHbits solution is pragmatic because you can tune where it looks, how deep it crawls, where you want content scanning, where you want monitoring, etc. I believe the solution is unique in the market, and a number of IAM vendors seem to agree, having chosen STEALTHbits as a partner of choice for feeding Data Governance information into their Enterprise Access Governance solutions.

Learn more at the STEALTHbits website.

RSA Conference 2014

I'm at the RSA Conference this week. I considered the point of view that perhaps there's something to be said for abstaining this year, but ultimately my decision to maintain course was based on two premises: (1) RSA didn't know the NSA had a backdoor when they made the arrangement, and (2) the conference division doesn't have much to do with RSA's software group.

Anyway, my plan is to take notes and blog or tweet about what I see. Of course, I'll primarily be looking at Identity and Access technologies, which is only a subset of Information Security. And I'll be looking for two things: Innovation and Uniqueness. If your company has a claim on either of those in IAM solutions, please try to catch my attention.

European Data Protection Supervisor Calls for Strengthening of EU Data Protection Laws

On February 21, 2014, Peter Hustinx, the European Data Protection Supervisor (“EDPS”), highlighted the need to enforce existing EU data protection law and swiftly adopt EU data protection law reforms as an essential part of rebuilding trust in EU-U.S. data flows.

Commission Safe Harbor Recommendations

In November 2013, following revelations of widespread government surveillance and access to EU personal data, the European Commission published an analysis of the U.S.-EU Safe Harbor Framework. The analysis included 13 recommendations for improving the Safe Harbor, a communication on rebuilding trust in EU-U.S. data flows and a communication on the functioning of the Safe Harbor.

Recommendations of the EDPS

In his Opinion on the Commission’s communications, Hustinx called for the following actions to help improve trust in EU-U.S. transfers of personal data:

  • Adoption of a general omnibus U.S. privacy law;
  • Effective enforcement of international data transfer mechanisms for the transfer of EU personal data outside of the EEA (including the Safe Harbor);
  • A review and strengthening of the Safe Harbor program, in line with the Commission’s recommendations;
  • The swift adoption of the package of EU data protection reforms;
  • Clear and consistent EU reforms addressing (1) the regulation of cross-border transfers of EU personal data, (2) the processing of personal data for law enforcement purposes, and (3) conflicts of law; and
  • Ensuring that the national security exemptions to the rights to privacy and confidentiality of communications, and to the protection of personal data, are used only where strictly necessary and are proportionate and in line with European case law.

 

Greek Presidency Issues Notes on Proposed EU Data Protection Regulation

On January 31, 2014, the Greek Presidency of the Council of the European Union issued four notes regarding the proposed EU Data Protection Regulation. These notes, discussed below, address the following topics: (1) one-stop-shop mechanism; (2) data portability; (3) data protection impact assessments and prior checks; and (4) rules applicable to data processors.

One-Stop-Shop Mechanism

The implementation of the one-stop-shop mechanism has led to a number of discussions regarding the European Data Protection Board’s powers versus those conferred to the data protection authority (“DPA”) where the entity maintains its headquarters. In a note addressing these concerns, the Presidency focuses on specific elements of the mechanism and suggests possible ways forward regarding the scope of application, the role and powers of the lead DPA, the cooperation between lead DPA and other concerned DPAs, the notification and enforcement of adopted measures, remedies for individuals, and the role of the European Data Protection Board.

For example, the Presidency suggests that the DPAs’ powers could be regrouped into three main categories (investigative powers, corrective powers and authorization powers) and offers two options with respect to the lead DPA’s role: either the lead DPA could make decisions for all three power categories, or the lead DPA’s decisions could be limited to corrective and authorization powers. Further, the Presidency suggests that any action to be taken within the territory of a Member State can only be carried out by the “local DPA” (including in the context of mutual assistance following a request from the “lead DPA” in certain cases, such as an audit). The Presidency also proposes to maintain the individual’s right under Directive 95/46/EC to complain to the DPA of his or her choice, and suggests clarifying that if the DPA rejects an unfounded complaint, the individual may bring proceedings in the courts of the same EU Member State.

Data Portability

Although the Presidency acknowledges that there is general support for a right to data portability, some delegations raised concerns regarding the risks for companies’ competitive positions, administrative burdens, and the scope of the “automated processing system” concept. Accordingly, the Presidency’s note suggests limiting portability rights to cases where personal data have been provided by the individual and the processing is based on consent or a contract and to Internet-related cases. The Council further recommends that controllers not be required to guarantee that they will directly transmit data to another entity that may be a competitor, and that controllers should have more flexibility with regard to the format they use to provide data portability (with a goal of reducing burden and costs for controllers). Finally, additional proposed provisions would ensure that data portability rights will not impinge on intellectual property rights, and the Council suggests clarifications regarding the controller’s right to retain data to the extent necessary to carry out contract obligations.

Data Protection Impact Assessment and Prior Checking

In a third note, the Presidency addresses issues relating to data protection impact assessments, which are intended to replace the current obligation to notify data protection authorities. Although there is strong support for data protection impact assessments, discussions have revealed that Member States are concerned about burdens such as the cost associated with the mandatory assessment and other requirements. The Council now suggests only requiring the controller (not also the processor) to carry out the data protection impact assessment, and has prepared a list of processing activities that present specific risks (e.g., decisions based on profiling, making decisions using sensitive data, large-scale public monitoring and biometric and genetic systems). Further, the Presidency recommends clarifying the concept of processing concerning “a systematic and extensive evaluation of personal aspects relating to a natural person” and the reference to “filing systems.”

The same note contains proposals to respond to the uncertainties surrounding the “prior checking” requirement. Some Member States are concerned that data protection authorities will not have the capacity to handle these consultations, and question the practical effect of the consultation and the reasons for requesting that processors consult with the data protection authority. Accordingly, the Presidency offers several clarifications, including that only data controllers should be required to consult with the data protection authority, and that only residual risk cases should be subject to prior consultation.

Rules Applicable to Processors

In its note on rules for processors, the Presidency considers a number of improvements intended to clarify the framework governing the relationship between controllers and processors. The Presidency offers the following recommendations:

  • controllers may only use processors that provide sufficient guarantees;
  • the processor’s activities must be governed by a contract including a list of mandatory provisions; and
  • the contract with the processor must detail the subject-matter and duration of the contract, the nature and purpose of the processing, the type of personal data and categories of data subjects.

The Presidency also specifies the comprehensive duties that the processor owes to the controller while processing personal data, and suggests including provisions to facilitate basing the contract on standard contractual clauses adopted by either the European Commission or a data protection authority.

Puerto Rico Health Insurer Reports Record Fine Following PHI Breach Incident

Triple-S Management Corporation reported in the 8-K it recently filed with the U.S. Securities and Exchange Commission that its health insurance subsidiary, Triple-S Salud, Inc. (“Triple S”), which is Puerto Rico’s largest health insurer, will be fined $6.8 million for a data breach that occurred in September 2013. The civil monetary penalty, which is being levied by the Puerto Rico Health Insurance Administration, will be the largest fine ever imposed following a breach of protected health information.

According to the filing, in September 2013, Triple S mailed pamphlets to its Medicare Advantage beneficiaries that inadvertently displayed the beneficiaries’ Medicare Health Insurance Claim Numbers. Following the breach, which affected more than 13,000 individuals, Triple S conducted an investigation, notified affected individuals and reported the incident to Puerto Rican authorities as well as the Department of Health and Human Services’ Office for Civil Rights. Triple S also offered one year of credit monitoring at no charge to the affected individuals.

According to the 8-K, Triple S was notified of the pending sanctions on February 11, 2014. In addition to the proposed monetary penalty, Triple S will be required to suspend new enrollments of Dual Eligible Medicare beneficiaries and notify existing beneficiaries of their right to disenroll from the Triple S Medicare Advantage plan. In the 8-K, Triple S noted that it is responding to the allegations that it “failed to take all required steps in response to the breach” and has the right to request an administrative hearing on the issue. The 8-K concluded by noting that Triple S is “working to prevent this type of incident from happening again.”

View the 8-K.

Traditional Insurance Policies May Cover Cyber Risks

Hunton & Williams Insurance Litigation & Counseling partner Lon Berk reports:

Insurers often contend that traditional policies do not cover cyber risks, such as malware attacks and data breach events. They argue that these risks are not “physical risks” or “physical injury to tangible property.” A recent cyber attack involving ATMs, however, calls this line of reasoning into question.

The attack involved breaking open ATMs and inserting USB sticks containing a dynamic-link library (“DLL”) exploit. These types of attacks generally work by “tricking” a Windows application into loading a malicious file with the same name as a required DLL. In this case, when the ATMs were rebooted, they loaded the malicious code onto the machines. The perpetrators later entered a code into the ATMs that triggered the malware and enabled the withdrawal of all cash in the ATM.

These attacks demonstrate how a cyber risk can, in fact, be a risk of physical injury. To upload the malware, the attackers had to physically break open the ATMs to insert a foreign device (the USB stick), plainly causing a physical injury to tangible property. Indeed, injecting malware generally requires physical access to a device, whether over a wireless or wired network or through actual contact, and a physical rearrangement of memory. That said, the risk of physical injury associated with cyber crimes does not mean that policyholders should not buy appropriate cyber insurance. Insurers have incorporated exclusions in many traditional policies that may exclude coverage for damage caused by malicious code. But where those exclusions are limited, or absent, policyholders should check their traditional policies for coverage. Those policies may offer protection, even without a separate cyber insurance policy.

German Appeals Court Finds German Data Protection Law Applicable to Facebook

On January 24, 2014, the Chamber Court of Berlin rejected Facebook’s appeal of an earlier judgment by the Regional Court of Berlin in cases brought by a German consumer rights organization. In particular, the court: 

  • enjoined Facebook from, broadly, operating its “Find a Friend” functionality in a way that violates the German Unfair Competition Act;
  • enjoined Facebook from using certain provisions in (1) its terms and conditions, and (2) privacy notices concerning advertisements, licensing, personal data relating to third parties and personal data collected through other websites; and
  • mandated that Facebook provide users with more information about how their address data will be used by the “Find a Friend” functionality.

As in its earlier case against Apple, the German consumer rights organization successfully argued that German, not Irish, data protection law applied. Although other German courts have not always accepted this line of reasoning, the court followed it here and, notably, held that a breach of data protection law also may constitute a breach of the Unfair Competition Act. This approach represents a new development in the data protection context. One of the conditions for consumer rights organizations to be able to commence legal proceedings is that there is a violation of the Unfair Competition Act. Recognizing data protection law violations as violations of the Unfair Competition Act therefore arguably makes it easier for consumer rights organizations to bring privacy-oriented cases. It also can be seen as part of a wider trend to improve the ability of German consumer rights organizations to sue for breaches of data protection law.

The Chamber Court’s ruling is not yet binding and is subject to appeal.

Joff Thyer on Django Static Code Analysis – Episode 362, Part 2 – February 13, 2014

DjangoSCA is a Python-based source code security auditing system for Django projects that makes use of the Django framework itself, the Python Abstract Syntax Tree (AST) library, and regular expressions. Django projects are laid out in a directory structure that conforms to a standard form, using known classes and standard file names such as settings.py, urls.py, views.py, and forms.py. The user passes the root directory of the Django project as an argument to the program, which then recursively descends through the project and performs source code checks on all Python source files and Django template files.
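For readers who want a feel for how such a scanner is put together, here is a minimal sketch (not DjangoSCA's actual code) of a recursive walk combined with a regex check and an AST pass; the two checks shown are hypothetical examples:

import ast
import os
import re
import sys

# Hypothetical regex check: DEBUG left switched on in settings.py
DEBUG_RE = re.compile(r"^\s*DEBUG\s*=\s*True", re.MULTILINE)

def scan_file(path):
    # Run one regex check and one AST pass over a single source file.
    with open(path, encoding="utf-8", errors="replace") as f:
        source = f.read()

    if path.endswith("settings.py") and DEBUG_RE.search(source):
        print(f"{path}: DEBUG appears to be enabled")

    try:
        tree = ast.parse(source, filename=path)
    except SyntaxError:
        return
    for node in ast.walk(tree):
        # Hypothetical AST check: any call to eval() in the project
        if isinstance(node, ast.Call) and getattr(node.func, "id", None) == "eval":
            print(f"{path}:{node.lineno}: use of eval()")

def scan_project(root):
    # Recursively descend the project directory and check every .py file.
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.endswith(".py"):
                scan_file(os.path.join(dirpath, name))

if __name__ == "__main__":
    scan_project(sys.argv[1] if len(sys.argv) > 1 else ".")

The real tool also parses Django template files and ships many more checks; the sketch only illustrates the directory walk plus regex/AST pattern it describes.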

Interview with Paul Paget from Pwnie Express – Episode 362, Part 1 – February 13, 2014

Paul Paget was appointed CEO of Pwnie Express in August 2013 to help grow it into the leader in testing the security of remote operations, joining founder Dave Porcello and his outstanding team. The PWN Plug has been a hit, and they aim to make it a standard around the world. It radically simplifies and reduces the cost of assessing security, especially in hard-to-reach, out-of-the-way parts of an organization such as bank offices, stores and offshore facilities.

Forbes Data Breach Impacts Over 1 Million Accounts

Today the Syrian Electronic Army announced via their Twitter account @Official_SEA16 that they have leaked the Forbes WordPress user database, not long after it was announced that they had managed to hack the Forbes website.

Eduard Kovacs from Softpedia has stated that the leak has been uploaded to an IP address (91.227.222.39) that was also used last year in a defacement of http://marines.com/.

This breach is quite substantial and includes 1,056,986 unique email addresses and accounts, 844 of them government (.GOV) and 14,572 educational (.EDU). In addition, the dump contains credentials from the Forbes wp_users database, including 564 Forbes.com-based emails, among them administrator accounts.

Forbes has posted a statement to their Facebook page regarding the breach, urging all users to reset their passwords on the Forbes network and on any other sites where they may have used the same credentials.

Security message: Forbes.com was targeted in a digital attack and our publishing platform was compromised. Users’ email addresses may have been exposed. The passwords were encrypted, but as a precaution, we strongly encourage Forbes readers and contributors to change their passwords on our system, and encourage them to change them on other websites if they use the same password elsewhere. We have notified law enforcement. We take this matter very seriously and apologize to the members of our community for this breach.

As Eduard points out, although the passwords are encrypted, the email addresses are still very useful. In addition, the type of encryption used is not clear, and there is still a chance that the passwords can easily be decrypted. It is clear that this breach has the potential to pose a significant risk for many Forbes users.

Breakout of just a few types of email domains (a quick way to reproduce this kind of breakdown is sketched after the list):
844 .GOV
14,572 .EDU
91,464 hotmail.com
3,460 mac.com
185,271 yahoo.com
407,787 gmail.com
25,050 aol.com
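
For what it's worth, a breakdown like this can be reproduced with a few lines of Python over the leaked address list; the file name below is a placeholder, not the actual dump:

from collections import Counter

# "forbes_emails.txt" is a placeholder name: one leaked address per line.
with open("forbes_emails.txt") as f:
    domains = Counter(line.strip().rsplit("@", 1)[-1].lower()
                      for line in f if "@" in line)

# Top domains, plus the .gov / .edu totals called out above.
for domain, count in domains.most_common(7):
    print(f"{count:>9,} {domain}")
print(sum(c for d, c in domains.items() if d.endswith(".gov")), ".GOV")
print(sum(c for d, c in domains.items() if d.endswith(".edu")), ".EDU")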

Plasma HTTP

Panel screenshots: advert, login, online bots, offline bots, commands, statistics, logs.

Don't take this lame article too seriously; I'm only covering Plasma because I promised on IRC to write something today.

I'm not dead, but there is nothing interesting to review at the moment, only crappy bots. That's also one of the reasons I haven't written about JackPos and all the rest. I have some interesting material, but it's too sensitive for the moment, and when that's not the reason, it's because people ask me not to cover a subject because they want to cover it 'first' for their company and then end up writing nothing, so I keep waiting (you know who you are). For example ZeusVM: I wanted to write about the weird version that has been around for some months now, a version that downloads a picture with a config embedded inside from sites (over SSL and fast flux)... but well, forget it for now.
As I said in a previous article, I may appear inactive, but I'm not that inactive. I've recently done just that: I still keep posting malware and breaking things, just without necessarily talking about it, or only briefly, as with jackTrash, and today PlasmaTrash and iTrashing. I still make trashy videos, show trashy things at my hackerspace and talk about trash on IRC (yeah, that's a lot of trash).
So for the moment, I just wait and see...

Broad Interpretations of Terrorism Exclusions Are Incompatible with Cyber Insurance

The scale of some recent cyber events has been extraordinary. Target reports that 70 million people (almost 25% of the U.S. population) were affected by its recent breach. CNN recently reported that in South Korea there was a breach that affected 40% of its citizens. The staggering impact of these events is leading companies to seek protection through both technology and financial products, such as insurance. Insurers typically attempt to avoid this sort of enormous exposure with terrorism exclusions, and it is reasonable to expect aggressive insurers to rely upon such exclusions to avoid their coverage obligations. In a client alert, a Hunton & Williams Insurance Litigation & Counseling partner explains how, after 9/11, insurers added terrorism exclusions to their policies so that losses arising out of terrorism would be covered only if special coverage was purchased.

Read our full client alert.

French Data Protection Authority Revises Authorization on Whistleblowing Schemes

In a decision published on February 11, 2014, the French Data Protection Authority (“CNIL”) adopted several amendments to its Single Authorization AU-004 regarding the processing of personal data in the context of whistleblowing schemes (the “Single Authorization”).

Since 2005, companies in France have had to register their whistleblowing schemes with the CNIL either by self-certifying to the CNIL’s Single Authorization or by filing a formal request for approval with the CNIL. Companies that self-certify to the Single Authorization make a formal representation that their whistleblowing scheme complies with the pre-established conditions set out in this authorization. Until now, only the following whistleblowing schemes could benefit from the CNIL’s Single Authorization:

  • Whistleblowing schemes implemented to comply with a French legal obligation in the following areas: finance, accounting, banking and anti-corruption;
  • Whistleblowing schemes implemented to comply with Section 301(4) of the Sarbanes-Oxley Act or the Japanese Financial Instrument and Exchange Act, and to fight against anti-competitive practices.

Through the recent amendments, the CNIL has extended the scope of the Single Authorization to include (1) the fight against discrimination and harassment in the workplace, (2) health, hygiene and security in the workplace and (3) protection of the environment. Whistleblowing schemes that allow reporting in those areas may now benefit from the Single Authorization. The CNIL also clarified that, although anonymous reporting should not be encouraged, it may be tolerated, subject to two conditions. Specifically, anonymous reports may be processed if (1) the seriousness of the reported facts is established and factual elements are sufficiently detailed and (2) the processing of the anonymous report is performed with great caution (including a prior examination by the first recipient of the report regarding the opportunity to disseminate the report within the whistleblowing scheme). Whistleblowing schemes which do not comply with these conditions must be authorized by the CNIL on a case-by-case basis.

This is the second time that the CNIL has revised its Single Authorization. In 2010, the CNIL extended the scope of the Single Authorization to acknowledge that companies may operate whistleblowing schemes in areas beyond financial issues.

FTC Announces Settlement with Online Gaming Company Falsely Claiming Compliance with the Safe Harbor Framework

On February 11, 2014, the Federal Trade Commission announced a proposed settlement with Fantage.com stemming from allegations that the company made statements in its privacy policy that deceptively claimed that Fantage.com was complying with the U.S.-EU Safe Harbor Framework.

The U.S.-EU Safe Harbor Framework is a cross-border data transfer mechanism that enables certified organizations to move personal data from the European Union to the United States in compliance with European data protection laws. To join the Safe Harbor Framework, a company must self-certify to the Department of Commerce that it complies with seven privacy principles (notice, choice, onward transfer, security, data integrity, access and enforcement) and related requirements that have been deemed to meet the EU’s adequacy standard.

According to the complaint filed by the FTC, Fantage.com, which makes an online role-playing game directed at children, deceptively claimed that it held a current U.S.-EU Safe Harbor certification, when in fact the company had allowed its certification to expire in June 2012. The complaint alleges that this conduct violates Section 5 of the FTC Act; however, the FTC does not allege any substantive violations of the Safe Harbor privacy principles or of the Children’s Online Privacy Protection Act.

The proposed settlement agreement prohibits the company from misrepresenting the extent to which it participates in any privacy or data security program sponsored by the government or any other self-regulatory or standard-setting organization.

In January 2014, the FTC announced settlements with twelve companies stemming from similar charges of falsely claiming compliance with the U.S.-EU Safe Harbor Framework.

Update: On June 25, 2014, the FTC approved the final settlement order with Fantage.com.

NIST Releases Final Cybersecurity Framework

On February 12, 2014, the National Institute of Standards and Technology (“NIST”) issued the final Cybersecurity Framework, as required under Section 7 of the Obama Administration’s February 2013 executive order, Improving Critical Infrastructure Cybersecurity (the “Executive Order”). The Framework, which includes standards, procedures and processes for reducing cyber risks to critical infrastructure, reflects changes based on input received during a widely-attended public workshop held last November in North Carolina and comments submitted with respect to a preliminary version of the Framework that was issued in October 2013.

Differences between the Framework and its preliminary version are generally editorial, and the Framework’s basic structure has remained substantially the same. However, in one notable change, the Framework no longer includes Appendix B, the “Methodology to Protect Privacy and Civil Liberties for a Cybersecurity Program.” Appendix B of the Preliminary Framework attracted significant opposition from industry because of, among other things, its breadth, prescriptive nature, and failure to reflect the standards contained in a wide range of successful privacy and data protection programs implemented by industry, in partnership with various government agencies. The final Framework removes Appendix B and replaces it with a general description of privacy issues that entities should consider in the section on “How to Use the Framework.”

Like the preliminary version, the Framework is broadly broken down into three components: (1) Framework Core, (2) Framework Implementation Tiers and (3) Framework Profile.

The Framework Core is organized into five overarching cybersecurity functions: (1) identify, (2) protect, (3) detect, (4) respond and (5) recover. Each function has multiple categories, which are more closely tied to programmatic activities. They include activities such as “Asset Management,” “Access Control” and “Detection Processes.” The categories, in turn, have subcategories, which are tactical activities that support technical implementation. Examples of subcategories include “[a]sset vulnerabilities are identified and documented” and “[o]rganizational information security policy is established.” The Framework Core includes informative references, which are specific sections of existing standards and practices that are common among various critical infrastructure sectors and illustrate methods to accomplish the activities described in each Subcategory.

The Framework Implementation Tiers describe how an organization views cybersecurity risk and the processes in place to manage that risk. The tiers range from Partial (Tier 1) to Adaptive (Tier 4) and describe an increasing degree of rigor and sophistication in cybersecurity risk management practice. Progression to higher tiers is encouraged when such a change would reduce cybersecurity risk and be cost effective.

The Framework Profile is the alignment of the functions, categories and subcategories with the organization’s business requirements, risk tolerance and resources. An organization may develop a current profile based on existing practices and a target profile that reflects a desired set of cybersecurity activities. A comparison of the two profiles may reveal gaps that establish a roadmap for reducing cybersecurity risk that is aligned with organizational and sector goals, considers legal and regulatory requirements and industry best practices, and reflects risk management priorities.

The Framework is a flexible document that gives users the discretion to decide which aspects of network security to prioritize, what level of security to adopt, and which standards, if any, to apply. This flexibility reflects vocal opposition by critical infrastructure owners and operators to new cybersecurity regulations.

The White House has emphasized repeatedly that the Framework itself does not include any mandates to adopt a particular standard or practice. However, Section 10 of the Executive Order directs sector-specific agencies to engage in a consultative process with the Department of Homeland Security, the Office of Management and Budget, and the National Security Staff to review the Framework and determine if current cybersecurity regulatory requirements are sufficient given current and projected risks. If such agencies deem the current regulatory requirements to be insufficient, then they “shall propose prioritized, risk-based, efficient, and coordinated actions…” This process could lead to new cybersecurity regulations in various sectors.

This regulatory review, in conjunction with the Framework being used by insurance underwriters and incentives the Administration is developing to encourage adoption of the Framework, likely will result in the Framework affecting standards of reasonableness in litigation relating to cybersecurity incidents.

German Ministry Moves on Privacy Litigation

On February 11, 2014, Germany’s Federal Minister of Justice and Consumer Protection announced that consumer rights organizations will soon be able to sue businesses directly for breaches of German data protection law. Such additional powers had already been contemplated by the German governing coalition’s agreement and the Minister now expects to present a draft law in April of this year to implement them.

If passed, the new law would bring about a fundamental change in how German data protection law is enforced. Currently, only the affected individuals as well as Germany’s criminal prosecutors and data protection authorities have legal standing to sue businesses for breaches of data protection law. Such proceedings are still relatively infrequent, in part due to the complexities and costs involved.

Consumer rights organizations, however, are sophisticated and well-funded. In the past, they have been very active in pursuing businesses for breaches of consumer protection legislation and unfair competition laws. Alleged data protection breaches often are featured in these proceedings, but consumer rights organizations had to rely on particular legal fact patterns to successfully argue their cases. The new law would likely change this and legal proceedings against businesses for data protection breaches would become more common in Germany.

Therefore, businesses subject to German data protection laws should take note of this development and consider whether their data processing practices meet the required standards.

Safer Internet Day: 4 Things You Might Not Realise Your Webfilter Can Do

Since it's Safer Internet Day today, I thought I'd use it as an excuse to write a blog post. Regular readers will know I don't usually need an excuse, but I always feel better if I do.

Yesterday, I was talking to our Content Filter team about a post on the popular Edugeek forum, where someone asked "is it possible to block adult content in BBC iPlayer?". Well, with the right web filter, the answer is "yes", but how many people think to even ask the question? Certainly we hadn't thought much about formalising the answer. So I'm going to put together a list of things your web filter should be capable of, but you might not have realised...


1. Blocking adult content on "TV catch up" services like iPlayer. With use of the service soaring, it's important that any use in education is complemented with the right safeguards. We don't need students in class seeing things their parents wouldn't want them watching at home. There's a new section of the Smoothwall blocklist now which will deal with anything on iPlayer that the BBC deem unsuitable for minors.

2. Making Facebook and Twitter "Read Only". These social networks are great fun, and it can be useful to relax the rules a bit rather than drive students onto 4G to get around a block. A read-only approach can help reduce the incidence of cyber-bullying and keep users more focused.

3. Stripping the comments out of YouTube. YouTube is a wonderful resource, and the majority of video is pretty safe (use YouTube for Schools if you want to tie that down further — your filter can help you there too). The comments on videos, however, are often at best puerile and at worst downright offensive. Strip out the junk, and leave the learning tool - win-win!

4. Busting Google searches back down to HTTP and forcing SafeSearch. Everybody appreciates a secure service, but when Google moved their search engine to HTTPS by default, they alienated the education community. With SSL traffic it is much harder to vet search terms, log accesses in detail, and, importantly, force SafeSearch. Google gives you DNS trickery to force the site back to plain HTTP - but that's a pain to implement, especially on a Windows DNS server. Use your web filter to rewrite the requests, and have the best of both (see the sketch after this list).
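
To illustrate the "rewrite the requests" idea (a rough sketch of the general technique, not Smoothwall's implementation), a filtering proxy can simply append Google's safe=active parameter to any search query it passes through:

from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def force_safesearch(url):
    # Only touch Google web-search URLs; pass everything else through untouched.
    parts = urlsplit(url)
    if "google." not in parts.netloc or not parts.path.startswith("/search"):
        return url
    query = dict(parse_qsl(parts.query))
    query["safe"] = "active"          # Google's SafeSearch query parameter
    return urlunsplit(parts._replace(query=urlencode(query)))

print(force_safesearch("http://www.google.com/search?q=frogs"))
# -> http://www.google.com/search?q=frogs&safe=active

A real deployment would do this inside the proxy/filter itself, but the rewrite logic is essentially this simple once the traffic is back on plain HTTP.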

Interview with Brian Richardson, Interview with Chris Taylor, Drunken Security News – Episode 361 – February 6, 2014

Brian Richardson is a Senior Technical Marketing Engineer with Intel Software and Services Group. After fifteen years of external experience with BIOS and UEFI, Brian joined Intel in 2011 to focus on industry enabling for UEFI. Brian has a Master's Degree in Electrical Engineering from Clemson University, along with five US patents and a variety of seemingly disconnected hobbies involving video production. Brian has presented at Intel Developer Forum, UEFI Plugfest, Windows Ecosystem Summit and WinHEC. Brian can be contacted via twitter at @Intel_Brian and @Intel_UEFI.

Chris has been in IT security since the late '90s, starting out in network support, monitoring IDS and explaining how hackers were breaking into places and what they did once they were in. He now specializes in intrusion analysis and runs the professional services side of CyTech Services, overseeing the commercial consulting and managed security services.

Plus, the stories of the week!

Working With the Darkleech Bitly Data

Data Driven Security took the time to analyze the raw data that I published in my recent post on the Sucuri blog about how I used Bitly data to understand the scale of the Darkleech infection.

In their article, they raise a few questions about data formats, the meaning of certain fields and some inconsistencies, so I’ll try to answer their questions here and explain how I worked with the data.

So I needed to get information about all the links of the “grantdad” bitly account.

I checked the API and somehow missed the “link_history” API request (it was the first time I had worked with the bitly API), so I decided to screenscrape the web pages where bitly listed the links created by grantdad: 1,000 pages with 10 links on each. Since the pages didn’t contain all the information I needed, I only collected the short links so that I could use them later in API calls to get more detailed information about each of them.

As you can see, I was limited to the 10,000 links that bitly made available via their web interface. I'm not sure whether I could have gotten more links via the link_history API. Right now it returns “ACCOUNT_SUSPENDED,” and when the account was not suspended, API calls for known links beyond those 10,000 produced various errors.

When I compiled my list of 10,000 links I used the following API calls to get information about each of the links:

Referring domains

API:link/referring_domains — for each link, it returned a list of domains referring traffic to that link (in our case, the sites containing the iframes) along with the number of clicks from each domain. This helped me compile this list of iframe loads per domain: http://pastebin.com/HYaY2yMb. Then I resolved each of the domain names and created this list of infected domains per IP address: http://pastebin.com/Gxr51Nc1.
I also used this API call to get the number of iframe loads for each bitly link.
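
For anyone who wants to reproduce this step, the call looks roughly like the sketch below (using the requests library; the access token and example link are placeholders, and the response field names are my reading of the bitly v3 API documentation, so treat them as assumptions):

import requests

API = "https://api-ssl.bitly.com/v3/link/referring_domains"
TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder, not a real token

def referring_domains(short_link):
    # Ask the v3 API which domains referred clicks to one bitly link.
    resp = requests.get(API, params={"access_token": TOKEN, "link": short_link})
    resp.raise_for_status()
    data = resp.json().get("data", {})
    # Each entry is expected to carry a "domain" and a "clicks" count.
    return [(d.get("domain"), d.get("clicks", 0))
            for d in data.get("referring_domains", [])]

# Total iframe loads for a link = sum of clicks across its referring domains.
pairs = referring_domains("http://bit.ly/1aBcDeF")  # placeholder link
print(sum(clicks for _domain, clicks in pairs))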

Info

API:link/info – this gave me the timestamps of the links and their long URLs (the rest of the information was not relevant to my research). Unfortunately, this particular API call is poorly documented, so I can only guess what this timestamp means. It is actually called “indexed,” but I guess it’s the time when the link was created. It is definitely not the time of the first click, because there were no registered clicks for many of the links. As a result, I compiled these datasets:
http://pastebin.com/UmkDZZp0
http://pastebin.com/w7Kq3ybV,
which contain tab-separated values of “bitly link id”, “timestamp”, “# of clicks/iframe loads”, and “long URL”.

At that point, I already had the number of iframe loads from the previous step (“link/referring_domains”). Then, for readability, I converted the numeric timestamp using the Python function datetime.datetime.fromtimestamp(). However, you may notice that the second dataset (for January 28th) has a different date format: instead of “2014-01-25 19:10:07” it uses “Jan 28 23:10:19”. Why? Because pastebin.com doesn’t allow unregistered users to post more than 500Kb of data (I deliberately post such data as a guest). Removing the year from the date allowed me to save 4 bytes on each row and fit the dataset into 500Kb.
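
The conversion itself is a one-liner; for example (the epoch value below is illustrative, not taken from the dataset):

import datetime

ts = 1390677007                              # corresponds to 2014-01-25 19:10:07 UTC
dt = datetime.datetime.fromtimestamp(ts)     # interpreted in the local timezone, as in the post
print(dt)                                    # e.g. 2014-01-25 19:10:07
print(dt.strftime("%b %d %H:%M:%S"))         # e.g. Jan 25 19:10:07 (the 4-bytes-shorter format)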

Actually this 500Kb limit is the reason why I have separate pastebins for each date and only specify bitly ids instead of the full bitly links.

Countries

And finally, API:link/countries – for each link, it returned a list of countries referring traffic to this link along with the number of clicks from those countries. This helped me compile this list of iframe loads per country http://pastebin.com/SZJMw3vx.

The last dataset Feb 4-5

When I wrote my blog post on February 5th, I noticed that there were new links available on the Bitly.com grantdad account page. I began to browse the pages of his links and found that most of the links were the same (Jan 25 and 28) but around 1,700 of them were new (late February 4th and the beginning of February 5th). I immediately repeated the same procedure of screenscraping plus the three API calls for each new bitly link. After that I created this new dataset http://pastebin.com/YecHzQ1W and updated the rest of the datasets (domains, countries, etc.).

Some more details

If you check the total numbers in the two datasets, (domains) http://pastebin.com/HYaY2yMb and (countries) http://pastebin.com/SZJMw3vx, you’ll see a difference in the total number of iframe loads: 87152 versus 87269. I don’t know why. I used the same links, just different API calls (referring_domains and countries), and the totals are supposed to be the same. I didn’t save the “per link” data, so I can’t tell exactly whether it was my error or whether the bitly API produces slightly inconsistent results. Anyway, the difference is negligible for my calculations and doesn’t affect the estimates (given that I tried to underestimate when in doubt).

I didn’t check for the duplicates that the folks from Data Driven Security found in my datasets. They most likely have to do with my screenscraper and the way that Bitly.com displays links in user accounts; the total number of scraped links matched the number of links that Bitly reported for the user. The extra 121 clicks that these duplicates are responsible for are so close to the 117-click difference mentioned above that I wonder whether the two numbers are somehow connected. Anyway, this shouldn’t affect the accuracy of the estimates either.

Building the geo distribution map

For the map, I used the Geochart from Google's visualization library. Since the difference in the number of clicks between the top 3 countries and the lower half of the list was 3 orders of magnitude, I had to rescale the data so that we could still see the difference between countries with 50 iframe loads and countries with 2 iframe loads. I used the square of the natural logarithm for that:

Math.pow( Math.round( Math.log(i)), 2)

This gave me a nice distribution from 0 to 100 and the map below.
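
In Python terms the rescaling amounts to something like this (the click counts below are made up for illustration):

import math

def rescale(clicks):
    # Squash counts spanning ~3 orders of magnitude into roughly 0-100.
    return round(math.log(clicks)) ** 2 if clicks > 0 else 0

for clicks in (2, 50, 1000, 30000):
    print(clicks, "->", rescale(clicks))
# 2 -> 1, 50 -> 16, 1000 -> 49, 30000 -> 100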

[Map: Darkleech iframe load geo distribution]

Speculation on the link generation

After checking the link creation per minute-of-an-hour distribution, Bob Rudis wondered:

This is either the world’s most inconsistent (or crafty) cron-job or grantdad like to press ↑ + ENTER alot.

I guess it was neither a cron job nor manual link creation. I think the [bitly] links were created on demand: if someone loads a page and Darkleech needs to inject an iframe, then it (or rather the server it works with) generates a new bitly link. Given that this malware may lurk on a server, there may be time periods when no malicious code is being injected into any web pages across all the infected servers. At the same time, the bitly links with zero clicks may correspond to page loads by various bots (including our own) where the malicious code is injected but the iframe is never loaded. On the other hand, the volume of bitly links with 0 clicks (~35%) suggests that there might really be a more complex link generation mechanism than a simple “on-demand” approach.

Anyway, it’s great when people double-check our data and try to find new interesting patterns in it. Make sure to let me know (here or on the Sucuri blog) if you find more patterns or inconsistencies, or have any comments on how Darkleech works.


European Member States and ENISA Issue SOPs to Manage Multinational Cyber Crises

On February 5, 2014, the Member States of the EU and European Free Trade Association (“EFTA”) as well as the European Network and Information Security Agency (“ENISA”) issued Standard Operational Procedures (“SOPs”) to provide guidance on how to manage cyber incidents that could escalate to a cyber crisis.

Background
In 2009, the European Commission’s Communication on Critical Information Infrastructure Protection invited EU Member States to develop national contingency plans and organize regular exercises to support a closer pan-European Network and Information Security (“NIS”) cooperation plan.

In February 2013, the European Commission, together with the High Representative of the Union for Foreign Affairs and Security Policy, launched their cybersecurity strategy (“Strategy”) for the European Union. As part of this Strategy, the European Commission also proposed a draft directive on measures to ensure a common level of NIS across the EU (the “Directive”). The Directive introduces a number of measures, including the creation of a network to enable the national NIS authorities, the European Commission and, in certain cases, ENISA and the Europol Cybercrime Center, to share early warnings on risks and incidents, and to cooperate on further steps and organize exercises at the European level.

In this context, the EU/EFTA Member States developed the SOPs in collaboration with ENISA. The draft SOPs were tested during the pan-European cyber exercises organized by ENISA.

The SOPs
The SOPs include a list of contact points, guidelines, templates, workflows, tools and best practices to help European public authorities better understand the causes and impacts of multinational cyber crises and identify effective action plans. In particular, the SOPs emphasize the need to establish direct links to the decision makers at the strategic and political level in order to successfully manage multinational cyber crises.

ENISA continues to work with EU Member States to develop information security best practices and assist the Member States with the implementation of relevant EU legislation.

IAM for the Third Platform

As more people are using the phrase "third platform", I'll assume it needs no introduction or explanation. The mobile workforce has been mobile for a few years now. And most organizations have moved critical services to cloud-based offerings. It's not a prediction, it's here.

The two big components of the third platform are mobile and cloud. I'll talk about both.

Mobile

A few months back, I posed the question "Is MAM Identity and Access Management's next big thing?" and since I did, it's become clear to me that the answer is a resounding YES!

Today, I came across a blog entry explaining why Android devices are a security nightmare for companies. The pain is easy to see. OS Updates and Security Patches are slow to arrive and user behavior is, well... questionable. So organizations should be concerned about how their data and applications are being accessed across this sea of devices and applications. As we know, locking down the data is not an option. In the extended enterprise, people need access to data from wherever they are on whatever device they're using. So, the challenge is to control the flow of information and restrict it to proper use.

So, here's a question: is MDM the right approach to controlling access for mobile users? Do you really want to stand up a new technology silo that manages end-user devices? Is that even practical? I think certain technologies live a short life because they quickly get passed over by something new and better (think electric typewriters). MDM is one of those. Although it's still fairly new and good at what it does, I would make the claim that MDM is antiquated technology. In a BYOD world, people don't want to turn control of their devices over to their employers. The age of enterprises controlling devices went out the window with Blackberry's market share.

Containerization is where it's at. With App Containerization, organizations create a secure virtual workspace on mobile devices that enables corporate-approved apps to access, use, edit, and share corporate data while protecting that data from escape to unapproved apps, personal email, OS malware, and other on-device leakage points. For enterprise use-case scenarios, this just makes more sense than MDM. And many of the top MDM vendors have validated the approach by announcing MAM offerings. Still, these solutions maintain a technology silo specific to remote access which doesn't make much sense to me.

As an alternative approach, let's build MAM capabilities directly into the existing Access Management platform. Access Management for the third platform must accommodate mobile device use-cases. There's no reason to manage mobile device access differently than desktop access: it's the same applications, the same data, and the same business policies. User provisioning workflows should accommodate provisioning of mobile apps and data rights, just as they've been extended to provision Privileged Account rights. You don't want or need separate silos.

Cloud

The same can be said for cloud-hosted apps. Cloud apps are simply part of the extended enterprise and should also be managed via the enterprise Access Management platform.

There's been a lot of buzz in the IAM industry about managing access (and providing SSO) to cloud services. A number of niche vendors have even popped up that provide that as their primary value proposition. But the core technologies behind these stand-alone solutions are nothing new. In most cases, it's basic federation. In some cases, it's ESSO-style form-fill. There's no magic to delivering SSO to SaaS apps; in fact, it's typically easier than SSO to enterprise apps because SaaS infrastructures are newer and support newer standards and protocols (SAML, REST, etc.).

My Point

I guess if I had to boil this down, I'm really just trying to dispel the myths about mobile and cloud solutions. When you get past the marketing jargon, we're still talking about Access Management and Identity Governance. Some of the new technologies are pretty cool (containerization solves some interesting, complex problems related to BYOD). But in the end, I'd want to manage enterprise access in one place with one platform. One Identity, One Platform. I wouldn't stand up an IDaaS solution just to have SSO to cloud apps. And I wouldn't want to introduce an MDM vendor to control access from mobile devices.

The third platform simply extends the enterprise beyond the firewall. The concept isn't new and the technologies are mostly the same. As more and newer services adopt common protocols, it gets even easier to support increasingly complex use-cases. An API Gateway, for example, allows a mobile app to access legacy mainframe data over REST protocols. And modern Web Access Management (WAM) solutions perform device fingerprinting to increase assurance and reduce risk while delivering an SSO experience. Mobile Security SDKs enable organizations to build their own apps with native security that's integrated with the enterprise WAM solution (this is especially valuable for consumer-facing apps).

And all of this should be delivered on a single platform for Enterprise Access Management. That's third-platform IAM.

FTC Announces Settlement with Medical Transcription Provider after Discovery of Patient Transcripts on the Internet

On January 31, 2014, the Federal Trade Commission announced a settlement with GMR Transcription Services, Inc. (“GMR”) stemming from allegations that GMR’s failure to provide reasonable security allowed certain patients’ medical transcripts to be exposed to the public on the Internet. The FTC issued an accompanying press release stating it was the FTC’s 50th data security settlement.

GMR provides audio transcription services for businesses in various sectors, including health care providers. GMR typically uses vendors to transcribe the audio file into text. In its complaint, the FTC alleged that GMR failed to ensure that a particular overseas vendor who provided medical transcription services provided adequate security for the text documents it created. Specifically, the FTC alleged that GMR failed to (1) require the vendor via contract to adopt and implement reasonable security measures to protect personal information, and (2) assess and verify whether the vendor employed adequate security measures to protect personal information. As a result of GMR’s deficient security practices, the complaint charged that patients’ medical transcripts, which included their names, Social Security numbers and medical and psychological health information, were available on the Internet.

The settlement, which terminates 20 years from its issuance, includes requirements that GMR:

  • establish, implement and maintain a comprehensive information security program;
  • obtain biennial assessments of its information security program from an independent third party auditor;
  • maintain, and submit to the FTC upon request, all information relied on to complete the biennial assessments of its information security program for three years and all documents related to its compliance with the settlement for five years;
  • deliver copies, and obtain signed acknowledgements, of the settlement to current and future GMR principals, officers, directors, employees and agents; and
  • submit a report to the FTC detailing the manner and form of GMR’s compliance with the settlement.

In its accompanying statement, the FTC said, “What started in 2002 with a single case applying established FTC Act precedent to the area of data security has grown into a vital enforcement program that has helped to increase protections for consumers and has encouraged companies to make safeguarding consumer data a priority.”

Update: On August 21, 2014, the FTC approved the final settlement order with GMR.

Interview with Jared DeMott, Windows Meterpreter’s Extended API – Episode 360, Part 1 – January 30, 2014

Jared DeMott is a principal security researcher at Bromium and has spoken at security conferences such as Black Hat, Defcon, ToorCon, Shakacon, DakotaCon, GRRCon, and DerbyCon. He is active in the security community by teaching his Application Security course.

Windows Meterpreter recently got some new capabilities through the Extended API module by OJ Reeves, also known as TheColonial. He added support for:

  • interacting with the clipboard;
  • querying services;
  • window enumeration; and
  • executing ADSI queries.

In this technical segment we cover the ADSI interface, since it gives us a capability in enterprise environments that was not previously available in Meterpreter, other than through Meatballs' module enum_ad_computers.