Monthly Archives: October 2017

Hack Naked News #147 – October 31, 2017

Michael Santarcangelo discusses platform security architecture, Kaspersky, the Cyber Peace Corps, and more with Jason Wood on this episode of Hack Naked News!


Bogus porn blackmail attempt

This blackmail attempt is completely bogus, sent from a server belonging to the domain.

From:    Hannah Taylor []
Reply-To:
To:    contact@victimdomail.tld
Date:    31 October 2017 at 15:06
Subject:    ✓ Tiскеt ID: DMS-883-97867

[contact@victimdomail.tld] 31/10/2017 03:35:54 Maybe this will change your life Signed

Design For Behavior, Not Awareness – The Falcon’s View

October was National Cybersecurity Awareness Month. Since today is the last day, I figured now is as good a time as any to take a contrarian perspective on what undoubtedly many organizations just did over the past few weeks; namely, wasted a lot of time, money, and good will.

Most security awareness programs and practices are horrible BS. This extends out to include many practices heavily promoted by the likes of SANS, as well as the current state of "best" (aka, failing miserably) practices. We shouldn't, however, be surprised that it's all a bunch of nonsense. After all, awareness budgets are tiny, the people running these programs tend to be poorly trained and uneducated, and in general there's a ton of misunderstanding about the point of these programs (besides checking boxes).

To me, there are three kinds of security awareness and education objectives:
1) Communicating new practices
2) Addressing bad practices
3) Modifying behavior

The first two areas really have little to do with behavior change so much as they're about communication. The only place where behavior design comes into play is when the secure choice isn't the easy choice, and thus you have to build a different engagement model. Only the third objective is primarily focused on true behavior change.

Awareness as Communication

The vast majority of so-called "security awareness" practices are merely focused on communication. They tell people "do this" or "do that" or, when done particularly poorly, "you're doing X wrong idiots!" The problem is that, while communication is important and necessary, rarely are these projects approached from a behavior design perspective, which means nobody is thinking about effectiveness, let alone how to measure for effectiveness.

Take, for example, communicating updated policies. Maybe your organization has decided to revise its password policy yet again (woe be to you!). You can undertake a communication campaign to let people know that the new policy goes into effect on a given date, and maybe even explain why the policy is changing. But that's about it. You're telling people something theoretically relevant to their jobs, but not much more. This task could be done just as easily by your HR or internal communications team as anyone else. What value is being added?

Moreover, the best part of this is that you're not trying to change a behavior, because your "awareness" practice doesn't have any bearing on it; technical controls do! The password policy is implemented in IAM configurations and enforced through technical controls. There's no need for cognition by personnel beyond "oh, yeah, I now have to construct my password according to new rules." It's not like you're generally giving people the chance to opt out of the new policy, and there's no real decision for them to make. As such, the entire point of your "awareness" is communicating information, but without any requirement for people to make better choices.
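As an illustration of that point, the "control" here is just code: a minimal sketch of a technical password check, enforced at password-change time with no cognition required from the user beyond reading the rejection message. The specific rules below are invented for illustration, not any particular IAM product's.

```python
import re

MIN_LENGTH = 12  # hypothetical policy value; the real one lives in your IAM config

def password_is_valid(password: str) -> bool:
    """Enforce the policy technically: length, one uppercase letter,
    one digit. The user never needs to 'be aware' of the rules in
    advance; the control simply rejects non-compliant choices."""
    return (
        len(password) >= MIN_LENGTH
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[0-9]", password) is not None
    )

print(password_is_valid("correcthorse1Battery"))  # True
print(password_is_valid("short1A"))               # False
```

The point of the sketch: the enforcement happens regardless of any awareness campaign, which is exactly why a campaign around such a policy is communication, not behavior change.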

Awareness as Behavior Design

The real role of a security awareness and education program should be on designing for behavior change, then measuring the effectiveness of those behavior change initiatives. The most rudimentary example of this is the anti-phishing program. Unfortunately, anti-phishing programs also tend to be horrible examples because they're implemented completely wrong (e.g., failure to benchmark, failure to actually design for behavior change, failure to get desired positive results). Yes, behavior change is what we want, but we need to be judicious about what behaviors we're targeting and how we're to get there.

I've had a strong interest in security awareness throughout my career, including having built and delivered awareness training and education programs in numerous prior roles. However, it's only been the last few years that I've started to find, understand, and appreciate the underlying science and psychology that needs to be brought to bear on the topic. Most recently, I completed BJ Fogg's Boot Camp on behavior design, and that's the lens through which I now view most of these flaccid, ineffective, and frankly incompetent "awareness" programs. It's also what's led me to redefine "security awareness" as "behavioral infosec" in order to highlight the importance of applying better thinking and practices to the space.

Leveraging Fogg's models and methods, we learn that Behavior happens when three things come together: Motivation, Ability, and a Trigger (aka a prompt or cue). When designing for behavior change, we must then look at these three attributes together and figure out how to specifically address Motivation and Ability when applying/instigating a trigger. For example, if we need people to start following a better, preferred process that will help reduce risk to the organization, we must find a way to make it easy to do (Ability) or find ways to make them want to follow the new process (Motivation). Thus, when we tell them "follow this new process" (aka Trigger), they'll make the desired choice.
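Fogg's model can be caricatured in a few lines of code, which makes the design question concrete: a Trigger accomplishes nothing when Motivation times Ability sits below the action threshold. The numeric scales and threshold below are my own invention for illustration, not part of Fogg's formal model.

```python
def behavior_occurs(motivation: float, ability: float, trigger: bool,
                    threshold: float = 1.0) -> bool:
    """Toy rendering of Fogg's B = MAT idea: a prompt only works
    when motivation and ability together clear the action threshold."""
    return trigger and (motivation * ability) >= threshold

# Hard task, modest motivation: the trigger alone accomplishes nothing.
print(behavior_occurs(motivation=0.5, ability=0.2, trigger=True))   # False

# Same motivation, but the secure choice was made easy.
print(behavior_occurs(motivation=0.5, ability=3.0, trigger=True))   # True
```

Notice that in the second call nothing about the trigger changed; only Ability did. That is the design lever most awareness programs ignore.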

In this regard, technical and administrative controls should be buttressed by behavior design whenever a choice must be made. Sadly, however, this isn't generally how security awareness programs view the space, and thus they just focus on communication (a type of Trigger) without much regard for also addressing Motivation or Ability. In fact, many security programs experience frustration and failure because what they're asking people to do is hard, which means the average person is not able to do what's asked. Put a different way, the secure choice must be the easy choice, otherwise it's unlikely to be followed. Similarly, research has shown time and time again that telling people why a new practice is desirable will greatly increase their willingness to change (aka Motivation). Seat belt awareness programs are a great example of bringing together Motivation (particularly focused on negative outcomes from failure to comply, such as the reality of death or serious injury, as well as fines and penalties), Ability (it's easy to do), and Triggers to achieve a desired behavioral outcome.

Overall, it's imperative that we start applying behavior design thinking and principles to our security programs. Every time you ask someone to do something different, you must think about it in terms of Motivation and Ability and Trigger, and then evaluate and measure effectiveness. If something isn't working, rather than devolving into a blame game, look at these three attributes and determine whether a different approach is needed. And, btw, this may not necessarily mean making your secure choice easier so much as making the insecure choice more difficult (for example, someone recently noted on Twitter that they simply added a wait() to their code to force deprecation over time).
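As a sketch of that wait() idea: deliberately slow the deprecated code path so the legacy choice gradually becomes the hard one, without breaking callers outright. The weekly step and the cap below are assumptions for illustration.

```python
import time

def deprecation_delay(weeks_since_deprecation: int, step: float = 0.5,
                      cap: float = 10.0) -> float:
    """Grow an artificial delay over time, capped so the legacy
    path stays usable while callers migrate to the replacement."""
    return min(weeks_since_deprecation * step, cap)

def legacy_endpoint(weeks_since_deprecation: int) -> str:
    """Deprecated path: every week it hurts a little more to call."""
    time.sleep(deprecation_delay(weeks_since_deprecation))
    return "legacy result"
```

In Fogg's terms, this is pure Ability engineering: the insecure/legacy behavior gets harder while the preferred behavior stays constant, so no exhortation is needed.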

Change Behavior, Change Org Culture

Another interesting aspect of this discussion on behavior design is this: organizational culture is the aggregate of behaviors and values. That is to say, when we can change behaviors, we are in fact changing org culture, too. The reverse, then, is also true. If we find bad aspects of org culture leading to insecure practices, we can factor those back into the respective behaviors and then start designing for behavior change. In some cases, we may need to break the behaviors into chains of behaviors and tackle things more slowly over time, but looking at the world through this lens can be quite enlightening. Similarly, looking at the values ensconced within org culture also lets us better understand motivations. People generally want to perform their duties, and do a reasonably decent job at it. This is generally how performance is measured, and those duties and performance measures are typically aligned against outcomes and - ultimately - values.

One excellent lesson that DevOps has taught us (there are many) is that we absolutely can change how the org functions... BUT... it does require a shift in org culture, which means changing values and behaviors. These sorts of shifts can be done either top-down or bottom-up, but the reality is that top-down is much easier in many regards, whereas bottom-up requires that greater consensus and momentum be built to achieve a breakthrough.

DevOps itself is cultural in nature and focuses heavily on changing behaviors, ranging from how dev and ops function, to how we communicate and interact, and so on. Shortened feedback loops and creating space for experimentation are both behavioral, which is why so many orgs struggle with how to make them a reality (that is, it's not simply a matter of better tools). Security absolutely should be taking notes and applying lessons learned from the DevOps movement, including investing in understanding behavior design.

To wrap this up, here are three quick take-aways:

1) Reinvent "security awareness" as "behavioral infosec" toward shifting to a behavior design approach. Behavior design looks at Motivation, Ability, and Triggers in effecting change.

2) Understand the difference between controls (technical and administrative) and behaviors. Resorting to basic communication may be adequate if you're implementing controls that take away choices. However, if a new control requires that the "right" choice be made, you must then apply behavior design to the project, or risk failure.

3) Go cross-functional and start learning lessons from other practice areas like DevOps and even HR. Understand that everything you're promoting must eventually tie back into org culture, whether it be through changes in behavior or values. Make sure you clearly understand what you're trying to accomplish, and then make a very deliberate plan for implementing changes while addressing all appropriate objectives.

Going forward, let's try to make "cybersecurity awareness month" about something more than tired lines and vapid pejoratives. It's time to reinvent this space as "behavioral infosec" toward achieving better, measurable outcomes.

Incremental "Gains" Are Just Slower Losses – The Falcon’s View

Anton Chuvakin and I were having a fun debate a couple weeks ago about whether incremental improvements are worthwhile in infosec, or if it's really necessary to "jump to the next curve" (phrase origin: Guy Kawasaki's "Art of Innovation," watch his TedX) in order to make meaningful gains in security practices. Anton even went so far as to write about it a little over a week ago (sorry for the delayed response - work travel). As promised, I feel it's important to counter his arguments a bit.

Anton started by asking, "[Can] you really jump to the "next curve" in security, or do you have to travel the entire journey from the old ways to the cutting edge?"

Of course, this is a sucker's question, and it betrays a misunderstanding of the whole "jump to the next curve" argument, which was conceived by Kawasaki in relation to innovation, but can be applied to strategy in general. In speaking of the notion, Kawasaki says "True innovation happens when a company jumps to the next curve, or better still, invents the next curve, so set your goals high." And this, here, is the point of arguing for organizations to not settle for incremental improvements, but instead to aim higher.

To truly understand this notion in context, let's first think about what would be separate curves in a security practice vertical. Let's take Anton's example of SOCs, SIEM, log mgmt, and threat hunting. To me, the curves might look like this:
- You have no SOC, SIEM, log mgmt
- You start doing some logging, mostly locally
- You start logging to a central location and having a team monitor and manage
- You build or hire a SOC to more efficiently monitor and respond to alerts
- You add in stronger analytics, automation, and threat hunting capabilities

Now, from a security perspective, if you're in one of the first couple stages today (and a lot of companies are!), then a small incremental improvement like moving to central logs might seem like a huge advance, but you'd be completely wrong. Logically, you're not getting much actual risk reduction by simply dumping all your logs to a central place unless you're also adding monitoring, analytics, and response+hunting capabilities at the same time!

In this regard, "jump to the next curve" would likely mean hiring an MSSP to whom you can send all your log data in order to do analytics and proactive threat hunting. Doing so would provide a meaningful leap in security capabilities and would help an organization catch up. Moreover, even if you spent a year making this a reality, it's a year well-spent, whereas a year spent simply enabling logs without sending them to a central repository for meaningful action doesn't really improve your standing at all.
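To underline how little of this work is the plumbing itself: Python's standard library can forward every log record to a central syslog collector in a handful of lines (the "localhost" default below stands in for your real collector host). The hard, valuable part is the analytics and hunting on the receiving end, which is exactly what an MSSP supplies.

```python
import logging
import logging.handlers

def central_logger(collector_host: str = "localhost",
                   port: int = 514) -> logging.Logger:
    """Forward application log records to a central syslog
    collector over UDP. Centralizing the stream is the easy step;
    the value comes from what analyzes it on the other end."""
    logger = logging.getLogger("app")
    logger.setLevel(logging.INFO)
    handler = logging.handlers.SysLogHandler(address=(collector_host, port))
    logger.addHandler(handler)
    return logger

log = central_logger()
log.info("user login from 10.0.0.5")  # fire-and-forget to the collector
```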

In Closing

In the interest of keeping this shorter than usual, let's just jump to the key takeaways.

1) The point of "jump to the next curve" is to stop trying to "win" through incremental improvements of the old and broken, and instead to leverage innovation to make up lost ground, skipping over short-term "gains" that cost you time without actually gaining anything.

2) The farther behind you are, the more important it is to look for curve-jumping opportunities to dig out of technical debt. Go read DevOps literature on how to address technical debt, and realize that with incremental gains, you're at best talking about maintaining your position, not actually catching up. Many organizations are far behind today and cannot afford such an approach.

3) Attacks are continuing to rapidly evolve, which means your resilience relies directly on your agility and ability to make sizable gains in a short period of time. Again, borrowing from DevOps, it's past time to start leveraging automation, cloud services, and agile techniques to reinvent the security program (and, really, organizations overall) to leap out of antiquated, ineffective practices.

4) Anton quipped that "The risks with curve jumping are many: you can jump and miss (wasting resources and time) or you can jump at the wrong curve or you simply have no idea where to jump and where the next curve is." To a degree, yes, this is true. But, in many ways, for organizations that are 5-10 years behind in practices (again, this applies to a LOT of you!), we know exactly where you should go. Even Gartner advice can be useful in this regard! ;) The worst thing you can do is decide not to take an aggressive approach to getting out of technical security debt for fear of choosing the "wrong" path.

5) If you're not sure where the curves are, here are a few suggestions:
- Identity as Perimeter - move toward Zero Trust, heavily leveraging federated identity/IDaaS
- Leverage an MSSP to centrally manage and monitor log data, including analytics and threat hunting
- Automate, automate, automate! You don't need to invest in expensive security automation tools. You can do a lot with general-purpose IT automation tools (like Ansible, Chef, Puppet, Jenkins, Travis, etc.). If you think you need a person staring at a dashboard, clicking a button when a color changes, then I'm sorry to tell you that this can and should be automated.
- If your org writes code, then adopt DevOps practices, getting a CI/CD pipeline built, with appsec testing integrated and automated.
- Heavily leverage cloud services for everything!
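On the dashboard-watcher point above: the eyeball-and-click workflow is just a filter, which any general-purpose language can express. A minimal sketch, with invented alert fields and thresholds standing in for whatever your monitoring stack actually emits:

```python
def triage(alerts: list[dict]) -> list[str]:
    """Do mechanically what a dashboard-watcher does by eye:
    pick out the alerts that warrant action."""
    return [a["host"] for a in alerts
            if a["severity"] >= 8 and not a["acknowledged"]]

alerts = [
    {"host": "web-01", "severity": 9, "acknowledged": False},
    {"host": "web-02", "severity": 3, "acknowledged": False},
    {"host": "db-01",  "severity": 9, "acknowledged": True},
]
print(triage(alerts))  # ['web-01']
```

Anything this trivially encodable as a rule should be feeding an automated response, not a human's eyeballs.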

Good luck, and may the odds be ever in your favor! :)

French DPA Publishes a Compliance Pack Regarding Connected Vehicles

On October 17, 2017, the French Data Protection Authority (“CNIL”), after a consultation with multiple industry participants that was launched on March 23, 2016, published its compliance pack on connected vehicles (the “Pack”) in line with its report of October 3, 2016. The Pack applies to connected vehicles for private use only (not to Intelligent Transport Systems), and describes the main principles data controllers must adhere to under both the current French legislation and the EU General Data Protection Regulation (“GDPR”).   

The CNIL distinguishes between the following three scenarios:

1.     “IN -> IN” scenario

The data collected in the vehicle remains in that vehicle and is not shared with a service provider (e.g., an eco-driving solution that processes data directly in the vehicle to display eco-driving tips in real time on the vehicle’s dashboard).

2.     “IN -> OUT” scenario

The data collected in the vehicle is shared outside of the vehicle for the purposes of providing a specific service to the individual (e.g., when a pay-as-you-drive contract is purchased from an insurance company).

3.     “IN -> OUT -> IN” scenario

The data collected in the vehicle is shared outside of the vehicle to trigger an automatic action by the vehicle (e.g., in the context of a traffic solution that calculates a new route following a car incident).

In addition to listing the provisions already included in its report of October 3, 2016, the CNIL analyzes in detail the three scenarios described above and provides recommendations on the:

  • purposes for which the data can be processed;
  • legal bases controllers can rely upon;
  • types of data that can be collected;
  • required retention period;
  • recipients of the data and use of processors;
  • content of the notice to data subjects;
  • applicable rights of individuals with respect to the processing;
  • security measures to adopt; and
  • registration obligations that may arise under current law.

Beyond being a helpful guide for data controllers to refer to when implementing such tools in vehicles, the Pack might help preview how supervisory authorities will interpret various GDPR provisions.

Advocate General Rejects Facebook’s Claim of Sole Irish Jurisdiction in EU

On October 24, 2017, an opinion issued by the EU’s Advocate General Bot (“Bot”) rejected Facebook’s assertion that its EU data processing activities fall solely under the jurisdiction of the Irish Data Protection Commissioner. The non-binding opinion was issued in relation to the CJEU case C-210/16, under which the German courts sought to clarify whether the data protection authority (“DPA”) in the German state of Schleswig-Holstein could take action against Facebook with respect to its use of web tracking technologies on a German education provider’s fan page without first providing notice.

Although Facebook’s EU data processing activities are handled jointly by Facebook, Inc. in the U.S. and Facebook Ireland, its European headquarters, Facebook has a number of subsidiaries in other EU Member States that promote and sell advertising space on the social network. In line with Directive 95/46/EC and the Google Spain decision, Bot held that the processing of personal data via cookies, which Facebook used to improve its targeting of advertisements, had to be considered as being in the context of the activities of the German establishment. It therefore followed that Facebook fell under the jurisdiction of the German DPA and other DPAs in which its subsidiaries engaged in the promotion and sale of advertising space.

The opinion is non-binding and Facebook awaits the CJEU’s verdict. It should be noted, however, that most CJEU verdicts follow the prior opinions of Advocate Generals. Also, this situation may be interpreted differently under the EU’s General Data Protection Regulation (“GDPR”), which replaces existing EU Member State data protection laws based on Directive 95/46/EC when it becomes applicable on May 25, 2018. Under the GDPR, the One-Stop-Shop mechanism will see the DPA in an organization’s main EU establishment take the role of lead authority. In other EU Member States where the organization has establishments, DPAs will be regarded as ‘concerned authorities,’ but any regulatory action will be driven by the lead authority—which in Facebook’s case likely is the Irish Data Protection Commissioner.

VirusTotal += Cybereason

We welcome Cybereason scanner to VirusTotal. In the words of the company:

“Cybereason has developed a comprehensive platform to manage risk, including endpoint detection and response (EDR) and next generation antivirus (NGAV). Cybereason’s NGAV solution is underpinned by a proprietary machine learning (ML) anti-malware engine that was built and trained to block advanced attacks such as never-before-seen malware, ransomware, and fileless malware. The cloud-enabled engine conducts both static binary analysis and dynamic behavioral analysis to increase detection of known and unknown threats. Files submitted to VirusTotal will be analyzed by Cybereason’s proprietary ML anti-malware engine and the rendered verdicts will be available to VirusTotal users.”

Cybereason has expressed its commitment to follow the recommendations of AMTSO and, in compliance with our policy, facilitates this review by SE Labs, an AMTSO-member tester.

Updated 3NT Solutions LLP / / IP ranges

When I was investigating IOCs for the recent outbreak of BadRabbit ransomware, I discovered that it downloaded from a domain hosted on an IP belonging to a company called 3NT Solutions LLP that I have blogged about before. It had been three-and-a-half years since I looked at their IP address ranges, so I thought I would give them a refresh. My personal recommendation

Hack Naked News #146 – October 24, 2017

Kaspersky has “nothing to hide”, the internet wants YOU, OS X malware runs rampant, WHOIS database slip-ups, and more. Jason Wood discusses an attack on critical US infrastructure on this episode of Hack Naked News!


CIPL Responds to CNIL and Irish DPC on Transparency and Data Transfers under the GDPR

The Centre for Information Policy Leadership at Hunton & Williams LLP (“CIPL”) recently submitted responses to the Irish Data Protection Commissioner (IDPC Response) and the CNIL (CNIL Response) on their public consultations, seeking views on transparency and international data transfers under the EU General Data Protection Regulation (“GDPR”).

The responses address a variety of questions posed by both data protection authorities (“DPAs”) and aim to provide insight on and highlight issues surrounding transparency and international transfers.

Key takeaways from the responses include:


  • Transparency under the GDPR should be approached in a way that is user-centric and promotes effective engagement and trusted relations with individuals, rather than solely focusing on legal compliance.
  • Prevalence and prominence should be given to information that is actionable or otherwise useful for individuals (to reassure them about data use or enable them to make choices).
  • Data privacy supervisory authorities should incentivize and allow more flexibility and innovation in the way organizations comply and deliver transparency under the GDPR, taking into account that there are vastly different types of organizations, from startups to multinationals.
  • The notice requirement should cover passively collected and observed data from an individual. Such data is collected from or on a data subject but without the data subject actively providing it to the data controller (e.g., data collection by CCTV recording, Bluetooth “beacons” or Wi-Fi tracking of the data subject). In addition, the requirement should cover data that was inferred or derived by a data controller from a set of personal data which was originally provided directly by a data subject under Article 14 of the GDPR, subject to applicable exceptions and appropriate to timing in relation to the delivery of the information under the specific data transaction.
  • Organizations should add to the information requirements of the GDPR only where necessary and where this is reasonable in light of the fair processing requirement. In CIPL’s view, the point of Recital 39, which seemingly expands upon the notice requirements of Article 13 and 14, is to capture the spirit of transparency, rather than add further and more specific privacy notice elements.
  • Information fatigue can be avoided, while ensuring compliance with transparency requirements, by (1) embedding transparency mechanisms as much as possible within the relevant product, service or technology; (2) providing the right amount and critical information upfront with an option to view further information; (3) delivering transparency by different methods and times, appropriate to context; (4) ensuring flexibility in how organizations provide information to individuals; and (5) utilizing the exemptions to the notice requirements.

International Data Transfers

  • If Standard Contractual Clauses (“SCCs”) remain a valid transfer mechanism, they will need to be brought in line with the GDPR. Given the substantial administrative work involved, companies should be permitted to rely on their existing SCCs and be provided with a reasonable time frame for transitioning to new SCCs once they are available.
  • There are currently no processor-to-processor SCCs. It is imperative that workable and commercially viable solutions are created to enable lawful transfers between EU-processors and non-EU processors and sub-processors. CIPL believes this should not necessarily be created by the EU Commission or the Article 29 Working Party/European Data Protection Board (“EDPB”), but instead that relevant industry stakeholders should lead the creation of model terms and clauses to cover processor-to-processor data transfers.
  • The Binding Corporate Rules (“BCRs”) approval process should be further streamlined and improved to facilitate more expedient processing times. This means that DPAs will need to dedicate more resources to BCR review and approvals and ensure more optimal sharing of information and expertise between different DPAs on this topic.
  • There are significant synergies between GDPR certification and BCRs. The two instruments are presented as separate concepts, but, arguably, BCRs are a de facto form of certification and should be leveraged and “upgraded” to GDPR certification under Articles 42 and 43 of the GDPR. Certification is a stamp of recognition that an organization is GDPR compliant; recognition should be extended to BCRs as a high and uniform level of compliance with the GDPR, as a robust privacy compliance program is a prerequisite to obtaining BCR approval. Companies that update their BCRs to comply with the GDPR should not be required to go through another comprehensive review and re-approval process, but should have a special “fast track” process for updating their BCRs in line with the GDPR and future GDPR certifications.
  • If BCRs are viewed as a “badge of recognition” for a company’s privacy program and receive approval by DPAs, then any data transfers to a BCR-approved company (and also between BCR-approved companies) should be allowed based on BCR compliance by the company or companies and without any additional necessary legal transfer mechanism (e.g., SCCs or derogations).
  • Developing GDPR certifications for purposes of data transfers should be a strategic priority for the Commission and/or EDPB. The ultimate goal should be to facilitate the interoperability of GDPR certifications with other transfer mechanisms such as the APEC Cross-Border Privacy Rules and other relevant certifications. Therefore, GDPR certifications, where possible, should avoid creating conflicting substantive and procedural requirements with other systems.

WAF and IPS. Does your environment need both?

I have been in a fair amount of discussions with management on the need for a WAF and an IPS; they often confuse the two and their basic purposes. The question usually comes up after a pentest or vulnerability assessment: if I can't fix this vulnerability, can I just put an IPS or WAF in front of it to prevent intrusion/exploitation? Sometimes these products are even treated as a silver bullet to thwart attackers instead of fixing the bugs. So, let me tell you: this is not good!

Security products are well suited to protect you from something "unknown" or something you have unknowingly missed. They are not a silver bullet, nor an excuse to keep systems and applications unpatched.

Security shouldn't be an AND/OR case. The more the merrier, but only if each product has been configured properly and has a different role to play under the flag of defense in depth! So, while I started this article as WAF vs. IPS, it's time to understand that it's WAF and IPS. The ecosystem of your production environment is evolving and so is the threat landscape; it's more complex to protect than it was 5 years ago. Attackers are running at your pace, if not faster and a step ahead. These adversaries also piggyback on existing threats to launch their exploits. Often something that starts as simple as a DDoS to overwhelm your network ends in an application-layer attack. So network firewall, application firewall, anti-malware, IPS, SIEM, etc. all have an important task and should be omnipresent with bells and whistles!

Nevertheless, whether it's a WAF or an IPS, each has its own purpose, and though they can't replace each other, they often have gray areas in which you can rest your risks. This blog will try to address these gray areas and the associated differences to make life easier when it comes to a WAF (Web Application Firewall) or an IPS (Intrusion Prevention System). The assumption is that both are modern products and that the IPS has deep packet inspection capabilities. Now, let's try to understand the infrastructure, environment and scope of your golden eggs before we take a call on the best way to protect the data:

  1. If you are protecting only "web applications" running on HTTP sockets, then a WAF is enough; an IPS would be the cherry on top.
  2. If you are protecting all sorts of traffic (SSH, FTP, HTTP, etc.), then a WAF is of less use, as it can't inspect non-HTTP traffic. I would recommend a deep-packet-inspection IPS.
  3. A WAF must not be considered an alternative to a traditional network firewall. It works at the application layer and hence is primarily useful for HTTP, SSL (decryption), JavaScript, AJAX, ActiveX and session-management traffic.
  4. A typical IPS does not decrypt SSL traffic, and is therefore insufficient for packet inspection on HTTPS sessions.
  5. There is a wide difference in traffic visibility and baselining for anomalies. While a WAF has an "understanding" of the traffic (HTTP GET, POST, URLs, SSL, etc.), an IPS sees only network traffic and can therefore do layer 3/4 checks (bandwidth, packet size, raw protocol decoding/anomalies) but not GET/POST or session management.
  6. An IPS is useful where RDP, SSH or FTP traffic has to be inspected before it reaches the box, to make sure the protocol is not tampered with or wrapped inside another TCP packet.

Both technologies have matured and overlap in many gray areas, but understand that a WAF knows and captures the contents of HTTP traffic to see if there is SQL injection, XSS or cookie manipulation, while an IPS has very little or no understanding of the underlying application and therefore can't do much with the traffic contents. An IPS can't raise an alarm if someone is exfiltrating confidential data, or even sending a harmful parameter to your application; it will let it through as long as it's a valid HTTP packet.
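To caricature that difference in visibility: a WAF-style check can match patterns inside HTTP parameters, while an IPS without application awareness sees only packet-level attributes, so a well-formed request carrying a hostile payload sails through. The signatures below are deliberately simplistic illustrations, not production rules.

```python
import re

# Simplistic signatures for illustration only; real WAF rulesets
# are far more sophisticated than a pair of regexes.
WAF_PATTERNS = [
    re.compile(r"('|--|\bUNION\b|\bSELECT\b)", re.IGNORECASE),  # SQLi-ish
    re.compile(r"<script\b", re.IGNORECASE),                     # XSS-ish
]

def waf_inspect(http_params: dict) -> bool:
    """WAF view: understands GET/POST parameters, so it can flag
    malicious *content* inside an otherwise valid HTTP request."""
    return any(p.search(v) for v in http_params.values() for p in WAF_PATTERNS)

def ips_inspect(packet_size: int, protocol: str) -> bool:
    """IPS view (without app awareness): only layer 3/4 attributes,
    so a valid packet carrying a hostile payload is not flagged."""
    return packet_size > 65535 or protocol not in {"TCP", "UDP"}

params = {"user": "alice", "q": "1' UNION SELECT password FROM users--"}
print(waf_inspect(params))       # True: the payload matched a signature
print(ips_inspect(1500, "TCP"))  # False: valid traffic, attack unseen
```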

Now, with the information I just shared, try to have a conversation with your management about how to provide the best layered approach to security, and how to make sure your network and applications are resilient to the complex attacks and threats lurking at your perimeter, or inside it.

Be safe.

Malware spam: "Order acknowledgement for BEPO/N1/380006006(2)"

A change to the usual Necurs rubbish: this fake order has a malformed .z archive file which contains a malicious executable with an icon to make it look like an Office document.

Reply-To:    purchase@animalagriculture.org
To:    Recipients [DY]
Date:    24 October 2017 at 06:48
Subject:    FW: Order acknowledgement for BEPO/N1/380006006(2)

Dear All,

Kindly find the attached Purchase order# IT/

FTC Issues Policy Statement on COPPA and Voice Recordings

On October 23, 2017, the Federal Trade Commission issued a policy enforcement statement providing additional guidance on the applicability of the Children’s Online Privacy Protection Rule (“COPPA Rule”) to the collection of children’s audio voice recordings. The FTC previously updated the COPPA Rule in 2013, adding voice recordings to the definition of personal information, which led to questions about how the COPPA Rule would be enforced against organizations that collect a child’s voice recording for the sole purpose of issuing a command or request.

In the policy statement, the FTC reiterated the COPPA Rule’s requirement that websites and online services directed at children obtain verifiable parental consent before collecting an audio recording of the voice of a child under age 13. The policy statement clarifies that the FTC will not pursue enforcement action against a website operator for not obtaining this consent before collecting a voice recording when (1) the voice recording is collected solely to replace written words (e.g., to perform a search), and (2) the recording is held for a brief period of time and only for that purpose. The FTC clarified that (1) the policy does not apply where the website operator requests personal information (e.g., name) via voice recording; (2) the website operator must provide clear notice of its collection and use of voice recordings, and its deletion policy, in its privacy policy; and (3) the website operator may not use the audio file for any other purpose before its destruction. Website operators must continue to comply with COPPA’s requirements in all other respects.

Startup Security Weekly #60 – It’s An Exit

Ten sales rules you should break, how to pitch a venture capitalist, guiding employees towards mental health, and updates from Duo Security, Contrast Security, and more on this episode of Startup Security Weekly!

Full Show Notes:

Visit for all the latest episodes!

Article 29 Working Party Releases Guidelines on Automated Individual Decision-Making and Profiling

On October 17, 2017, the Article 29 Working Party (“Working Party”) issued Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 (the “Guidelines”). The Guidelines aim to clarify the EU General Data Protection Regulation’s (“GDPR’s”) provisions that address the risks arising from profiling and automated decision-making.

The Guidelines are divided into five sections, outlined below, and these are followed by best practice recommendations intended to assist controllers in meeting the GDPR requirements on profiling and automated decision-making:

  1. Definitions of profiling and automated decision-making, and the GDPR’s approach to these concepts;
  2. Specific provisions on automated decision-making as defined in Article 22 of the GDPR;
  3. General provisions on profiling and automated decision-making;
  4. Children and profiling; and
  5. Data protection impact assessments.

Key takeaways from the Guidelines include:

  • Profiling means gathering information about an individual (or a group of individuals) and analyzing their characteristics or behavior patterns to place them into a certain category or group, and/or to make predictions or assessments (e.g., about their ability to perform a task, interests or likely behavior).
  • There is a prohibition on fully automated individual decision-making, including profiling that has a legal or similarly significant effect, but there are exceptions to the rule. There should be measures in place to safeguard the data subject’s rights, freedoms and legitimate interests.
  • When engaging in automated decision-making under the Article 22(2)(a) exception (necessary for the performance of a contract), necessity should be interpreted narrowly. The controller must be able to show the profiling is necessary, taking into account whether a less privacy-intrusive method could be adopted.
  • The Working Party clarifies that with respect to providing meaningful information about the logic involved in automated decision-making, the controller should find simple ways to tell the data subject about the rationale behind, or the criteria relied on, in reaching the decision without necessarily always attempting a complex explanation of the algorithms used or disclosure of the full algorithm. The information provided should, however, be meaningful to the data subject.
  • Providing data subjects with information about the significance and envisioned consequences of processing surrounding automated decision-making means that information must be provided about intended or future processing, and how the automated decision-making might affect the data subject. For example, in the context of credit scoring, they should be entitled to know the logic underpinning the processing of their data and resulting in a yes or no decision, and not simply information on the decision itself.
  • “Legal Effect” means processing activity that has an impact on someone’s legal rights or affects a person’s legal status, or their rights under a contract.
  • Regarding the meaning of the phrase “similarly significantly affects him or her,” the threshold for significance must be similar to a legal effect, whether or not the decision has a legal effect. The effects of processing must be more than trivial and must be sufficiently great or important to be worthy of attention.
  • To qualify as human intervention, the controller must ensure that any oversight of the decision is meaningful, rather than just a token gesture. It should be carried out by someone who has the authority and competence to change the decision. The review should undertake a thorough assessment of all the relevant data, including any additional information provided by the data subject.
  • The Working Party does not consider Recital 71 to be an absolute prohibition on solely automated decision-making relating to children, but notes that it should only be carried out in certain circumstances (e.g., to protect a child’s welfare).
  • The requirement to carry out a Data Protection Impact Assessment in the case of a systematic and extensive evaluation of personal aspects based on automated processing, including profiling, on which decisions are based that produce legal effects or similarly significant effects, is not limited to “solely” automated processing/decisions.

The Working Party will accept comments on the Guidelines until November 28, 2017.

How to Minimize Leaking

I am hopeful that President Trump will not block release of the remaining classified documents addressing the 1963 assassination of President John F. Kennedy. I grew up a Roman Catholic in Massachusetts, so President Kennedy always fascinated me.

The 1991 Oliver Stone movie JFK fueled several years of hobbyist research into the assassination. (It's unfortunate the movie was so loaded with fictional content!) On the 30th anniversary of JFK's death in 1993, I led a moment of silence from the balcony of the Air Force Academy chow hall during noon meal. While stationed at Goodfellow AFB in Texas, Mrs B and I visited Dealey Plaza in Dallas and the Sixth Floor Museum.

Many years later, thanks to a 1992 law partially inspired by the Stone movie, the government has a chance to release the last classified assassination records. As a historian and former member of the intelligence community, I hope all of the documents become public. This would be a small but significant step towards minimizing the culture of information leaking in Washington, DC. If prospective leakers were part of a system that was known for releasing classified information prudently, regularly, and efficiently, it would decrease the leakers' motivation to evade the formal declassification process.

Many smart people have recommended improvements to the classification system. Check out this 2012 report for details.

Paul’s Security Weekly #534 – Pizza the Hut

Wendy Nather of Duo Security is our featured interview, Joe Vest and Andrew Chiles of MINIS deliver a tech segment on borrowing Microsoft metadata and digital signatures to “hide” binaries, and in the security news, Microsoft hypocritically mocks Google, hacking child safety smart watches, five steps to building a vulnerability management program, Google Play introduces a bug bounty program, and why is technology outing sex workers?

Full Show Notes:

Visit for all the latest episodes!

Trump to Nominate New FTC Chair and Commissioner

On October 19, 2017, the White House announced that President Donald J. Trump plans to nominate two individuals to serve as commissioners of the Federal Trade Commission. President Trump selected Joseph Simons to lead the FTC as its chairman for a seven-year term, beginning September 26, 2017. Simons’ background has focused primarily on antitrust matters. From June 2001 to August 2003, he led the FTC’s antitrust initiative as Director of the FTC’s Bureau of Competition.

In addition, Rohit Chopra will be nominated to serve as a commissioner for the remainder of a seven-year term that is set to expire on September 25, 2019. Chopra has focused on consumer protection issues involving financial services, most recently as a Senior Fellow at the Consumer Federation of America. He previously served as Assistant Director at the Consumer Financial Protection Bureau.

With these two nominations, one of the FTC’s five commissioner seats remains to be filled.

European Parliament’s LIBE Committee Approves Amended ePrivacy Regulation

On October 19, 2017, the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs (“LIBE Committee”) narrowly voted to approve an amended version of the e-Privacy Regulation (“Regulation”). The committee vote is an important step in the process within the European Parliament. This vote will be followed by a vote of the European Parliament in its plenary session on October 23-26. If the plenary also votes in favor, the European Parliament will have a mandate to begin negotiations with the Member States in the Council. If these negotiations (commonly known as “trilogue”) succeed, the Regulation will be adopted.

Also on October 19, 2017, the Centre for Information Policy Leadership at Hunton & Williams (“CIPL”) published a study on the impact of the proposed Regulation (the “Study”). The Study was prepared by Professor Niko Haerting of Haerting Rechtsanwaelte, Berlin, whom CIPL had asked for an independent expert opinion on the proposal.

The Study examines in detail the European Commission’s January 10, 2017 proposal on the Regulation. The Commission’s stated goal is to replace the existing ePrivacy Directive (“Directive”) with the Regulation at the same time the EU General Data Protection Regulation (“GDPR”) comes into effect on May 25, 2018.

Main Conclusions of the Study

  • The Regulation focuses on protecting individuals’ privacy mainly through its consent requirements. It would therefore be up to individuals to protect their own privacy by providing or refusing consent. Shifting the responsibility from businesses to individual consumers cannot be regarded as enhancing privacy protections. Moreover, this would ultimately undermine digital services in Europe.
  • In many cases, the Regulation’s rules deviate from the GDPR. This is bound to lead to legal uncertainty and will be harmful to European businesses. There is a direct conflict between the Regulation’s consent requirements and the more flexible approach in Art. 6 of the GDPR that requires consent in some cases but also allows for data processing without consent, such as when processing is necessary for the performance of a contract or when the service provider or a third party has a legitimate interest that outweighs the interests of data subjects.


The Study is published against the backdrop of today’s LIBE Committee vote. The vote was 31 votes in favor, 24 votes against and 1 abstention. The outcome of the plenary of the European Parliament (in a vote which is expected on October 26, 2017) is not clear and the negotiations with the Member States in the Council have yet to begin.

The main focus of both the GDPR and the Regulation/Directive is the protection of European citizens’ privacy. While the Regulation, like the Directive, is rooted in data protection for the telecommunications sector, it has a significantly wider impact.

The Regulation contains numerous references to the GDPR. According to Art. 1(3), the provisions of the Regulation are intended to “particularise and complement” the GDPR (“lex specialis”). At the same time, the Regulation aims to protect “fundamental rights and freedoms of natural and legal persons in the provision and use of electronic communications services” (Art. 1(1)) while ensuring “free movement of electronic communications data and electronic communications services” in the EU (Art. 1 (2)).

The Study focuses on the proposed new “cookie provisions” (Art. 8, 9 and 10) and on the proposed “interference provisions” (Art. 5, 6 and 7), including the “wiretapping provisions” of Art. 11. It also addresses some of the Regulation’s consequences for connected and autonomous cars.

In particular, the Study seeks to answer the following questions:

  • Practicability: Are the proposed provisions coherent, and does their application to standard business models lead to reasonable results?
  • Overlap: Are the proposed provisions in line with the provisions of the GDPR? Are there contradictions?
  • Freedom of Communication: Do the proposed provisions foster the free flow of communication data in Europe, or do they unintentionally impose obstacles on communication?
  • User-Friendliness: Do the proposed provisions meet the expectations of reasonable users?

The Study’s Key Findings

  • With the prohibition on “processing” communications data, the Regulation would be a serious obstacle to digital innovations in Europe and to the development of new beneficial services based on data use and machine learning. The prohibition on “processing” would constitute a substantial setback to the European digital economy.
  • Excessive consent requirements would lead to red tape and tick boxes, which are likely to irritate consumers. This will negatively impact their online experience.
  • Art. 5 of the Regulation introduces a new prohibition on the “processing” of communications data. However, it is exactly the “processing” of communications data that the customer pays for (as opposed to “interception” or “surveillance”). The prohibition should be limited to interception and surveillance of messages.
  • With respect to metadata, it is unclear why IP addresses and other “online identifiers” clearly covered by the GDPR need to be regulated in the Regulation as well.
  • Art. 6 of the Regulation does not work for machine-to-machine communication, wearables, connected cars and the Internet of Things (“IoT”). In machine-to-machine-communication, raw data are transmitted that qualify neither as “content” nor metadata.
  • When customers use digital communications services (e.g., email, messenger), they will expect their messages to be stored by the provider. Moreover, they will expect to be in control when it comes to the erasure of messages. Therefore, the provider’s duty to erase content is against the user’s interests and contrary to the user’s expectations.
  • Given that cookies are “online identifiers” already covered by the GDPR, it is unclear why additional provisions are needed in the Regulation.
  • Web analytics tools are, on the one hand, recognized as “legitimate and useful.” On the other hand, hardly any analytics tool will be covered by the exception from the consent requirement, because the exception applies only when a website operator uses his or her own analytics tool. This is contradictory.
  • Fingerprinting falls under the “cookie provision” of Art. 8 of the Regulation and requires consent. For the time being, it does not appear to be realistic to expect that there will soon be browser settings on the market that meet the requirements of consent for fingerprinting. There are presently no standards for such settings on the market, and the standards that can be found in the Regulation focus exclusively on cookies and neglect fingerprinting and other non-cookie tracking technologies.
  • Wi-Fi and Bluetooth tracking are prohibited by Art. 8(2) of the Regulation, and no consent exception is provided. This is not in line with the intention of making consent the “central legal ground” of the Regulation.
  • The obligation to display “prominent notices” limits the lawfulness of Wi-Fi and Bluetooth tracking to tools that monitor a building or a pre-defined area.
  • The overreliance on consent is based on false assumptions when it comes to legal persons. The Regulation aims at protecting privacy and extending such protection to legal persons. However, it is unclear whose consent is relevant.
  • Art. 10 of the Regulation obliges app providers to enable users to prevent the storing of “information.” However, it is such storage that often will be a fundamental function of the app. There is no reason why the provider of a messenger app should be obliged to enable his or her customers to prevent the storing of messages, pictures and voice files on their smartphones given that the receipt and (temporary) storage of content is the main purpose of the app.

MS14-085 – Important: Vulnerability in Microsoft Graphics Component Could Allow Information Disclosure (3013126) – Version: 1.1

Severity Rating: Important
Revision Note: V1.1 (October 19, 2017): Corrected a typo in the CVE description.
Summary: This security update resolves a publicly disclosed vulnerability in Microsoft Windows. The vulnerability could allow information disclosure if a user browses to a website containing specially crafted JPEG content. An attacker could use this information disclosure vulnerability to gain information about the system that could then be combined with other attacks to compromise the system. The information disclosure vulnerability by itself does not allow arbitrary code execution. However, an attacker could use this information disclosure vulnerability in conjunction with another vulnerability to bypass security features such as Address Space Layout Randomization (ASLR).

Privacy and Data Security Risks in M&A Transactions: Part 2 of Video Series

In our final two segments of the series, industry leaders Lisa Sotto, partner and chair of Hunton & Williams’ Privacy and Cybersecurity practice; Steve Haas, M&A partner at Hunton & Williams; Allen Goolsby, special counsel at Hunton & Williams; and Eric Friedberg, co-president of Stroz Friedberg, along with moderator Lee Pacchia of Mimesis Law, continue their discussion on privacy and cybersecurity in M&A transactions and what companies can do to minimize risks before, during and after a deal closes. They discuss due diligence, deal documents and best practices in privacy and data security. The discussion wraps up with lessons learned in the rapidly changing area of data protection in M&A transactions, and predictions for what lies ahead.

Watch the full videos: Segment 3 – Before, During and After a Deal and Segment 4 – Lessons Learned and Outlook for the Future.

View the first two segments.

Anatomy of a spambot

For security pros, spambots are known enemies. For the uninitiated, they are unknown entities. And yet they proliferate like ants at a picnic or teens on messaging apps. You might be receiving countless messages from bots every day, and even worse, a bot might be sending out unwanted emails from your computer right now, making you an unwilling participant in digitized mayhem.