Monthly Archives: October 2017

Hack Naked News #147 – October 31, 2017

Michael Santarcangelo discusses platform security architecture, Kaspersky, the Cyber Peace Corps, and more with Jason Wood on this episode of Hack Naked News!

Full Show Notes: https://wiki.securityweekly.com/HNNEpisode147

Visit http://hacknaked.tv for all the latest episodes!

→Visit our website: https://www.securityweekly.com

→Follow us on Twitter: https://www.twitter.com/securityweekly

→Like us on Facebook: https://www.facebook.com/secweekly

Bogus porn blackmail attempt from adulthehappytimes.com

This blackmail attempt is completely bogus, sent from a server belonging to the adulthehappytimes.com domain.

From:    Hannah Taylor [bill@adulthehappytimes.com]
Reply-To:    bill@adulthehappytimes.com
To:    contact@victimdomail.tld
Date:    31 October 2017 at 15:06
Subject:    ✓ Tiскеt ID: DMS-883-97867 [contact@victimdomail.tld] 31/10/2017 03:35:54

Maybe this will change your life Signed

Design For Behavior, Not Awareness

October was National Cybersecurity Awareness Month. Since today is the last day, I figured now is as good a time as any to take a contrarian perspective on what undoubtedly many organizations just did over the past few weeks; namely, wasted a lot of time, money, and good will.

Most security awareness programs and practices are horrible BS. This extends out to include many practices heavily promoted by the likes of SANS, as well as the current state of "best" (aka, failing miserably) practices. We shouldn't, however, be surprised that it's all a bunch of nonsense. After all, awareness budgets are tiny, the people running these programs tend to be poorly trained and uneducated, and in general there's a ton of misunderstanding about the point of these programs (besides checking boxes).

To me, there are three kinds of security awareness and education objectives:
1) Communicating new practices
2) Addressing bad practices
3) Modifying behavior

The first two areas really have little to do with behavior change so much as they're about communication. The only place where behavior design comes into play is when the secure choice isn't the easy choice, and thus you have to build a different engagement model. Only the third objective is primarily focused on true behavior change.

Awareness as Communication

The vast majority of so-called "security awareness" practices are merely focused on communication. They tell people "do this" or "do that" or, when done particularly poorly, "you're doing X wrong idiots!" The problem is that, while communication is important and necessary, rarely are these projects approached from a behavior design perspective, which means nobody is thinking about effectiveness, let alone how to measure for effectiveness.

Take, for example, communicating updated policies. Maybe your organization has decided to revise its password policy yet again (woe be to you!). You can undertake a communication campaign to let people know that this new policy is going into effect on a given date, and maybe even explain why the policy is changing. But that's about it. You're telling people something theoretically relevant to their jobs, but not much more. This task could be done just as easily by your HR or internal communications team as by anyone else. What value is being added?

Moreover, you're not actually trying to change a behavior here, because your "awareness" practice doesn't have any bearing on it; technical controls do! The password policy is implemented in IAM configurations and enforced through technical controls. There's no need for cognition by personnel beyond "oh, yeah, I now have to construct my password according to new rules." It's not like you're generally giving people the chance to opt out of the new policy, and there's no real decision for them to make. As such, the entire point of your "awareness" campaign is communicating information, without any requirement for people to make better choices.

Awareness as Behavior Design

The real role of a security awareness and education program should be designing for behavior change, then measuring the effectiveness of those behavior change initiatives. The most rudimentary example of this is the anti-phishing program. Unfortunately, anti-phishing programs also tend to be horrible examples because they're implemented completely wrong (e.g., failure to benchmark, failure to actually design for behavior change, failure to get desired positive results). Yes, behavior change is what we want, but we need to be judicious about what behaviors we're targeting and how we're going to get there.

I've had a strong interest in security awareness throughout my career, including having built and delivered awareness training and education programs in numerous prior roles. However, it's only been the last few years that I've started to find, understand, and appreciate the underlying science and psychology that needs to be brought to bear on the topic. Most recently, I completed BJ Fogg's Boot Camp on behavior design, and that's the lens through which I now view most of these flaccid, ineffective, and frankly incompetent "awareness" programs. It's also what's led me to redefine "security awareness" as "behavioral infosec" in order to highlight the importance of applying better thinking and practices to the space.

Leveraging Fogg's models and methods, we learn that Behavior happens when three things come together: Motivation, Ability, and a Trigger (aka a prompt or cue). When designing for behavior change, we must then look at these three attributes together and figure out how to specifically address Motivation and Ability when applying/instigating a trigger. For example, if we need people to start following a better, preferred process that will help reduce risk to the organization, we must find a way to make it easy to do (Ability) or find ways to make them want to follow the new process (Motivation). Thus, when we tell them "follow this new process" (aka Trigger), they'll make the desired choice.
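As a loose, hypothetical sketch (my own toy formalization, not Fogg's actual model, and the threshold value is purely an illustrative assumption), the interplay of the three attributes can be expressed as a gating condition: a trigger only converts into the desired behavior when motivation and ability are jointly sufficient.

```python
# Toy sketch of the Fogg Behavior Model (Behavior = Motivation +
# Ability + Trigger): a trigger produces the target behavior only
# when motivation and ability are jointly sufficient. The 0.25
# threshold is an illustrative assumption, not part of the model.

def behavior_occurs(motivation: float, ability: float, trigger: bool) -> bool:
    """Return True if the prompted behavior is likely to happen.

    motivation, ability: 0.0 (none) .. 1.0 (high).
    trigger: whether a prompt/cue was delivered at all.
    """
    if not trigger:
        return False  # no prompt, no behavior
    # High ability can compensate for modest motivation, and vice versa.
    return motivation * ability >= 0.25

# A hard process (low ability) fails even with a clear prompt...
print(behavior_occurs(motivation=0.6, ability=0.2, trigger=True))   # False
# ...but making the secure choice easy flips the outcome.
print(behavior_occurs(motivation=0.6, ability=0.9, trigger=True))   # True
```

The point of the toy model is the design insight: if the behavior isn't happening, you debug Motivation and Ability before blaming the people you prompted.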

In this regard, technical and administrative controls should be buttressed by behavior design whenever a choice must be made. However, sadly, this isn't generally how security awareness programs view the space, and thus they just focus on communication (a type of Trigger) without much regard for also addressing Motivation or Ability. In fact, many security programs experience frustration and failure because what they're asking people to do is hard, which means the average person is not able to do what's asked. Put a different way, the secure choice must be the easy choice, otherwise it's unlikely to be followed. Similarly, research has shown time and time again that telling people why a new practice is desirable will greatly increase their willingness to change (aka Motivation). Seat belt awareness programs are a great example of bringing together Motivation (particularly focused on negative outcomes from failure to comply, such as the reality of death or serious injury, as well as fines and penalties), Ability (it's easy to do), and Triggers to achieve a desired behavioral outcome.

Overall, it's imperative that we start applying behavior design thinking and principles to our security programs. Every time you ask someone to do something different, you must think about it in terms of Motivation and Ability and Trigger, and then evaluate and measure effectiveness. If something isn't working, rather than devolving to a blame game, instead look at these three attributes and determine if perhaps a different approach is needed. And, btw, this may not necessarily mean making your secure choice easier so much as making the insecure choice more difficult (for example, someone recently noted on Twitter that they simply added a wait() to their code to force deprecation over time).
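As a hedged sketch of that friction idea (the wait() anecdote above; the function names and delay value here are my own, hypothetical choices), a legacy code path can be made progressively less attractive by keeping it working but slow and noisy, while the preferred path stays fast:

```python
import hashlib
import time
import warnings

# Hypothetical illustration of "make the insecure choice harder":
# the legacy function still works, but every call pays a small delay
# and emits a warning, nudging callers toward the replacement.

_DEPRECATION_DELAY_SECONDS = 0.5  # illustrative; could grow per release

def legacy_hash_password(password: str) -> int:
    warnings.warn(
        "legacy_hash_password is deprecated; use secure_hash_password",
        DeprecationWarning,
        stacklevel=2,
    )
    time.sleep(_DEPRECATION_DELAY_SECONDS)  # friction: slow on purpose
    return hash(password)  # stand-in for the old, weak implementation

def secure_hash_password(password: str) -> str:
    # Stand-in for a modern approach; the fast, preferred path.
    # (A real system would use a proper password KDF, not bare SHA-256.)
    return hashlib.sha256(password.encode()).hexdigest()
```

The behavior-design framing: instead of raising callers' Motivation with memos, you lower the Ability cost of the secure path relative to the insecure one.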

Change Behavior, Change Org Culture

Another interesting aspect of this discussion on behavior design is this: organizational culture is the aggregate of behaviors and values. That is to say, when we can change behaviors, we are in fact changing org culture, too. The reverse, then, is also true. If we find bad aspects of org culture leading to insecure practices, we can then factor those back into the respective behaviors, and then start designing for behavior change. In some cases, we may need to break the behaviors into chains of behaviors and tackle things more slowly over time, but looking at the world through this lens can be quite enlightening. Similarly, looking at the values ensconced within org culture also lets us better understand motivations. People generally want to perform their duties, and do a reasonably decent job at it. This is generally how performance is measured, and those duties and performance measures are typically aligned against outcomes and - ultimately - values.

One excellent lesson that DevOps has taught us (there are many) is that we absolutely can change how the org functions... BUT... it does require a shift in org culture, which means changing values and behaviors. These sorts of shifts can be done either top-down or bottom-up, but the reality is that top-down is much easier in many regards, whereas bottom-up requires that greater consensus and momentum be built to achieve a breakthrough.

DevOps itself is cultural in nature and focuses heavily on changing behaviors, ranging from how dev and ops function, to how we communicate and interact, and so on. Shortened feedback loops and creating space for experimentation are both behavioral, which is why so many orgs struggle with how to make them a reality (that is, it's not simply a matter of better tools). Security absolutely should be taking notes and applying lessons learned from the DevOps movement, including investing in understanding behavior design.

---
To wrap this up, here are three quick take-aways:

1) Reinvent "security awareness" to be "behavioral infosec" toward shifting to a behavior design approach. Behavior design looks at Motivation, Ability, and Triggers in effecting change.

2) Understand the difference between controls (technical and administrative) and behaviors. Resorting to basic communication may be adequate if you're implementing controls that take away choices. However, if a new control requires that the "right" choice be made, you must then apply behavior design to the project, or risk failure.

3) Go cross-functional and start learning lessons from other practice areas like DevOps and even HR. Understand that everything you're promoting must eventually tie back into org culture, whether it be through changes in behavior or values. Make sure you clearly understand what you're trying to accomplish, and then make a very deliberate plan for implementing changes while addressing all appropriate objectives.

Going forward, let's try to make "cybersecurity awareness month" about something more than tired lines and vapid pejoratives. It's time to reinvent this space as "behavioral infosec" toward achieving better, measurable outcomes.

Incremental "Gains" Are Just Slower Losses

Anton Chuvakin and I were having a fun debate a couple weeks ago about whether incremental improvements are worthwhile in infosec, or if it's really necessary to "jump to the next curve" (phrase origin: Guy Kawasaki's "Art of Innovation"; watch his TEDx talk) in order to make meaningful gains in security practices. Anton even went so far as to write about it a little over a week ago (sorry for the delayed response - work travel). As promised, I feel it's important to counter his arguments a bit.

Anton started by asking, "[Can] you really jump to the "next curve" in security, or do you have to travel the entire journey from the old ways to the cutting edge?"

Of course, this is a sucker's question, and it belies misunderstanding the whole "jump to the next curve" argument, which was conceived by Kawasaki in relation to innovation, but can be applied to strategy in general. In speaking of the notion, Kawasaki says "True innovation happens when a company jumps to the next curve-or better still, invents the next curve, so set your goals high." And this, here, is the point of arguing for organizations to not settle for incremental improvements, but instead to aim higher.

To truly understand this notion in context, let's first think about what would be separate curves in a security practice vertical. Let's take Anton's example of SOCs, SIEM, log mgmt, and threat hunting. To me, the curves might look like this:
- You have no SOC, SIEM, log mgmt
- You start doing some logging, mostly locally
- You start logging to a central location and having a team monitor and manage
- You build or hire a SOC to more efficiently monitor and respond to alerts
- You add in stronger analytics, automation, and threat hunting capabilities

Now, from a security perspective, if you're in one of the first couple stages today (and a lot of companies are!), then a small incremental improvement like moving to central logs might seem like a huge advance, but you'd be completely wrong. Logically, you're not getting much actual risk reduction by simply dumping all your logs to a central place unless you're also adding monitoring, analytics, and response+hunting capabilities at the same time!

In this regard, "jump to the next curve" would likely mean hiring an MSSP to whom you can send all your log data in order to do analytics and proactive threat hunting. Doing so would provide a meaningful leap in security capabilities and would help an organization catch up. Moreover, even if you spent a year making this a reality, it's a year well-spent, whereas a year spent simply enabling logs without sending them to a central repository for meaningful action doesn't really improve your standing at all.

In Closing

In the interest in keeping this shorter than usual, let's just jump to the key takeaways.

1) The point of "jump to the next curve" is to stop trying to "win" through incremental improvements of the old and broken, instead leveraging innovation to make up lost ground by skipping over short-term "gains" that cost you time without actually gaining anything.

2) The farther behind you are, the more important it is to look for curve-jumping opportunities to dig out of technical debt. Go read DevOps literature on how to address technical debt, and realize that with incremental gains, you're at best talking about maintaining your position, not actually catching up. Many organizations are far behind today and cannot afford such an approach.

3) Attacks are continuing to rapidly evolve, which means your resilience relies directly on your agility and ability to make sizable gains in a short period of time. Again, borrowing from DevOps, it's past time to start leveraging automation, cloud services, and agile techniques to reinvent the security program (and, really, organizations overall) to leap out of antiquated, ineffective practices.

4) Anton quipped that "The risks with curve jumping are many: you can jump and miss (wasting resources and time) or you can jump at the wrong curve or you simply have no idea where to jump and where the next curve is." To a degree, yes, this is true. But, in many ways, for organizations that are 5-10 years behind in practices (again, this applies to a LOT of you!), we know exactly where you should go. Even Gartner advice can be useful in this regard! ;) The worst thing you can do is decide not to take an aggressive approach to getting out of technical security debt for fear of choosing the "wrong" path.

5) If you're not sure where the curves are, here's a few suggestions:
- Identity as Perimeter - move toward Zero Trust, heavily leveraging federated identity/IDaaS
- Leverage an MSSP to centrally manage and monitor log data, including analytics and threat hunting
- Automate, automate, automate! You don't need to invest in expensive security automation tools. You can do a lot with general purpose IT automation tools (like Ansible, Chef, Puppet, Jenkins, Travis, etc.). If you think you need a person staring at a dashboard, clicking a button when a color changes, then I'm sorry to tell you that this can and should be automated.
- If your org writes code, then adopt DevOps practices, getting a CI/CD pipeline built, with appsec testing integrated and automated.
- Heavily leverage cloud services for everything!
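To make the dashboard-watching point above concrete, here is a minimal, hypothetical sketch (the metric source, threshold, and remediation action are all stand-ins of my own, not tied to any particular tool): the "human watches for red, then clicks a button" loop is just a condition check plus an action, which any general-purpose scripting handles fine.

```python
from typing import Optional

# Hypothetical sketch: replacing "a person watches a dashboard and
# clicks a button when a metric goes red" with a trivial control loop.

ALERT_THRESHOLD = 0.9  # e.g., 90% disk usage; illustrative value

def read_metric() -> float:
    """Stand-in for polling a monitoring API; returns 0.0..1.0."""
    return 0.95

def remediate() -> str:
    """Stand-in for the action the human would have clicked."""
    return "rotated logs"

def check_and_act(metric: float) -> Optional[str]:
    # The entire "watch the color" job, expressed as code.
    if metric >= ALERT_THRESHOLD:
        return remediate()
    return None

print(check_and_act(read_metric()))  # -> rotated logs
```

In practice you'd run this on a schedule (cron, Jenkins, an Ansible playbook), but the structure is the same: poll, compare, act.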

Good luck, and may the odds be ever in your favor! :)

VirusTotal += Cybereason

We welcome Cybereason scanner to VirusTotal. In the words of the company:

“Cybereason has developed a comprehensive platform to manage risk, including endpoint detection and response (EDR) and next generation antivirus (NGAV). Cybereason’s NGAV solution is underpinned by a proprietary machine learning (ML) anti-malware engine that was built and trained to block advanced attacks such as never-before-seen malware, ransomware, and fileless malware. The cloud-enabled engine conducts both static binary analysis and dynamic behavioral analysis to increase detection of known and unknown threats. Files submitted to VirusTotal will be analyzed by Cybereason’s proprietary ML anti-malware engine and the rendered verdicts will be available to VirusTotal users.”

Cybereason has expressed its commitment to follow the recommendations of AMTSO and, in compliance with our policy, facilitates this review by SE Labs, an AMTSO-member tester.

Updated 3NT Solutions LLP / inferno.name / V3Servers.net IP ranges

When I was investigating IOCs for the recent outbreak of BadRabbit ransomware I discovered that it downloaded from a domain 1dnscontrol.com hosted on 5.61.37.209. This IP belongs to a company called 3NT Solutions LLP that I have blogged about before. It had been three-and-a-half years since I looked at their IP address ranges so I thought I would give them a refresh. My personal recommendation

Hack Naked News #146 – October 24, 2017

Kaspersky has “nothing to hide”, the internet wants YOU, OS X malware runs rampant, WHOIS database slip-ups, and more. Jason Wood discusses an attack on critical US infrastructure on this episode of Hack Naked News!

Full Show Notes: https://wiki.securityweekly.com/HNNEpisode146

Visit http://hacknaked.tv for all the latest episodes!


→Visit our website: https://www.securityweekly.com

→Follow us on Twitter: https://www.twitter.com/securityweekly

→Like us on Facebook: https://www.facebook.com/secweekly

WAF and IPS. Does your environment need both?

I have been in a fair amount of discussions with management on the need for a WAF and an IPS; they often confuse the two and their basic purposes. The question usually comes up after a pentest or vulnerability assessment: if I can't fix this vulnerability, shall I just put an IPS or WAF in front to protect against intrusion/exploitation? Or, sometimes these products are treated as a silver bullet to thwart attackers instead of fixing the bugs. So, let me tell you - this is not good!

These security products are well suited to protect against something "unknown" or something that you have "unknowingly missed". They are not a silver bullet or an excuse to keep systems/applications unpatched.

Security shouldn't be an AND/OR case. The more the merrier - but only if each product has been configured properly and has a different role to play under the flag of defense in depth! So, while I started this article as WAF vs. IPS, it's time to understand it's WAF and IPS. The ecosystem of your production environment is evolving and so is the threat landscape - it's more complex to protect than it was 5 years ago. Attackers are running at your pace, if not faster and a step ahead. These adversaries also piggy-back on existing threats to launch their exploits. Often something that starts as simple as a DDoS to overwhelm your network ends in an application layer attack. So, network firewall, application firewall, anti-malware, IPS, SIEM etc. all have an important task and should be omnipresent with bells and whistles!

Nevertheless, whether it's a WAF or an IPS, each has its own purpose, and though they can't replace each other, they have gray areas of overlap under which you can rest some of your risks. This post will try to address those gray areas and the associated differences, to make life easier when it comes to choosing between a WAF (Web Application Firewall) and an IPS (Intrusion Prevention System). The assumption is that both are modern products and that the IPS has deep packet inspection capabilities. Now, let's try to understand the infrastructure, environment, and scope of your golden eggs before we take a call on the best way to protect the data:

  1. If you are protecting only "web applications" running over HTTP, then a WAF is enough. An IPS will be the cherry on the cake.
  2. If you are protecting all sorts of traffic - SSH, FTP, HTTP etc. - then a WAF is of less use, as it can't inspect non-HTTP traffic. I would recommend a deep packet inspection IPS.
  3. A WAF must not be considered an alternative to a traditional network firewall. It works at the application layer and hence is primarily useful for HTTP, SSL (decryption), JavaScript, AJAX, ActiveX, and session-management traffic.
  4. A typical IPS does not decrypt SSL traffic, and is therefore insufficient for packet inspection on an HTTPS session.
  5. There is a wide difference in traffic visibility and base-lining for anomalies. While a WAF has an "understanding" of the traffic - HTTP GET, POST, URL, SSL etc. - the IPS only sees network traffic, and can therefore do layer 3/4 checks - bandwidth, packet size, raw protocol decoding/anomalies - but not GET/POST or session management.
  6. An IPS is useful in cases where RDP, SSH or FTP traffic has to be inspected before it reaches the box, to make sure that the protocol is not tampered with or wrapped in another TCP packet etc.

Both technologies have matured and share many gray areas, but understand that a WAF knows and captures the contents of HTTP traffic and can see if there is SQL injection, XSS, or cookie manipulation, while an IPS has very little or no understanding of the underlying application and therefore can't do much with the traffic contents. An IPS can't raise an alarm if someone is getting confidential data out, or even sending a harmful parameter to your application - it will let it through as long as it's a valid HTTP packet.
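To make that visibility difference concrete, here is a minimal, hypothetical sketch (my own toy rule, nothing like a production WAF ruleset): a WAF-style check can reason about a decoded HTTP parameter, while an IPS-style check without application awareness sees only opaque bytes inside a syntactically valid packet.

```python
import re

# Toy WAF-style rule: inspect a decoded HTTP query parameter for a
# classic SQL-injection fingerprint ("' OR 1=1"). Real WAF rulesets
# are far richer; this single regex is purely illustrative.
SQLI_PATTERN = re.compile(r"('|\")\s*(or|and)\s+\d+\s*=\s*\d+", re.IGNORECASE)

def waf_inspect(param_value: str) -> bool:
    """Return True if the application-layer content looks malicious."""
    return bool(SQLI_PATTERN.search(param_value))

def ips_inspect(packet_bytes: bytes) -> bool:
    """An IPS without app awareness can only sanity-check layer 3/4:
    a well-formed HTTP packet carrying SQLi raises no anomaly here."""
    return False  # nothing anomalous at the network layer

payload = "admin' OR 1=1"
print(waf_inspect(payload))           # True: the WAF flags the SQLi
print(ips_inspect(payload.encode()))  # False: the IPS waves it through
```

The same malicious content yields opposite verdicts purely because of where in the stack each device looks, which is the whole argument for layering them rather than choosing one.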

Now, with the information I just shared, try to have a conversation with your management on how to provide the best layered approach in security. How to make sure the network, and application is resilient to complex attacks and threats lurking at your perimeter, or inside.

Be safe.

Malware spam: "Order acknowledgement for BEPO/N1/380006006(2)"

A change to the usual Necurs rubbish, this fake order has a malformed .z archive file which contains a malicious executable with an icon to make it look like an Office document.

Reply-To:    purchase@animalagriculture.org
To:    Recipients [DY]
Date:    24 October 2017 at 06:48
Subject:    FW: Order acknowledgement for BEPO/N1/380006006(2)

Dear All,

Kindly find the attached Purchase order# IT/

Startup Security Weekly #60 – It’s An Exit

Ten sales rules you should break, how to pitch a venture capitalist, guiding employees towards mental health, and updates from Duo Security, Contrast Security, and more on this episode of Startup Security Weekly!

Full Show Notes: https://wiki.securityweekly.com/SSWEpisode60

Visit https://www.securityweekly.com/ssw for all the latest episodes!

How to Minimize Leaking

I am hopeful that President Trump will not block release of the remaining classified documents addressing the 1963 assassination of President John F. Kennedy. I grew up a Roman Catholic in Massachusetts, so President Kennedy always fascinated me.

The 1991 Oliver Stone movie JFK fueled several years of hobbyist research into the assassination. (It's unfortunate the movie was so loaded with fictional content!) On the 30th anniversary of JFK's death in 1993, I led a moment of silence from the balcony of the Air Force Academy chow hall during noon meal. While stationed at Goodfellow AFB in Texas, Mrs B and I visited Dealey Plaza in Dallas and the Sixth Floor Museum.

Many years later, thanks to a 1992 law partially inspired by the Stone movie, the government has a chance to release the last classified assassination records. As a historian and former member of the intelligence community, I hope all of the documents become public. This would be a small but significant step towards minimizing the culture of information leaking in Washington, DC. If prospective leakers were part of a system that was known for releasing classified information prudently, regularly, and efficiently, it would decrease the leakers' motivation to evade the formal declassification process.

Many smart people have recommended improvements to the classification system. Check out this 2012 report for details.

Paul’s Security Weekly #534 – Pizza the Hut

Wendy Nather of Duo Security is our featured interview, Joe Vest and Andrew Chiles of MINIS deliver a tech segment on borrowing Microsoft metadata and digital signatures to “hide” binaries, and in the security news, Microsoft hypocritically mocks Google, hacking child safety smart watches, five steps to building a vulnerability management program, Google Play introduces a bug bounty program, and why is technology outing sex workers?

Full Show Notes: https://wiki.securityweekly.com/Episode534

Visit https://www.securityweekly.com for all the latest episodes!

MS14-085 – Important: Vulnerability in Microsoft Graphics Component Could Allow Information Disclosure (3013126) – Version: 1.1

Severity Rating: Important
Revision Note: V1.1 (October 19, 2017): Corrected a typo in the CVE description.
Summary: This security update resolves a publicly disclosed vulnerability in Microsoft Windows. The vulnerability could allow information disclosure if a user browses to a website containing specially crafted JPEG content. An attacker could use this information disclosure vulnerability to gain information about the system that could then be combined with other attacks to compromise the system. The information disclosure vulnerability by itself does not allow arbitrary code execution. However, an attacker could use this information disclosure vulnerability in conjunction with another vulnerability to bypass security features such as Address Space Layout Randomization (ASLR).

Anatomy of a spambot

For security pros, spambots are known enemies. For the uninitiated, they are unknown entities. And yet they proliferate like ants at a picnic or teens on messaging apps. You might be receiving countless messages from bots every day, and even worse, a bot might be sending out unwanted emails from your computer right now, making you an unwilling participant in digitized mayhem.
