Category Archives: Artificial Intelligence (AI)

Stay on Top of Zero-Day Malware Attacks With Smart Mobile Threat Defense

The mobile threat landscape is a dynamic ecosystem in perpetual motion. Cybercriminals are constantly renewing their attack techniques to access valuable data, challenging the capabilities of traditional mobile security solutions. Mobile threat defense technology was conceived to tackle the onslaught of cyberthreats targeting enterprise mobility that standard security solutions have failed to address. Some security experts even note that emerging mobile threats can only be countered with the help of artificial intelligence (AI) and machine learning, both of which are essential to any reliable protection strategy.

Data Exfiltration Is a Serious Threat

Pradeo’s most recent mobile security report found that 59 percent of Android and 42 percent of iOS applications exfiltrate the data they manipulate. Most mobile applications that leak data are not malicious, as they don’t contain any malware. They operate by silently collecting as much data as they can and sending it over networks, sometimes to unverified servers. The danger of these apps lies in the fact that they appear perfectly safe to the security checks of marketplaces such as Google Play and the App Store; as a result, these platforms host many such apps.

Zero-Day Malware Is Growing at a Fast Pace

There are two main categories of malware: the type that has a recognizable viral signature that is included in virus databases, and the zero-day type that features new, uncategorized behaviors. Researchers at Pradeo observed a 92 percent increase in the amount of zero-day malware detected between January and June 2018 on the mobile devices the company secures, compared to a 1 percent increase in known malware. These figures demonstrate how threat actors are constantly renewing their efforts with new techniques to overcome existing security measures.

Enhance Your Mobile Threat Defense With AI

Mobile threats such as leaky apps and zero-day malware are growing both in number and severity. Antivirus and score-based technologies can no longer detect these threats because they rely on viral databases and risk estimations, respectively, without being able to clearly identify behaviors.

To protect their data, organizations need mobile security solutions that automatically replicate the accuracy of manual analysis on a large scale. To precisely determine the legitimacy of certain behaviors, it’s essential to take into consideration the context and to correlate it with security facts. Nowadays, only AI has the capacity to enable a mobile threat defense solution with this level of precision by putting machine learning and deep learning into practice. With these capabilities, undeniable inferences can be drawn to efficiently counter current and upcoming threats targeting enterprise mobility.

Read the 2018 Mobile Security Report from Pradeo

The post Stay on Top of Zero-Day Malware Attacks With Smart Mobile Threat Defense appeared first on Security Intelligence.

Machine Learning Algorithms Are Not One-Size-Fits-All

This is the second installment in a three-part series about machine learning. Be sure to read part one for the full story.

When designing machine learning solutions for security, it’s important to decide on a classifier that will perform the best with minimal error. Given the sheer number of choices available, it’s easy to get confused. Let’s explore some tips to help security leaders select the right machine learning algorithm for their needs.

4 Types of Machine Learning Techniques

If you know which category your problem falls into, you can narrow down your choices. Machine learning algorithms are broadly categorized under four types of learning problems.

1. Supervised Learning

Supervised learning trains the algorithm based on example sets of input/output pairs. The goal is to develop new inferences based on patterns inferred from the sample results. Sample data must be available and labeled. For example, designing a spam detector model by learning from samples of labeled spam/nonspam is supervised learning. Another example is a problem such as the Defense Advanced Research Projects Agency (DARPA)’s Knowledge Discovery and Data Mining 1999 challenge (KDD-99), in which contestants competed to design a machine learning-based intrusion detection system (IDS) from a set of 41 features per instance labeled either “attack” or “normal.”
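As a minimal sketch of this idea, the spam example can be reduced to a classifier that learns word frequencies from labeled input/output pairs; the training messages, labels and smoothing scheme below are invented purely for illustration.

```python
from collections import Counter

def train(messages):
    """Count word frequencies per class from labeled (text, label) pairs."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in messages:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Pick the class whose training messages used these words more often (add-one smoothing)."""
    def score(label):
        c = counts[label]
        total = sum(c.values()) + 1
        s = 1.0
        for w in text.lower().split():
            s *= (c[w] + 1) / total
        return s
    return "spam" if score("spam") > score("ham") else "ham"

# Toy labeled training set — the input/output pairs supervised learning requires.
samples = [
    ("win free money now", "spam"),
    ("free prize claim now", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch tomorrow at noon", "ham"),
]
model = train(samples)
```

New messages are then scored against the patterns inferred from the samples, exactly as the paragraph describes.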

2. Unsupervised Learning

Unsupervised learning uses data that has not been labeled, classified or categorized. The machine is challenged to identify patterns through processes such as clustering, and the outcome is usually unknown. Clustering is a task in which samples are compared with each other in an attempt to find examples that are close to each other, usually by either a measure of density or a distance metric, such as Euclidean distance when projected into a high-dimensional space.

A security problem that falls into this category is network anomaly detection, which is a different method of designing an IDS. In this case, the algorithm doesn’t assume that it knows an attack from a normal input. Instead, the algorithm tries to understand what normal traffic is by watching the network in a (hopefully) clean state so that it can learn the patterns of traffic. Then, anything that falls outside of this “normal” region is a possible attack. Note that there is a great deal of uncertainty with these algorithms because they do not actually know what an attack looks like.
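A minimal sketch of this "learn normal, flag the rest" approach, assuming a single numeric traffic feature (bytes per connection) and a simple three-sigma rule, might look like the following; the values and threshold are illustrative, not a production design.

```python
import statistics

def fit_baseline(normal_values):
    """Learn what 'normal' looks like from (hopefully) clean traffic."""
    return statistics.mean(normal_values), statistics.stdev(normal_values)

def is_anomalous(value, mean, stdev, threshold=3.0):
    """Flag anything more than `threshold` standard deviations from the baseline."""
    return abs(value - mean) > threshold * stdev

# Bytes per connection observed while the network is assumed to be clean.
normal_traffic = [500, 520, 480, 510, 495, 505, 515, 490]
mu, sigma = fit_baseline(normal_traffic)
```

Note that, as the paragraph warns, nothing here knows what an attack looks like; an unusual but benign transfer would be flagged just as readily.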

3. Semisupervised Learning

This type of learning uses a combination of labeled and unlabeled data, typically with the majority being unlabeled. It is primarily used to provide some concept of a known classification to unsupervised algorithms. Several techniques, such as label spreading and weakly supervised learning, can also be employed to augment a supervised training set with a small number of labeled samples. This is an area of considerable ongoing work because partially labeled data is an extremely common scenario.

For the challenge of exploit kit identification, for example, we can find some known exploit kits to train our model, but there are many variants and unknown kits that can’t be labeled. Semisupervised learning can help solve this problem. Note that semisupervised and fully supervised learning can often be differentiated by the choice of features to learn against. Depending on the features, you can often label much more data than you can with another selected set of features.
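One crude way to push a small set of known labels onto unlabeled samples is nearest-neighbor assignment, a much-simplified cousin of label spreading; the two-dimensional feature vectors and labels below are invented for illustration.

```python
def propagate_labels(labeled, unlabeled):
    """Assign each unlabeled point the label of its nearest labeled neighbor."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    result = []
    for point in unlabeled:
        nearest = min(labeled, key=lambda item: dist(item[0], point))
        result.append((point, nearest[1]))
    return result

# Two known samples (labels invented for illustration) and unlabeled
# variants that get pulled toward the nearest known example.
labeled = [((0.0, 0.0), "benign"), ((10.0, 10.0), "exploit_kit")]
unlabeled = [(1.0, 0.5), (9.0, 9.5)]
propagated = propagate_labels(labeled, unlabeled)
```

Real label-spreading algorithms iterate this idea over a similarity graph, but the principle is the same: a few labeled exploit-kit samples seed labels for the many unlabeled variants.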

4. Reinforcement Learning

Unlike the other three types of learning problems, reinforcement learning seeks the optimal path to a desired result by rewarding improvement. The problem set is generally small and the training data well-understood. An example is a generative adversarial network (GAN) such as this experiment, in which the distance, measured in correct and incorrect bits, was used as a loss function to encrypt messages between two neural networks.

Another example is PassGAN, where a GAN was trained to guess passwords more efficiently and the reinforcement function, at a very high level, was how many passwords it guessed correctly. Specifically, PassGAN learned rules to take a dictionary and create likely passwords based on an actual set of leaked passwords. It modeled human behavior in guessing the ways that humans transformed passwords into nondictionary character strings.
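The reward-driven idea can be illustrated with a toy search that, like the bit-distance loss described above, scores a candidate by how many bits it gets right and keeps only changes that improve that score. This is a simple hill climber, not an actual GAN; the target and step count are arbitrary.

```python
import random

def reward(guess, target):
    """Reward is the number of correctly matched bits, echoing the loss described above."""
    return sum(g == t for g, t in zip(guess, target))

def hill_climb(target, steps=500, seed=0):
    """Flip one random bit at a time, keeping the change only when the reward improves."""
    rng = random.Random(seed)
    guess = [0] * len(target)
    for _ in range(steps):
        i = rng.randrange(len(target))
        trial = list(guess)
        trial[i] ^= 1  # flip one bit
        if reward(trial, target) > reward(guess, target):
            guess = trial
    return guess

target = [1, 0, 1, 1, 0, 0, 1, 0]
```

The reinforcement signal (the reward function) is the only feedback the search receives about the desired result, which is the essence of this category of learning.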

Problem Type and Training Data

Another, broader categorization of algorithm is based on the type of problem, such as classification, regression, anomaly detection or dimensionality reduction. There are specific machine learning algorithms designed for each type of problem.

Once you have narrowed down your choices based on the above broader categories, you should consider the algorithm’s bias or variance.

Bias, in practical effect, describes how closely an algorithm can model the training data. A high bias means the model makes strong simplifying assumptions and misses relationships between features and the target variable, which leads to what is known as underfitting.

Variance measures how sensitive the model is to noise in the distribution of training samples. A high variance means that random fluctuations are captured as if they were real patterns, which leads to overfitting. Obviously, we do not want this either, so we seek to minimize both of these values. We can control bias and variance through different parameters. For example, the k-nearest neighbors (k-NN) algorithm with a high value of k gives us high bias and low variance. This makes sense because a larger k averages each prediction over more neighbors, smoothing the decision boundary so that more of the relationships among the features are missed.
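The bias/variance trade-off that k controls can be seen in a toy one-dimensional k-nearest-neighbors classifier: with k=1 the model chases a single mislabeled point (high variance), while a larger k votes across the whole neighborhood and smooths the noise away (higher bias). The data points below are invented for illustration.

```python
from collections import Counter

def knn_predict(train, query, k):
    """Predict by majority vote among the k nearest labeled training points."""
    nearest = sorted(train, key=lambda item: abs(item[0] - query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# One mislabeled (noisy) point at x=2.1 sits inside an otherwise all-'a' region.
train = [(0.0, "a"), (1.0, "a"), (2.0, "a"), (2.1, "b"), (3.0, "a"),
         (8.0, "b"), (9.0, "b"), (10.0, "b")]
```

Querying near the noisy point with k=1 returns the noise label, while k=5 returns the majority label of the region.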

A deeper decision tree has more variance because it contains more branches (decision points) and therefore has more false relationships that model noise in the training data. For artificial neural networks (ANNs), variance increases and bias decreases with the increase in the number of layers. Therefore, deep learning has a very low bias. However, this is at the cost of more noise being represented in the deep learning models.

Other Considerations for Selecting Machine Learning Algorithms

Data type also dictates the choice of algorithm because some algorithms work better on certain data types than others. For example, support vector machines (SVMs), linear and logistic regression, and neural networks require the feature vector to be numerical. On the other hand, decision trees can be more flexible to different data types, such as nominal input.

Some algorithms perform poorly when there is correlation between the features — meaning that multiple features demonstrate the same patterns. A feature that is defined as the result of calculations based on other features would be highly correlated with those input features. Linear regression and logistic regression, along with other algorithms, require regularization to avoid numerical instabilities that come from redundancy in data.
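One quick way to spot such redundancy before training is to check pairwise feature correlation. In this illustrative sketch, a feature computed from other features correlates strongly with them, signaling the kind of redundancy that regularization is meant to tame; the data and the 0.85 threshold are invented.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two feature columns."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A derived feature (total = a + b) is strongly correlated with its inputs.
a = [1.0, 2.0, 3.0, 4.0]
b = [2.0, 1.0, 4.0, 3.0]
total = [x + y for x, y in zip(a, b)]  # redundant, derived feature
```

A correlation check like this is cheap to run across all feature pairs and flags candidates for removal or regularization before the model ever sees them.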

The relationship between independent and dependent variables can also help us determine which algorithm to choose. Because naive Bayes, linear regression and logistic regression perform well if each feature has an independent contribution to the output, these algorithms are poor choices for correlated data. Unfortunately, when features are extracted from voluminous data, it is often impossible to know whether the independence assumption holds true without further analysis. If the relationship is more complex, decision trees or neural networks may be a better choice.

Noise in the output values can also inform which algorithm to choose, as well as the parameters for your selected algorithm. However, you are best served by cleaning the data of noise. If you can’t clean the data, you will most likely receive results with poor classification boundaries. An SVM, for example, is very sensitive to noise because it attempts to draw a margin, or boundary, between or among the classes of the data as they are labeled. A decision tree or bagged tree such as a random forest might be a better choice here since these allow for the tree(s) to be pruned. A shallower tree models less noise.

Dimensionality of the input space is critical to your decision as well. Two common phenomena related to dimensionality are directly at odds with each other. First, the so-called curse of dimensionality occurs when the data contains a very large number of features. If you think about this in spatial terms, a two-dimensional figure versus a three-dimensional figure can look very different: a tightly packed set of points on a plane can spread apart once we add a third dimension. If we try to cluster these points, the distances between two arbitrary points dramatically increase as dimensions are added, leaving the space sparse.

Alternatively, the blessing of dimensionality means that having more dimensions helps us model the problem more completely. In this sense, the plane may pack completely unrelated points together because the two-dimensional coordinates are close to each other even though they have nothing to do with each other. In three dimensions, these points might be spread farther apart, showing us a more complete separation of the unrelated points.

These two ideas cause us, as domain experts and deep learning designers, to pick our features carefully. We do not want to encounter the sparsity of the curse of dimensionality because there might not be a clear boundary, but we also do not want to pack our unrelated examples too close to each other and create an unusable boundary. Some algorithms work better in higher dimensions. In very high dimensional space, for example, you might choose a boosted random forest or a deep learning model.
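The distance-growth effect described above can be demonstrated directly: the average pairwise distance between uniformly random points grows as the dimensionality of the space increases. The point counts and dimensions below are arbitrary choices for the demonstration.

```python
import math
import random

def avg_pairwise_distance(dim, n_points=50, seed=42):
    """Average Euclidean distance between random points in a unit cube of the given dimension."""
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(dim)] for _ in range(n_points)]
    total, pairs = 0.0, 0
    for i in range(n_points):
        for j in range(i + 1, n_points):
            total += math.dist(pts[i], pts[j])
            pairs += 1
    return total / pairs

# The same number of points, packed in 2 dimensions versus spread over 100.
low, high = avg_pairwise_distance(2), avg_pairwise_distance(100)
```

The 100-dimensional average is several times the 2-dimensional one, which is exactly why distance-based methods can struggle when too many features are added.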

Transparency and Functional Complexity

The level of visibility into the model’s decision process is a very important criterion for selecting an algorithm. Algorithms that provide decision trees show clearly how the model reached a decision, whereas a neural network is essentially a black box.

Similarly, you should consider functional complexity and the amount of training data. Functional complexity can be understood in terms of things like speed and memory usage. If you lack sufficient computational power or memory in the machine on which your model will be run, you should choose an algorithm such as naive Bayes or one of the many rules generator-based algorithms. Naive Bayes counts the frequencies of terms, so its model is only as big as the number of features in the data. Rules generators essentially create a series of if/then conditions that the data must satisfy to be classified correctly. These are very good for low-complexity devices. If, however, you have a good deal of power and memory, you might go so far as deep learning, which (in most but not all configurations) requires many more resources.
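A rules generator's output can be pictured as an ordered if/then chain like the following sketch; the features, thresholds and labels are invented for illustration, but the shape — tiny model, trivial evaluation cost — is why such models suit low-complexity devices.

```python
# Each rule: (feature, threshold, label). The first matching rule wins,
# and a default label closes the chain.
rules = [
    ("failed_logins", 10, "suspicious"),     # more than 10 failed logins
    ("bytes_out", 1_000_000, "suspicious"),  # unusually large upload
]

def classify(event, rules, default="normal"):
    """Walk the if/then rule chain; memory use is just the rule list itself."""
    for feature, threshold, label in rules:
        if event.get(feature, 0) > threshold:
            return label
    return default
```

Evaluating an event is a handful of comparisons, and the entire model fits in a few tuples, in contrast to a deep learning model's millions of parameters.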

How Much Training Data Do You Need?

You might also opt for naive Bayes if you have a smaller set of training data available. A small training set will severely limit you, so it is always best to acquire as much data as you can. A few hundred samples may be enough to construct a shallow decision tree or a naive Bayesian model. A few thousand to tens of thousands is usually enough to create an effective random forest, an algorithm that performs very well in most learning problems. Finally, if you want to try deep learning, you may need hundreds of thousands — or, preferably, millions — of training examples, depending on how many layers you want your deep learning model to have.

Try Several Classifiers and Pick the Best

When all else fails, try several classifiers that are suitable for your problem and then compare the results. Key metrics include accuracy of the model and true and false positive rates; derived metrics such as precision, recall and F1; and even metrics as complex as area under the receiver operating characteristic curve (AUC) and area under the precision-recall curve (AUPRC). Each measurement tells you something different about the model’s performance. We’ll define these metrics in greater detail in the final installment of our three-part series.
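The derived metrics mentioned above follow directly from confusion-matrix counts; the detector numbers below are hypothetical, chosen only to show the arithmetic.

```python
def classification_metrics(tp, fp, fn, tn):
    """Derive accuracy, precision, recall and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)            # of everything flagged, how much was real
    recall = tp / (tp + fn)               # of everything real, how much was caught
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"precision": precision, "recall": recall, "f1": f1, "accuracy": accuracy}

# Hypothetical detector: 80 true alerts, 20 false alarms, 10 missed attacks.
m = classification_metrics(tp=80, fp=20, fn=10, tn=890)
```

Comparing candidate classifiers on these numbers side by side, rather than on accuracy alone, makes the trade-offs between false alarms and missed detections explicit.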


How Can Government Security Teams Overcome Obstacles in IT Automation Deployment?

IT automation has become an increasingly critical tool used by enterprises around the world to strengthen their security posture and mitigate the cybersecurity skills shortage. But most organizations don’t know how, when or where to automate effectively, as noted in a recent report by Juniper Networks and the Ponemon Institute.

According to “The Challenge of Building the Right Security Automation Architecture,” only 35 percent of organizations have employees on hand who are experienced enough to respond to threats using automation. The majority of organizations (71 percent) ranked integrating disparate security technologies as the primary obstacle they have yet to overcome as they work toward an effective security automation architecture.

The report pointed out that the U.S. government is likely to struggle with IT automation as well, but there is much that it can learn from the private sector to help streamline the process.

How Hard Can IT Automation Be?

According to the study’s findings, enterprises are struggling to implement automation tools because of the lack of expertise currently available.

Juniper’s head of threat research, Mounir Hahad, and its head of federal strategy, David Mihelcic, said the U.S. government will “definitely struggle with automation as much, if not more than the private sector.”

About half (54 percent) of the survey’s respondents reported that detecting and responding to threats is made easier with automation technologies. Of the 1,859 IT and IT security practitioners in the U.S., the U.K., Germany and France, 64 percent found a correlation between automation and the increased productivity of security personnel.

Be Cautiously Optimistic

Indeed, there is good news for government security teams. Technology Modernization Fund (TMF) awards are now available as an initiative of the Modernizing Government Technology Act (MGT). The Departments of Energy, Agriculture, and Housing and Urban Development were the first three agencies to receive a combined total of $45 million in TMF awards, according to FedScoop.

More government agencies will likely apply for some of the $55 million that remains available for 2018. While there’s a strong likelihood that agencies will continue to invest in automation with some portion of these funds, Juniper Networks warned that they shouldn’t expect an easy deployment.

“The cybercrime landscape is incredibly vast, organized and automated — cybercriminals have deep pockets and no rules, so they set the bar,” said Amy James, director of security portfolio marketing at Juniper Networks, in a press release. “Organizations need to level the playing field. You simply cannot have manual security solutions and expect to successfully battle cybercriminals, much less get ahead of their next moves. Automation is crucial.”

Why Automate?

With so many IT teams unable to recruit sufficient talent to implement automation tools, David “Moose” Wolpoff, chief technology officer (CTO) and co-founder of Randori, questioned why organizations are considering them as part of their security infrastructure in the first place.

“Based on [Juniper’s] findings, I get the impression that government entities may be feeling the same way, buying a bunch of automation tools without knowing quite how or why they are going to use them,” Wolpoff said.

Organizations that dive headfirst into implementing automation, whether government entities or not, will likely run into problems if they fail to plan with business objectives in mind.

“Automation isn’t a solution, it’s a force-multiplier,” explained Wolpoff. “If it’s not enabling your objectives, then you’re just adding a useless tool to your toolbox. My advice to government security teams planning to implement automation would be to sit down with leadership to discuss not only what you want to gain from automation, but where automation makes sense and what it will take to successfully implement.”

Three Tips to Deploy Automation Thoughtfully

Given the need for interoperability within and across the sundry components of different agencies, many conversations about automation will likely result in a green light for implementation. If that’s the case, Hahad offered these three steps security teams can take to overcome IT obstacles.

1. Start With Basic Tasks

Security teams should start by automating administrative tasks before implementing more advanced processes such as event-driven automation once IT departments gain experience.

Too often, organizations bite off more than they can chew when it comes to implementing automation tools, by either misdeploying them or deploying more than they can fully take advantage of. This will only further complicate processes.

2. Collaborate Across Agencies

Replacing legacy systems and deploying automation tools will require much closer collaboration across teams and agencies to identify which framework and architecture they should adopt. A lack of coordination will result in a patchwork of architectures, vendors and tools, which could produce significant gaps and redundancies.

3. Fully Embrace Automation

IT teams are traditionally hesitant to remove the human element from processes, fearing the system will block something critical and cause more problems. If an agency invests in automating its security tools, it should automate across the security processes — from detection and alerting to incident response. The more tasks automation can manage, the more teams will be empowered to complete higher-level work.

It’s important to identify the additional capabilities that don’t require a lot of heavy lifting but will result in saving both time and money. You can avoid unnecessary additional costs that will delay deployment by talking with other agencies that have gone through a similar process.

Depending on how deeply automated those organizations are, it may be appropriate to share experiences to streamline deployments. In the end, streamlining and simplifying programs for every team is the ultimate goal of automation.


Fight Evolving Cybersecurity Threats With a One-Two-Three Punch

When I became vice president and general manager for IBM Security North America, the staff gave me an eye-opening look at the malicious hackers who are infiltrating everything from enterprises to government agencies to political parties. The number of new cybersecurity threats is distressing, doubling from four to eight new malware samples per second between the third and fourth quarters of 2017, according to McAfee Labs.

Yet that inside view only increased my desire to help security professionals fulfill their mission of securing organizations against cyberattacks through client and industry partnerships, advanced technologies such as artificial intelligence (AI), and incident response (IR) training on the cyber range.

Cybersecurity Is Shifting From Prevention to Remediation

Today, the volume of threats is so overwhelming that getting ahead is often unrealistic. It’s not a matter of if you’ll have a breach, it’s a matter of when — and how quickly you can detect and resolve it to minimize damage. With chief information security officers (CISOs) facing a shortage of individuals with the necessary skills to design environments and fend off threats, the focus has shifted from prevention to remediation.

To identify the areas of highest risk, just follow the money to financial institutions, retailers and government entities. Developed countries also face greater risks. The U.S. may have advanced cybersecurity technology, for example, but we also have assets that translate into greater payoffs for attackers.

Remediation comes down to visibility into your environment that allows you to notice not only external threats, but internal ones as well. In fact, internal threats create arguably the greatest vulnerabilities. Users on the inside know where the networks, databases and critical information are, and often have access to areas that are seldom monitored.

Bring the Power of Partnerships to Bear

Once you identify a breach, you’ll typically have minutes or even seconds to quarantine it and remediate the damage. You need to be able to leverage the data available and make immediate decisions. Yet frequently, the tools that security professionals use aren’t appropriately implemented, managed, monitored or tuned. In fact, 44 percent of organizations lack an overall information security strategy, according to PwC’s “The Global State of Information Security Survey 2018.”

Organizations are beginning to recognize that they cannot manage cybersecurity threats alone. You need a partner that can aggregate data from multiple clients and make that information accessible to everyone, from customers to competitors, to help prevent breaches. It’s like the railroad industry: Union Pacific, BNSF and CSX may battle for business, but they all have a vested interest in keeping the tracks safe, no matter who is using them.

Harden the Expanding Attack Surface

Along with trying to counteract increasingly sophisticated threats, enterprises must also learn how to manage the data coming from a burgeoning number of Internet of Things (IoT) devices. This data improves our lives, but the devices give attackers even more access points into the corporate environment. That’s where technology that manages a full spectrum of challenges comes into play. IBM provides an immune system for security from threat intelligence to endpoint management, with a host of solutions that harden your organization.

Even with advanced tools, analysts don’t always have enough hours in the day to keep the enterprise secure. One solution is incorporating automation and AI into the security operations center (SOC). We layer IBM Watson on top of our cybersecurity solutions to analyze data and make recommendations. And as beneficial as AI might be on day one, it delivers even more value as it learns from your data. With increasing threats and fewer resources, any automation you can implement in your cybersecurity environment helps get the work done faster and smarter.

Make Incident Response Like Muscle Memory

I mentioned malicious insider threats, but users who don’t know their behavior creates vulnerabilities are equally dangerous — even if they have no ill intent. At IBM, for example, we no longer allow the use of thumb drives since they’re an easy way to compromise an organization. We also train users from myriad organizations on how to react to threats, such as phishing scams or bogus links, so that their automatic reaction is the right reaction.

This is even more critical for incident response. We practice with clients just like you’d practice a golf swing. By developing that muscle memory, it becomes second nature to respond in the appropriate way. If you’ve had a breach in which the personally identifiable information (PII) of 100,000 customers is at risk — and the attackers are demanding payment — what do you say? What do you do? Just like fire drills, you must practice your IR plan.

Additionally, security teams need training to build discipline and processes, react appropriately and avoid making mistakes that could cost the organization millions of dollars. Response is not just a cybersecurity task, but a companywide communications effort. Everyone needs to train regularly to know how to respond.

Check out the IBM X-Force Command Cyber Tactical Operations Center (C-TOC)

Fighting Cybersecurity Threats Alongside You

IBM considers cybersecurity a strategic imperative and, as such, has invested extensive money and time in developing a best-of-breed security portfolio. I’m grateful for the opportunity to put it to work to make the cyber world a safer place. As the leader of the North American security unit, I’m committed to helping you secure your environments and achieve better business outcomes.


Insights From European Customers on Cybersecurity and Security Awareness

Also co-authored by Luisa Colucci, Lucia Cozzolino, Silvia Peschiera, Emilia Cozzolino and Vita Santa Barletta.

European Cyber Security Month (ECSM), celebrated every year in October, is a European Union (EU) advocacy campaign designed to promote security awareness among citizens.

ECSM has continued to grow since its inception in 2012. The 2018 agenda featured more than 350 events and activities across all EU member countries. ECSM’s schedule also included a rich series of conferences, training sessions, videos, webinars, demonstrations and more, giving eager participants many opportunities to get involved and learn more about security.

The contributors to this article participated in many events and collected many questions about the cybersecurity industry from other attendees. This article gathers those frequent questions, whose answers initially seemed obvious and straightforward but very quickly turned out to be more complicated than we previously thought.

Is Cybersecurity a Challenge or an Opportunity?

For people working in the industry, cybersecurity is an opportunity. This may not be the answer people expect, but being direct is important, and the reality is that cybersecurity drives a multibillion-dollar market. Today, the cybersecurity industry is absorbing a lot of talent, and a lot more will be required in the future. But let's start with the challenges.

The first challenge organizations face is the need for growth. Enterprises must adopt new technologies or they will be left behind, and it is not just about being more profitable. Consider healthcare devices implanted in the body: where tuning them once required a medical visit, today they can be adjusted remotely over WiFi. That convenience, however, means they can also be hacked, so threats impact growth. Compliance impacts growth too: compliance mandates the security controls necessary to mitigate the possibility of an attack, and when a penalty is attached to noncompliance, that penalty directly affects the financials of an enterprise.

The second element to consider is that enterprises have invested a lot in different technologies and processes, but have not spent enough integrating them. Processes are actually less integrated than products. For example, security information and event management (SIEM) is rarely integrated with vulnerability management or patch management, and misconfiguration actually continues to be one of the major vectors for data breaches.

Another challenge is the ever-growing mass of operational technology (OT) and Internet of Things (IoT) devices connected to network infrastructure. Applying IT practices to these devices is a good thing, but it is not always as straightforward as one might imagine because processes are totally different and vary from one industry to another. For example, if something goes wrong on a train, the train stops; if something goes wrong on a plane, you cannot just stop it midflight. In addition, we are exposed to highly sophisticated threat groups that can develop phishing campaigns and malware, take control of devices and launder cryptocurrency.

Moving on to the opportunities: many tend to think that the cybercriminal population is different from the traditional criminal population, but this is not true. Crime has simply moved into cyberspace, leaving the overall entropy unchanged, with one difference: in the real world, the perfect crime is possible, but in the cyber world, threat actors always leave something behind — a trace. We need technologies that can help us find those traces among billions of unstructured records, and artificial intelligence (AI) can help with such a task. Finally, criminals also use the cyber world, which gives us a great opportunity to apply the same investigation techniques developed in cybersecurity to stop more old-fashioned, traditional criminals.

Who Are the Bad Guys?

We cannot always claim that those who work on the defensive side are good and those who work on the offensive side are bad. This would be like saying that those who carry a gun are inherently bad — it is not as simple as one might think. We actually need to consider two elements. The first is that many increasingly think of cybersecurity as something with an intrinsic value that someone else can take care of. For example, if we develop a camera running a traditional operating system with a password stored in the firmware, we tend to assume that someone else will secure that password.

The second is the belief that what happens in the cyber world is only real when we perceive the benefits, and only bad when things go wrong. In the real world, if a door is open, we do not enter unless we are invited or authorized to do so. The same should apply in the cyber world. Instead of trying to work out who is bad and who is good, we should raise security awareness and start treating what happens in the cyber world as serious and real, capable of leading to dramatic situations with serious consequences.

Does Compliance Help?

Compliance helps if it is a continuous process and if we believe in the security controls we have been required to implement. If it is just a box to check to pass the audit, it does not help. Like most security measures, compliance requires the periodic execution of a set of controls. Systems and applications are administered by humans, and humans make mistakes. Moreover, new vulnerabilities are discovered every day, so what seems secure today may not be secure tomorrow. The only answer to this ever-changing landscape is the periodic execution of a strong set of controls.

How Much Should We Invest in Cybersecurity?

Usually, investment is based on the value of the business and its assets. Today, IoT adoption is creating a definite shift because the IoT provides threat actors with millions of devices they can use to launch an attack at virtually no cost to themselves. Therefore, when we introduce a device into our network architecture, we must protect it, and protect ourselves from it, over its entire life cycle. This is something we should weigh carefully while building a secure ecosystem. Cybersecurity has an intrinsic value, but we should all work toward keeping the environment safe and improving security awareness; we cannot assume that someone else will take care of it.

What Happens When You Are Breached?

Beyond the fines and penalties, the loss of customer trust is arguably the greatest damage that results from a cyberattack. Customers do not really care about the money enterprises spend on security; all they care about is that a company lost their data. The scariest thing is that today's cybercriminals are real, advanced and persistent: once they gain a foothold in your infrastructure, they will take every possible step to ensure they keep that access. Therefore, if you stop an attack, do not assume you are in the clear. Always assume that attackers are inside your network, even if you have not yet discovered what they are after.

 

The post Insights From European Customers on Cybersecurity and Security Awareness appeared first on Security Intelligence.

How Daniel Gor Helps Protect the World — and His Grandparents — From Financial Fraud

Daniel Gor might be “just a regular guy” by his own account, but he’s doing important work that shouldn’t be overlooked. As a solution engineer on IBM Trusteer’s fraud analyst team, Daniel spends his days helping to protect our hard-earned cash from fraudsters.

“There’s a nice feeling knowing that you’re with the good guys,” Daniel said as he talked about social engineering and automated hacking from his office in Tel Aviv, Israel. And, as the product of two cultures, Daniel has a more global view of financial fraud than most.

Born in New York and raised in Miami through his early years, Daniel moved to Israel at the age of seven when his parents decided they wanted to be closer to their families. Today Daniel has a family of his own — a wife and seven-month-old daughter — and still lives close to his extended family in Ra’anana, a suburb not far from Tel Aviv.

He said the impact of two very different cultures sometimes comes out in his work style: A combination of American diligence and persistence with a hint of the typical Israeli “chutzpah.” He said his experiences in the army, as part of Unit 8200 in the Israeli Intelligence Corps, and at university gave him “perspective about how to get things done and how to approach tasks.”

Namely, he said, there’s an element of searching for the truth, “even if you don’t go by all the rules.” That comes in handy when writing policies for his fraud analyst colleagues.

Humble Beginnings as a Financial Fraud Analyst

Daniel graduated from university less than two years ago and went straight to work at IBM Trusteer. He started as a fraud analyst, conducting research to determine the rules the team needed to establish to protect financial data for a range of banks. The team writes rules and policies that are applied behind the scenes for the banks’ different applications; these, in turn, help identify behavioral anomalies that may indicate a fraud attempt.

Each analyst is responsible for monitoring the performance of the policies and rules at several banks; this often constitutes hundreds of rules and reams of data. Daniel’s firsthand experience as an analyst informs his current work as a solution engineer to automate processes designed to assist analysts in this monitoring and, in addition, implement machine learning algorithms that can strengthen the policies even more.

But rules and policies are just one part of the equation. Banks also need to build a picture of what each customer’s “digital identity” looks like so they can detect fraud sooner and more efficiently. Without an idea of how Joe from Jacksonville regularly interacts with his accounts, the bank will never know whether Joe’s profile has been compromised. This is an entirely new research field that Daniel is a part of.

Daniel Gor

Automated Behavioral Analysis Is a Game-Changer

In his present role as a solution engineer, Daniel partners with the team to analyze behavior indicators using machine learning models. He trains the models to identify behavioral anomalies and then writes those models as rules in the bank’s policies.

So that phone call you got from the bank asking why you were hesitating or suspiciously stalling while making a transaction? That's likely because, thanks to Daniel's work, your bank identified an anomaly in your normal behavior patterns.

Daniel believes automation technology and AI have had a “great impact” on security in the financial sector.

“The machine learning algorithms are so smart now, they can detect anomalies only by mouse movement or the time that the fraudster spends on a page inside the account,” he explained. “The AI allows us to detect those anomalies in the user’s behaviors.”

Standing Up for Good Values

Unfortunately, fraudsters continue to exploit human innocence, using artful schemes such as social engineering to target vulnerable banking customers and steal their credentials. Daniel said he's been surprised at the sophistication of these fraudsters' methods; they go so far as to call customers while posing as bank personnel who will supposedly help them recover money.

“In a way, I was surprised at how people can exploit people’s good natures and vulnerabilities,” he said.

In light of this threat, Daniel noted that he works in cybersecurity so his grandparents can live their lives without fear of being deceived every time the phone rings. And to those who are considering following in his footsteps, Daniel encouraged aspiring cybersecurity professionals to “just do it.” While tech careers are becoming more and more coveted, he believes the goal of working in a company “where you feel you’re adding to the world with good values” is worth aspiring to.

“In a way, I can say that I’m working for myself,” he said. “I want my money to be safe in a place only people I trust have access to, and it’s very important for the world to have these kinds of shields from people that are eventually trying to steal our money, to steal credentials. The world needs companies that are here to prevent those kinds of cases.”

Meet Fraud Analyst Shir Levin

Is Your SOC Overwhelmed? Artificial Intelligence and MITRE ATT&CK Can Help Lighten the Load

Whether you have a security team of two or 100, your goal is to ensure that the business thrives. That means protecting critical systems, users and data, detecting and responding to threats, and staying one step ahead of cybercrime.

However, security teams today are faced with myriad challenges, such as fragmented threat data, an overabundance of poorly integrated point solutions and lengthy dwell times — not to mention an overwhelming volume of threat intelligence and a dearth of qualified talent to analyze it.

With the average cost of a data breach as high as $3.86 million, up 6.4 percent from 2017, security leaders need solutions and strategies that deliver demonstrable value to their business. But without a comprehensive framework by which to implement these technologies, even the most advanced tools will have little effect on the organization’s overall security posture. How can security teams lighten the load on their analysts while maximizing the value of their technology investments?

Introducing the MITRE ATT&CK Framework

The MITRE Corporation maintains several common cybersecurity industry standards, including Common Vulnerabilities and Exposures (CVE) and Common Weakness Enumeration (CWE). MITRE ATT&CK is a globally accessible knowledge base of adversary tactics and techniques based on real-world observations.

A cyber kill chain describes the various stages of a cyberattack as it pertains to network security. The actual framework, called the Cyber Kill Chain, was developed by Lockheed Martin to help organizations identify and prevent cyber intrusions.

The steps in a kill chain trace the typical stages of an attack from early reconnaissance to completion. Analysts use the chain to detect and prevent advanced persistent threats (APT).

Cyber Kill Chain

The MITRE ATT&CK framework builds on the Cyber Kill Chain, provides a deeper level of granularity and is behavior-centric.

MITRE Modified Cyber Kill Chain

Benefits of adopting the MITRE ATT&CK framework in your security operations center (SOC) include:

  • Helping security analysts understand adversary behavior by identifying tactics and techniques;

  • Guiding threat hunting and helping prioritize investigations based on tactics used;

  • Helping determine coverage and detection capabilities (or gaps in them); and

  • Assessing the overall impact of an attack based on the adversary's behaviors.

How Artificial Intelligence Brings the ATT&CK Framework to Life

To unlock the full range of benefits, organizations should adopt artificial intelligence (AI) solutions alongside the ATT&CK framework. This confluence enables security leaders to automate incident analysis, thereby force-multiplying the team’s efforts and enabling analysts to focus on the most important tasks in an investigation.

Artificial intelligence solutions can also help security teams drive more consistent and deeper investigations. Whether it’s 4:30 p.m. on a Friday or 10 a.m. on a Monday, your investigations should be equally thorough each and every time.

Finally, using advanced AI tools, such as the newly released QRadar Advisor with Watson 2.0, in the context of the ATT&CK framework can help organizations reduce dwell times with a quicker and more decisive escalation process. Security teams can determine root cause analysis and drive next steps with confidence by mapping the attack to their dynamic playbook.

Learn how QRadar Advisor with Watson 2.0 embraces the MITRE ATT&CK framework

Retail Cybersecurity Is Lagging in the Digital Transformation Race, and Attackers Are Taking Advantage

Digital transformation is dominating retailers’ attention — and their IT budgets. As a result, significant gaps in retail cybersecurity are left unfilled just as retail IT faces new challenges, from infrastructure moving to the cloud without clear security policies to an array of new threat vectors focused on personal customer information, ransomware and underprotected business-to-business (B2B) connections.

Just as with line-of-business functions like merchandising and operations, retailers’ cybersecurity functions must undergo a digital transformation to become more holistic, proactive and nimble when protecting their businesses, partners and customers.

Retailers Aren’t Prioritizing Security, and Attackers Are Exploiting the Gaps

According to the retail edition of the “2018 Thales Data Threat Report,” 75 percent of retailers have experienced at least one data breach in the past, with half seeing a breach in the past year alone. That puts retail among the most-attacked industries as ranked by the “2018 IBM X-Force Threat Intelligence Index.”

Underfunded security infrastructure is likely a big reason for this trend; organizations only dedicated an average of around 5 percent of their overall IT budgets to security and risk management, according to a 2016 Gartner report.

While retailers have done a great job addressing payment card industry (PCI) compliance, it has come at a cost to other areas. According to IBM X-Force Incident Response and Intelligence Services (IRIS) research, 78 percent of publicly disclosed point-of-sale (POS) malware breaches in 2017 occurred in the retail sector.

In addition to traditional POS attacks, malicious actors are targeting retailers with new threat vectors that deliver more bang for the buck, such as the following:

  • Personally identifiable information (PII) about customers — Accessible via retailers' B2C portals, this information is used by attackers in bot networks to create false IDs and make fraudulent transactions. An increasingly popular approach involves making purchases with gift cards acquired via fraud.
  • Ransomware — Criminals are exploiting poorly configured apps and susceptible end users to access and lock up data, so they can then extract pricey ransoms from targeted retailers.
  • Unprotected B2B vendor connections — Threat actors can gain access to retail systems by way of digital connections to retailers' partners. A growing target is B2B portals that were constructed without sufficient security standards.

What Are the Biggest Flaws in Retail Cybersecurity?

These new types of attacks take advantage of retailers’ persistent underfunding of critical security defenses. Common gaps include inadequate vulnerability scanning capabilities, unsegmented and poorly designed networks, and using custom apps on legacy systems without compensating controls. When retailers do experience a breach, they tend to address the specific cause instead of taking a more holistic look at their environments.

Retailers also struggle to attract security talent, competing with financial services and other deeper-pocketed employers. The National Institute of Standards and Technology (NIST) reported in 2017 that the global cybersecurity workforce shortage is expected to reach 1.5 million by 2019.

In addition, flaws in governance make retailers more vulnerable to these new types of security threats. To keep up with rapidly evolving consumer demands, many line-of-business departments are adopting cloud and software-as-a-service (SaaS) solutions — but they often do so without any standardized security guidance from IT.

According to the “2017 Thales Data Threat Report,” the majority of U.S. retail organizations planned to use sensitive data in an advanced technology environment such as cloud, big data, Internet of Things (IoT) or containers this year. More than half believed that sensitive data use was happening at the time in these environments without proper security in place. Furthermore, companies undergoing cloud migration at the time of a breach incur $12 per record in additional costs, according to the “2018 Cost of a Data Breach Study.”

To protect their data, retailers need tools to both identify security threats and escalate the response back through their entire infrastructure, including SaaS and cloud services. But many enterprises lack that response capability. What’s more, the “Cost of a Data Breach Study” found that using an incident response (IR) team can reduce the cost of a breach by around $14 per compromised record.

Unfortunately, cybersecurity is not always on the radar in retailers’ C-suites. Without a regularly updated cybersecurity scorecard that reflects an organization’s current vulnerability to attack, senior executives might not regularly discuss the topic, take part in system testing or see cybersecurity as part of business continuity.

3 Steps to Close the Gaps in Your Security Stance

Time isn’t stopping as retailers grapple with these threats. Retail cybersecurity leaders must also monitor the General Data Protection Regulation (GDPR), where compliance requirements are sometimes poorly understood, as well as the emergence of artificial intelligence (AI) in both spoofing and security response. In addition, retailers should keep an eye on the continued uncertainty about the vulnerability of platform-as-a-service (PaaS), microservices, cloud-native apps and other emerging technologies.

By addressing the gaps in their infrastructure, governance and staffing, retailers can more effectively navigate known threats and those that will inevitably emerge. Change is never easy, but the following three steps can help retailers initiate digital transformation and evolve their current approach to better suit today’s conditions:

1. Increase Budgets

According to Thales, 84 percent of U.S. retailers plan to increase their security spending. While allocating these additional funds, it’s important for retailers to take a more holistic view, matching budgets to areas of the highest need. Understanding the costs and benefits of addressing security gaps internally or through outsourcing is a key part of this analysis.

2. Improve Governance

Enacting consistent security guidelines across internally run systems as well as cloud- and SaaS-based services can help retailers ensure that they do not inadvertently open up new vulnerabilities in their platforms. Senior-level endorsement is an important ingredient in prioritizing cybersecurity across the enterprise. Regular security scorecarding can be a valuable tool to keep cybersecurity at the top of executives’ minds.

3. Invest in MSS

A growing number of retailers have realized that starting or increasing their use of managed security services (MSS) can help them achieve a higher level of security maturity at the same price as managing activities in-house, if not at a lower cost. MSS allow retailers’ internal cybersecurity to operate more efficiently, address critical talent shortages and enable retailers to close critical gaps in their current security stance.

Why Digital Transformation Is Critical to Rapid Response

Digital transformation is all about becoming more proactive and nimble to respond to consumers’ rapidly growing expectations for seamless, frictionless shopping. Retailers’ cybersecurity efforts require a similar, large-scale transition to cope with new threat vectors, close significant infrastructure gaps and extend security protocols across new platforms, such as cloud and SaaS. By rethinking their budgets, boosting governance and incorporating MSS into their security operations, retail security professionals can support digital transformation while ensuring the business and customer data remains protected and secure.

Listen to the podcast

Soft Skills, Solid Benefits: Cybersecurity Staffing Shifts Gears to Bring in New Skill Sets

With millions of unfilled cybersecurity jobs and security experts in high demand, chief information security officers (CISOs) are starting to think outside the box to bridge the skills gap. Already, initiatives such as outsourced support and systems automation are making inroads to reduce IT stress and improve efficiency — but they’re not enough to drive long-term success.

Enter the next frontier for forward-thinking technology executives: Soft skills.

How Important Are Soft Skills in the Enterprise?

Soft skills stem from personality traits and characteristics. Common examples include excellent communication, above-average empathy and the ability to demystify tech jargon, as opposed to the certifications and degrees associated with traditional IT skills.

Historically, IT organizations have prioritized harder skills over their softer counterparts — what good is empathy in solving storage problems or improving server uptime? However, as noted by Forbes, recent Google data revealed measurable benefits when teams contain a mix of hard and soft skills. The search giant found that the “highest-performing teams were interdisciplinary groups that benefited heavily from employees who brought strong soft skills to the collaborative process.”

How Can Companies Quantify Qualitative Skill Sets?

Soft skills drive value, but how can organizations quantify qualitative characteristics? Which skill sets offer the greatest value for corporate objectives?

When it comes to prioritization, your mileage may vary; depending on the nature and complexity of IT projects, different skills provide different value. For example, long-term projects that require cross-departmental collaboration could benefit from highly communicative IT experts, while quick-turnaround mobile application developments may require creative thinking to identify potential security weaknesses.

According to Tripwire, there is some industry consensus on the most sought-after skills: Analytical thinking tops the list at 65 percent, followed by good communication (60 percent), troubleshooting (59 percent) and strong ethical behavior (58 percent). CIO calls out skills such as in-house customer service, a collaborative mindset and emotional intelligence.

Start Your Search for Soft Cybersecurity Skills

The rise of soft skills isn’t happening in a vacuum. As noted by a recent Capgemini study, “The talent gap in soft digital skills is more pronounced than in hard digital skills,” with 51 percent of companies citing a lack of hard digital skills and 59 percent pointing to a need for softer skill sets. CISOs must strive to create hiring practices that seek out soft-skilled applicants and a corporate culture that makes the best use of these skills.

When it comes to hiring, start by identifying a shortlist of skills that would benefit IT projects — these might include above-average communication, emotional aptitude or adaptability — then recruit with these skills in mind. This might mean tapping new collar candidates who lack formal certifications but have the drive and determination to work in cybersecurity. It also means designing an interview process that focuses on staff interaction and the ability of prospective employees to recognize and manage interpersonal conflict.

It’s also critical to create a plan for long-term retention. Enterprises must create IT environments that maximize employee autonomy and give staff the ability to implement real change. Just like hard skills, if soft skills aren’t used regularly they can decay over time — and employees won’t wait around if companies aren’t willing to change.

Cultivate Relationships Between Humans and Hardware

Just as IT certifications are adapting to meet the demands of new software, hardware and infrastructure, soft skills are also changing as technology evolves. Consider the rise of artificial intelligence (AI), portrayed positively as a key component of automated processes and negatively as an IT job stealer. Either way, there's an emerging need for IT skills that streamline AI interaction and fill in critical performance gaps.

As noted by HR Technologist, tasks that require emotional intelligence are naturally resistant to AI. These include everything from delivering boardroom presentations to analyzing qualitative user feedback or assisting staff with cybersecurity concerns. Here, the human nature of soft skills provides their core value: Over time, these skills will set employees apart from their peers and organizations apart from the competition. Enterprises must also court professionals capable of communicating with AI tools and human colleagues with equal facility. These soft-centric characteristics position new collar employees as the bridge between new technologies and existing stakeholder expectations.

It’s Time to Prioritize Softer Skill Sets

There’s obviously solid value in soft skills — according to a study from the University of Michigan, these skills offer a 256 percent return on investment (ROI). For CISOs, the message is clear: It’s time to prioritize softer skill sets, re-evaluate hiring and recruitment practices, and prepare for a future where the hard skills of AI-enhanced technology require a soft balance to drive cybersecurity success.

How to Choose the Right Artificial Intelligence Solution for Your Security Problems

Artificial intelligence (AI) brings a powerful new set of tools to the fight against threat actors, but choosing the right combination of libraries, test suites and training models when building AI security systems is highly dependent on the situation. If you're thinking about adopting AI in your security operations center (SOC), the following questions and considerations can help guide your decision-making.

What Problem Are You Trying to Solve?

Spam detection, intrusion detection, malware detection and natural language-based threat hunting are all very different problem sets that require different AI tools. Begin by considering what kind of AI security systems you need.

Understanding the desired outputs helps you determine what data you need and how to test it. Ask yourself whether you're solving a classification or regression problem, building a recommendation engine or detecting anomalies. Depending on the answers to those questions, you can apply one of four basic types of machine learning:

  1. Supervised learning trains an algorithm based on example sets of input/output pairs. The goal is to develop new inferences based on patterns inferred from the sample results. Sample data must be available and labeled. For example, designing a spam detection model by learning from samples labeled spam/nonspam is a good application of supervised learning.
  2. Unsupervised learning uses data that has not been labeled, classified or categorized. The machine is challenged to identify patterns through processes such as cluster analysis, and the outcome is usually unknown. Unsupervised machine learning is good at discovering underlying patterns in data, but is a poor choice for a regression or classification problem. Network anomaly detection is a security problem that fits well in this category.
  3. Semisupervised learning uses a combination of labeled and unlabeled data, typically with the majority being unlabeled. It is primarily used to improve the quality of training sets. For exploit kit identification problems, we can find some known exploit kits to train our model, but there are many variants and unknown kits that can’t be labeled. We can use semisupervised learning to address the problem.
  4. Reinforcement learning seeks the optimal path to a desired result by continually rewarding improvement. The problem set is generally small, and the training data well-understood. An example of reinforcement learning is a generative adversarial network (GAN), such as this experiment from Cornell University in which distance, measured in the form of correct and incorrect bits, is used as a loss function to encrypt messages between two neural networks and avoid eavesdropping by an unauthorized third neural network.
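To make the supervised case concrete, here is a minimal pure-Python sketch of the spam/nonspam example above. The training messages, labels and word-count scoring are invented for illustration; a real system would use a proper probabilistic model and far more data:

```python
from collections import Counter

# Toy labeled training set: the input/output pairs supervised learning requires.
TRAIN = [
    ("win a free prize now", "spam"),
    ("free money click now", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch with the team tomorrow", "ham"),
]

def train(samples):
    """Count word frequencies per label -- the patterns inferred from samples."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in samples:
        counts[label].update(text.split())
    return counts

def classify(model, text):
    """Score a new message by how often its words appeared under each label."""
    scores = {label: sum(c[w] for w in text.split()) for label, c in model.items()}
    return max(scores, key=scores.get)

model = train(TRAIN)
print(classify(model, "claim your free prize"))  # prints "spam"
```

The shape is the same at any scale: labeled examples in, inferences about new, unseen inputs out.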

Artificial Intelligence Depends on Good Data

Machine learning is predicated on learning from data, so having the right quantity and quality is essential. Security leaders should ask the following questions about their data sources to optimize their machine learning deployments:

  • Is there enough data? You’ll need a sufficient amount to represent all possible scenarios that a system will encounter.
  • Does the data contain patterns that machine learning systems can learn from? Good data sets should have frequently recurring values, clear and obvious meanings, few out-of-range values and persistence, meaning that they change little over time.
  • Is the data sparse? Are certain expected values missing? This can create misleading results.
  • Is the data categorical or numeric in nature? This dictates which classifiers we can use.
  • Are labels available?
  • Is the data current? This is particularly important in AI security systems because threats change so quickly. For example, a malware detection system that has been trained on old samples will have difficulty detecting new malware variations.
  • Is the source of the data trusted? You don't want to train your model on publicly available data whose origins you don't trust. Data sample poisoning is just one attack vector through which machine learning-based security models can be compromised.
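A first pass over several of these questions can be automated before any training begins. The sketch below audits a handful of hypothetical flat records (the field names and values are invented) for volume, missing values and label coverage:

```python
# Hypothetical parsed event records; field names and values are invented.
records = [
    {"bytes": 1024, "proto": "tcp", "label": "benign"},
    {"bytes": None, "proto": "udp", "label": "malicious"},
    {"bytes": 880,  "proto": "tcp", "label": None},
]

def audit(rows):
    """Answer a few checklist questions: how much data, how sparse, how labeled."""
    fields = rows[0].keys()
    missing = {f: sum(1 for r in rows if r[f] is None) for f in fields}
    labeled = sum(1 for r in rows if r.get("label") is not None)
    return {
        "n_rows": len(rows),
        "missing_per_field": missing,            # sparsity check
        "labeled_fraction": labeled / len(rows),  # label availability check
    }

report = audit(records)
print(report["n_rows"], report["labeled_fraction"])  # 3 rows, 2/3 labeled
```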

Choosing the Right Platforms and Tools

There is a wide variety of platforms and tools available on the market, but how do you know which is the right one for you? Ask the following questions to help inform your choice:

  • How comfortable are you in a given language?
  • Does the tool integrate well with your existing environment?
  • Is the tool well-suited for big data analytics?
  • Does it provide built-in data parsing capabilities that enable the model to understand the structure of data?
  • Does it use a graphical or command-line interface?
  • Is it a complete machine learning platform or just a set of libraries that you can use to build models? The latter provides more flexibility, but also has a steeper learning curve.

What About the Algorithm?

You’ll also need to select an algorithm to employ. Try a few different algorithms and compare to determine which delivers the most accurate results. Here are some factors that can help you decide which algorithm to start with:

  • How much data do you have, and is it of good quality? Data with many missing values will deliver lower-quality results.
  • Is the learning problem supervised, unsupervised or reinforcement learning? You’ll want to match the data set to the use case as described above.
  • What type of problem are you solving: classification, regression, anomaly detection or dimensionality reduction? Different AI algorithms work best for each type of problem.
  • How important is accuracy versus speed? If approximations are acceptable, you can get by with smaller data sets and lower-quality data. If accuracy is paramount, you’ll need higher quality data and more time to run the machine learning algorithms.
  • How much visibility do you need into the process? Algorithms that provide decision trees show you clearly how the model reached a decision, while neural networks are a bit of a black box.

How to Train, Test and Evaluate AI Security Systems

Training samples should be constantly updated as new exploits are discovered, so it’s often necessary to perform training on the fly. However, training in real time opens up the risk of adversarial machine learning attacks in which bad actors attempt to disrupt the results by introducing misleading input data.

While it is often impossible to perform training offline, it is desirable to do so when possible so the quality of the data can be regulated. Once the training process is complete, the model can be deployed into production.

One common method of testing trained models is to split the data set and devote a portion of the data — say, 70 percent — to training and the rest to testing. If the model is robust, the output from both data sets should be similar.
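That split can be sketched in a few lines, assuming interchangeable labeled samples; the 70/30 fraction and the seed below are arbitrary choices for illustration:

```python
import random

def split(data, train_frac=0.7, seed=42):
    """Shuffle, then hold out (1 - train_frac) of the samples for testing."""
    rows = list(data)
    random.Random(seed).shuffle(rows)  # fixed seed keeps the split reproducible
    cut = int(len(rows) * train_frac)
    return rows[:cut], rows[cut:]

samples = list(range(100))             # stand-in for 100 labeled samples
train_set, test_set = split(samples)
print(len(train_set), len(test_set))   # 70 30
```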

A somewhat more refined approach, called cross-validation, divides the data set into groups of equal size and trains on all but one of them. If the number of groups is “n,” you train on n-1 groups and test on the one group that is left out. This process is repeated many times, leaving out a different group for testing each time, and performance is measured by averaging the results across all repetitions.
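The n-fold rotation described above fits in a few lines of code. In this minimal sketch the "model" is a trivial threshold classifier (the midpoint between the class means) — an illustrative assumption so the fold logic, not the learner, is the focus:

```python
def k_fold_indices(n_samples, k):
    """Partition sample indices 0..n_samples-1 into k roughly equal folds."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(xs, ys, k):
    """Train on k-1 folds, test on the held-out fold, average the scores."""
    folds = k_fold_indices(len(xs), k)
    scores = []
    for held_out in folds:
        train_idx = [j for fold in folds if fold is not held_out for j in fold]
        # "Training": place a threshold midway between the two class means.
        pos = [xs[j] for j in train_idx if ys[j] == 1]
        neg = [xs[j] for j in train_idx if ys[j] == 0]
        threshold = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
        # Score on the fold that was left out of training.
        correct = sum((xs[j] > threshold) == bool(ys[j]) for j in held_out)
        scores.append(correct / len(held_out))
    return sum(scores) / len(scores)

xs = [1, 2, 3, 4, 7, 8, 9, 10]  # hypothetical feature values
ys = [0, 0, 0, 0, 1, 1, 1, 1]   # 0 = benign, 1 = malicious
print("4-fold mean accuracy:", cross_validate(xs, ys, 4))
```

Because every sample is used for testing exactly once, the averaged score is a less noisy estimate of generalization than a single 70/30 split, at the cost of training the model k times.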

Choice of evaluation metrics also depends on the type of problem you’re trying to solve. For example, a regression problem measures the error between the actual value and the predicted value, so the metrics you might use include mean absolute error, root mean squared error, relative absolute error and relative squared error.

For a classification problem, the objective is to determine which categories new observations belong in — which requires a different set of quality metrics, such as accuracy, precision, recall, F1 score and area under the curve (AUC).
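All of the classification metrics named above except AUC can be derived from the four cells of a binary confusion matrix. A minimal sketch, using hypothetical detector output where 1 means malicious:

```python
def classification_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall and F1 for binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false alarms
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # missed threats
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0   # how many alerts were real
    recall = tp / (tp + fn) if tp + fn else 0.0      # how many threats were caught
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)            # harmonic mean of the two
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical detector output: 1 = malicious, 0 = benign.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
print(classification_metrics(y_true, y_pred))
```

For security workloads the precision/recall trade-off is the interesting part: recall counts missed threats while precision counts false alarms that burn analyst time, and which matters more depends on the use case.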

Deployment on the Cloud or On-Premises?

Lastly, you’ll need to select a location for deployment. Cloud machine learning platforms certainly have advantages, such as speed of provisioning, choice of tools and the availability of third-party training data. However, you may not want to share data in the cloud for security and compliance reasons. Consider these factors before choosing whether to deploy on-premises or in a public cloud.

These are just a few of the many factors to consider when building security systems with artificial intelligence. Remember, the best solution for one organization or security problem is not necessarily the best solution for everyone or every situation.

The post How to Choose the Right Artificial Intelligence Solution for Your Security Problems appeared first on Security Intelligence.

Why User Behavior Analytics Is an Application, Not a Cybersecurity Platform

Last year, a cybersecurity manager at a bank near me brought in a user behavior analytics (UBA) solution based on a vendor’s pitch that UBA was the next generation of security analytics. The company had been using a security information and event management (SIEM) tool to monitor its systems and networks, but abandoned it in favor of UBA, which promised a simpler approach powered by artificial intelligence (AI).

One year later, that security manager was looking for a job. Sure, the UBA package did a good job of telling him what his users were doing on the network, but it didn’t do a very good job of telling him about threats that didn’t involve abnormal behavior. I can only speculate about what triggered his departure, but my guess is it wasn’t pretty.

UBA hit the peak of the Gartner hype cycle last year around the same time as AI. The timing isn’t surprising given that many UBA vendors tout their use of machine learning to detect anomalies in log data. UBA is a good application of SIEM, but it isn’t a replacement for it. In fact, UBA is more accurately described as a cybersecurity application that rides on top of SIEM — but you wouldn’t know that the way it’s sometimes marketed.

User Behavior Analytics Versus Security Information and Event Management

While SIEM and UBA do have some similar features, they perform very different functions. Most SIEM offerings are essentially log management tools that help security operators make sense of a deluge of information. They are a necessary foundation for targeted analysis.

UBA is a set of algorithms that analyze log activity to spot abnormal behavior, such as repeated login attempts from a single IP address or large file downloads. Buried in gigabytes of data, these patterns are easy for humans to miss. UBA can help security teams combat insider threats, brute-force attacks, account takeovers and data loss.
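The "repeated login attempts from a single IP address" pattern above is the simplest case to illustrate. The sketch below flags source IPs whose failed-login count crosses a fixed threshold — a deliberately crude stand-in, since real UBA products learn per-user baselines with machine learning rather than using a hand-picked cutoff:

```python
from collections import Counter

def flag_brute_force(events, threshold=5):
    """Flag source IPs with an unusually high number of failed logins.

    `events` is a list of (ip, outcome) tuples. The fixed threshold is an
    illustrative assumption; real UBA tools learn what "normal" looks like.
    """
    failures = Counter(ip for ip, outcome in events if outcome == "failure")
    return sorted(ip for ip, count in failures.items() if count >= threshold)

# Hypothetical log extract: one IP hammers the login endpoint.
log = [("10.0.0.5", "failure")] * 7 + [("10.0.0.9", "success"),
                                       ("10.0.0.9", "failure")]
print(flag_brute_force(log))  # only the repeated-failure IP stands out
```

Even this toy version shows why the pattern is easy for humans to miss: the signal is a count spread across many individually unremarkable log lines, not any single suspicious entry.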

UBA applications require data from an SIEM tool and may include basic log management features, but they aren’t a replacement for a general-purpose SIEM solution. In fact, if your SIEM system has anomaly detection capabilities or can identify whether user access activity matches typical behavior based on the user’s role, you may already have UBA.

Part of the confusion comes from the fact that, although SIEM has been around for a long time, there is no one set of standard features. Many systems are only capable of rule-based alerting or limited to canned rules. If you don’t have a rule for a new threat, you won’t be alerted to it.

Analytical applications such as UBA are intended to address certain types of cybersecurity threat detection and remediation. Choosing point applications without a unified log manager creates silos of data and taxes your security operations center (SOC), which is probably short-staffed to begin with. Many UBA solutions also require the use of software agents, which is something every IT organization would like to avoid.

Start With a Well-Rounded SIEM Solution

A robust, well-rounded SIEM solution should cross-correlate log data, threat intelligence feeds, geolocation coordinates, vulnerability scan data, and both internal and external user activity. When combined with rule-based alerts, an SIEM tool alone is sufficient for many organizations. Applications such as UBA can be added on top for more robust reporting.

Gartner’s latest “Market Guide for User and Entity Behavior Analytics” forecast significant disruption in the market. Noting that the technology is headed downward into Gartner’s “Trough of Disillusionment,” researchers explained that some pure-play UBA vendors “are now focusing their route to market strategy on embedding their core technology in other vendors’ more traditional security solutions.”

In my view, that’s where it belongs. User behavior analytics is a great technology for identifying insider threats, but that’s a use case, not a security platform. A robust SIEM tool gives you a great foundation for protection and options to grow as your needs demand.

The post Why User Behavior Analytics Is an Application, Not a Cybersecurity Platform appeared first on Security Intelligence.

Busting Cybersecurity Silos

Cybersecurity is among the most siloed disciplines in all of IT. The industry is exceedingly fragmented between many highly specialized companies. In fact, according to IBM estimates, the average enterprise uses 80 different products from 40 vendors. To put this in perspective, imagine a law enforcement officer trying to piece together the events surrounding a crime based solely on witness statements written in multiple languages — one in Chinese, another in Arabic, a third in Italian, etc. Security operations centers (SOCs) face a similar challenge all the time.

Security professionals are increasingly taking on the role of investigator, sorting through multiple data sources to track down slippery foes. Third-party integration tools don’t exist, so the customer is responsible for bringing together data from multiple sources and applying insights across an increasingly complex environment.

For example, a security team may need to coordinate access records with Lightweight Directory Access Protocol (LDAP) profiles, database access logs and network activity monitoring data to determine whether a suspicious behavior is legitimate or the work of an impostor. Security information may even need to be brought in from external sources such as social networks to validate an identity. The process is equivalent to performing a massive database join, but with incompatible data spread across a global network.

What Can We Learn About Collaboration From Threat Actors?

Organizations would be wise to observe the strategy of today’s threat actors, who freely share tactics, tools and vulnerabilities on the dark web, accelerating both the speed and impact of their attacks. As defenders of cybersecurity, we need to take a similar approach to sharing security information and building collaborative solutions that will address the evolving cybersecurity threat landscape.

This is easier said than done, as the cybersecurity industry has not been successful in enabling information to be shared, federated and contextualized in a way that drives effective security outcomes. But the barriers aren’t solely technical; corporate policies, customer privacy concerns and regulations all combine to inhibit information sharing. We must enable collaboration in ways that don’t undermine the interests of the collaborators.

Security information sharing is not only useful for threat management, but also for accurately determining IT risk, enabling secure business transformation, accelerating innovation, helping with continuous compliance and minimizing friction for end users. For example, organizations can leverage the identity context of an individual from multiple sources to evaluate the person’s reputation and minimize fraud for both new account creation and continuous transaction validation. This type of risk-based approach allows organizations to quickly support new initiatives, such as open banking application programming interfaces (APIs), and regulations, such as the European Union’s revised Payment Services Directive (PSD2).

The Keys to Building a Community Across Cybersecurity Silos

Sharing security data and insights and developing an ecosystem across cybersecurity silos is a transformational concept for the industry — one that requires people, process and technology adaptations. As organizations embrace secure digital transformations, security professionals need to adopt a risk-based approach to security management built on insights from several sources that include both technical and business contexts.

As security becomes more distributed within an organization, processes need to evolve to support integrated and collaborative operations. Sharing of data and insights will enable multiple business units to coordinate and deliver unified security. Technology needs to be API-driven and delivered as a service so it can integrate with others to facilitate sharing. Security solutions also need to evolve to deliver outcome-based security through capabilities that take advantage of data and insights from multiple vendors, platforms and locations.

The security industry is taking steps to address the complexity problem with standards designed to efficiently share data and insights. Standards such as STIX/TAXII, OpenC2 and CACAO are rapidly maturing and gaining adoption for their ability to enable vendors and their customers to choose what data to share. More than 50 cybersecurity vendors have adopted or plan to adopt STIX as a standard for data interchange, according to OASIS.
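To make the data-interchange idea concrete, here is roughly the shape of a STIX 2.1 indicator object for a malicious IP, built as plain JSON for illustration. The id, timestamps and IP address are placeholder values; real deployments generate these with dedicated tooling such as the OASIS-maintained python-stix2 library:

```python
import json

# Approximate shape of a STIX 2.1 indicator object. The id, timestamps
# and IP address below are illustrative placeholders, not real data.
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": "indicator--00000000-0000-4000-8000-000000000000",
    "created": "2018-10-15T00:00:00.000Z",
    "modified": "2018-10-15T00:00:00.000Z",
    "name": "Suspected brute-force source",
    "pattern": "[ipv4-addr:value = '203.0.113.5']",
    "pattern_type": "stix",
    "valid_from": "2018-10-15T00:00:00.000Z",
}
print(json.dumps(indicator, indent=2))
```

Because every vendor that speaks STIX agrees on this structure, an indicator produced by one tool can be consumed by another without custom translation — which is precisely the silo-busting property the standard exists to provide.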

However, more work needs to be done. Standards and practices need to evolve to enable information sharing within and between industries, as well as ways to exchange methodologies, indicators of compromise (IoCs), response strategies and the like.

Finally, we need a cloud-based community platform that supports open standards-based collaboration for the delivery of integrated cybersecurity solutions. A platform-based approach will bring together people, process, tools, data and insights without expensive customization and integration projects. By increasing the adoption of such a platform, we can create a cybersecurity ecosystem that can address complexity, combat the evolving threat landscape and reduce the need for specialized security skills.

Bringing the Industry Together With IBM Security Connect

IBM has been on a journey to reduce complexity through a security immune system approach, enabling open collaboration through initiatives such as X-Force Exchange and Quad9, and driving open industry standards such as STIX/TAXII. We are furthering our commitment to strengthening cybersecurity with the recent announcement of IBM Security Connect, an open cloud platform for developing solutions based on distributed capabilities and federated data and insights.

Security Connect provides an open data integration service for sharing and normalizing threat intelligence, federated data searching across on-premises and cloud data repositories, and real-time sharing of security alerts, events and insights that can be leveraged by any integrated application or solution. This will pave the way for new methods of delivering innovative outcome-based security solutions powered by artificial intelligence (AI).

Clients and partners can take advantage of this open, cloud-native platform by combining their own data and insights with capabilities from IBM and other vendor technologies. We have already partnered with 15 major security software providers and look forward to adding more.

We are very excited about bringing this concept of data and insights collaboration to life, and grateful for the opportunity to bring cybersecurity silos together to reduce complexity and keep up with the evolving cybersecurity landscape. Early feedback has been gratifying, and we’d love to hear your comments and suggestions. I hope you will join us in this endeavor by learning more about IBM Security Connect and participating in the early field trial.

The post Busting Cybersecurity Silos appeared first on Security Intelligence.