Category Archives: threat detection

Stay on Top of Zero-Day Malware Attacks With Smart Mobile Threat Defense

The mobile threat landscape is a dynamic ecosystem in perpetual motion. Cybercriminals are constantly renewing their attack techniques to access valuable data, challenging the capabilities of traditional mobile security solutions. Mobile threat defense technology was conceived to tackle the onslaught of cyberthreats targeting enterprise mobility that standard security solutions have failed to address. Some security experts even note that emerging mobile threats can only be countered with the help of artificial intelligence (AI) and machine learning, both of which are essential to any reliable protection strategy.

Data Exfiltration Is a Serious Threat

Pradeo’s most recent mobile security report found that 59 percent of Android and 42 percent of iOS applications exfiltrate the data they manipulate. Most mobile applications that leak data are not malicious, as they don’t contain any malware. They operate by silently collecting as much data as they can and sending it over networks, sometimes to unverified servers. What makes these apps harmful is that they appear perfectly safe to the security checks of marketplaces such as Google Play and the App Store, which consequently host many of them.

Zero-Day Malware Is Growing at a Fast Pace

There are two main categories of malware: the type that has a recognizable viral signature that is included in virus databases, and the zero-day type that features new, uncategorized behaviors. Researchers at Pradeo observed a 92 percent increase in the amount of zero-day malware detected between January and June 2018 on the mobile devices the company secures, compared to a 1 percent increase in known malware. These figures demonstrate how threat actors are constantly renewing their efforts with new techniques to overcome existing security measures.

Enhance Your Mobile Threat Defense With AI

Mobile threats such as leaky apps and zero-day malware are growing both in number and severity. Antivirus and score-based technologies can no longer detect these threats because they rely on viral databases and risk estimations, respectively, without being able to clearly identify behaviors.

To protect their data, organizations need mobile security solutions that automatically replicate the accuracy of manual analysis on a large scale. To precisely determine the legitimacy of certain behaviors, it’s essential to take into consideration the context and to correlate it with security facts. Nowadays, only AI has the capacity to enable a mobile threat defense solution with this level of precision by putting machine learning and deep learning into practice. With these capabilities, undeniable inferences can be drawn to efficiently counter current and upcoming threats targeting enterprise mobility.

Read the 2018 Mobile Security Report from Pradeo

The post Stay on Top of Zero-Day Malware Attacks With Smart Mobile Threat Defense appeared first on Security Intelligence.

Deception technology: Authenticity and why it matters

This article is the second in a five-part series being developed by Dr. Edward Amoroso in conjunction with the deception technology team from Attivo Networks. The article provides an overview of the central role that authenticity plays in the establishment of deception as a practical defense and cyber risk reduction measure. Requirements for authenticity in deception: The over-arching goal for any cyber deception system is to create target computing and networking systems and infrastructure that …

The post Deception technology: Authenticity and why it matters appeared first on Help Net Security.

3 Security Business Benefits From a 2018 Gartner Magic Quadrant SIEM Leader

Last week Gartner published its 2018 Magic Quadrant for Security Information and Event Management (SIEM). As in past years, the report reflects the steady evolution of SIEM technology and customers’ growing demand for simple SIEM functionality built on an architecture that scales to meet both current and future use cases.

So how do we interpret from Gartner what it means to be a SIEM leader in 2018? Based on a quick dissection, the main characteristics of a leading SIEM tool are centered around innovation in early threat detection, adaptation to customer environments and strong market presence.

Read the 2018 Gartner Magic Quadrant for SIEM

What Separates a SIEM Leader From the Rest of the Market?

The first element, early detection via analytics — more clearly stated as efficacy in threat detection and response — remains the centerpiece of any effective SIEM solution. Security analysts and security operations center (SOC) leaders today need to detect both known and unknown threats in real time. By applying analytics to a combination of threat intelligence, behavioral analytics and a wide variety of security monitoring data, organizations can reduce both their time to detection and their total alert volumes. While these two basic outcomes — reduced dwell time and fewer alerts — sound tactical, they can ultimately help security teams become less distracted and more effective at managing threats, which helps reduce business risks and liabilities and maintain a positive brand reputation.

IBM is a leader in the 2018 Gartner Magic Quadrant for SIEM

The second element of Gartner’s definition of a leader, rapid adaptation to customer environments, is becoming a core factor in how much return on investment (ROI) customers realize and how quickly they realize it. Ad hoc content, add-on applications and flexibility in upgrading the platform are all required to mature a SIEM system in an affordable way once it’s installed.

Also included in this element is the ability to scale the platform in terms of both network coverage and security capabilities. By using out-of-the-box content to automate and streamline more security workflows, organizations can better combat challenges related to the shortage of skills and headcount and better enable the business to adopt new technologies that can help increase its competitive position.

The third element of a leading SIEM is strong market presence and easy access to services. Growth rates around the world still vary based on local security maturity, regulations and specific geographic needs. Customers are looking for access to local resources to help meet their unique requirements and learn lessons from a local community that has already gone through SIEM deployment. It is not uncommon for customers to first select a SIEM platform and then find a local managed service provider or systems integrator for operational support or oversight. Support for this approach provides SIEM users with multiple options to help optimize operating expenses without cutting into expertise.

Take Your SIEM Deployment to the Next Level

IBM was named a SIEM leader in the 2018 Gartner Magic Quadrant report. The IBM QRadar platform has demonstrated continuous innovation that has expanded its value, from its origins in network behavior anomaly detection to real-time threat detection to more recent developments that help automate investigations and streamline orchestrated response processes.

Unique to QRadar is its simplified approach to providing a continuous evolution of use cases via optimized content packs, easily downloadable apps from the IBM Security App Exchange and flexible deployment options that support organizations regardless of where they are on their cloud journey.

Market presence also contributed to IBM’s leadership; a community of thousands of customers worldwide, a strong business partner network and a wealth of services options allow QRadar customers to easily find knowledgeable local resources that can help them maintain and scale their platform.

To learn more about Gartner’s full review of QRadar, SIEM market trends and vendor evaluation criteria, download your complimentary copy of the 2018 Gartner Magic Quadrant for SIEM. We also invite you to register for our upcoming webinar, “Stay Ahead of Threat Detection & Response with a Scalable SIEM Platform.” The webinar will take place Dec. 18 at 11 a.m. ET and will be available to watch on-demand thereafter.

Register for the webinar

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, express or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

The post 3 Security Business Benefits From a 2018 Gartner Magic Quadrant SIEM Leader appeared first on Security Intelligence.

An introduction to deception technology

This article is the first in a five-part series being developed by Dr. Edward Amoroso in conjunction with the deception technology team from Attivo Networks. The article provides an overview of the evolution of deception, including its use in the enterprise, with emphasis on the practical requirements that have emerged in recent years to counter the growing number and nature of malicious threats. Purpose of deception for cyber: The idea of modern deception in cyber security …

The post An introduction to deception technology appeared first on Help Net Security.

Advancing Security Operations Through the Power of a SIEM Platform

The 2018 Gartner Magic Quadrant for Security Information and Event Management (SIEM) was recently published, and reading it seemed like a good occasion to reflect upon the latest trends in this well-established yet continuously evolving market. In its early days, the SIEM market was primarily driven by audit and compliance needs. But, as the threat landscape evolved and attackers became more sophisticated, SIEM solutions have had to keep up. A technology that was initially meant for compliance evolved into threat detection, and now, in many cases, it sits at the epicenter of the security operations center (SOC).

While not all SIEM providers have survived this decade of transition, the leading vendors have evolved to help security teams keep up with today’s constant barrage of threats, better defend new environments from advanced and targeted attacks, and effectively address threats despite a growing cybersecurity skills shortage. While some SIEMs did die, the old adage of “SIEM is dead” is certainly not true.

Read the full report

3 Key Trends in SIEM Evolution

When I look back at the last 12 to 18 months, three key trends have had a major impact on the next phase of SIEM evolution.

First, adversaries continue to use tactics such as well-crafted spear phishing emails to exploit users, compromise credentials and use insider access to steal critical enterprise data. As these threats increasingly become signature-less, defenders need new ways to identify not just known threats, but also symptoms of unknown threats. As this need has grown, so have technologies such as machine learning and advanced historical analysis, which help detect anomalous behaviors and enable defenders to respond faster so they can stop attackers before damage is done.

Second, the adoption of new technologies, such as cloud infrastructure and the Internet of Things (IoT), has increased the attack surface and, in many cases, created new blind spots. While these new systems and environments can help create new business advantages, they can also create new risks. As a result, more than ever before, security teams are looking to SIEM solutions to gain a comprehensive, centralized view into cloud environments, on-premises environments, and network and user activity to increase their situational awareness and enable them to better manage cybersecurity risks.

Third, thanks to a growing cybersecurity skills shortage, organizations are demanding solutions that are easier to deploy, manage and maintain. Modern threat detection capabilities require an ever-growing number of data sources, and the addition of those data sources can require significant integration and tuning effort. Resource-constrained teams simply don’t have the luxury of allocating this much time or effort to managing a solution. Instead, they demand ongoing assistance to continuously improve detection and investigation processes — without needing to dedicate expensive in-house experts or buy months of professional services.

Security Teams Need a More Advanced SIEM Solution

In the past year, the leading SIEM vendors recognized the above three market trends and invested significant effort into evolving their solutions to address the challenges. Through developing open app-based ecosystems, vendors are now able to deliver prebuilt integrations, security use cases and reports that can be easily consumed. As a result, customers are able to address what matters most in their unique environments without introducing unnecessary complexity or requiring major system upgrades.

For example, to address more sophisticated attackers, security teams should be able to leverage prebuilt, fully integrated analytics for targeted use cases, such as detecting endpoint threats, compromised user credentials and data exfiltration over the Domain Name System (DNS). This approach can help security teams leverage their vendor’s expertise to outpace attackers — without having to become experts in each and every technology themselves.

To better address the rapid adoption of new technologies such as infrastructure-as-a-service (IaaS), security teams should be able to easily integrate their SIEM platform with cloud environments such as AWS, Azure and Google Cloud to gain centralized visibility into misconfigurations and emerging threats such as cryptocurrency mining.

Lastly, to help address the challenges associated with the cybersecurity skill shortage, organizations can look to solutions that provide built-in automation and intelligence. Unique offerings such as cognitive assistants are available to provide intelligent insights into the root cause, scope, severity and attack stage of a threat, helping security analysts punch above their cybersecurity weight class. Additional expertise can be provided with built-in guidance to help analysts address new use cases and more easily tune systems. As a result of these innovations, security teams can become more effective despite having limited resources and budgets.

Leading the Way With New SIEM Platform Innovations

As the landscape continues to evolve, cybersecurity teams can no longer rely on closed, complex solutions for threat detection and investigation. Instead, they need to be able to rely on a proven, flexible SIEM platform that offers open ecosystems packed with out-of-the-box integrations, security use cases and reports to address a variety of needs — ranging from compliance to advanced threat detection — across on-premises and cloud-based environments.

This year, we’re proud that IBM was named a Leader in the 2018 Gartner Magic Quadrant for SIEM, marking our 10th consecutive year in the “Leaders” Quadrant. But we’re even prouder that organizations continue to choose IBM QRadar day in and day out because of our demonstrated commitment to their evolving needs.

Read the full report

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, express or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

The post Advancing Security Operations Through the Power of a SIEM Platform appeared first on Security Intelligence.

5 Tips for Uncovering Hidden Cyberthreats with DNS Analytics

The internet has fueled growth opportunities for enterprises by allowing them to establish an online presence, communicate with customers, process transactions and provide support, among other benefits. But it’s a double-edged sword: A cyberattack that compromises these business advantages can easily result in significant losses of money, customers, credibility and reputation, and increase the risk of going out of business entirely. That’s why it’s critical to have a cybersecurity strategy in place to protect your enterprise from attackers that exploit internet vulnerabilities.

How DNS Analytics Can Boost Your Defense

The Domain Name System (DNS) is one of the foundational components of the internet, and malicious actors commonly exploit it to deploy and control their attack frameworks. The internet relies on this system to translate human-readable domain names into numeric Internet Protocol (IP) addresses, the unique identifiers that allow computers and devices to send and receive information across networks. However, DNS also opens the door for opportunistic cyberattackers to infiltrate networks and access sensitive information.
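
As a minimal illustration of this name-to-address translation, the short Python sketch below resolves a domain name to its IP addresses using only the standard library; the domain shown is a placeholder.

    import socket

    # Resolve a domain name to its IPv4 addresses (placeholder domain).
    domain = "example.com"
    _, _, addresses = socket.gethostbyname_ex(domain)
    print(f"{domain} resolves to: {addresses}")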

Here are five tips to help you uncover hidden cyberthreats and protect your enterprise with DNS analytics.

1. Think Like an Attacker to Defend Your Enterprise

To protect the key assets of your enterprise and allocate sufficient resources to defend them, you must understand why a threat actor would be interested in attacking your organization. Attacker motivations can vary depending on the industry and geography of your enterprise, but the typical drivers are political and ideological differences, fame and recognition, and the opportunity to make money.

When it comes to DNS, bad actors have a vast arsenal of weapons they can utilize. Some of the most common methods of attack to anticipate are distributed denial-of-service (DDoS) attacks, DNS data exfiltration, cache poisoning and fast fluxing. As enterprises increase their security spending, cyberattacks become more innovative and sophisticated, including novel ways to abuse the DNS protocol. Malware continues to be the preferred method of threat actors, and domain generation algorithms (DGAs) are still widely used, but even that method has evolved to avoid detection.
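
To illustrate why DGA-generated domains are so hard to block with static lists, here is a deliberately simple, illustrative domain generation routine; real DGAs vary widely, and the seed, label length and TLD below are arbitrary assumptions.

    import hashlib

    def generate_domains(seed: str, day: str, count: int = 5) -> list:
        """Deterministically derive pseudo-random domains from a seed and date.
        A toy illustration of the DGA concept, not any real malware family."""
        domains = []
        for i in range(count):
            digest = hashlib.sha256(f"{seed}-{day}-{i}".encode()).hexdigest()
            domains.append(digest[:12] + ".com")  # arbitrary label length and TLD
        return domains

    print(generate_domains("toy-seed", "2018-12-01"))

Because the malware and its operator can both compute the same list for any given day, defenders chasing individual domains are always a step behind, which is why behavioral detection matters.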

2. Make DNS Monitoring a Habit

Passive DNS data is valuable because it is rare for a new network connection to occur without an associated DNS lookup. Collected correctly, DNS data therefore reveals most of the network activity in your environment and can be mined for local security insights. And while it is not hard for an attacker to bypass DNS lookups entirely, connections made without them stand out as suspicious and are easy to detect.

3. Understand Communication and Traffic Patterns

Attackers leverage the DNS protocol in various ways — some of which are well ahead of our detection tools — but there are always anomalies we can observe in the DNS requests sent out by endpoints. DNS traffic patterns vary by enterprise, so understanding what the normal pattern for your organization looks like will enable you to spot anomalies easily.

A robust, secure system should be able to detect both DNS tunneling and DNS exfiltration, which is not as easy as it sounds because the two have different communication patterns. DNS tunneling traffic is reliable and frequent, the flow is bidirectional, and sessions are typically long. DNS exfiltration, on the other hand, is opportunistic and unexpected, and possibly unidirectional, since attackers are waiting for the right moment to sneak out valuable data.
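
As a rough illustration of these differences, the following Python sketch scores DNS query names by the length and character entropy of their leftmost label, two signals that often stand out in tunneling and exfiltration traffic; the thresholds and sample queries are assumptions and would need tuning against your own baseline.

    import math
    from collections import Counter

    def entropy(text: str) -> float:
        """Shannon entropy of a string, in bits per character."""
        counts = Counter(text)
        total = len(text)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def looks_suspicious(query: str, max_label_len: int = 30, max_entropy: float = 4.0) -> bool:
        """Flag queries whose leftmost label is unusually long or high-entropy (illustrative thresholds)."""
        label = query.split(".")[0]
        return len(label) > max_label_len or entropy(label) > max_entropy

    queries = ["www.example.com", "aGVsbG8td29ybGQtZXhmaWwtY2h1bmstMDAx.badexample.net"]
    for q in queries:
        print(q, "->", "suspicious" if looks_suspicious(q) else "ok")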

4. Get the Right Tools in Place

When analyzing which tools are the best to protect your organization against attacks leveraging DNS, consider what assets you want to protect and the outcomes you would like your analysts to achieve. There are many tools that can be pieced together to create a solution depending on your goals, such as firewalls, traffic analyzers and intrusion detection systems (IDSs).

To enhance the day-to-day activities of your security operations center (SOC) and enable your team to conduct comprehensive analysis of domain activity and assign appropriate risk ratings, your SOC analysts should take advantage of threat intelligence feeds. These feeds empower analysts to understand the tactics, techniques and procedures (TTPs) of attackers and provide them with a list of malicious domains to block or alert on in their security systems. When this information is correlated with internal enterprise information through a security information and event management (SIEM) platform, analysts have full visibility to detect or anticipate ongoing attacks.
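
As a simplified sketch of that correlation step, the Python snippet below checks observed DNS queries against a hypothetical feed of known-bad domains; in practice the feed and the query log would come from your threat intelligence provider and SIEM rather than hard-coded lists.

    # Hypothetical inputs: a threat-intel feed of malicious domains and observed DNS queries.
    malicious_domains = {"bad-domain.example", "phish-login.example"}  # would come from a feed

    observed_queries = [
        ("10.0.0.12", "www.example.com"),
        ("10.0.0.45", "phish-login.example"),
    ]

    for source_ip, domain in observed_queries:
        if domain in malicious_domains or any(domain.endswith("." + bad) for bad in malicious_domains):
            print(f"ALERT: {source_ip} queried known-malicious domain {domain}")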

5. Be Proactive and Go Threat Hunting

Technology is a very useful tool that allows us to automate processes and alerts us of suspicious activity within our networks — but it is not perfect. Threat hunting can complement and strengthen your defense strategy by proactively searching for indicators of compromise (IoC) that traditional detection tools might miss. To succeed at threat hunting, you must define a baseline within your environment and then define the anomalies that you are going to look for.

A standard method for threat hunting is searching for unusual and unknown DNS requests, which can catch intruders that have already infiltrated your system as well as would-be intruders. Some indicators of abnormal DNS requests include the number of NXDOMAIN records received by an endpoint, the number of queries an endpoint sends out and new query patterns. If you identify a potential threat, an incident response (IR) team can help resolve and remediate the situation by analyzing the data.
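
The sketch below illustrates the first of these indicators, counting NXDOMAIN responses per endpoint from a simplified DNS log; the log structure and threshold are illustrative assumptions.

    from collections import Counter

    # Simplified DNS response log: (client_ip, queried_domain, response_code)
    dns_log = [
        ("10.0.0.12", "www.example.com", "NOERROR"),
        ("10.0.0.99", "qx1a.example.net", "NXDOMAIN"),
        ("10.0.0.99", "qx2b.example.net", "NXDOMAIN"),
        ("10.0.0.99", "qx3c.example.net", "NXDOMAIN"),
    ]

    NXDOMAIN_THRESHOLD = 3  # illustrative; tune against your own baseline

    nxdomain_counts = Counter(ip for ip, _, rcode in dns_log if rcode == "NXDOMAIN")
    for ip, count in nxdomain_counts.items():
        if count >= NXDOMAIN_THRESHOLD:
            print(f"Hunt lead: {ip} generated {count} NXDOMAIN responses")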

Learn More

Every organization is unique, but by understanding the basics of DNS analytics, the common methods of attack and the tools available to security teams, you will be better prepared to protect your enterprise from hidden cyberthreats.

We invite you to attend a live webinar at 11 a.m. ET on Dec. 11 (and available on-demand thereafter) to learn even more about DNS threat hunting.

Register for the webinar

The post 5 Tips for Uncovering Hidden Cyberthreats with DNS Analytics appeared first on Security Intelligence.

OceanLotus Watering Hole Campaign Compromises 21 High-Profile Southeast Asian Websites

A watering hole campaign that has been active in Southeast Asia since September has compromised at least 21 websites, including those of government agencies and major media outlets.

Researchers attributed the attack to a group of cybercriminals known as OceanLotus, which has been targeting foreign governments for approximately six years. Users who visited the compromised websites were redirected to a page controlled by the attackers. While those in charge of the domains have since been informed about the watering hole attack, some of the sites continue to serve the injected malicious scripts.

A Wider Watering Hole Than Usual

The traditional watering hole campaign strategy has focused on luring specific individuals by compromising URLs they’re known to use regularly, but the latest OceanLotus attack includes sites such as a popular Vietnamese newspaper, suggesting that a large number of people could be affected.

Over the course of a multiphase attack, OceanLotus plants malicious JavaScript code on a compromised site to create a connection with a victim’s system, then delivers additional scripts that can stage a payload. While the full extent of the watering hole campaign isn’t clear, researchers speculated that the compromised websites could be used to conduct phishing schemes and steal confidential data.

Like many other cybercriminal organizations, OceanLotus is focused on improving the sophistication of its attacks. The researchers noted, for example, that the group used an RSA 1024-bit public key to prevent the decryption of information sent from its server and client devices. OceanLotus also purchased dozens of domains and servers, which it used to run the first and second stages of the attacks and make the URLs look legitimate.

How to Strike a Better Threat Management Balance

Compared with more obvious tactics, such as phishing emails with malicious links or ransomware attachments, a watering hole campaign can easily fly under the radar of organizations that haven’t experienced a website compromise before. For that reason, many companies affected by the likes of OceanLotus find themselves responding reactively rather than proactively addressing the associated risks ahead of time.

IBM experts suggest adopting a threat management framework that begins with generating insights about potential attacks, implementing safeguards necessary to prevent them, monitoring continuously to detect anomalies and responding as necessary.

Source: WeLiveSecurity

The post OceanLotus Watering Hole Campaign Compromises 21 High-Profile Southeast Asian Websites appeared first on Security Intelligence.

Why You Should Act Now to Prevent Peer-to-Peer Payments Network Fraud

Consumers flock to opportunities for instant gratification. They want their coffee orders ready when they arrive, their purchases delivered today, their movies to play the instant the mood strikes. So, naturally, they bring these same expectations to day-to-day financial transactions, such as sending money to friends on their smartphones — even at the cost of security, and at the risk of peer-to-peer (P2P) network fraud.

That demand has driven rapid global adoption of P2P payments. Almost 60 percent of U.S. consumers use P2P platforms, according to Mercator Advisory Group. In the U.S., payment volume through the third quarter of 2018 for market leaders Zelle and Venmo already exceeded last year’s totals, according to American Banker.

Unfortunately, P2P payment network fraud is proliferating right along with it, according to USA Today. But with a holistic, layered prevention and detection program, financial institutions can capture P2P payment market share while protecting themselves and their customers.

The Rapid Growth of Adoption Is Driving Fraud

Experts expect P2P payments to continue their frenetic growth, with even more providers likely to emerge. Unfortunately, as with any new payment vehicle, fraudsters quickly poured their efforts into finding and exploiting holes in P2P network defenses.

Many cybercriminals have succeeded because of the nature of P2P transactions. Since they are in near-real time, the opportunity to safeguard and verify the legitimacy of all parties to the transaction using legacy banking tools is severely limited. Banks vary widely in their P2P network fraud protection, with some moving ahead with no protection at all, according to the New York Times. As a result, some financial institutions and customers are seeing big losses; one bank experienced a 90 percent fraud rate on Zelle transactions.

What Is P2P Network Fraud?

P2P payment fraud involves multiple victims and variations that financial institutions should understand when building protections against attacks. Consumers are often tripped up in P2P payments when they send funds to the wrong phone number or email address, or to someone who doesn’t hold up his or her end of a deal. But one of the biggest sources of fraud is account takeover.

An account takeover is initiated when a victim with an account at Bank A has his or her personal or account credentials stolen through a previous data theft or phishing attack. The fraudster verifies that there is money in the account, then sends funds via P2P payment to a co-fraudster at Bank B, who withdraws the cash. The accomplice at Bank B might be part of the fraud ring, or perhaps he or she has been promised a share of the proceeds. The fraudster may use a dormant account or set up a new account in the name of another identity theft victim to receive the funds.

Both Bank A and Bank B have some culpability for the loss. Right now in the U.S., the Electronic Fund Transfer Act (EFTA) requires Bank A to make the victim whole by restoring the funds. However, it’s likely that regulators will soon hold Bank B accountable for preventing this type of activity as well, so financial institutions should learn how to detect account takeover sooner rather than later.

How to Detect Account Takeover

The challenge for financial institutions is to keep P2P payments appealing and easy for the customer while ensuring that both the customer and the bank are protected from fraud. Global P2P payment momentum will only grow, so to participate, financial institutions will need a holistic, multilayered security approach to detect and prevent fraudulent transactions.

A key piece involves detecting questionable behaviors on the part of any party to the transaction — for example:

  • A dormant account suddenly moving cash in and out;
  • An unusually high dollar amount sent to a new recipient; or
  • A contact center update to personal data, quickly followed by a new device accessing the account and then a P2P payment to a new payee.

Detecting those behaviors requires active monitoring via a digital fraud detection tool that spots mobile and online activity outside the norm for a user, such as a new device, location, transaction size or login pattern.

These tools work by tapping both internal and external data, such as the customer’s cell carrier — how long has the victim had this device and mobile number? Email is another resource — is this email address suddenly sending or receiving a lot of P2P payments? Examining transaction patterns, such as a low dollar amount followed by a high dollar transaction, adds to the picture.
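
A minimal sketch of how such signals might be combined into a risk score follows; the field names, weights and threshold are illustrative assumptions, not a production fraud model.

    def score_p2p_payment(txn: dict) -> int:
        """Toy risk score for a P2P payment based on the signals discussed above."""
        score = 0
        if txn.get("account_dormant_days", 0) > 180:                      # dormant account suddenly active
            score += 2
        if txn.get("new_recipient") and txn.get("amount", 0) > 1000:      # large amount to a new payee
            score += 2
        if txn.get("new_device") and txn.get("recent_contact_center_update"):  # profile change + new device
            score += 3
        return score

    payment = {"amount": 2500, "new_recipient": True, "new_device": True,
               "recent_contact_center_update": True, "account_dormant_days": 200}
    print("risk score:", score_p2p_payment(payment))  # payments above a chosen threshold go to review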

Balancing Speed and Protection

A holistic view is key; any one action might be normal, but as part of a series of activities can reveal a suspicious pattern. A well-designed fraud detection engine profiles the behavior of any entity and delivers best-fit analytics to quickly screen for suspicious patterns — all while enabling legitimate transactions to flow rapidly and smoothly.

Advanced capabilities such as artificial intelligence (AI) and machine learning mean these solutions learn as they go, taking into account digital, transactional and other data to discover new patterns and apply that learning to future transactions. This complements the fraud trend sharing that financial institutions must undertake to advance security across the industry.

Putting this holistic, multilayered detection and prevention program in place is critical as P2P payments move beyond friends and family into consumers’ transactions with businesses, such as landscapers and veterinarians. It starts with a well-rounded risk evaluation, ensuring layered controls all the way from customer login through to transaction fulfillment.

When financial institutions fill in those gaps with a multilayered solution, they enable P2P payments to flow in a way that balances risk mitigation with a fast, easy experience for customers — an invaluable arrangement for all parties involved.

The post Why You Should Act Now to Prevent Peer-to-Peer Payments Network Fraud appeared first on Security Intelligence.

How to Stay One Step Ahead of Phishing Websites — Literally

Phishing scams are more advanced and widespread than ever, and threat actors are becoming increasingly sophisticated in their ability to craft malicious websites that look legitimate to unsuspecting users — including your employees, who have the kind of restricted access to enterprise data that cybercriminals covet most.

Traditionally, organizations have taken the blacklist approach: A security service provider rates domains based on reputation and a browser extension uses this data to block phishing websites. However, this approach can quickly devolve into a cat-and-mouse game, since the blacklist would only include domains that are actively hosting malicious content. In other words, this method isn’t effective until after victims are already infected.

Spot Phishing Scams Before They Cast Their Bait

IBM Research in Tokyo and IBM X-Force developed a more advanced approach to protecting users from malicious domains called ahead-of-threat detection. This unique method enables security teams to detect potentially malicious domains and actors before the actual threat becomes visible.

To demonstrate how to stay one step ahead of phishing websites with ahead-of-threat detection, let’s take a phishing site as an example. Figure 1 shows the typical life cycle of a malicious domain. In many cases, phishing domains are generated by replacing a single character in a popular domain with another similar-looking or easy-to-mistype character. Attackers also use domain generation techniques that produce homograph, combosquatting and typosquatting domains.
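
To make the single-character substitution technique concrete, here is a small Python sketch that enumerates lookalike variants of a brand domain; the substitution map is partial and purely illustrative.

    # Partial map of visually similar characters (illustrative, not exhaustive).
    LOOKALIKES = {"o": ["0"], "l": ["1", "i"], "i": ["1", "l"], "e": ["3"], "m": ["rn"]}

    def lookalike_variants(domain: str) -> set:
        """Generate candidate typosquatting domains by swapping one character at a time."""
        name, _, tld = domain.partition(".")
        variants = set()
        for idx, ch in enumerate(name):
            for sub in LOOKALIKES.get(ch, []):
                variants.add(name[:idx] + sub + name[idx + 1:] + "." + tld)
        return variants

    print(sorted(lookalike_variants("example.com")))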

Figure 1: Malicious domain life cycle

However they’re generated, bad actors register these domains to host phishing websites. Ahead-of-threat detection can identify potentially malicious domains at the time of registration or at the very early stages of domain activation.

How Does Ahead-of-Threat Detection Identify Phishing Websites?

Ahead-of-threat detection pulls in numerous pools of data. One of the most important is the ICANN WHOIS database, which includes information about domain registrars, such as names, phone numbers and addresses. Although WHOIS is anonymized by a guard service that protects registrants’ more detailed personal information, it is one of the primary sources of data by which to evaluate a domain’s maliciousness via ahead-of-threat detection.

Another key metric is image comparison. Phishing sites nowadays tend to look nearly identical to legitimate websites. Therefore, if a domain hosts a site that looks similar to that of a bank or other service, ahead-of-threat detection will raise a red flag.
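
A rough sketch of both checks follows, assuming the third-party python-whois, ImageHash and Pillow packages; the domain, screenshot file names and distance threshold are placeholders, and a production system would combine many more signals.

    import whois                      # pip install python-whois (third-party; assumed)
    import imagehash                  # pip install ImageHash (third-party; assumed)
    from PIL import Image             # pip install Pillow

    # Registration-time signal: very recently created domains deserve extra scrutiny.
    record = whois.whois("suspicious-domain.example")   # placeholder domain
    print("registrar:", record.registrar)
    print("created:", record.creation_date)

    # Visual-similarity signal: compare a screenshot of the candidate site to a known brand page.
    legit_hash = imagehash.phash(Image.open("legit_login_page.png"))          # placeholder files
    candidate_hash = imagehash.phash(Image.open("candidate_login_page.png"))
    if legit_hash - candidate_hash < 10:    # illustrative threshold; smaller means more similar
        print("Candidate page looks visually similar to the legitimate site")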

What Does Ahead-of-Threat Detection Look Like in Practice?

We deployed our architecture in the real world in the form of Quad9, a free service IBM launched in collaboration with Packet Clearing House (PCH) and the Global Cyber Alliance (GCA) to deliver greater online privacy and security protection to consumers and businesses. Quad9 provides a stream of newly observed response tuples, or Unique DNS Records (UDRs), containing only the response domain, query type and response record. This stream of around 1 million UDR records per day is condensed from the Quad9 systems operating in 76 countries and 128 locations.

Combining multiple results from these analytics can filter a limited number of potentially malicious domains out of a massive volume of incoming domains. In our experiment, we filtered more than 17 million domains to identify only about 10 malicious ones, including phishing sites masquerading as cryptocurrency exchanges, insurance sites, online banking portals and more. Most of these sites eluded detection by traditional browser protection; in other words, they were detected ahead of the threat! It wasn’t until weeks later that many of these sites landed on traditional domain blacklists.

Ahead-of-threat detection can help security teams protect their users — and, by proxy, their enterprise data — proactively. It also has the potential to change the game for consumers, who traditionally use tools capable of identifying malicious sites only after they are accessed.

As the risk of phishing escalates and threat actors grow more and more cunning, ahead-of-threat protection can help security professionals discover how to stay one step ahead of phishing websites — literally.

The post How to Stay One Step Ahead of Phishing Websites — Literally appeared first on Security Intelligence.

How to Choose the Right Artificial Intelligence Solution for Your Security Problems

Artificial intelligence (AI) brings a powerful new set of tools to the fight against threat actors, but choosing the right combination of libraries, test suites and training models when building AI security systems is highly dependent on the situation. If you’re thinking about adopting AI in your security operations center (SOC), the following questions and considerations can help guide your decision-making.

What Problem Are You Trying to Solve?

Spam detection, intrusion detection, malware detection and natural language-based threat hunting are all very different problem sets that require different AI tools. Begin by considering what kind of AI security systems you need.

Understanding the desired outputs helps you choose and test your data. Ask yourself whether you’re solving a classification or regression problem, building a recommendation engine or detecting anomalies. Depending on the answers to those questions, you can apply one of four basic types of machine learning (a minimal example of the first type is sketched after the list):

  1. Supervised learning trains an algorithm based on example sets of input/output pairs. The goal is to develop new inferences based on patterns inferred from the sample results. Sample data must be available and labeled. For example, designing a spam detection model by learning from samples labeled spam/nonspam is a good application of supervised learning.
  2. Unsupervised learning uses data that has not been labeled, classified or categorized. The machine is challenged to identify patterns through processes such as cluster analysis, and the outcome is usually unknown. Unsupervised machine learning is good at discovering underlying patterns and data, but is a poor choice for a regression or classification problem. Network anomaly detection is a security problem that fits well in this category.
  3. Semisupervised learning uses a combination of labeled and unlabeled data, typically with the majority being unlabeled. It is primarily used to improve the quality of training sets. For exploit kit identification problems, we can find some known exploit kits to train our model, but there are many variants and unknown kits that can’t be labeled. We can use semisupervised learning to address the problem.
  4. Reinforcement learning seeks the optimal path to a desired result by continually rewarding improvement. The problem set is generally small, and the training data well-understood. An example of reinforcement learning is a generative adversarial network (GAN), such as this experiment from Cornell University in which distance, measured in the form of correct and incorrect bits, is used as a loss function to encrypt messages between two neural networks and avoid eavesdropping by an unauthorized third neural network.
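
Returning to the supervised spam-detection example from item 1, here is a minimal sketch assuming the third-party scikit-learn package; the tiny labeled sample is purely illustrative.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Tiny, purely illustrative labeled sample (1 = spam, 0 = not spam).
    messages = ["win a free prize now", "claim your free reward",
                "meeting moved to 3pm", "please review the attached report"]
    labels = [1, 1, 0, 0]

    # Learn word counts as features, then fit a naive Bayes classifier on the labeled pairs.
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(messages, labels)

    print(model.predict(["free prize waiting for you", "see you at the meeting"]))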

Artificial Intelligence Depends on Good Data

Machine learning is predicated on learning from data, so having the right quantity and quality is essential. Security leaders should ask the following questions about their data sources to optimize their machine learning deployments (a simple data-quality check is sketched after the list):

  • Is there enough data? You’ll need a sufficient amount to represent all possible scenarios that a system will encounter.
  • Does the data contain patterns that machine learning systems can learn from? Good data sets should have frequently recurring values, clear and obvious meanings, few out-of-range values and persistence, meaning that they change little over time.
  • Is the data sparse? Are certain expected values missing? This can create misleading results.
  • Is the data categorical or numeric in nature? This dictates the choice of the classifier we can use.
  • Are labels available?
  • Is the data current? This is particularly important in AI security systems because threats change so quickly. For example, a malware detection system that has been trained on old samples will have difficulty detecting new malware variations.
  • Is the source of the data trusted? You don’t want to train your model from publicly available data of origins you don’t trust. Data sample poisoning is just one attack vector through which machine learning-based security models are compromised.
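
Several of these checks can be automated. The pandas sketch below inspects sparsity, label availability and class balance for a hypothetical labeled data set; the column names and values are assumptions.

    import pandas as pd

    # Hypothetical labeled security data set with a "label" column.
    df = pd.DataFrame({
        "bytes_out": [1200, 8_000_000, None, 3400],
        "dest_port": [443, 53, 53, None],
        "label":     ["benign", "malicious", "benign", None],
    })

    print("Missing values per column (sparsity):")
    print(df.isna().mean())

    print("\nRows with a usable label:", df["label"].notna().sum(), "of", len(df))

    print("\nClass balance:")
    print(df["label"].value_counts(normalize=True, dropna=True))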

Choosing the Right Platforms and Tools

There is a wide variety of platforms and tools available on the market, but how do you know which is the right one for you? Ask the following questions to help inform your choice:

  • How comfortable are you in a given language?
  • Does the tool integrate well with your existing environment?
  • Is the tool well-suited for big data analytics?
  • Does it provide built-in data parsing capabilities that enable the model to understand the structure of data?
  • Does it use a graphical or command-line interface?
  • Is it a complete machine learning platform or just a set of libraries that you can use to build models? The latter provides more flexibility, but also has a steeper learning curve.

What About the Algorithm?

You’ll also need to select an algorithm to employ. Try a few different algorithms and compare to determine which delivers the most accurate results. Here are some factors that can help you decide which algorithm to start with:

  • How much data do you have, and is it of good quality? Data with many missing values will deliver lower-quality results.
  • Is the learning problem supervised, unsupervised or reinforcement learning? You’ll want to match the data set to the use case as described above.
  • Determine the type of problem being solved, such as classification, regression, anomaly detection or dimensionality reduction. There are different AI algorithms that work best for each type of problem.
  • How important is accuracy versus speed? If approximations are acceptable, you can get by with smaller data sets and lower-quality data. If accuracy is paramount, you’ll need higher quality data and more time to run the machine learning algorithms.
  • How much visibility do you need into the process? Algorithms that provide decision trees show you clearly how the model reached a decision, while neural networks are a bit of a black box.

How to Train, Test and Evaluate AI Security Systems

Training samples should be constantly updated as new exploits are discovered, so it’s often necessary to perform training on the fly. However, training in real time opens up the risk of adversarial machine learning attacks in which bad actors attempt to disrupt the results by introducing misleading input data.

While it is often impossible to perform training offline, it is desirable to do so when possible so the quality of the data can be regulated. Once the training process is complete, the model can be deployed into production.

One common method of testing trained models is to split the data set and devote a portion of the data — say, 70 percent — to training and the rest to testing. If the model is robust, the output from both data sets should be similar.

A somewhat more refined approach called cross-validation divides the data set into groups of equal sizes and trains on all but one of the groups. For example, if the number of groups is “n,” then you would train on n-1 groups and test with the one set that is left out. This process is repeated many times, leaving out a different group for testing each time. Performance is measured by averaging results across all repetitions.
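
Both evaluation strategies take only a few lines with a library such as scikit-learn (assumed here); the synthetic data stands in for real security telemetry.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split, cross_val_score

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)  # synthetic stand-in

    # Simple 70/30 holdout split.
    X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.7, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("holdout accuracy:", model.score(X_test, y_test))

    # n-fold cross-validation: train on n-1 folds, test on the held-out fold, repeat and average.
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
    print("cross-validation accuracy:", scores.mean())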

Choice of evaluation metrics also depends on the type of problem you’re trying to solve. For example, a regression problem tries to find the range of error between the actual value and the predicted value, so the metrics you might use include mean absolute error, root mean absolute error, relative absolute error and relative squared error.

For a classification problem, the objective is to determine which categories new observations belong in — which requires a different set of quality metrics, such as accuracy, precision, recall, F1 score and area under the curve (AUC).
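
For the classification case, the metrics listed above can be computed directly from predictions, as in the sketch below; the labels, predictions and scores are made up for illustration.

    from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                                 f1_score, roc_auc_score)

    y_true   = [1, 0, 1, 1, 0, 0, 1, 0]                   # illustrative ground-truth labels
    y_pred   = [1, 0, 1, 0, 0, 1, 1, 0]                   # illustrative predicted labels
    y_scores = [0.9, 0.2, 0.8, 0.4, 0.1, 0.6, 0.7, 0.3]   # illustrative predicted probabilities

    print("accuracy: ", accuracy_score(y_true, y_pred))
    print("precision:", precision_score(y_true, y_pred))
    print("recall:   ", recall_score(y_true, y_pred))
    print("F1 score: ", f1_score(y_true, y_pred))
    print("AUC:      ", roc_auc_score(y_true, y_scores))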

Deployment on the Cloud or On-Premises?

Lastly, you’ll need to select a location for deployment. Cloud machine learning platforms certainly have advantages, such as speed of provisioning, choice of tools and the availability of third-party training data. However, you may not want to share data in the cloud for security and compliance reasons. Consider these factors before choosing whether to deploy on-premises or in a public cloud.

These are just a few of the many factors to consider when building security systems with artificial intelligence. Remember, the best solution for one organization or security problem is not necessarily the best solution for everyone or every situation.

The post How to Choose the Right Artificial Intelligence Solution for Your Security Problems appeared first on Security Intelligence.

Why User Behavior Analytics Is an Application, Not a Cybersecurity Platform

Last year, a cybersecurity manager at a bank near me brought in a user behavior analytics (UBA) solution based on a vendor’s pitch that UBA was the next generation of security analytics. The company had been using a security information and event management (SIEM) tool to monitor its systems and networks, but abandoned it in favor of UBA, which promised a simpler approach powered by artificial intelligence (AI).

One year later, that security manager was looking for a job. Sure, the UBA package did a good job of telling him what his users were doing on the network, but it didn’t do a very good job of telling him about threats that didn’t involve abnormal behavior. I can only speculate about what triggered his departure, but my guess is it wasn’t pretty.

UBA hit the peak of the Gartner hype cycle last year around the same time as AI. The timing isn’t surprising given that many UBA vendors tout their use of machine learning to detect anomalies in log data. UBA is a good application of SIEM, but it isn’t a replacement for it. In fact, UBA is more accurately described as a cybersecurity application that rides on top of SIEM — but you wouldn’t know that the way it’s sometimes marketed.

User Behavior Analytics Versus Security Information and Event Management

While SIEM and UBA do have some similar features, they perform very different functions. Most SIEM offerings are essentially log management tools that help security operators make sense of a deluge of information. They are a necessary foundation for targeted analysis.

UBA is a set of algorithms that analyze log activity to spot abnormal behavior, such as repeated login attempts from a single IP address or large file downloads. Buried in gigabytes of data, these patterns are easy for humans to miss. UBA can help security teams combat insider threats, brute-force attacks, account takeovers and data loss.
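
As a simplified illustration of the kind of pattern such algorithms surface, the sketch below counts failed logins per source IP within a short time window; the log format, window and threshold are assumptions.

    from collections import defaultdict
    from datetime import datetime, timedelta

    # Simplified authentication log: (timestamp, source_ip, outcome)
    auth_log = [
        (datetime(2018, 12, 3, 9, 0, 5),  "203.0.113.7", "failure"),
        (datetime(2018, 12, 3, 9, 0, 9),  "203.0.113.7", "failure"),
        (datetime(2018, 12, 3, 9, 0, 14), "203.0.113.7", "failure"),
        (datetime(2018, 12, 3, 9, 2, 0),  "10.0.0.31",   "success"),
    ]

    WINDOW = timedelta(minutes=5)
    THRESHOLD = 3  # illustrative

    failures = defaultdict(list)
    for ts, ip, outcome in auth_log:
        if outcome == "failure":
            failures[ip].append(ts)

    for ip, times in failures.items():
        times.sort()
        if len(times) >= THRESHOLD and times[-1] - times[0] <= WINDOW:
            print(f"Possible brute-force activity from {ip}: {len(times)} failures in {times[-1] - times[0]}")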

UBA applications require data from an SIEM tool and may include basic log management features, but they aren’t a replacement for a general-purpose SIEM solution. In fact, if your SIEM system has anomaly detection capabilities or can identify whether user access activity matches typical behavior based on the user’s role, you may already have UBA.

Part of the confusion comes from the fact that, although SIEM has been around for a long time, there is no one set of standard features. Many systems are only capable of rule-based alerting or limited to canned rules. If you don’t have a rule for a new threat, you won’t be alerted to it.

Analytical applications such as UBA are intended to address certain types of cybersecurity threat detection and remediation. Choosing point applications without a unified log manager creates silos of data and taxes your security operations center (SOC), which is probably short-staffed to begin with. Many UBA solutions also require the use of software agents, which is something every IT organization would like to avoid.

Start With a Well-Rounded SIEM Solution

A robust, well-rounded SIEM solution should cross-correlate log data, threat intelligence feeds, geolocation coordinates, vulnerability scan data, and both internal and external user activity. When combined with rule-based alerts, an SIEM tool alone is sufficient for many organizations. Applications such as UBA can be added on top for more robust reporting.

Gartner’s latest “Market Guide for User and Entity Behavior Analytics” forecast significant disruption in the market. Noting that the technology is headed downward into Gartner’s “Trough of Disillusionment,” researchers explained that some pure-play UBA vendors “are now focusing their route to market strategy on embedding their core technology in other vendors’ more traditional security solutions.”

In my view, that’s where it belongs. User behavior analytics is a great technology for identifying insider threats, but that’s a use case, not a security platform. A robust SIEM tool gives you a great foundation for protection and options to grow as your needs demand.

The post Why User Behavior Analytics Is an Application, Not a Cybersecurity Platform appeared first on Security Intelligence.

New Cobalt Gang PDF Attack Avoids Traditional Static Analysis Tools

An attack campaign conducted by the Cobalt Gang used a specially crafted PDF document to evade detection by static analysis tools.

Palo Alto Networks’ Unit 42 threat intelligence team observed the operation near the end of October 2018. The analyzed example used an email containing the subject line “Confirmations on October 16, 2018” to target employees at several banking organizations.

Attached to the email was a PDF document that didn’t come with an exploit or malicious code. Instead, an embedded link within the PDF document redirected recipients to a legitimate Google location which, in turn, redirected the browser to a Microsoft Word document containing malicious macros.

How Does the Cobalt Gang Evade Detection?

At the time of discovery, the PDF attack bypassed nearly all traditional antivirus software. It was able to do so because the Cobalt Gang added some empty pages and pages with text to make the document look more authentic. These characteristics prevented the PDF from raising red flags with most static analysis tools.

Using specially crafted PDF documents isn’t the only way that digital attackers can fly under the radar. For instance, plenty don’t even use exploits and instead turn to spear phishing emails that leverage social engineering techniques.

Those that do use exploits can conduct their attacks with the help of tools like ThreadKit, a document exploit builder kit. These utilities enable individuals with low levels of technical expertise to get into the world of digital crime without forcing threat actors to come up with potentially attributable custom build processes for their attack documents.

How to Protect Against This PDF Attack

Security professionals can defend against this latest attack campaign from the Cobalt Gang by analyzing flagged PDF documents for base64-encoded strings, JavaScript keywords and other content that might be indicative of malspam. They should also use a ranking formula to prioritize vulnerabilities by risk so that they can close security weaknesses before exploit documents have a chance to abuse them.

Source: Palo Alto Networks

The post New Cobalt Gang PDF Attack Avoids Traditional Static Analysis Tools appeared first on Security Intelligence.

Why You Should Start Leveraging Network Flow Data Before the Next Big Breach

Organizations tend to end up in cybersecurity news because they failed to detect and/or contain a breach. Breaches are inevitable, but whether or not an organization ends up in the news depends on how quickly and effectively it can detect and respond to a cyber incident.

Beyond the fines, penalties and reputational damage associated with a breach, organizations should keep in mind that today’s adversaries represent a real, advanced and persistent threat. Once threat actors gain a foothold in your infrastructure or network, they will almost certainly try to maintain it.

To successfully protect their organizations, security teams need the full context of what is happening on their network. This means data from certain types of sources should be centrally collected and analyzed, with the goal of being able to extract and deliver actionable information.

What Is Network Flow Data?

One of the most crucial types of information to analyze is network flow data, which has unique properties that provide a solid foundation on which a security framework should be built. Network flow data is extracted — by a network device such as a router — from the sequence of packets observed within an interval between two internet protocol (IP) hosts. The data is then forwarded to a flow collector for analysis.

A unique flow is defined by the combination of the following seven key fields:

  1. Source IP address
  2. Destination IP address
  3. Source port number
  4. Destination port number
  5. Layer 3 protocol type
  6. Type of service (ToS)
  7. Input logical interface (router or switch interface)

If any one of the packet values for these fields is found to be unique, a new flow record is created. The depth of the extracted information depends on both the device that generates the flow records and the protocol used to export the information, such as NetFlow or IP Flow Information Export (IPFIX). Inspection of the traffic can be performed at different layers of the Open Systems Interconnection (OSI) model — from layer 2 (the data link layer) to layer 7 (the application layer). Each layer that is inspected adds more meaningful and actionable information for a security analyst.
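
To make the seven-field flow key concrete, here is a sketch that groups simplified packet records into flow records keyed on those fields; the packet structure and values are assumptions for illustration.

    from collections import namedtuple, defaultdict

    # The seven key fields that define a unique flow.
    FlowKey = namedtuple("FlowKey", ["src_ip", "dst_ip", "src_port", "dst_port",
                                     "protocol", "tos", "input_interface"])

    # Simplified packet records (only the key fields plus a byte count).
    packets = [
        {"src_ip": "10.0.0.5", "dst_ip": "93.184.216.34", "src_port": 51512, "dst_port": 443,
         "protocol": "TCP", "tos": 0, "input_interface": "eth0", "bytes": 1500},
        {"src_ip": "10.0.0.5", "dst_ip": "93.184.216.34", "src_port": 51512, "dst_port": 443,
         "protocol": "TCP", "tos": 0, "input_interface": "eth0", "bytes": 900},
    ]

    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for pkt in packets:
        key = FlowKey(**{field: pkt[field] for field in FlowKey._fields})
        flows[key]["packets"] += 1
        flows[key]["bytes"] += pkt["bytes"]

    for key, stats in flows.items():
        print(key, stats)  # both packets share one key, so they fall into a single flow record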

One major difference between log event data and network flow data is that an event, which typically is a log entry, happens at a single point in time and can be altered. A network flow record, in contrast, describes a condition that has a life span, which can last minutes, hours or days, depending on the activities observed within a session, and cannot be altered. For example, a web GET request may pull down multiple files and images in less than a minute, but a user watching a movie on Netflix would have a session that lasts over an hour.

What Makes Network Flow Data So Valuable?

Let’s examine some of the aforementioned properties in greater detail.

Low Deployment Effort

Network flow data requires the least deployment effort because networks aggregate most of their traffic at a few transit points, such as the internet boundary, and enabling flow export at those transit points involves changes that are rarely prone to configuration mistakes.

Everything Is Connected

From a security perspective, we can assume that most of the devices used by organizations, if not all of them, operate on and interact with a network. Those devices can either be actively controlled by individuals — workstations, mobile devices, etc. — or operated autonomously — servers, security endpoints, etc.

Furthermore, threat actors typically try to remove traces of their attacks by manipulating security and access logs, but they cannot tamper with network flow data.

Reliable Visibility

The data relevant to security investigations is typically collected from two types of sources:

  • Logs from endpoints, servers and network devices, using either an agent or remote logging; or
  • Network flow data from the network infrastructure.

The issue with logs is that there will always be connected devices from which an organization cannot collect data. Even if security policies mandate that only approved devices may be connected to a network, being able to ensure that unmanaged devices or services have not been inserted into the network by a malicious user is crucial. Furthermore, history has shown that malicious users actively attempt to circumvent host agents and remote logging, making the log data from those hosts unreliable. The most direct source of information about unmanaged devices is the network.

Finally, network flow data is explicitly defined by the protocol, which changes very slowly. This is not the case with log data, where formats are very often poorly documented, tied to specific versions, not standardized and prone to more frequent changes.

Automatically Reduce False Positives

A firewall or access control list (ACL) permit notification does not mean that a successful communication actually took place. Network flow data, on the other hand, can confirm whether a communication actually succeeded. Suppressing an alert unless a successful communication took place can dramatically reduce false positives and, therefore, save precious security analyst time.
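
A minimal sketch of that confirmation step follows; the event and flow structures are simplified assumptions, and a real implementation would match on the full flow key and a time window.

    # Hypothetical firewall "permit" events and observed flow records (simplified tuples).
    permit_events = [
        ("10.0.0.8", "198.51.100.20", 445),
        ("10.0.0.9", "198.51.100.21", 443),
    ]
    observed_flows = {
        ("10.0.0.9", "198.51.100.21", 443),   # only this communication actually completed
    }

    for src, dst, port in permit_events:
        if (src, dst, port) in observed_flows:
            print(f"Alert: confirmed communication {src} -> {dst}:{port}")
        else:
            print(f"Suppressed: permit logged but no flow observed for {src} -> {dst}:{port}")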

Moving Beyond Traditional Network Data

Traditional network flow technology was originally designed to provide network administrators with the ability to monitor their networks and pinpoint congestion. More recently, security analysts discovered that network flow data was also useful for finding network intrusions. However, basic network flow data was never designed to detect the most sophisticated advanced persistent threats (APTs). It does not provide the necessary in-depth visibility, such as the hash of a file transferred over a network or the application detected (as opposed to just the port number), to name a few examples. Lacking this level of visibility, traditional network flow data greatly limits the ability to provide actionable information about a cyber incident.

Given the increasing level of sophistication of attacks, certain communications, such as inbound traffic from the internet, should be further scrutinized and inspected with a purpose-built solution. The solution must be able to perform detailed packet dissection and analysis — at line speed and in passive mode — and deliver extensive and enriched network flow data through a standard protocol such as IPFIX, which defines how to format and transfer IP flow data from an exporter to a collector.

The resulting enriched network flow data can be used to augment the prioritization of relevant alerts. Such data can also accelerate alert-handling research and resolution.

Why You Should Analyze Network Flow Data

Network flow data is a crystal ball into your environment because it delivers much-needed and immediate, in-depth visibility. It can also help security teams detect the most sophisticated attacks, which would otherwise be missed if investigation relied solely on log data. By reconciling network flow data with less-reliable log data, organizations can detect attacks more capably and conduct more thorough investigations. The bottom line is that network flow data can help organizations catch some of the most advanced attacks that exist, and it should not be ignored.

The post Why You Should Start Leveraging Network Flow Data Before the Next Big Breach appeared first on Security Intelligence.

Most attacks against energy and utilities occur in the enterprise IT network

The United States has not been hit by a paralyzing cyberattack on critical infrastructure like the one that sidelined Ukraine in 2015. That attack disabled Ukraine's power grid, leaving more than 700,000 people in the dark.

But the enterprise IT networks inside energy and utilities organizations have been infiltrated for years. Based on an analysis by the U.S. Department of Homeland Security (DHS) and FBI, these networks have been compromised since at least March 2016 by nation-state actors who perform reconnaissance activities, looking for industrial control system (ICS) designs and blueprints to steal.