Category Archives: Incident Response (IR)

3 Keys to Building a Scalable Incident Response Automation and Orchestration Plan

Incident response (IR) automation and orchestration is crucial to operationalizing cybersecurity, giving overburdened security professionals relief by streamlining processes, maximizing the efficiency of their resources and increasing their organization’s overall security posture. As the volume of security alerts skyrockets and the skills gap widens, security teams are rapidly implementing IR automation and orchestration technologies to keep up: Nearly 85 percent of businesses have adopted or are currently adopting these solutions, according to Enterprise Strategy Group.

Craft a Robust Incident Response Plan That Works for You

Despite this growth, successfully implementing automation and orchestration isn’t as simple as deploying technology. Security teams need to start with a robust IR plan; if you’re going to streamline processes, you first need to define what those processes are.

The playbook — the exact tasks and actions your organization will take in response to various incident types — is the heart of the IR plan. Whether your organization is building an IR program from scratch or implementing advanced orchestration tools, your documented IR processes are the foundation. And with a few key considerations, your team can build IR playbooks that continue to pay dividends long into the future.

Here are three keys to building a robust, consistent incident response plan:

1. Build Your Initial Playbook Around Manual Actions

A good incident response playbook should be functional regardless of the efficiency afforded by external technologies. Focus on capturing and documenting the full extent of tasks analysts may need to perform during the IR process, and plan for future orchestration and automation that will assist human analysts’ decisions and actions during an incident.

As you create these manual tasks, make them action-oriented and include a measurable purpose and outcome for each. Give the analyst the “why” whenever you can, and make the task instructions as descriptive and detailed as possible. Doing so allows for easy verification and validation and makes processes transferable across the team. You’ll also create training opportunities and enable smooth internal and external audits.
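The guidance above can be captured in a simple task record. The following sketch shows one way to document a task’s action, purpose, instructions and measurable outcome so it can be verified and audited; the field names are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass

# Illustrative playbook task record; field names are an assumption,
# not a standard schema.
@dataclass
class PlaybookTask:
    name: str              # action-oriented title
    purpose: str           # the "why" behind the task
    instructions: str      # detailed steps the analyst follows
    expected_outcome: str  # measurable result used for verification

    def is_auditable(self) -> bool:
        """A task is easy to verify and audit only if every field is filled in."""
        return all([self.name, self.purpose, self.instructions, self.expected_outcome])

task = PlaybookTask(
    name="Isolate the affected host",
    purpose="Prevent lateral movement while evidence is preserved",
    instructions="1. Record the host's IP and owner. 2. Move it to the quarantine VLAN.",
    expected_outcome="Host unreachable from the corporate network; ticket updated",
)
```

A record like this also makes it obvious when a task is missing its “why” or its measurable outcome, which is exactly what an auditor or a new analyst will look for.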

2. Enable Continual Process Assessment and Refinement

Incident response is a process of continual improvement, and IR playbooks should enable maintenance and growth — such as the replacement or removal of certain tasks based on learnings from simulations and real-world experience.

Consider how your playbooks are stored, referenced and maintained. No matter the format — paper, electronic, tribal knowledge — updating and disseminating IR playbooks can be challenging. A centralized and secured platform, such as an internal wiki or document share, can enable better collaborative management, whereas an IR platform enables seamless collaboration before, during and after an incident.

A feedback loop, also known as a post-incident analysis process or an after-action review (AAR), is critical to the success and continual improvement of the organization’s response time and operational effectiveness. Additionally, to orchestrate and automate certain user tasks and actions to streamline response, you’ll need tried-and-true metrics to understand which of those processes should be automated and the ability to measure the impact and return on investment (ROI) of that automation. We’ll outline examples of these metrics in a future blog post.

3. Design Your Playbooks to Be Iterative and Scalable

As your incident response program grows, you’ll want the ability to quickly develop new playbooks for additional incident types or scenarios, both to account for changes in the threat landscape and to adjust the scope of existing playbooks.

Try to identify common processes and tasks that can be grouped into modules and shared across your playbooks, allowing for greater flexibility in how they are applied and maintained. Where applicable, of course, still create and maintain the very specific, detailed work instructions for each discrete process. As technologies, skills, requirements and resources change, you can quickly adapt your now-modular processes to account for them without making piecemeal edits to multiple unrelated and potentially duplicated tasks.

Reuse these common tasks and modular processes to avoid the cumbersome and inefficient effort of developing new playbooks from scratch.
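As a rough illustration of this modular approach, shared task modules can be composed into incident-specific playbooks; the module contents and playbook names below are hypothetical.

```python
# Hypothetical shared task modules reused across incident-type playbooks.
TRIAGE = ["Record alert details", "Assign severity", "Notify the on-call lead"]
EVIDENCE = ["Capture volatile memory", "Preserve relevant logs"]

def build_playbook(*modules, specific_tasks=()):
    """Compose a playbook from shared modules plus incident-specific tasks."""
    tasks = []
    for module in modules:
        tasks.extend(module)  # shared steps come from the common modules
    tasks.extend(specific_tasks)  # then the incident-specific work
    return tasks

phishing = build_playbook(TRIAGE, EVIDENCE,
                          specific_tasks=["Pull the reported email's headers"])
malware = build_playbook(TRIAGE, EVIDENCE,
                         specific_tasks=["Submit the sample to a sandbox"])
```

Updating a shared module (for example, changing the triage steps) then propagates to every playbook built from it, instead of requiring edits to each playbook individually.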

Build Today for Future Success

A robust, documented incident response plan is the foundation of a successful automation and orchestration program. By focusing on the right details today and enabling agility and growth, your solid and scalable IR playbooks will deliver benefits for years.


The post 3 Keys to Building a Scalable Incident Response Automation and Orchestration Plan appeared first on Security Intelligence.

Why You Need a BGP Hijack Response Plan

The vast majority of computer security incidents involve some sort of phishing or malware. Typically, this is the type of incident that receives the most attention from organizations, and for which security controls are established. And rightfully so — malware that exploits a vulnerability or human error can cause significant damage to an organization.

However, attacks targeting an organization’s network or internet infrastructure components — such as Border Gateway Protocol (BGP) — have been generally overlooked, even as they gain traction. BGP hijack attacks are still far less common than distributed denial-of-service (DDoS) attacks, but several recent events have pushed this unusual method into the headlines.

What Is BGP?

Some consider BGP the glue that ties the internet together. Purists might argue that it is the Domain Name System (DNS) that plays this role, given that there can be glue records in a zone file. However, without BGP, your packets would not arrive at their intended destinations.

BGP is the routing protocol of the internet. It is used to determine the most efficient way to route data between independently operated networks, known as autonomous systems (AS). In technical terms, an AS is a collection of IP prefixes operated under a single routing policy and identified by an Autonomous System Number (ASN).

Put simply, BGP is the road map to the internet, whereas DNS is the phone book.

How BGP Routing Works

A BGP router uses a large table called the routing information base (RIB), which describes the networks it can reach and what the most efficient paths to these networks are. BGP peers are systems (or neighbors) from which the router receives information (networks or prefixes). These are configured manually.

Basically, BGP peers tell the router that it should process or include the information received from other manually entered peers. By combining the information coming from different peers, the router can then work out the most efficient path to a destination.

What Is BGP Hijacking?

In short, a BGP hijack is the configuration of an edge router to announce prefixes that have not been legitimately assigned to it. If the injected announcement is more specific than the legitimate one, routers will prefer it and traffic will be rerouted along the injected route. In this way, an attacker can broadcast false prefix announcements, polluting the routing tables of all connected peers.

Because of the propagation of routes through connected networks, if one peer includes the malicious information in its routing table, this information can be quickly propagated to other peers. Routing announcements are accepted almost without any validation, making a successful BGP hijack relatively easy.
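The “more specific announcement wins” behavior can be illustrated with a minimal longest-prefix-match sketch. The route data below is invented, and real BGP best-path selection weighs many more attributes, but the core preference for the longest matching prefix is what a hijacker exploits.

```python
import ipaddress

def best_route(dst, routes):
    """Return the (prefix, origin) route whose prefix contains dst most
    specifically; the longest prefix wins, as in basic BGP forwarding."""
    addr = ipaddress.ip_address(dst)
    matches = [r for r in routes if addr in ipaddress.ip_network(r[0])]
    return max(matches, key=lambda r: ipaddress.ip_network(r[0]).prefixlen, default=None)

# A legitimate /16 announcement vs. a hijacker's more specific /24.
routes = [
    ("10.10.0.0/16", "AS64500 (legitimate origin)"),
    ("10.10.1.0/24", "AS64666 (hijacker, more specific)"),
]
hijacked = best_route("10.10.1.55", routes)    # the /24 wins for this address
untouched = best_route("10.10.200.1", routes)  # only the /16 covers this one
```

Traffic for any address inside the hijacked /24 follows the attacker’s route, while the rest of the /16 still flows to the legitimate origin — which is why a hijack can go unnoticed by much of the network.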

There are two primary types of attacks: A complete hijack overtakes a specific IP prefix outright, whereas in a partial hijack, the attacker competes with the legitimate origin by announcing an equally specific route for the same prefix.

There are also unintentional cases. Human error can cause the same effect as a BGP hijack attack. This is often referred to as a route leak.

Recognize the Impact

The most obvious impact of BGP hijacking is that packets no longer take their optimal route, slowing down users’ connections to the network.

Far worse, attackers can black hole an entire network, including the organization’s services, thus resulting in an outage resembling a DDoS attack. Similarly, attackers can censor certain sources of information by black holing specific networks.

The rerouting makes the attacker a middleman in the network flow, meaning they can eavesdrop on certain parts of the communication or, in some cases, even alter the traffic. They can also redirect traffic from your customers or users to malicious sites pretending to be part of your network. This can result in the theft of information or credentials, or the delivery of malware that exploits weaknesses.

In addition, spammers can abuse the good reputation of your ASN to conduct spam runs. This can have a negative effect on your network if it gets blocked by spam filters.

Watch for a Secondary Attack

In some cases, the BGP hijack might not be the attacker’s final objective. The goal might be to steal credentials or divert your users to sources that could potentially exploit their systems.

During the incident response phase, it’s important to be aware of this possibility and try to gather as much material as possible that could help you analyze these attacks. Valuable data sources include passive DNS, Secure Sockets Layer (SSL) certificate history and full packet captures.

How to Detect a BGP Hijack

One of the problems with BGP attacks is that they do not always last very long, so by the time you know an attack is taking place, the situation can already be restored to normal. This stresses the importance of implementing monitoring tools and establishing an efficient alerting workflow.

Start by monitoring the BGP routes that relate to your AS. You can set up your own monitoring solutions, but you can just as well rely on publicly available services, such as BGPMon and Oracle Dyn, to do the heavy lifting for you.
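As a simplified illustration of such monitoring, the sketch below compares observed prefix origins against your expected origin ASN. In practice the observations would come from a monitoring feed (such as BGPMon or RIPE RIS); here they are hard-coded, and the prefix and ASNs are invented.

```python
# Map each of your prefixes to its expected origin ASN (invented values).
EXPECTED_ORIGINS = {"192.0.2.0/24": 64500}

def check_announcements(observed):
    """Return alerts for monitored prefixes announced by an unexpected origin."""
    alerts = []
    for prefix, origin_asn in observed:
        expected = EXPECTED_ORIGINS.get(prefix)
        if expected is not None and origin_asn != expected:
            alerts.append(f"Possible hijack: {prefix} announced by AS{origin_asn}, "
                          f"expected AS{expected}")
    return alerts

# One legitimate announcement and one from an unexpected origin.
alerts = check_announcements([("192.0.2.0/24", 64500), ("192.0.2.0/24", 64666)])
```

Since hijacks can be short-lived, alerts like these should feed an automated notification workflow rather than a periodic manual review.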

Build an Incident Response Plan

Proper reaction to a BGP hijack starts with an incident response plan. Unfortunately, this isn’t the type of incident for which you can set up a simple fallback solution or defensive security control. Nor is it one that you can easily detect.

That’s because BGP attacks take place outside the organization’s network. A well-executed BGP hijack can intercept traffic without your users ever noticing something is wrong. You might be able to convince your ISP to remove the false route, or ask it to persuade its peers to drop the bogus announcements.

For BGP hijack attacks, the containment, eradication and recovery phases of an incident response plan tend to blur together. Because route announcements spread very quickly, containment can be a real challenge.

If you can’t free up the resources to develop a dedicated incident response plan, then you can reuse parts of your plan for combating DDoS attacks.

Be Prepared

Most organizations do not have their own ASN and must rely on the measures of their upstream internet service provider (ISP). But there are ways to prepare:

  • Understand which network providers your organization uses. Does it rely on one single network provider or multiple? An AS relation model can give you insight on this.
  • Once you have listed your network providers, reach out and ask them what precautions or response plans they have with regard to BGP security. You could start by asking for a high-level overview of the peering policy and what agreements toward protection they have in place.
  • Build good working communication channels with your network providers. Next to the normal abuse contact, these should also include escalation paths.
  • Establish out-of-band communication channels via another network provider. Use these channels to inform your customers in case of an attack. Possible options include social media or a communication page hosted at a cloud provider (taking the risk of phishing into account).

If you own an ASN, there are some additional measures to take:

  • Write down your peering policy and make sure everyone understands the BGP interconnection policy.
  • Implement the BGP-peering BCPs.
  • Review and implement the best practices from Mutually Agreed Norms for Routing Security (MANRS).
  • Specify an AS path. Be aware that this can quickly backfire since the intent of the system is to find the best path automatically. Introducing manual paths will weaken the system.
  • Limit the number of prefixes that can be received to prevent being flooded with announcements.
  • Implement route filtering.
  • Filter bogons, the IP prefixes that should not be allowed on the internet.
  • Use a form of authentication before accepting announcements.
  • Implement BGP time to live (TTL) checks, rejecting updates from routers located further away from you.
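Two of the measures above, bogon filtering and prefix limits, can be sketched as a simplified inbound-announcement filter. The bogon list is truncated and the limit is artificially low for illustration; real deployments use full bogon feeds and far higher limits, typically configured on the router itself.

```python
import ipaddress

# Truncated bogon list: private and loopback space that should never be
# announced on the public internet.
BOGONS = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "172.16.0.0/12",
                                            "192.168.0.0/16", "127.0.0.0/8")]
MAX_PREFIXES = 2  # artificially low per-peer limit for the example

def accept_announcements(prefixes):
    """Return the announcements a peer session would accept."""
    accepted = []
    for p in prefixes:
        net = ipaddress.ip_network(p)
        if any(net.subnet_of(b) for b in BOGONS):
            continue  # drop bogon space outright
        if len(accepted) >= MAX_PREFIXES:
            break     # prefix limit hit; a real router might tear down the session
        accepted.append(p)
    return accepted
```

The same filtering logic, expressed in a router’s configuration language, is what prefix lists and maximum-prefix settings implement in practice.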

If you want to exercise your plan, you can, for example, use a virtual machine (VM) capable of loading a full BGP table of more than 500,000 routes.

Consider Automated Response Tools

A key element in fighting BGP hijacking is accurate and fast detection that enables flexible and equally fast mitigation of these events. This is where the Automatic and Real-Time dEtection and MItigation System (ARTEMIS) may help in the future.

ARTEMIS, presented in a research paper by the Center for Applied Internet Data Analysis (CAIDA), is a self-operated and unified detection and mitigation approach based on control-plane monitoring. Although still in development, the project shows potential to help network providers address these attacks.

The last phase in incident response — learning lessons — calls for collecting the necessary information to update and improve your plan, especially for the preparation and detection phases. Review whether all the communication channels worked as expected, the escalation paths gave the expected results and you were able to detect the attack in time. The best response plan is prevention.


5 More Retail Cybersecurity Practices to Keep Your Data Safe Beyond the Holidays

This is the second article in a two-part series about retail cybersecurity during the holidays. Read part one for the full list of recommendations.

The holiday shopping season offers myriad opportunities for threat actors to exploit human nature and piggyback on the rush to buy and sell products in massive quantities online. Our previous post covered some network security basics for retailers. Let’s take a closer look at how retailers can properly configure and monitor their networks to help mitigate cyberattacks and provide customers with a safe shopping experience during the holiday season.

1. Take a Baseline Measurement of Your Network Traffic

Baselining is the process of measuring normal amounts of traffic over a period of days or even weeks to discern any suspicious traffic peaks or patterns that could reveal an evolving attack.

Network traffic measurements should be taken during regular business hours as well as after hours to cover the organization’s varying activity phases. As long as the initial baseline is taken during a period when traffic is normal, the data can be considered reliable. An intrusion detection system (IDS) or intrusion prevention system (IPS) can then assist with detecting abnormal traffic volumes — for example, when an intruder is exfiltrating large amounts of data when offices are closed.

Below are some factors to consider when performing a baseline measurement that could be helpful in detecting anomalies:

  • Baseline traffic on a regular basis.
  • Look for atypical traffic during both regular and irregular times (e.g., after hours).
  • Set alarms on an IDS/IPS for high and low thresholds to automate this process. Writing signatures specific to your company’s needs is a key element to an IDS/IPS working effectively and should be carried out by trained security specialists to avoid false alarms.
  • Investigate any discrepancies upon initial discovery and adjust thresholds accordingly.
  • Consider using an endpoint detection and response (EDR) solution to help security teams better identify threats, and to allow operations teams to remediate endpoints quickly and at scale.
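The threshold idea behind these alarms can be sketched as flagging samples that deviate sharply from a measured baseline. The sample values below are invented, and a production IDS/IPS would use far richer statistics, but the principle is the same.

```python
from statistics import mean, stdev

# Invented hourly traffic averages gathered during the baselining period.
baseline_mbps = [120, 115, 130, 125, 118, 122, 128]

def is_anomalous(sample, baseline, k=3.0):
    """Flag samples more than k standard deviations from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(sample - mu) > k * sigma
```

A large after-hours spike (say, during a bulk exfiltration) falls far outside the threshold, while normal fluctuation does not; tuning `k` is the code-level analogue of adjusting IDS/IPS alarm thresholds after investigating discrepancies.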


2. Run a Penetration Test Before It’s Too Late

A key preventative measure for retailers with a more mature security posture is running a penetration test. Simply put, the organization’s security team can allow a white hat hacker, or penetration tester, to manually try to compromise assets using the same tactics, techniques and procedures (TTPs) as criminal attackers. This is done to ascertain whether protections applied by the organization are indeed working as planned and to find any unknown vulnerabilities that could enable a criminal to compromise a high-value asset.

Manual testing should be performed in addition to automated scanning. Whereas automated tools can find known vulnerabilities, manual testing uncovers the unknown vulnerabilities that tools alone cannot. Manual testing also targets the systems, pieces of information and vulnerabilities most appealing to an attacker, and specifically focuses on attempting to exploit not just technical vulnerabilities within a system, but also business logic errors and other functionality that, when used improperly, can grant unintended access and/or expose sensitive data.

The key to a penetration test is to begin by assessing vulnerabilities and addressing as many of them as possible prior to the test. Then, after controls are in place, decide on the type of test to carry out. Will it be a black box test, where the testers receive no information about the target’s code and schematics? Or will it be a white box test, where organizations fully disclose information about the target to give the tester full knowledge of how the system or application is intended to work? Will it be in a very specific scope and only include customer-facing applications?

It can be helpful to scope a penetration test by taking the following three steps prior to launching the testing period:

  1. Establish goals for the testing. Since penetration testing is intended to simulate a real-world attack, consider scenarios that are relevant to your organization. Giving thought to what type of data is at risk or what type of attacker you’re trying to simulate will allow the testers to more closely approximate threats relevant to your organization.
  2. Draft a thorough contract to state the expectations and scope of the project. For example, if there are specific areas a penetration tester should not access based on criticality or sensitivity, such as production servers or credit card data, outline these points in the contract. Also, define whether the penetration testers should attempt to compromise both physical access and remote access to compromise networks, or if just one is preferred. Consider if you wish to have social engineering included within the test as well.
  3. Have the vendor and its employees sign nondisclosure agreements (NDAs) to keep their findings confidential and ensure their exclusive use by the organization.

Penetration testers from reputable companies are thoroughly vetted before being allowed to conduct these tests. The retail industry can benefit from this type of testing because it mimics the actions of a threat actor and can reveal specific weaknesses about an organization. It can even uncover deficiencies in staff training and operational procedures if social engineering is included within the scope of the testing.

3. Check Your Log Files for Anomalies

Log data collected by different systems throughout an organization is critical in investigating and responding to attacks. Bad actors know this and, if they manage to breach an organization and gain elevated privileges, will work to cover up their tracks by tampering with logs.

According to IBM X-Force Incident Response and Intelligence Services (IRIS) research, one of the most common tactics malicious actors employ is post-intrusion log manipulation. In looking to keep their actions concealed, attackers will attempt to manipulate or delete entries, or inject fake entries, from log files. Compromising the integrity of security logs can delay defenders’ efforts to find out about malicious activity. Additional controls and log monitoring can help security teams avoid this situation.

Below are some helpful tips and examples of security logs that must be checked to determine whether anything is out of the ordinary.

  • Are your logs being tampered with? Look for altered timestamps, missing entries, additional or duplicate entries, and anomalous login attempts.
  • Transfer old log files to a restricted zone on your network. This can help preserve the data and create space for logs being generated overnight.
  • Use a security information and event management (SIEM) tool to assist with analyzing logs and identifying anomalies reported by your organization’s security controls.
  • To include as many sources of information as possible, plug in endpoint, server, network, transaction and security logs for analysis by a SIEM system. Look for red flags such as multiple failed logins, denied access to sensitive areas, ping sweeps, etc.
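One of the red flags above, repeated failed logins, can be sketched as a simple counting pass over parsed log entries. The `(source_ip, event)` tuple format is a simplifying assumption; real logs would first be parsed into this shape, and a SIEM performs this kind of correlation at scale.

```python
from collections import Counter

FAILED_THRESHOLD = 5  # illustrative cutoff for "multiple failed logins"

def flag_failed_logins(entries):
    """Return source IPs with at least FAILED_THRESHOLD failed logins.

    entries: iterable of (source_ip, event) tuples from parsed logs.
    """
    failures = Counter(ip for ip, event in entries if event == "login_failed")
    return [ip for ip, count in failures.items() if count >= FAILED_THRESHOLD]

# Six failures from one source, one successful login from another.
logs = [("10.0.0.5", "login_failed")] * 6 + [("10.0.0.9", "login_ok")]
suspects = flag_failed_logins(logs)
```

The same pattern extends to the other red flags listed above: count or bucket events per source, compare against a threshold, and alert on the outliers.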

Knowing which logs to investigate is also critical to successful log analysis. For example, point-of-sale (POS) systems are often installed on Microsoft Windows or Linux systems. It is therefore critical to review operating system logs for these particular endpoints. When it comes to POS networks, where many of the devices are decentralized, daily usage, security and application logs are good places to look for anomalies.

For network security, use logs from network appliances to determine failed or excessive login attempts, increases or decreases in traffic flow, and unauthorized access by users with inadequate privilege levels.

4. Balance Your Network and Website Traffic

According to the National Retail Federation, online sales from November and December 2017 generated more than $138.4 billion, topping 2016 sales by 11.5 percent. This year is likely going to set its own record. With internet traffic volumes expected to be at their highest, online retailers that are unprepared could see the loss of sales and damaged reputation in the aftermath of the holiday season.

But preparing for extra shoppers is the least of retailers’ worries; attackers may take advantage of the festive time of year to extort money by launching distributed denial-of-service (DDoS) attacks against retail websites. These attacks work by flooding a website or network with more traffic than it can handle, causing it to cease accepting requests and stop responding.

To stay ahead of such attacks, online retailers can opt to use designated controls such as load balancers. Load balancers are an integral part of preventing DDoS attacks, which can affect POS systems storewide. With a well-coordinated DDoS attack, a malicious actor could shut down large parts of their target’s networks.

One best practice is to prepare before traffic peaks. Below are some additional tips for a more balanced holiday season.

  • Preventing a DDoS attack can be an imposing undertaking, but with a load balancing device, most of this work can be automated.
  • Load balancers can be either hardware devices or virtual balancers that work to distribute traffic as efficiently as possible and route it to the server or node that can best serve the customer at that given moment. In cases of high traffic, it may take several load balancers to do the work, so evaluate and balance accordingly.
  • Load balancers can be programmed to direct traffic to servers dedicated to customer-facing traffic. Using them can also enable you to move traffic to the proper location instead of inadvertently allowing access to forbidden areas.
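As a rough sketch of the distribution logic described above, a least-connections policy routes each new request to the server currently handling the fewest active connections. The server names and counts are invented, and real load balancers combine this with health checks and weighting.

```python
# Least-connections selection: pick the server with the lightest current load.
def pick_server(connections):
    """connections maps server name -> active connection count."""
    return min(connections, key=connections.get)

pool = {"web-1": 42, "web-2": 17, "web-3": 30}
target = pick_server(pool)  # "web-2" carries the fewest connections
```

Under a traffic flood, this policy spreads the load across the pool instead of letting one node saturate, which is part of how load balancers blunt the impact of a DDoS attack.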

Load balancers are typically employed by larger companies with a prominent web footprint. However, smaller companies should still consider employing them because they serve a multitude of purposes. Keeping the load on your servers balanced can help network and website activity run smoothly year-round and prevent DDoS attacks from doing serious damage to your organization’s operations or web presence.

5. Plan and Practice Your Incident Response Strategy

An incident response (IR) plan is essential to identifying and recovering from a security incident. Security incidents should be investigated until they have been classified as true or false positives. The more timely and coordinated an organization’s response is to an incident, the faster it can limit and manage the impact. A solid IR plan can help contain an incident rapidly and result in better protection of customer data, reduction of breach costs and preservation of the organization’s reputation.

If your enterprise does not have an IR plan, now is the time to create one. In the event that your enterprise already has a plan, take the time to get key stakeholders together to review it and ensure it is up-to-date. Most importantly, test and drill the plan and document its effectiveness so you’re prepared for the attack scenarios most relevant to your organization.

When evaluating an IR plan, consider the following tips to help accelerate your organization’s response time:

  • Threat actors who compromise retail cybersecurity will typically turn stolen data around quickly for a profit on the dark web. Use dark web search tools to look for customer data that may have been compromised. Sometimes, data can be identified by the vendor that lost it, leading to the detection of an ongoing attack.
  • Before an attack occurs, establish a dedicated IR team with members from different departments in the organization.
  • Make sure each team member knows his or her precise role in the case of an incident.
  • Keep escalation charts and runbooks readily available to responders, and make sure copies are available offline and duplicated in different physical locations.
  • Test your IR strategy under pressure in an immersive cyberattack simulation to find out where the team is strong and what may still need some fine-tuning.

Make Retail Cybersecurity a Year-Round Priority

Increased vigilance is important for retailers during the holiday season, but these network security basics and practices can, and should, be maintained throughout the year. Remember, attackers don’t just wait until the holiday season to strike. With year-round preparation, security teams can mitigate the majority of threats that come their way.



Advancing Security Operations Through the Power of a SIEM Platform

The 2018 Gartner Magic Quadrant for Security Information and Event Management (SIEM) has recently been published, and in reading it, it seemed like a good time to reflect upon the latest trends in this well-established yet continuously evolving market. In its early days, the SIEM market was primarily driven by audit and compliance needs. But, as the threat landscape evolved and attackers became more sophisticated, SIEM solutions have had to keep up. A technology that was initially meant for compliance evolved into threat detection, and now, in many cases, it sits at the epicenter of the security operations center (SOC).

While not all SIEM providers have survived this decade of transition, the leading vendors have evolved to help security teams keep up with today’s constant barrage of threats, better defend new environments from advanced and targeted attacks, and effectively address threats despite a growing cybersecurity skills shortage. While some SIEMs did die, the old adage of “SIEM is dead” is certainly not true.

Read the full report

3 Key Trends in SIEM Evolution

When I look back at the last 12 to 18 months, three key trends have had a major impact on the next phase of SIEM evolution.

First, adversaries continue to use tactics such as well-crafted spear phishing emails to exploit users, compromise credentials and use insider access to steal critical enterprise data. As these threats increasingly become signature-less, defenders need new ways to identify not just known threats, but also symptoms of unknown threats. As this need has grown, so have technologies such as machine learning and advanced historical analysis, which help detect anomalous behaviors and enable defenders to respond faster so they can stop attackers before damage is done.

Second, the adoption of new technologies, such as cloud infrastructure and the Internet of Things (IoT), has increased the attack surface and, in many cases, created new blind spots. While these new systems and environments can help create new business advantages, they can also create new risks. As a result, more than ever before, security teams are looking to SIEM solutions to gain a comprehensive, centralized view into cloud environments, on-premises environments, and network and user activity to increase their situational awareness and enable them to better manage cybersecurity risks.

Third, thanks to a growing cybersecurity skills shortage, organizations are demanding solutions that are easier to deploy, manage and maintain. Modern threat detection capabilities require an ever-growing number of data sources, and the addition of those data sources can require significant integration and tuning effort. Resource-constrained teams simply don’t have the luxury of allocating this much time or effort to managing a solution. Instead, they demand ongoing assistance to continuously improve detection and investigation processes — without needing to dedicate expensive in-house experts or buy months of professional services.

Security Teams Need a More Advanced SIEM Solution

In the past year, the leading SIEM vendors recognized the above three market trends and invested significant effort into evolving their solutions to address the challenges. Through developing open app-based ecosystems, vendors are now able to easily deliver prebuilt integrations, security use cases and reports that can be easily consumed. As a result, customers are able to address what matters most in their unique environments without introducing unnecessary complexity or requiring major system upgrades.

For example, to address more sophisticated attackers, security teams should be able to leverage prebuilt, fully integrated analytics for targeted use cases, such as detecting endpoint threats, compromised user credentials and data exfiltration over the Domain Name System (DNS). This approach can help security teams leverage their vendor’s expertise to outpace attackers — without having to become experts in each and every technology themselves.

To better address the rapid adoption of new technologies such as infrastructure-as-a-service (IaaS), security teams should be able to easily integrate their SIEM platform with cloud environments such as AWS, Azure and Google Cloud to gain centralized visibility into misconfigurations and emerging threats such as cryptocurrency mining.

Lastly, to help address the challenges associated with the cybersecurity skill shortage, organizations can look to solutions that provide built-in automation and intelligence. Unique offerings such as cognitive assistants are available to provide intelligent insights into the root cause, scope, severity and attack stage of a threat, helping security analysts punch above their cybersecurity weight class. Additional expertise can be provided with built-in guidance to help analysts address new use cases and more easily tune systems. As a result of these innovations, security teams can become more effective despite having limited resources and budgets.

Leading the Way With New SIEM Platform Innovations

As the landscape continues to evolve, cybersecurity teams can no longer rely on closed, complex solutions for threat detection and investigation. Instead, they need to be able to rely on a proven, flexible SIEM platform that offers open ecosystems packed with out-of-the-box integrations, security use cases and reports to address a variety of needs — ranging from compliance to advanced threat detection — across on-premises and cloud-based environments.

This year, we’re proud that IBM was named a Leader in the 2018 Gartner Magic Quadrant for SIEM, marking our 10th consecutive year in the “Leaders” Quadrant. But we’re even prouder that organizations continue to choose IBM QRadar day in and day out because of our demonstrated commitment to their evolving needs.

Read the full report

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, express or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

The post Advancing Security Operations Through the Power of a SIEM Platform appeared first on Security Intelligence.

How Can Government Security Teams Overcome Obstacles in IT Automation Deployment?

IT automation has become an increasingly critical tool used by enterprises around the world to strengthen their security posture and mitigate the cybersecurity skills shortage. But most organizations don’t know how, when or where to automate effectively, as noted in a recent report by Juniper Networks and the Ponemon Institute.

According to “The Challenge of Building the Right Security Automation Architecture,” only 35 percent of organizations have employees on hand who are experienced enough to respond to threats using automation. The majority of organizations (71 percent) ranked integrating disparate security technologies as the primary obstacle they have yet to overcome as they work toward an effective security automation architecture.

The report pointed out that the U.S. government is likely to struggle with IT automation as well, but there is much that it can learn from the private sector to help streamline the process.

How Hard Can IT Automation Be?

According to the study’s findings, enterprises are struggling to implement automation tools because of the lack of expertise currently available.

Juniper’s head of threat research, Mounir Hahad, and its head of federal strategy, David Mihelcic, said the U.S. government will “definitely struggle with automation as much, if not more than the private sector.”

About half (54 percent) of the survey’s respondents reported that detecting and responding to threats is made easier with automation technologies. Of the 1,859 IT and IT security practitioners in the U.S., the U.K., Germany and France, 64 percent found a correlation between automation and the increased productivity of security personnel.

Be Cautiously Optimistic

Indeed, there is good news for government security teams. Technology Modernization Fund (TMF) awards are now available as an initiative of the Modernizing Government Technology Act (MGT). The Departments of Energy, Agriculture, and Housing and Urban Development were the first three agencies to receive a combined total of $45 million in TMFs, according to FedScoop.

More government agencies will likely apply for some of the $55 million that remains available for 2018. While there’s a strong likelihood that agencies will continue to invest in automation with some portion of these funds, Juniper Networks warned that they shouldn’t expect an easy deployment.

“The cybercrime landscape is incredibly vast, organized and automated — cybercriminals have deep pockets and no rules, so they set the bar,” said Amy James, director of security portfolio marketing at Juniper Networks, in a press release. “Organizations need to level the playing field. You simply cannot have manual security solutions and expect to successfully battle cybercriminals, much less get ahead of their next moves. Automation is crucial.”

Why Automate?

With so many IT teams unable to recruit sufficient talent to implement automation tools, David “Moose” Wolpoff, chief technology officer (CTO) and co-founder of Randori, questioned why organizations are considering them as part of their security infrastructure in the first place.

“Based on [Juniper’s] findings, I get the impression that government entities may be feeling the same way, buying a bunch of automation tools without knowing quite how or why they are going to use them,” Wolpoff said.

Organizations that dive headfirst into implementing automation, whether government entities or not, will likely run into problems if they fail to plan with business objectives in mind.

“Automation isn’t a solution, it’s a force-multiplier,” explained Wolpoff. “If it’s not enabling your objectives, then you’re just adding a useless tool to your toolbox. My advice to government security teams planning to implement automation would be to sit down with leadership to discuss not only what you want to gain from automation, but where automation makes sense and what it will take to successfully implement.”

Three Tips to Deploy Automation Thoughtfully

Given the need for interoperability within and across the sundry components of different agencies, many conversations about automation will likely result in a green light for implementation. If that’s the case, Hahad offered these three steps security teams can take to overcome IT obstacles.

1. Start With Basic Tasks

Security teams should start by automating administrative tasks; once IT departments gain experience, they can move on to more advanced processes such as event-driven automation.

Too often, organizations bite off more than they can chew when implementing automation tools, either by misdeploying them or by deploying more than they can fully take advantage of. This only complicates processes further.
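
As a minimal sketch of the kind of administrative task worth automating first, the Python snippet below collapses duplicate alerts before they reach an analyst queue. The alert fields and values are hypothetical:

```python
from collections import OrderedDict

def dedupe_alerts(alerts):
    """Collapse alerts sharing a (source, signature) pair, keeping a
    count so analysts still see how often each alert fired."""
    merged = OrderedDict()
    for alert in alerts:
        key = (alert["source"], alert["signature"])
        if key in merged:
            merged[key]["count"] += 1
        else:
            merged[key] = dict(alert, count=1)
    return list(merged.values())

raw = [
    {"source": "10.0.0.5", "signature": "failed-login"},
    {"source": "10.0.0.5", "signature": "failed-login"},
    {"source": "10.0.0.9", "signature": "port-scan"},
]
print(dedupe_alerts(raw))  # two entries; the failed-login alert carries count=2
```

Simple scripts like this build the operational confidence needed before handing more consequential decisions to automation.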

2. Collaborate Across Agencies

Replacing legacy systems and deploying automation tools will require much closer collaboration across teams and agencies to identify which framework and architecture they should adopt. A lack of coordination will result in a patchwork of architectures, vendors and tools, which could produce significant gaps and redundancies.

3. Fully Embrace Automation

IT teams are traditionally hesitant to remove the human element from processes, fearing the system will block something critical and cause more problems. If an agency invests in automating its security tools, it should automate across the security processes — from detection and alerting to incident response. The more tasks automation can manage, the more teams will be empowered to complete higher-level work.

It’s important to identify the additional capabilities that don’t require a lot of heavy lifting but will result in saving both time and money. You can avoid unnecessary additional costs that will delay deployment by talking with other agencies that have gone through a similar process.

Depending on how deeply automated those organizations are, sharing experiences may help streamline your own deployment. In the end, streamlining and simplifying programs for every team is the ultimate goal of automation.

Fight Evolving Cybersecurity Threats With a One-Two-Three Punch

When I became vice president and general manager for IBM Security North America, the staff gave me an eye-opening look at the malicious hackers who are infiltrating everything from enterprises to government agencies to political parties. The number of new cybersecurity threats is distressing, doubling from four to eight new malware samples per second between the third and fourth quarters of 2017, according to McAfee Labs.

Yet that inside view only increased my desire to help security professionals fulfill their mission of securing organizations against cyberattacks through client and industry partnerships, advanced technologies such as artificial intelligence (AI), and incident response (IR) training on the cyber range.

Cybersecurity Is Shifting From Prevention to Remediation

Today, the volume of threats is so overwhelming that getting ahead is often unrealistic. It’s not a matter of if you’ll have a breach, it’s a matter of when — and how quickly you can detect and resolve it to minimize damage. With chief information security officers (CISOs) facing a shortage of individuals with the necessary skills to design environments and fend off threats, the focus has shifted from prevention to remediation.

To identify the areas of highest risk, just follow the money to financial institutions, retailers and government entities. Developed countries also face greater risks. The U.S. may have advanced cybersecurity technology, for example, but we also have assets that translate into greater payoffs for attackers.

Remediation comes down to visibility into your environment that allows you to notice not only external threats, but internal ones as well. In fact, internal threats create arguably the greatest vulnerabilities. Users on the inside know where the networks, databases and critical information are, and often have access to areas that are seldom monitored.

Bring the Power of Partnerships to Bear

Once you identify a breach, you’ll typically have minutes or even seconds to quarantine it and remediate the damage. You need to be able to leverage the data available and make immediate decisions. Yet frequently, the tools that security professionals use aren’t appropriately implemented, managed, monitored or tuned. In fact, 44 percent of organizations lack an overall information security strategy, according to PwC’s “The Global State of Information Security Survey 2018.”

Organizations are beginning to recognize that they cannot manage cybersecurity threats alone. You need a partner that can aggregate data from multiple clients and make that information accessible to everyone, from customers to competitors, to help prevent breaches. It’s like the railroad industry: Union Pacific, BNSF and CSX may battle for business, but they all have a vested interest in keeping the tracks safe, no matter who is using them.

Harden the Expanding Attack Surface

Along with trying to counteract increasingly sophisticated threats, enterprises must also learn how to manage the data coming from a burgeoning number of Internet of Things (IoT) devices. This data improves our lives, but the devices give attackers even more access points into the corporate environment. That’s where technology that manages a full spectrum of challenges comes into play. IBM provides an immune system for security from threat intelligence to endpoint management, with a host of solutions that harden your organization.

Even with advanced tools, analysts don’t always have enough hours in the day to keep the enterprise secure. One solution is incorporating automation and AI into the security operations center (SOC). We layer IBM Watson on top of our cybersecurity solutions to analyze data and make recommendations. And as beneficial as AI might be on day one, it delivers even more value as it learns from your data. With increasing threats and fewer resources, any automation you can implement in your cybersecurity environment helps get the work done faster and smarter.

Make Incident Response Like Muscle Memory

I mentioned malicious insider threats, but users who don’t know their behavior creates vulnerabilities are equally dangerous — even if they have no ill intent. At IBM, for example, we no longer allow the use of thumb drives since they’re an easy way to compromise an organization. We also train users from myriad organizations on how to react to threats, such as phishing scams or bogus links, so that their automatic reaction is the right reaction.

This is even more critical for incident response. We practice with clients just like you’d practice a golf swing. By developing that muscle memory, it becomes second nature to respond in the appropriate way. If you’ve had a breach in which the personally identifiable information (PII) of 100,000 customers is at risk — and the attackers are demanding payment — what do you say? What do you do? Just like fire drills, you must practice your IR plan.

Additionally, security teams need training to build discipline and processes, react appropriately and avoid making mistakes that could cost the organization millions of dollars. Response is not just a cybersecurity task, but a companywide communications effort. Everyone needs to train regularly to know how to respond.

Check out the IBM X-Force Command Cyber Tactical Operations Center (C-TOC)

Fighting Cybersecurity Threats Alongside You

IBM considers cybersecurity a strategic imperative and, as such, has invested extensive money and time in developing a best-of-breed security portfolio. I’m grateful for the opportunity to put it to work to make the cyber world a safer place. As the leader of the North American security unit, I’m committed to helping you secure your environments and achieve better business outcomes.

5 Tips for Uncovering Hidden Cyberthreats with DNS Analytics

The internet has fueled growth opportunities for enterprises by allowing them to establish an online presence, communicate with customers, process transactions and provide support, among other benefits. But it’s a double-edged sword: A cyberattack that compromises these business advantages can easily result in significant losses of money, customers, credibility and reputation, and increases the risk of completely going out of business. That’s why it’s critical to have a cybersecurity strategy in place to protect your enterprise from attackers that exploit internet vulnerabilities.

How DNS Analytics Can Boost Your Defense

The Domain Name System (DNS) is one of the foundational components of the internet, and one that malicious actors commonly exploit to deploy and control their attack frameworks. The internet relies on this system to translate human-readable domain names into Internet Protocol (IP) addresses, the numeric identifiers that allow computers and devices to send and receive information across networks. However, DNS also opens the door for opportunistic cyberattackers to infiltrate networks and access sensitive information.
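
That translation is easy to see with a one-line lookup from the Python standard library (resolving localhost, so no external resolver is involved):

```python
import socket

# DNS in miniature: translate a human-readable name into the numeric
# IP address that computers actually use to route traffic.
ip = socket.gethostbyname("localhost")
print(ip)  # e.g. 127.0.0.1
```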

Here are five tips to help you uncover hidden cyberthreats and protect your enterprise with DNS analytics.

1. Think Like an Attacker to Defend Your Enterprise

To protect the key assets of your enterprise and allocate sufficient resources to defend them, you must understand why a threat actor would be interested in attacking your organization. Attacker motivations can vary depending on the industry and geography of your enterprise, but the typical drivers are political and ideological differences, fame and recognition, and the opportunity to make money.

When it comes to DNS, bad actors have a vast arsenal of weapons they can utilize. Some of the most common methods of attack to anticipate are distributed denial-of-service (DDoS) attacks, DNS data exfiltration, cache poisoning and fast fluxing. As enterprises increase their security spending, cyberattacks become more innovative and sophisticated, including novel ways to abuse the DNS protocol. Malware continues to be the preferred method of threat actors, and domain generation algorithms (DGAs) are still widely used, but even that method has evolved to avoid detection.
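
One common heuristic for spotting DGA-style names — illustrative only, since entropy alone is never conclusive on its own — is to score the randomness of a domain label:

```python
import math
from collections import Counter

def shannon_entropy(label):
    """Bits of entropy per character; algorithmically generated labels
    tend to score higher than human-chosen names."""
    counts = Counter(label)
    total = len(label)
    return -sum(n / total * math.log2(n / total) for n in counts.values())

for domain in ("securityintelligence.com", "xj4kq9z2vbp7w.com"):
    label = domain.split(".")[0]
    print(domain, round(shannon_entropy(label), 2))
```

In practice such a score would be combined with other signals, such as domain age and query volume, precisely because modern DGAs have evolved to look less random.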

2. Make DNS Monitoring a Habit

Passive DNS data is valuable because nearly every new network connection is preceded by a DNS lookup. That means if you collect DNS data correctly, you can see most of the network activity in your environment. The more interesting question is what you can do with this data to create local security insights. And while it is not hard for an attacker to bypass DNS lookups entirely, connections made directly to raw IP addresses are themselves suspicious and easy to detect.

3. Understand Communication and Traffic Patterns

Attackers leverage the DNS protocol in various ways — some of which are well ahead of our detection tools — but there are always anomalies we can observe in the DNS requests sent out by endpoints. DNS traffic patterns vary by enterprise, so understanding what is normal for your organization will enable you to spot anomalies easily.

A robust, secure system should be able to detect both DNS tunneling and DNS exfiltration, which is not as easy as it sounds because the two have different communication patterns. DNS tunneling traffic is reliable and frequent, the flow is bidirectional, and sessions are typically long-lived. DNS exfiltration, on the other hand, is opportunistic, unexpected and possibly unidirectional, since attackers are waiting for the right moment to sneak out valuable data.
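
The pattern differences described above can be turned into simple heuristics. The sketch below flags unusually long labels (exfiltration-like) and high query volume under a single registered domain (tunneling-like); the thresholds and data format are illustrative, not tuned values:

```python
from collections import Counter

def flag_dns_anomalies(queries, max_label_len=40, max_queries=100):
    """Flag endpoints with overly long labels (exfiltration-like) or a
    flood of lookups under one registered domain (tunneling-like)."""
    flags = set()
    per_domain = Counter()
    for endpoint, qname in queries:
        labels = qname.rstrip(".").split(".")
        per_domain[(endpoint, ".".join(labels[-2:]))] += 1
        if max(len(label) for label in labels) > max_label_len:
            flags.add((endpoint, "long-label"))
    for (endpoint, domain), count in per_domain.items():
        if count > max_queries:
            flags.add((endpoint, "high-volume"))
    return flags

observed = [("10.0.0.5", "a" * 50 + ".evil.example")]
observed += [("10.0.0.6", "c2.tunnel.example")] * 150
print(flag_dns_anomalies(observed))
```

Real deployments would baseline these thresholds per environment rather than hard-code them.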

4. Get the Right Tools in Place

When analyzing which tools are the best to protect your organization against attacks leveraging DNS, consider what assets you want to protect and the outcomes you would like your analysts to achieve. There are many tools that can be pieced together to create a solution depending on your goals, such as firewalls, traffic analyzers and intrusion detection systems (IDSs).

To enhance the day-to-day activities of your security operations center (SOC) and enable your team to conduct comprehensive analysis of domain activity and assign appropriate risk ratings, your SOC analysts should take advantage of threat intelligence feeds. These feeds help analysts understand the tactics, techniques and procedures (TTPs) of attackers and provide a list of malicious domains to block or alert on in their security systems. When this information is correlated with internal enterprise data through a security information and event management (SIEM) platform, analysts have the full visibility needed to detect or anticipate ongoing attacks.

5. Be Proactive and Go Threat Hunting

Technology is a very useful tool that allows us to automate processes and alerts us to suspicious activity within our networks — but it is not perfect. Threat hunting can complement and strengthen your defense strategy by proactively searching for indicators of compromise (IoCs) that traditional detection tools might miss. To succeed at threat hunting, you must first establish a baseline for your environment and then define the anomalies you are going to look for.

A standard method for threat hunting is searching for unusual and unknown DNS requests, which can catch intruders that have already infiltrated your system as well as would-be intruders. Some indicators of abnormal DNS requests include the number of NXDOMAIN responses received by an endpoint, the number of queries an endpoint sends out and new query patterns. If you identify a potential threat, an incident response (IR) team can help resolve and remediate the situation by analyzing the data.
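
For example, a hunt over NXDOMAIN responses might start with a simple count per endpoint (the event format and threshold here are hypothetical):

```python
from collections import Counter

def top_nxdomain_endpoints(dns_events, threshold=3):
    """List endpoints whose failed-lookup count meets the threshold;
    DGA beaconing often burns through many nonexistent names."""
    counts = Counter(
        event["client"] for event in dns_events if event["rcode"] == "NXDOMAIN"
    )
    return [client for client, n in counts.most_common() if n >= threshold]

events = [
    {"client": "10.0.0.7", "rcode": "NXDOMAIN"},
    {"client": "10.0.0.7", "rcode": "NXDOMAIN"},
    {"client": "10.0.0.7", "rcode": "NXDOMAIN"},
    {"client": "10.0.0.2", "rcode": "NOERROR"},
]
print(top_nxdomain_endpoints(events))  # ['10.0.0.7']
```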

Learn More

Every organization is unique, but by understanding the basics of DNS analytics, the common methods of attack and the tools available to security teams, you will be better prepared to protect your enterprise from hidden cyberthreats.

We invite you to attend a live webinar at 11 a.m. ET on Dec. 11 (and available on-demand thereafter) to learn even more about DNS threat hunting.

Register for the webinar

Phish or Fox? A Penetration Testing Case Study From IBM X-Force Red

As you may know, IBM X-Force Red is IBM Security’s penetration testing team. The team features professional, world-class testers who help organizations find and manage their security vulnerabilities on any and all platforms, including software and hardware devices. Our motto is “hack anything to protect everything.”

This post features a case study from IBM X-Force Red that shows how we ran into trouble on a black-box penetration testing assignment, worked against a well-prepared blue team, and overcame the obstacles to ultimately establish a solid adversarial operation. Let’s take a closer look at what we did to get through security and, more importantly, what your team can do to better secure your organization in an ever-evolving adversarial landscape.

A Tale of an Undeliverable Payload

On one of our red team’s recent engagements with a customer’s blue team, we were tasked with delivering a malicious payload to network users without setting off security controls or alerting the defensive team.

As a first attempt, we sent a phishing email to feel out the level of awareness on the other side. The email message was rigged with our malicious payload, for which we selected the attachment type and a lure that would appear credible. However, the blue team on the other side must have been lying in wait for suspicious activity. Every one of our emails was delivered, but our payloads were not. The payloads did not call home to the control server we had set up, and we started getting visits from the defensive team in the form of an anti-malware sandbox.

Within minutes, additional sandboxes hit on our command and control (C&C) server’s handler, and soon more than 12 security vendor clouds were feasting on the payload. We understood at that point that our payload had been detected, analyzed and widely shared by the blue team, but since this was a black-box operation, we had little way of knowing what went wrong after sending out our rigged emails.

If the Phish Fails, Send in the Fox

Going back to the drawing board, we realized that we must have triggered the blue team’s dynamic malware detection systems and controls. We had to find a new way to deliver the payload in a more concealed manner — preferably encrypted — and to have it detonate only when it reached its final destination to prevent premature discovery.

To do so, we had to overcome some hurdles, including:

  • Sidestepping traffic inspection controls;
  • Opening a siloed channel to send information from outside into the organizational networks;
  • Decreasing repeatable sampling of our externally hosted content;
  • Minimizing the chance of attribution at the initial visit/download/delivery stages; and
  • Bypassing URL inspections.

Some creative thinking summoned a good candidate to help us overcome most controls, mostly because it is a legitimate service that people use in daily interactions: Mozilla’s Firefox Send (FFSend).

Before we continue to describe the use of FFSend, we would like to note here that it is a legitimate tool that can be used safely, and that it was not compromised. We also disclosed information in this blog to Mozilla ahead of its publication and received the company’s support.

The Right Fox for the Job

FFSend is a legitimate file transfer tool from Mozilla. It has several interesting features that make it a great tool for users, and when files are sent through it, its developers indicate the service generates “a safe, private and encrypted link that automatically expires to ensure your stuff does not remain online forever.” This makes FFSend a useful way to send private files between people securely.

To send a file, the sender accesses FFSend via a browser and uploads the file he or she wants to share through a simple web interface, receives a URL for the shared link, and sends it to the recipient. The recipient visits the shared link and downloads the file, at which point the FFSend service “forgets” the link and removes the shared content from the server.

Figure 1: Basic flow of events using FFSend

From our red team’s perspective, FFSend was a good fit for sending encrypted files. Let’s see how it answered some of the needs we defined.

FFSend allows file sizes up to 1 GB — large enough both to send a payload and to exfiltrate data — which answered our need for a siloed, covert channel into the organization. It would encrypt and decrypt the payload for us with an AES-GCM algorithm directly in the browser, yet we would not have to deal with any key generation or distribution. The payload would evade inspection by intercepting proxies that can unwrap Transport Layer Security (TLS), and it would remain private, unshared with any party along the way, including Mozilla.
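
One property worth noting: FFSend-style links carry the key material in the URL fragment — the part after the # — which browsers do not transmit to the server. A quick standard-library sketch of that behavior (the link itself is made up):

```python
from urllib.parse import urlsplit

# Hypothetical FFSend-style share link; the secret rides in the fragment.
link = "https://send.firefox.com/download/abc123/#secret-key-material"
parts = urlsplit(link)

# Only the path (and query, if any) is sent in the HTTP request;
# the fragment never leaves the client, so proxies never see the key.
print(parts.path)      # /download/abc123/
print(parts.fragment)  # secret-key-material
```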

Figure 2: Schematic view of FFSend’s automated encryption

Since firefox.com is a trusted domain on most organizational controls, using FFSend gave us yet another advantage: we would not have to labor to set up a fake site that might raise suspicion, and we could still get our file’s link across to the recipient. The trusted Firefox domain is also more likely to slip through URL inspection and anti-phishing controls, as well as the blacklists organizations deploy to catch malicious content coming from rogue resources.

Figure 3: FFSend is considered a trusted source

As for reducing repeated sampling of the payload, we achieved that by setting a strict one-time-only limit on the number of times our FFSend link could be accessed after it was generated, sidestepping sandbox attempts and threat sharing. Moreover, FFSend automatically expires links after 24 hours, which effectively makes the path to our payload self-destruct if the target has not opened it. Self-destruction is also exposed through FFSend’s application programming interface (API), so it can be triggered ad hoc after a link is sent but before its default expiration.

Figure 4: FFSend’s link expiration and self-destruct schema

Avoiding attribution is also easier when using a legitimate service that implements ephemeral storage of the files it delivers. Using such a service allowed us to avoid any links back to our testers, since no account was required to send a file, and no information about the owner of the encrypted data was sent, required or kept.

This meant our ownership of the malicious file would be anonymous, though there would still be a tie to our originating IP address and browser fingerprints. With most information concealed, we deemed this level of anonymity good enough for the desired outcome.

Figure 5: No sender identity required, no attribution links back to red team

Setting Up a Communications Channel

With the file-sending issue resolved, we still needed a covert communication channel to help us establish an ongoing operation without being exposed by the blue team.

To set up this channel, we did not wish to start from scratch, so we decided to use FFSend as the siloed, covert data channel we needed. That solved one problem, but to coordinate the sending and receiving of data over that channel, we would also need a side channel of communications to avoid inspection and detection.

Communication gets inspected by a number of security controls, so it is essential that we blend in with the environment. To do that, we would have to choose a communication protocol that would allow us to look like everyone else on the network. Looking at the typical choices — Hyper Text Transfer Protocol Secure (HTTPS), Internet Control Message Protocol (ICMP) and Domain Name System (DNS) protocols — we selected DNS for its decent packet capacity and overall better chance of blending in with legitimate user traffic.

DNS fit our needs: FFSend would carry the data, and the command channel could be offloaded to DNS. To make everything work together, DNS record content could be encrypted with the same FFSend shared key used to post the data link, keeping things consistent.

Our command protocol could accommodate short instructions and differentiate between the types of requests we wanted to task agents with — commands to run or responses to receive. For example, we could encode instructions such as fetch me <file> or execute <command>. The agent would then carry out the request and post the results over our FFSend data channel.

On the wire, channel interactions would look like well-formed dynamic DNS requests, separate from the HTTPS channel used for data. This split would help us avoid traffic correlation.
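
As a rough illustration of how such a DNS side channel can work — the zone name and encoding scheme below are invented for the sketch, and real tooling would add encryption on top — a short command can be packed into base32 subdomain labels:

```python
import base64

ZONE = "updates.example.com"  # invented attacker-controlled zone

def encode_command(command, zone=ZONE):
    """Pack a short command into base32 subdomain labels (63 chars max
    per label, per DNS rules) under the controlled zone."""
    payload = base64.b32encode(command.encode()).decode().rstrip("=").lower()
    labels = [payload[i:i + 63] for i in range(0, len(payload), 63)]
    return ".".join(labels + [zone])

def decode_command(qname, zone=ZONE):
    """Reverse the encoding on the receiving side of the channel."""
    payload = qname[: -(len(zone) + 1)].replace(".", "").upper()
    payload += "=" * (-len(payload) % 8)  # restore base32 padding
    return base64.b32decode(payload).decode()

query = encode_command("fetch report.docx")
print(query)
print(decode_command(query))  # fetch report.docx
```

To a traffic inspector, each such query looks like an ordinary lookup against a dynamic subdomain, which is exactly why defenders need behavioral rather than purely signature-based detection here.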

The Foxtrot Control Server Rises

Once we knew how to set up our covert communications, we set up a rogue control server and named it Foxtrot. Foxtrot was the mechanism we used to facilitate communication between any number of remote agents.

Having created Foxtrot with a modified FFSend service and a DNS side channel, IBM X-Force Red testers were able to push the initial payload to unsuspecting recipients. The payload circumvented dynamic defenses, helped our red team gain a foothold in the environment and established persistence to freely move data across intercepting proxies. We were also able to execute commands on compromised hosts, even when the defensive team had its security controls and monitoring turned on.

A Word to the Wise Defender

Red teams have the advantage of only needing to find one way in, while blue teams are tasked with securing all ways in and out. This one-sided advantage means that defenders have to keep a close eye on attack tactics, techniques and procedures (TTPs) and expect encryption and covert side channels to challenge existing automated controls.

After having achieved our goals, we came away with some tips for defenders that can help security teams prepare for the TTPs we used.

  • Expect to see the use of client-side encryption gain more prominence in adversarial workflows, and choose security controls accordingly.
  • Expect to see split-data and command channels grow in popularity among attackers, because this technique can help break automated analysis patterns employed by traditional security tools. Defenders should look into behavioral, heuristics-based detection, augmented by a fully staffed security operations center (SOC) to continuously detect split-channel operations.
  • X-Force Red encourages defensive teams to test their incident response (IR) processes against simulated attacker workflows that employ custom tooling capabilities.

What can teams do right now to get ahead of determined threat actors? Step up your security with pre-emptive action in the shape of professional penetration testing, and make sure the scope of the testing gradually covers both hardware and software. You should also consider adopting cognitive solutions to augment analysts’ capabilities and scale up as attacks grow more frequent and complex.

Listen to the X-Force Red in Action podcast series

Is Your SOC Overwhelmed? Artificial Intelligence and MITRE ATT&CK Can Help Lighten the Load

Whether you have a security team of two or 100, your goal is to ensure that the business thrives. That means protecting critical systems, users and data, detecting and responding to threats, and staying one step ahead of cybercrime.

However, security teams today are faced with myriad challenges, such as fragmented threat data, an overabundance of poorly integrated point solutions and lengthy dwell times — not to mention an overwhelming volume of threat intelligence and a dearth of qualified talent to analyze it.

With the average cost of a data breach as high as $3.86 million, up 6.4 percent from 2017, security leaders need solutions and strategies that deliver demonstrable value to their business. But without a comprehensive framework by which to implement these technologies, even the most advanced tools will have little effect on the organization’s overall security posture. How can security teams lighten the load on their analysts while maximizing the value of their technology investments?

Introducing the MITRE ATT&CK Framework

The MITRE Corporation maintains several common cybersecurity industry standards, including Common Vulnerabilities and Exposures (CVE) and Common Weakness Enumeration (CWE). MITRE ATT&CK is a globally accessible knowledge base of adversary tactics and techniques based on real-world observations.

A cyber kill chain describes the various stages of a cyberattack as it pertains to network security. The actual framework, called the Cyber Kill Chain, was developed by Lockheed Martin to help organizations identify and prevent cyber intrusions.

The steps in a kill chain trace the typical stages of an attack from early reconnaissance to completion. Analysts use the chain to detect and prevent advanced persistent threats (APT).

Cyber Kill Chain

The MITRE ATT&CK framework builds on the Cyber Kill Chain, providing a deeper level of granularity with a behavior-centric approach.

MITRE Modified Cyber Kill Chain

Benefits of adopting the MITRE ATT&CK framework in your security operations center (SOC) include:

  • Helping security analysts understand adversary behavior by identifying tactics and techniques;

  • Guiding threat hunting and helping prioritize investigations based on tactics used;

  • Helping determine the coverage and detection capability (or lack thereof); and

  • Determining the overall impact using adversaries’ behaviors.
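The tactic-and-technique mapping at the core of the framework can be sketched in a few lines. The sketch below is purely illustrative, not an official ATT&CK API: the technique-to-tactic pairs are real ATT&CK entries, but the alert records and function names are invented for the example.

```python
# Illustrative sketch: grouping SOC alerts by ATT&CK tactic so analysts can
# prioritize investigations by adversary behavior. Technique/tactic pairs
# are real ATT&CK entries; the alert data and helper are invented.

TECHNIQUE_TO_TACTIC = {
    "T1059": "Execution",            # Command and Scripting Interpreter
    "T1003": "Credential Access",    # OS Credential Dumping
    "T1071": "Command and Control",  # Application Layer Protocol
    "T1486": "Impact",               # Data Encrypted for Impact
}

def group_alerts_by_tactic(alerts):
    """Group alert dicts (each with a 'technique' field) by ATT&CK tactic."""
    grouped = {}
    for alert in alerts:
        tactic = TECHNIQUE_TO_TACTIC.get(alert["technique"], "Unknown")
        grouped.setdefault(tactic, []).append(alert["id"])
    return grouped

alerts = [
    {"id": "A-1", "technique": "T1059"},
    {"id": "A-2", "technique": "T1003"},
    {"id": "A-3", "technique": "T1003"},
]
print(group_alerts_by_tactic(alerts))
```

Grouping by tactic rather than raw alert type gives an analyst an immediate read on where in the attack life cycle an adversary is operating.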

How Artificial Intelligence Brings the ATT&CK Framework to Life

To unlock the full range of benefits, organizations should adopt artificial intelligence (AI) solutions alongside the ATT&CK framework. This confluence enables security leaders to automate incident analysis, thereby force-multiplying the team’s efforts and enabling analysts to focus on the most important tasks in an investigation.

Artificial intelligence solutions can also help security teams drive more consistent and deeper investigations. Whether it’s 4:30 p.m. on a Friday or 10 a.m. on a Monday, your investigations should be equally thorough each and every time.

Finally, using advanced AI tools, such as the newly released QRadar Advisor with Watson 2.0, in the context of the ATT&CK framework can help organizations reduce dwell times with a quicker and more decisive escalation process. Security teams can determine root cause analysis and drive next steps with confidence by mapping the attack to their dynamic playbook.

Learn how QRadar Advisor with Watson 2.0 embraces the MITRE ATT&CK framework

The post Is Your SOC Overwhelmed? Artificial Intelligence and MITRE ATT&CK Can Help Lighten the Load appeared first on Security Intelligence.

Retail Cybersecurity Is Lagging in the Digital Transformation Race, and Attackers Are Taking Advantage

Digital transformation is dominating retailers’ attention — and their IT budgets. As a result, significant gaps in retail cybersecurity are left unfilled just as retail IT faces new challenges, from infrastructure moving to the cloud without clear security policies to an array of new threat vectors focused on personal customer information, ransomware and underprotected business-to-business (B2B) connections.

Just as with line-of-business functions like merchandising and operations, retailers’ cybersecurity functions must undergo a digital transformation to become more holistic, proactive and nimble when protecting their businesses, partners and customers.

Retailers Aren’t Prioritizing Security, and Attackers Are Exploiting the Gaps

According to the retail edition of the “2018 Thales Data Threat Report,” 75 percent of retailers have experienced at least one data breach in the past, with half seeing a breach in the past year alone. That puts retail among the most-attacked industries as ranked by the “2018 IBM X-Force Threat Intelligence Index.”

Underfunded security infrastructure is likely a big reason for this trend; organizations only dedicated an average of around 5 percent of their overall IT budgets to security and risk management, according to a 2016 Gartner report.

While retailers have done a great job addressing payment card industry (PCI) compliance, it has come at a cost to other areas. According to IBM X-Force Incident Response and Intelligence Services (IRIS) research, 78 percent of publicly disclosed point-of-sale (POS) malware breaches in 2017 occurred in the retail sector.

In addition to traditional POS attacks, malicious actors are targeting retailers with new threat vectors that deliver more bang for the buck, such as the following:

  • Personally identifiable information (PII) about customers — Accessible via retailers’ B2C portals, attackers use this information in bot networks to create false IDs and make fraudulent transactions. An increasingly popular approach involves making purchases with gift cards acquired via fraud.
  • Ransomware — Criminals are exploiting poorly configured apps and susceptible end users to access and lock up data, so they can then extract pricey ransoms from targeted retailers.
  • Unprotected B2B vendor connections — Threat actors can gain access to retail systems by way of digital connections to their partners. A growing target is a retailer’s B2B portals that have been constructed without sufficient security standards.

What Are the Biggest Flaws in Retail Cybersecurity?

These new types of attacks take advantage of retailers’ persistent underfunding of critical security defenses. Common gaps include inadequate vulnerability scanning capabilities, unsegmented and poorly designed networks, and using custom apps on legacy systems without compensating controls. When retailers do experience a breach, they tend to address the specific cause instead of taking a more holistic look at their environments.

Retailers also struggle to attract security talent, competing with financial services and other deeper-pocketed employers. The National Institute of Standards and Technology (NIST) reported in 2017 that the global cybersecurity workforce shortage is expected to reach 1.5 million by 2019.

In addition, flaws in governance make retailers more vulnerable to these new types of security threats. To keep up with rapidly evolving consumer demands, many line-of-business departments are adopting cloud and software-as-a-service (SaaS) solutions — but they often do so without any standardized security guidance from IT.

According to the “2017 Thales Data Threat Report,” the majority of U.S. retail organizations planned to use sensitive data in an advanced technology environment such as cloud, big data, Internet of Things (IoT) or containers this year. More than half believed that sensitive data use was happening at the time in these environments without proper security in place. Furthermore, companies undergoing cloud migration at the time of a breach incur $12 per record in additional costs, according to the “2018 Cost of a Data Breach Study.”

To protect their data, retailers need tools to both identify security threats and escalate the response back through their entire infrastructure, including SaaS and cloud services. But many enterprises lack that response capability. What’s more, the “Cost of a Data Breach Study” found that using an incident response (IR) team can reduce the cost of a breach by around $14 per compromised record.
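The per-record figures above can be combined into a rough back-of-the-envelope breach-cost estimate. The sketch below assumes the adjustments simply add and subtract from a baseline per-record cost; the baseline value and record count are assumptions chosen for illustration.

```python
# Rough, illustrative breach-cost model using the per-record figures cited
# above ($12/record added during cloud migration, $14/record saved by an IR
# team). BASE_COST_PER_RECORD and the record counts are assumed values.

BASE_COST_PER_RECORD = 148.0     # assumed baseline cost per compromised record
CLOUD_MIGRATION_PENALTY = 12.0   # added cost when migrating at breach time
IR_TEAM_SAVINGS = 14.0           # reduction when an IR team is in place

def estimate_breach_cost(records, migrating=False, has_ir_team=False):
    per_record = BASE_COST_PER_RECORD
    if migrating:
        per_record += CLOUD_MIGRATION_PENALTY
    if has_ir_team:
        per_record -= IR_TEAM_SAVINGS
    return records * per_record

# 100,000 records breached mid-migration with no IR team:
# 100,000 * (148 + 12) = $16,000,000
print(estimate_breach_cost(100_000, migrating=True))
```

Even a crude model like this makes the business case concrete: at six-figure record counts, a $14-per-record IR saving is worth more than a million dollars per incident.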

Unfortunately, cybersecurity is not always on the radar in retailers’ C-suites. Without a regularly updated cybersecurity scorecard that reflects an organization’s current vulnerability to attack, senior executives might not regularly discuss the topic, take part in system testing or see cybersecurity as part of business continuity.

3 Steps to Close the Gaps in Your Security Stance

Time isn’t stopping as retailers grapple with these threats. Retail cybersecurity leaders must also monitor the General Data Protection Regulation (GDPR), where compliance requirements are sometimes poorly understood, as well as the emergence of artificial intelligence (AI) in both spoofing and security response. In addition, retailers should keep an eye on the continued uncertainty about the vulnerability of platform-as-a-service (PaaS), microservices, cloud-native apps and other emerging technologies.

By addressing the gaps in their infrastructure, governance and staffing, retailers can more effectively navigate known threats and those that will inevitably emerge. Change is never easy, but the following three steps can help retailers initiate digital transformation and evolve their current approach to better suit today’s conditions:

1. Increase Budgets

According to Thales, 84 percent of U.S. retailers plan to increase their security spending. While allocating these additional funds, it’s important for retailers to take a more holistic view, matching budgets to areas of the highest need. Understanding the costs and benefits of addressing security gaps internally or through outsourcing is a key part of this analysis.

2. Improve Governance

Enacting consistent security guidelines across internally run systems as well as cloud- and SaaS-based services can help retailers ensure that they do not inadvertently open up new vulnerabilities in their platforms. Senior-level endorsement is an important ingredient in prioritizing cybersecurity across the enterprise. Regular security scorecarding can be a valuable tool to keep cybersecurity at the top of executives’ minds.

3. Invest in MSS

A growing number of retailers have realized that starting or increasing their use of managed security services (MSS) can help them achieve a higher level of security maturity at the same price as managing activities in-house, if not at a lower cost. MSS allow retailers’ internal cybersecurity to operate more efficiently, address critical talent shortages and enable retailers to close critical gaps in their current security stance.

Why Digital Transformation Is Critical to Rapid Response

Digital transformation is all about becoming more proactive and nimble to respond to consumers’ rapidly growing expectations for seamless, frictionless shopping. Retailers’ cybersecurity efforts require a similar, large-scale transition to cope with new threat vectors, close significant infrastructure gaps and extend security protocols across new platforms, such as cloud and SaaS. By rethinking their budgets, boosting governance and incorporating MSS into their security operations, retail security professionals can support digital transformation while ensuring the business and customer data remains protected and secure.

Listen to the podcast

The post Retail Cybersecurity Is Lagging in the Digital Transformation Race, and Attackers Are Taking Advantage appeared first on Security Intelligence.

Threat Actors Exploit Equation Editor to Distribute Hawkeye Keylogger

A recent keylogger campaign leveraged an old Microsoft Office Equation Editor vulnerability to target user credentials, passwords and clipboard content.

As reported by Quick Heal, threat actors used Rich Text Format (RTF) files — either standalone or embedded in PDF files with DOC extensions — to distribute the Hawkeye keylogger malware.

While the attacks used typical phishing emails to target users and organizations, the campaign opted for a less common path to compromise: the Microsoft Office Equation Editor. The so-called “Hawkeye v8 Reborn” campaign exploits CVE-2017-11882, which triggers a stack buffer overflow in Equation Editor via an unbounded FONT name string defined within a FONT record structure. If successful, attackers gain the ability to execute arbitrary code and deliver malware payloads.
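A defender triaging suspicious attachments can apply a quick heuristic for RTF files that embed an Equation Editor object. The sketch below is a simplistic string scan, not a real RTF parser or a substitute for an AV engine; the “Equation.3” OLE class name and the Equation Editor CLSID are commonly cited indicators, but treat the exact markers as assumptions.

```python
# Simplistic heuristic for flagging RTF files that embed an Equation Editor
# object (the component abused by CVE-2017-11882). NOT a real parser; the
# indicators below are commonly cited but should be treated as assumptions.

EQUATION_INDICATORS = [
    b"equation.3",                             # OLE class name (\objclass)
    b"0002ce02-0000-0000-c000-000000000046",   # Equation Editor CLSID
]

def looks_like_equation_rtf(path):
    with open(path, "rb") as f:
        data = f.read().lower()
    # RTF files begin with the "{\rtf" magic marker.
    if b"{\\rtf" not in data[:1024]:
        return False
    return any(indicator in data for indicator in EQUATION_INDICATORS)
```

A hit from a check like this is a cue for sandbox detonation or deeper OLE analysis, not a verdict on its own.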

Latest Version of Hawkeye Keylogger Brings Additional Capabilities

Obfuscation and evasion are critical to Hawkeye’s success. It starts with the use of Equation Editor: Despite a November 2017 fix from Microsoft, many unpatched versions still exist.

In addition, the Hawkeye keylogger attempts to evade detection by compiling code while executing, and loading its payload in memory rather than writing it to disk. By waiting until the last possible moment to compile code and limiting its attack surface to in-memory infections, Hawkeye makes it difficult for security professionals to identify the threat.

Once the keylogger payload is up and running, threat actors have access to myriad functions, including File Transfer Protocol (FTP) copying, mail credential theft and clipboard capture. The malware also leverages antidebugging with SuppressIldasm and ConfuserEx 1.0, and uses legitimate tools such as MailPassView and BrowserPassView to steal passwords. Furthermore, Hawkeye disables antivirus tools, task manager, command prompt and registry, and the restoration service rstrui.exe is also disrupted to prevent file recovery.

How Security Teams Can Dodge Hawkeye’s Attacks

To avoid Hawkeye keylogger campaigns and similar malspam efforts, organizations should start with patching. It comes down to the Pareto Principle: 20 percent of security issues cause around 80 percent of security problems. In the case of CVE-2017-11882, this means applying Microsoft’s November 2017 fix.

Security experts also recommend implementing multilayered malspam defense, including email filtering, endpoint protection and system hardening. Given the ability of determined attackers to bypass these measures, however, it’s also a good idea to deploy automated incident response (IR) processes capable of analyzing emails, extracting indicators of compromise (IoCs), and updating all filtering devices and services with this information.
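The IoC-extraction step in such an automated IR pipeline can be sketched with a few regular expressions. This is a minimal illustration, not production tooling: the patterns are deliberately simple and will miss edge cases such as IPv6 addresses or defanged indicators (e.g., "hxxp://").

```python
# Minimal sketch of automated IoC extraction from raw email text: pull IPv4
# addresses, URLs and MD5/SHA-256-style hashes so they can be pushed to
# filtering devices. The sample email body is invented.
import re

IOC_PATTERNS = {
    "ipv4":   re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "url":    re.compile(r"https?://[^\s\"'<>]+"),
    "md5":    re.compile(r"\b[a-fA-F0-9]{32}\b"),
    "sha256": re.compile(r"\b[a-fA-F0-9]{64}\b"),
}

def extract_iocs(text):
    """Return de-duplicated, sorted matches for each IoC type."""
    return {name: sorted(set(p.findall(text))) for name, p in IOC_PATTERNS.items()}

email_body = """Invoice attached. Payload fetched from
http://malicious.example/dl?id=7 by 203.0.113.9,
dropper md5 d41d8cd98f00b204e9800998ecf8427e."""
print(extract_iocs(email_body))
```

Feeding the extracted indicators back into mail gateways and proxies closes the loop the paragraph describes: one analyzed phish hardens every downstream filter.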

Source: Quick Heal, Microsoft

The post Threat Actors Exploit Equation Editor to Distribute Hawkeye Keylogger appeared first on Security Intelligence.

Trusting Security Metrics: How Well Do We Know What We Think We Know?

Security leaders aren’t the only ones who seek valuable metrics from accurate instruments. If you remember analog dashboards in cars, you might also remember how unreliable fuel gauges tended to be. As a result, it was impossible to know when you needed to stop for gas.

So when we can’t trust our instruments, how sure can we be of what we think we know? In many ways, organizations today are coming to a similar realization about their security metrics. So what are we to do? Junk the car? Cover up the gauge? Replace it at a significant cost? How about learning to live with it once we realize it still has some value, but needs to be framed in the right way?

If some of the readings from your dashboard have varying levels of accuracy and lag, you might be able to adjust your interpretations of them to correct for those discrepancies.

Understand Measures, Metrics and Their Value

According to the National Institute of Standards and Technology (NIST)’s “Framework for Improving Critical Infrastructure Cybersecurity,” measures are “quantifiable, observable, objective data supporting metrics” and thus “most closely aligned with technical controls.” By contrast, metrics are used to “facilitate decision making and improve performance and accountability” and thus “create meaning and awareness of organizational security postures by aggregating and correlating measures,” according to the framework.

In other words, underlying measures express technical readings, as well as the proper configuration and effectiveness of your controls. The metrics, which are reported to top leadership, should be grounded in context and relevance to the business and its objectives.

Naturally, greater accuracy yields greater value. But complete context requires a diversity of insights. What if you can’t have both?

Why We Need to Check What We Think We Know

In a 2016 white paper titled “Unified Security Metrics,” Cisco shared its experience as the organization sought to unify its metrics to handle security concerns “much more strategically than reactively.” Since implementing its unified security metrics (USM) approach, the chief information officer (CIO) has been able to develop “an overall picture of the business risk,” which enables security professionals to remediate issues more quickly and better align their strategies with business objectives.

But along the way, Cisco’s experience also taught its leaders the importance of analyzing and documenting the quality and feasibility of measurements to determine whether they are “available, trusted and accessible,” and whether the collection and reporting of such measures could be automated. This helped set the company on a path to not only continuously monitor its posture, but also to seek to continuously improve its metrics.

What Happens When You Use Imperfect Measures in Decision-Making?

Before rolling up security measures into metrics, organizations should undergo a systematic review process as described in the Cisco white paper. Look closely at the measurement data collected, the questions that those data points help answer, the category of that measurement (people, process, technology, etc.), as well as attributes such as availability, scalability and, finally, quality.

One example of a process measure to consider is the number of closed and open incidents. Look particularly at whether the root cause and long-term solution of an incident has been identified and tracked to closure. While Cisco reported that this measure is readily available, it is only partially automated, and its quality is deemed as “partly” satisfactory.

Yet even this imperfect data — neither fully automated nor of high quality — has value to the organization when tracked and reported across time. Taken alongside other metrics, it can still contribute to a clearer picture of the organization’s cybersecurity posture and its handling of incidents.
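The review process described above can be captured in a simple measure catalog. The sketch below is hypothetical: the field names, attribute values and sample measures are invented to show how availability, automation and quality might be documented before measures roll up into board-level metrics.

```python
# Hypothetical sketch of the measurement review described above: each
# candidate measure is documented with its category and quality attributes;
# imperfect-but-available measures are reported with a caveat flag rather
# than discarded. All field values are invented for illustration.

MEASURES = [
    {"name": "open_vs_closed_incidents", "category": "process",
     "available": True, "automated": "partial", "quality": "partial"},
    {"name": "patch_latency_days", "category": "technology",
     "available": True, "automated": "full", "quality": "good"},
    {"name": "phishing_report_rate", "category": "people",
     "available": False, "automated": "none", "quality": "unknown"},
]

def reportable(measures):
    """Keep measures that are at least available; flag imperfect ones."""
    out = []
    for m in measures:
        if not m["available"]:
            continue
        out.append(dict(m, caveat=(m["quality"] != "good"
                                   or m["automated"] != "full")))
    return out

for m in reportable(MEASURES):
    print(m["name"], "caveat" if m["caveat"] else "solid")
```

The design choice mirrors the Cisco lesson: imperfect measures stay in the picture, but their limitations travel with them into the metric.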

Build Context to Initiate Conversations With Top Leadership

Although an isolated security measure might be an imperfect reflection of reality, it may yet still present enough value to the organization to be useful as one part of a metric that is shared with top leadership and the board. In the previous example, the number of open incidents still being investigated provides an important view into the organization’s ability to identify root causes, which allows the security team to continuously improve its ability to defend against similar attacks in the future.

Because some metrics are of higher quality than others, it is important to document which ones are based on near real-time, rock-solid measurements, and which ones are collected manually and of less-than-perfect quality. The key benefit of cyber risk metrics is to initiate conversations between security leadership and the board. This is the essential springboard for all future security projects.

The post Trusting Security Metrics: How Well Do We Know What We Think We Know? appeared first on Security Intelligence.

Why You Should Start Leveraging Network Flow Data Before the Next Big Breach

Organizations tend to end up in cybersecurity news because they failed to detect and/or contain a breach. Breaches are inevitable, but whether or not an organization ends up in the news depends on how quickly and effectively it can detect and respond to a cyber incident.

Beyond the fines, penalties and reputational damage associated with a breach, organizations should keep in mind that today’s adversaries represent a real, advanced and persistent threat. Once threat actors gain a foothold in your infrastructure or network, they will almost certainly try to maintain it.

To successfully protect their organizations, security teams need the full context of what is happening on their network. This means data from certain types of sources should be centrally collected and analyzed, with the goal of being able to extract and deliver actionable information.

What Is Network Flow Data?

One of the most crucial types of information to analyze is network flow data, which has unique properties that provide a solid foundation on which a security framework should be built. Network flow data is extracted — by a network device such as a router — from the sequence of packets observed within an interval between two internet protocol (IP) hosts. The data is then forwarded to a flow collector for analysis.

A unique flow is defined by the combination of the following seven key fields:

  1. Source IP address
  2. Destination IP address
  3. Source port number
  4. Destination port number
  5. Layer 3 protocol type
  6. Type of service (ToS)
  7. Input logical interface (router or switch interface)

If any one of the packet values for these fields is found to be unique, a new flow record is created. The depth of the extracted information depends on both the device that generates the flow records and the protocol used to export the information, such as NetFlow or IP Flow Information Export (IPFIX). Inspection of the traffic can be performed at different layers of the Open Systems Interconnection (OSI) model — from layer 2 (the data link layer) to layer 7 (the application layer). Each layer that is inspected adds more meaningful and actionable information for a security analyst.
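The keying behavior described above can be sketched in a few lines: packets sharing the same seven-field tuple update one flow record, and any change in a field starts a new one. The packet fields and record structure below are invented for illustration and far simpler than what a real NetFlow/IPFIX exporter emits.

```python
# Minimal sketch of how a flow exporter keys records on the seven fields
# listed above. Packets with an identical 7-tuple aggregate into one flow
# record; a differing value in any field creates a new record.
from dataclasses import dataclass

@dataclass
class FlowRecord:
    packets: int = 0
    bytes: int = 0

flows = {}

def observe_packet(pkt):
    # The 7-tuple that defines a unique flow.
    key = (pkt["src_ip"], pkt["dst_ip"], pkt["src_port"], pkt["dst_port"],
           pkt["protocol"], pkt["tos"], pkt["input_iface"])
    record = flows.setdefault(key, FlowRecord())
    record.packets += 1
    record.bytes += pkt["length"]

pkt = {"src_ip": "10.0.0.5", "dst_ip": "198.51.100.7", "src_port": 51514,
       "dst_port": 443, "protocol": 6, "tos": 0, "input_iface": "eth0",
       "length": 1500}
observe_packet(pkt)
observe_packet(pkt)
print(len(flows))  # one flow record, two packets aggregated
```

This is also why a flow record has a life span while a log event does not: the record accumulates state for as long as matching packets keep arriving.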

One major difference between log event data and network flow data is that an event, which typically is a log entry, happens at a single point in time and can be altered. A network flow record, in contrast, describes a condition that has a life span, which can last minutes, hours or days, depending on the activities observed within a session, and cannot be altered. For example, a web GET request may pull down multiple files and images in less than a minute, but a user watching a movie on Netflix would have a session that lasts over an hour.

What Makes Network Flow Data So Valuable?

Let’s examine some of the aforementioned properties in greater detail.

Low Deployment Effort

Network flow data requires the least deployment effort because networks aggregate most of their traffic at a few transit points, such as the internet boundary, and the changes required at those transit points are rarely prone to configuration mistakes.

Everything Is Connected

From a security perspective, we can assume that most of the devices used by organizations, if not all of them, operate on and interact with a network. Those devices can either be actively controlled by individuals — workstations, mobile devices, etc. — or operated autonomously — servers, security endpoints, etc.

Furthermore, threat actors typically try to remove traces of their attacks by manipulating security and access logs, but they cannot tamper with network flow data.

Reliable Visibility

The data relevant to security investigations is typically collected from two types of sources:

  • Logs from endpoints, servers and network devices, using either an agent or remote logging; or
  • Network flow data from the network infrastructure.

The issue with logs is that there will always be connected devices from which an organization cannot collect data. Even if security policies mandate that only approved devices may be connected to a network, being able to ensure that unmanaged devices or services have not been inserted into the network by a malicious user is crucial. Furthermore, history has shown that malicious users actively attempt to circumvent host agents and remote logging, making the log data from those hosts unreliable. The most direct source of information about unmanaged devices is the network.

Finally, network flow data is explicitly defined by the protocol, which changes very slowly. This is not the case with log data, where formats are very often poorly documented, tied to specific versions, not standardized and prone to more frequent changes.

Automatically Reduce False Positives

A firewall or access control list (ACL) permit notification does not mean that a successful communication actually took place; network flow data, on the other hand, can confirm that it did. Alerting only on communications the flow data confirms can dramatically reduce false positives and, therefore, save precious security analyst time.
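This cross-referencing step can be sketched as a join between permit events and observed flows. The data structures and matching key below are invented for illustration; a real implementation would match on the full flow tuple and time window.

```python
# Illustrative sketch of the false-positive reduction described above:
# firewall "permit" events are escalated only when a matching network flow
# with actual traffic confirms the communication completed. The event and
# flow record shapes here are invented.

def confirmed_alerts(permit_events, flow_records):
    """Escalate only permits backed by an observed flow that carried bytes."""
    flow_index = {
        (f["src_ip"], f["dst_ip"], f["dst_port"]): f
        for f in flow_records if f["bytes"] > 0
    }
    return [e for e in permit_events
            if (e["src_ip"], e["dst_ip"], e["dst_port"]) in flow_index]

permits = [
    {"src_ip": "203.0.113.9", "dst_ip": "10.0.0.5", "dst_port": 22},
    {"src_ip": "203.0.113.9", "dst_ip": "10.0.0.6", "dst_port": 22},
]
flow_records = [
    {"src_ip": "203.0.113.9", "dst_ip": "10.0.0.5", "dst_port": 22, "bytes": 4096},
]
print(len(confirmed_alerts(permits, flow_records)))  # 1 of 2 permits confirmed
```

Here only one of two permitted connections actually carried traffic, so only one alert reaches an analyst.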

Moving Beyond Traditional Network Data

Traditional network flow technology was originally designed to give network administrators the ability to monitor their networks and pinpoint congestion. More recently, security analysts discovered that network flow data is also useful for finding network intrusions. However, basic network flow data was never designed to detect the most sophisticated advanced persistent threats (APTs). It does not provide the necessary in-depth visibility, such as the hash of a file transferred over the network or the application detected (as opposed to merely the port number). Lacking this level of visibility, traditional network flow data is of limited use in providing actionable information about a cyber incident.

Given the increasing level of sophistication of attacks, certain communications, such as inbound traffic from the internet, should be further scrutinized and inspected with a purpose-built solution. The solution must be able to perform detailed packet dissection and analysis — at line speed and in passive mode — and deliver extensive and enriched network flow data through a standard protocol such as IPFIX, which defines how to format and transfer IP flow data from an exporter to a collector.

The resulting enriched network flow data can be used to augment the prioritization of relevant alerts. Such data can also accelerate alert-handling research and resolution.

Why You Should Analyze Network Flow Data

Network flow data is a crystal ball into your environment because it delivers much-needed and immediate, in-depth visibility. It can also help security teams detect the most sophisticated attacks, which would otherwise be missed if investigation relied solely on log data. By reconciling network flow data with less-reliable log data, organizations can detect attacks more capably and conduct more thorough investigations. The bottom line is that network flow data can help organizations catch some of the most advanced attacks that exist, and it should not be ignored.

The post Why You Should Start Leveraging Network Flow Data Before the Next Big Breach appeared first on Security Intelligence.

Destructive Attacks Spike in Q3, Putting Election Security at Risk

A new report revealed that nearly one-third of cyber incidents reported in Q3 2018 were classified as “destructive attacks,” putting election security at risk in the lead-up to the 2018 midterms.

In its “Quarterly Incident Response Threat Report” for November 2018, Carbon Black found that 32 percent of election-season cyberattacks were destructive in nature — that is, “attacks that are tailored to specific targets, cause system outages and destroy data in ways designed to paralyze an organization’s operations.” These attacks targeted a wide range of industries, most notably financial services (78 percent) and healthcare (59 percent).

In addition, the report revealed that roughly half of cyberattacks now leverage island hopping, a technique that threatens not only the target company, but its customers and partners as well. Thirty percent of survey respondents reported seeing victims’ websites converted into watering holes.

Time to Panic About Election Security? Not So Fast

Despite these alarming statistics and the very real risks they signify, Cris Thomas (aka Space Rogue) of IBM X-Force Red told TechRepublic that since voting machines are not connected to the internet, a malicious actor would need physical access to compromise one. This could prove challenging for attackers, who must understand not only the vulnerabilities in each individual voting machine, but also each precinct’s policies.

Bad actors could theoretically stage an attack by obtaining an official voting machine before the election and gaining physical access to it on voting day, but these machines come with checks and balances that detect when votes are changed, decreasing the likelihood of a successful attack.

Attacks Are Growing Increasingly Evasive — and Expensive

Still, the rise in destructive attacks is particularly concerning given that, as reported by Carbon Black, attacks across the board are becoming more difficult to detect. In addition, 51 percent of cases involved counter-incident response techniques, and nearly three-quarters of participants specifically witnessed the destruction of logs during these incidents. Meanwhile, 41 percent observed attackers circumventing network-based protections.

These evasive tactics could prove costly for companies. According to Accenture, threat actors could set companies back as much as $2.4 million with a single malware incident, with cybercrime costing each organization an average of $11.7 million per year.

How to Defend Against Destructive Attacks

Security professionals can defend their organizations against destructive attacks by developing a dedicated framework to predict what steps an adversary might take once inside the network. Security teams should supplement this framework with AI tools that can use pattern recognition and behavior analysis to stay one step ahead of cyberthreats.

Sources: Carbon Black, Accenture, TechRepublic

The post Destructive Attacks Spike in Q3, Putting Election Security at Risk appeared first on Security Intelligence.