Daily Archives: May 29, 2018

SecMon State of the Union: Refreshing Requirements

Posted under: Research and Analysis

Now that you understand the use cases for security monitoring, our next step is to translate them into requirements for your strategic security monitoring platform. In other words, now that you have an idea of the problem(s) you need to solve, what capabilities do you need to address them? Part of that discussion is inevitably about what you don’t get from your existing security monitoring approach – this research wouldn’t be very interesting if your existing tools were all peachy.


We made the case that Visibility Is Job #1 in our Security Decision Support series. Maintaining sufficient visibility across all the moving pieces in your environment is getting harder. So when we boil it down to a set of requirements, it looks like this:

  • Aggregate Existing Security Data: We could have called this requirement same as it ever was, because all your security controls generate a bunch of data you need to collect. Kind of like the stuff you were gathering in the early days of SEM (Security Event Management) or log management 15 years ago. Given all the other things on your plate, what you don’t want is to need to worry about integrating your security devices, or figuring out how to scale a solution to the size of your environment. To be clear, security data aggregation has commoditized, so this is really table stakes for whatever solution you consider.
  • Data Management: Amazingly enough, when you aggregate a bunch of security data, you need to manage it. So data management is still a thing. We don’t need to go back to SIEM 101 but aggregating, normalizing, reducing, and archiving security data is a core function for any security monitoring platform – regardless of whether it started life as SIEM or a security analytics product. One thing to consider (which we will dig into more when we get to procurement strategies) is the cost of storage, because some emerging cloud-based pricing models can be punitive when you significantly increase the amount of security data collected.
  • Embracing New Data Sources: In the old days the complaint was that vendors did not support all the devices (security, networking, and computing) in the organization. As explained above, that’s less of an issue now. But consuming and integrating cloud monitoring, threat intelligence, business context (such as asset information and user profiles), and non-syslog events all drive a clear need for streamlined integration, so you can get value from additional data faster.

Seeing into the Cloud

When considering the future requirements of a security monitoring platform, you need to understand how it will track what’s happening in the cloud, because it seems the cloud is here to stay (yes, that was facetious). Start with API support, the lingua franca of the cloud. Any platform you choose must be able to make API calls to the services you use, and/or pull information and alerts from a CASB (Cloud Access Security Broker) to track use of SaaS within your organization.

You’ll also want to understand the architecture involved in gathering data from multiple cloud sources. You definitely use multiple SaaS services and likely have many IaaS (Infrastructure as a Service) accounts, possibly with multiple providers, to consider. All these environments generate data which needs to be analyzed for security impact, so you should define a standard cloud logging and monitoring approach, and likely centralize aggregation of cloud security data. You also should consider how cloud monitoring integrates with your on-premise solution. For more detail on this please see our paper on Monitoring the Hybrid Cloud.

For specific considerations regarding different cloud environments:

  • Private cloud/virtualized data center: There are differences between monitoring your existing data center and a highly virtualized environment. You can tap the physical network within your data center for visibility. But for the abstracted layer above that – which contains virtualized networks, servers, and storage – you need proper access and instrumentation in the cloud environment to see what happens within virtual devices. You can also route network traffic within your private cloud through an inspection point, but the architectural flexibility cost is substantial. The good news is that security monitoring platforms can now generally monitor virtual environments by installing sensors within the private cloud.
  • IaaS: The biggest and most obvious challenge in monitoring IaaS is reduced visibility because you don’t control the physical stack. You are largely restricted to logs provided by your cloud service provider. IaaS vendors abstract the network, limiting your ability to see network traffic and capture network packets. You can run all network traffic through a cloud-based choke point for collection, regaining a faint taste of the visibility available inside your own data center, but again that sacrifices much of the architectural flexibility attracting you to the cloud. You also need to figure out where to aggregate and analyze collected logs from both the cloud service and individual instances. These decisions depend on a number of factors – including where your technology stacks run, the kinds of analyses to perform, and what expertise you have available on staff.
  • SaaS: Basically, you see what your SaaS provider shows you, and not much else. Most SaaS vendors provide logs to pull into your security monitoring environment. They don’t provide visibility into the vendor’s technology stack, but you are able to track your employees’ activity within their service – including administrative changes, record modifications, login history, and increasingly application activity. You can also pull information from a CASB which is polling SaaS APIs and analyzing egress web logs for further detail.

Threat Detection

The key to threat detection in this new world is the ability to detect attacks you know about (rules-based), attacks you haven’t seen yet but someone else has (threat intelligence driven), and unknown attacks which cause anomalous activity by your users or devices (security analytics). The patterns you are trying to detect can be pretty much anything – including command and control, fraud, system misuse, malicious insiders, reconnaissance, and even data exfiltration. So there is no lack of stuff to look for – the question is what do you need to detect it?

  • Rules: You can’t ditch your rules – don’t even think about it. Actually you could – but you’d likely miss a bunch of attacks you should catch because you know their patterns. The behavioral models are focused on stuff you don’t know about, not optimized to find known bad stuff. Similar to endpoint protection, rules (signatures) are not an either/or proposition. If you already know about an attack, shame on you if you miss it.
  • Threat Intelligence: For attacks you haven’t seen yet, in the old days you were out of luck. But today there is a decent chance someone else has already been hit, which is where threat intelligence comes into play. You pump a threat feed into your security monitoring platform so that when an attack someone else has already seen comes at you, you are ready. Make sure you can categorize threat intelligence alerts separately, so you can track the effectiveness of each threat feed to determine its value and ensure you aren’t increasing alert noise.
  • Security Analytics: The final approach you need to consider is based on advanced math. You’ll hear terms like security big data, machine learning, and the generic “It’s fancy math, trust us!” to describe these techniques. Regardless of description security analytics involves profiling devices, networks, and applications to baseline normal activity – and then looking for deviations from that profile as indicators of malicious activity. It’s very difficult to discern the differences between one analytics approach and another, so understanding what will work for your organization requires actually trying them out. We’ll discuss procurement in our next post.
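
To make the baselining idea concrete, here is a deliberately simple sketch – our illustration, not any vendor’s algorithm – which flags a metric that deviates sharply from its historical profile. This is the core move behind most security analytics, stripped of the fancy math:

```python
import statistics

def is_anomalous(history, value, threshold=3.0):
    """Flag an observation deviating more than `threshold` standard
    deviations from its historical baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

logins_per_day = [98, 102, 95, 101, 99, 103, 97]  # a week of baseline activity
is_anomalous(logins_per_day, 100)  # within the normal profile
is_anomalous(logins_per_day, 450)  # a sharp deviation worth an alert
```

Real products profile many dimensions at once (users, devices, applications) with far richer models, but the evaluation question is the same: how well does the baseline reflect *your* normal?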

After a few years of using security monitoring technology, hopefully by this point you realize this isn’t (and likely will never be) a set it and forget it scenario. You’ll need to keep the system current and tune it, because not only are adversaries constantly changing and evolving their tactics, but your environment is constantly changing, requiring ongoing maintenance.

So you’ll want to build a learning and tuning step into your operational processes, ensuring you improve detection based on false positives (alerts which weren’t real attacks) and negatives (attacks you missed). If you want to be successful at detecting attacks, a feedback loop is critical.

Forensics and Response

Obviously you cannot prevent every attack, and even if you do fire an alert about a specific attack, your Security Ops team might miss it. So your security monitoring platform will also play a major role in your incident response process. The challenge is less about gathering the data or linking it together than about making sense of the information at your disposal in a structured fashion – which is what you need to accelerate identification of an attack’s root cause. As we discussed in our Future of Security Operations paper, many aspects of the response process can be automated, so ensuring support for that is essential.

Key capabilities include:

  • Search: Modern attacks do not limit themselves to a single machine, so you need to quickly figure out how many devices have been attacked as part of a broader campaign. Some of that takes place during validation/triage as the analyst pivots, but determining the breadth of an attack requires them to search the entire environment for attack indicators, typically via metadata.
    • Natural Language/Cognitive Search: An emerging capability is the use of natural language search terms instead of arcane Boolean operators. This helps less sophisticated analysts be more productive without having to learn a new language.
    • Retrospective Search: Responders often have a sense of what caused an attack, so they should be able to search through historical security data to find activity which did not trigger an alert because it wasn’t recognized at the time.
  • Case Management: The objective is to make each analyst as effective and efficient as possible, so you should have a place to store all information related to each incident. This includes enrichment data from threat intel and other artifacts gathered during validation. This should also feed into a broader incident response platform if the forensics/response team uses one.
  • Visualization: To reliably and quickly validate an alert, it is very helpful to see a timeline of all activity related to an incident. That way you can see what actually happened across devices and get a broader understanding of the issue’s depth. An analyst can take a quick look at the timeline and figure out what requires further investigation. Visualization can present all sorts of information, but be wary of overcomplicating the console. It is definitely possible to present too much.
  • Drill down: Once an analyst has figured out which activity in the timeline raises concerns, they drill into it. At each stage of the attack they find other things to investigate, so being able to jump between events and devices helps identify the root causes of attacks quickly. There is also a decision to be made regarding how much data to collect and have at the ready. Obviously the more granular the available telemetry, the more accurate the validation and root cause analysis. But with increasingly granular metadata available you might not need full capture of networks or endpoints.
  • Workflows and Automation: The more structured you can make your response function, the better a shot junior analysts have of finding the root cause of an attack, and figuring out how to contain and remediate it. Given the skills gap every organization faces, every bit of assistance helps. Response playbooks for a variety of different kinds of attacks within the security monitoring environment can help standardize and structure the response process. Additionally, being able to integrate with automation platforms (now called SOAR – Security Orchestration, Automation, and Response) to streamline response – at least initial phases – dramatically improves effectiveness.
  • Integration with Malware Tools: During validation you will also want to check whether an executed file is actually malware. A security monitoring platform can store executables and integrate with network-based sandboxes to explode and analyze files – to figure out both whether a file is malicious and what it does. This provides context for eventual containment and remediation. Ideally this integration will be native, enabling you to select an executable within the response console to send to the sandbox, with the verdict and report filed with the case.
  • Hunting: Threat hunting has come into vogue over the past few years, as mature organizations decided they no longer wanted to be at the mercy of security monitoring, desiring a more active role finding attackers. So their more accomplished analysts started looking for trouble. They went hunting for adversaries rather than waiting for security monitors to report attacks in progress. Analysts need to figure out which behaviors and activities to hunt, then seek them out in your environment. The hunter starts with a hypothesis, and then runs through scenarios to either prove or disprove it. If the hunter finds suspicious activity then more traditional response functions – including searching, drilling down into available security data, and pivoting to other devices, all help to follow the trail.
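
Retrospective search, in particular, is easy to picture: once a new indicator surfaces, you sweep stored metadata for matches that predate any detection rule. A minimal sketch, using hypothetical event records and our own field names:

```python
from datetime import datetime

# Hypothetical stored event metadata (our field names, for illustration only).
events = [
    {"ts": datetime(2018, 5, 1, 10, 0), "host": "web01", "domain": "evil.example.com"},
    {"ts": datetime(2018, 5, 2, 11, 30), "host": "db02", "domain": "cdn.example.net"},
    {"ts": datetime(2018, 5, 3, 9, 15), "host": "web03", "domain": "evil.example.com"},
]

def retrospective_search(events, indicator):
    """Find historical events matching an indicator learned after the fact."""
    return [e for e in events if e["domain"] == indicator]

# Which hosts touched the newly learned bad domain, and when?
hits = retrospective_search(events, "evil.example.com")
affected_hosts = [e["host"] for e in hits]
```

A real platform does this over indexed metadata at massive scale, but the workflow is the same: new indicator in, historical matches out, breadth of the campaign established.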

Compliance and Reporting

As we have mentioned, compliance reporting is extremely resource intensive and doesn’t add a lot of value to an organization. But if you screw up it can cost a lot of money in fines or other problems. The idea is to streamline the process of substantiating your controls to the greatest degree possible, so you can get the reports done as quickly as possible and back to real work.

This distinctly unsexy requirement is admittedly old hat, but you don’t want to go back to preparing for your assessments by wading through reams of log printouts and assembling data in Excel, do you? You want your security monitoring tool to ship with dozens of reports to show your controls in place, and map them to compliance requirements – so you don’t need to do it manually.

You’ll want the ability to customize the reports which come with the tool, as well as develop your own reports when needed.


Scalability

Over the past few years, as you added mobile and cloud services and possibly endpoint data to your security data collection, you have been dealing with a lot more data – and there are no signs of abatement in the volume of security data. So you need to plan for scale.

  • Security Data: Does your existing security monitoring platform keep pace with the increase in data and continue to perform smoothly? This is where the solution’s underlying architecture shows through. Is the data aggregated on an appliance which gets bogged down at high insertion rates? Does the offering leverage a cloud-based architecture, so you don’t know how it scales – it just does? Does it combine both to support both on-premise assets and cloud-native technology stacks? The architecture you select must match your requirements – make sure any solution you consider can double in size within a reasonable timeframe without a forklift upgrade. Because the only surety in technology is that you will be dealing with more data sooner than you expect.
  • Pricing Scalability: Security monitoring can be priced based on events per second, a historical metric from when all the data was collected by network sensors. We increasingly see pricing based on the volume of data aggregated per day. Either way you have a disincentive to collect more data, which is a problem when visibility into a sprawling IT environment is essential to your ability to detect attacks. So consider how the monitoring platform’s pricing scales.


Other Considerations

As much as we’d like to rely solely on technical requirements to select the best security monitoring platform, other factors always come into play.

  • Integration with Broader Product Line: This is the age-old choice between big security company and focused upstart. We know smaller companies innovate faster, but many larger organizations are actively trying to reduce the number of vendors they deal with. A key question is: can you gain leverage by adopting a security monitoring platform from a big vendor which already provides various other solutions in your environment? One thing to check is that promised integration really exists. We don’t say that facetiously – just because a vendor acquired a smaller company, or signed an OEM technology agreement, doesn’t mean their solutions have been integrated beyond procurement. That’s something to confirm during PoC.
  • Ease of Management: How easy is it to manage the platform? To archive older data? To roll out new collectors, both on-premise and in the cloud? How about adding new use cases and customizing correlation rules? Are policy management screens easy to use, or do they consist of 500 checkboxes you don’t fully understand? Make sure you have good answers to these questions during the PoC, to make sure your new tool doesn’t create more work.
  • Vendor Viability: Have you ever bought a product from a smaller innovative company which didn’t make it for whatever reason, and been left holding the bag? We all have, so keep in mind that vendor fortunes change dramatically and quickly. Your innovative small company may get acquired by a big IT shop and run into the ground. Conversely, many larger security companies have struggled to scale (and show Wall Street growth and profits), forcing them to cut resources and miss huge innovations in the market. So buying from a big company isn’t always a safe bet either. Always consider each vendor’s ongoing viability and ability to deliver on its roadmap, to ensure it lines up with what you need going forward.

Now that you have an idea about what you need to look for in a security monitoring solution, it’s time to talk to vendors and figure out what to buy. We will wrap up this series with the procurement process, which is how you figure out what’s real and what’s not – before you write a large check.

- Mike Rothman

Remote Authentication GeoFeasibility Tool – GeoLogonalyzer

Users have long needed to access important resources such as virtual private networks (VPNs), web applications, and mail servers from anywhere in the world at any time. While the ability to access resources from anywhere is imperative for employees, threat actors often leverage stolen credentials to access systems and data. Due to large volumes of remote access connections, it can be difficult to distinguish between a legitimate and a malicious login.

Today, we are releasing GeoLogonalyzer to help organizations analyze logs to identify malicious logins based on GeoFeasibility; for example, a user connecting to a VPN from New York at 13:00 is unlikely to legitimately connect to the VPN from Australia five minutes later.

Once remote authentication activity is baselined across an environment, analysts can begin to identify authentication activity that deviates from business requirements and normalized patterns, such as:

  1. User accounts that authenticate from two distant locations, and at times between which the user probably could not have physically travelled the route.
  2. User accounts that usually log on from IP addresses registered to one physical location such as a city, state, or country, but also have logons from locations where the user is not likely to be physically located.
  3. User accounts that log on from a foreign location at which no employees reside or are expected to travel to, and your organization has no business contacts at that location.
  4. User accounts that usually log on from one source IP address, subnet, or ASN, but have a small number of logons from a different source IP address, subnet, or ASN.
  5. User accounts that usually log on from home or work networks, but also have logons from an IP address registered to cloud server hosting providers.
  6. User accounts that log on from multiple source hostnames or with multiple VPN clients.

GeoLogonalyzer can help address these and similar situations by processing authentication logs containing timestamps, usernames, and source IP addresses.

GeoLogonalyzer can be downloaded from our FireEye GitHub.

GeoLogonalyzer Features

IP Address GeoFeasibility Analysis

For a remote authentication log that records a source IP address, it is possible to estimate the location each logon originated from using data such as MaxMind’s free GeoIP database. With additional information, such as a timestamp and username, analysts can identify a change in source location over time to determine if that user could have possibly traveled between those two physical locations to legitimately perform the logons.

For example, if a user account, Meghan, logged on from New York City, New York on 2017-11-24 at 10:00:00 UTC and then logged on from Los Angeles, California 10 hours later on 2017-11-24 at 20:00:00 UTC, that is roughly a 2,450 mile change over 10 hours. Meghan’s logon source change can be normalized to 245 miles per hour, which is reasonable through commercial airline travel.

If a second user account, Harry, logged on from Dallas, Texas on 2017-11-25 at 17:00:00 UTC and then logged on from Sydney, Australia two hours later on 2017-11-25 at 19:00:00 UTC, that is roughly an 8,500 mile change over two hours. Harry’s logon source change can be normalized to 4,250 miles per hour, which is likely infeasible with modern travel technology.

By focusing on the changes in logon sources, analysts do not have to manually review the many times that Harry might have logged in from Dallas before and after logging on from Sydney.
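
The feasibility math behind these examples can be sketched in a few lines. This is our illustration, using a great-circle (haversine) distance and an assumed 600 mph cutoff – GeoLogonalyzer’s actual distance model and thresholds may differ:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3958.8

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

def travel_feasible(coord_a, coord_b, hours, max_mph=600):
    """Could a person plausibly cover this distance in the elapsed time?
    The 600 mph cutoff is our assumption, roughly commercial air travel."""
    return haversine_miles(*coord_a, *coord_b) / hours <= max_mph

new_york = (40.7128, -74.0060)
los_angeles = (34.0522, -118.2437)
dallas = (32.7767, -96.7970)
sydney = (-33.8688, 151.2093)

travel_feasible(new_york, los_angeles, 10)  # ~245 mph: plausible
travel_feasible(dallas, sydney, 2)          # thousands of mph: infeasible
```

Normalizing every source change to miles per hour, as the tool does, turns a fuzzy "could they have been there?" question into a single sortable number.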

Cloud Data Hosting Provider Analysis

Attackers understand that organizations may either be blocking or looking for connections from unexpected locations. One solution for attackers is to establish a proxy on either a compromised server in another country, or even through a rented server hosted in another country by companies such as AWS, DigitalOcean, or Choopa.

Fortunately, GitHub user “client9” tracks many datacenter hosting providers in an easily digestible format. With this information, we can attempt to detect attackers utilizing datacenter proxies to thwart GeoFeasibility analysis.
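
The check itself is straightforward once you have a list of hosting provider ranges. A sketch using Python’s ipaddress module, with made-up CIDR blocks standing in for the client9 dataset:

```python
import ipaddress

# Made-up CIDR blocks standing in for a real hosting provider dataset.
DCH_NETWORKS = [ipaddress.ip_network(cidr)
                for cidr in ("52.0.0.0/11", "104.16.0.0/12")]

def is_datacenter_ip(ip):
    """True if the source address falls inside a known hosting provider range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in network for network in DCH_NETWORKS)

is_datacenter_ip("52.1.2.3")     # inside a listed hosting range
is_datacenter_ip("203.0.113.5")  # not in any listed range
```

A logon whose GeoFeasibility looks fine but which originates from datacenter space still deserves a second look, since proxying through a rented server is exactly how attackers defeat location-based checks.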

Using GeoLogonalyzer

Usable Log Sources

GeoLogonalyzer is designed to process remote access platform logs that include a timestamp, username, and source IP. Applicable log sources include, but are not limited to:

  1. VPN
  2. Email client or web applications
  3. Remote desktop environments such as Citrix
  4. Internet-facing applications

GeoLogonalyzer’s built-in --csv input type accepts CSV formatted input with the following considerations:

  1. Input must be sorted by timestamp.
  2. Input timestamps must all be in the same time zone, preferably UTC, to avoid seasonal changes such as daylight savings time.
  3. Input format must match the following CSV structure – this will likely require manually parsing or reformatting existing log formats:

YYYY-MM-DD HH:MM:SS, username, source IP, optional source hostname, optional VPN client details

GeoLogonalyzer’s code comments include instructions for adding customized log format support. Due to the various VPN log formats exported from VPN server manufacturers, version 1.0 of GeoLogonalyzer does not include support for raw VPN server logs.
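
As a rough illustration of the expected input, here is how you might parse rows in that layout and confirm they are timestamp-sorted before feeding them to the tool (hypothetical data; a real adapter would map your exporter’s fields into this shape):

```python
import csv
from datetime import datetime
from io import StringIO

# Hypothetical rows already reshaped into GeoLogonalyzer's expected layout:
# timestamp, username, source IP, optional hostname, optional VPN client.
raw = StringIO(
    "2017-11-24 10:00:00, Meghan, 198.51.100.10, LAPTOP-M, ClientA\n"
    "2017-11-24 20:00:00, Meghan, 203.0.113.7, LAPTOP-M, ClientA\n"
)

rows = []
for fields in csv.reader(raw):
    rows.append({
        "ts": datetime.strptime(fields[0].strip(), "%Y-%m-%d %H:%M:%S"),
        "user": fields[1].strip(),
        "ip": fields[2].strip(),
    })

# The tool requires input sorted by timestamp; verify before submitting it.
is_sorted = rows == sorted(rows, key=lambda r: r["ts"])
```

Sorting and time zone normalization are worth doing in the adapter rather than by hand, since an out-of-order or mixed-zone input quietly breaks the travel-speed math.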

GeoLogonalyzer Usage

Example Input

Figure 1 represents an example input VPNLogs.csv file that recorded eight authentication events for the two user accounts Meghan and Harry. The input data is commonly derived from logs exported directly from an application administration console or SIEM. Note that this example dataset was created entirely for demonstration purposes.

Figure 1: Example GeoLogonalyzer input

Example Windows Executable Command

GeoLogonalyzer.exe --csv VPNLogs.csv --output GeoLogonalyzedVPNLogs.csv

Example Python Script Execution Command

python GeoLogonalyzer.py --csv VPNLogs.csv --output GeoLogonalyzedVPNLogs.csv

Example Output

Figure 2 represents the example output GeoLogonalyzedVPNLogs.csv file, which shows relevant data from the authentication source changes (highlights have been added for emphasis and some columns have been removed for brevity):

Figure 2: Example GeoLogonalyzer output


In the example output from Figure 2, GeoLogonalyzer helps identify the following anomalies in the Harry account’s logon patterns:

  1. FAST – For Harry to physically log on from New York and subsequently from Australia in the recorded timeframe, Harry needed to travel at a speed of 4,297 miles per hour.
  2. DISTANCE – Harry’s 8,990 mile trip from New York to Australia might not be expected travel.
  3. DCH – Harry’s logon from Australia originated from an IP address associated with a datacenter hosting provider.
  4. HOSTNAME and CLIENT – Harry logged on from different systems using different VPN client software, which may be against policy.
  5. ASN – Harry’s source IP addresses did not belong to the same ASN. Using ASN analysis helps cut down on reviewing logons with different source IP addresses that belong to the same provider. Examples include logons from different campus buildings or an updated residential IP address.

Manual analysis of the data could also reveal anomalies such as:

  1. Countries or regions where no business takes place, or where there are no employees located
  2. Datacenters that are not expected
  3. ASN names that are not expected, such as a university
  4. Usernames that should not log on to the service
  5. Unapproved VPN client software names
  6. Hostnames that are not part of the environment, do not match standard naming conventions, or do not belong to the associated user

While it may be impossible to determine if a logon pattern is malicious based on this data alone, analysts can use GeoLogonalyzer to flag and investigate potentially suspicious logon activity through other investigative methods.

GeoLogonalyzer Limitations

Reserved Addresses

Any RFC1918 source IP addresses, such as 192.168.X.X and 10.X.X.X, will not have a physical location registered in the MaxMind database. By default, GeoLogonalyzer will use the coordinates (0, 0) for any reserved IP address, which may alter results. Analysts can manually edit these coordinates, if desired, by modifying the RESERVED_IP_COORDINATES constant in the Python script.

Setting this constant to the coordinates of your office location may provide the most accurate results, although this may not be feasible if your organization has multiple locations or other point-to-point connections.

GeoLogonalyzer also accepts the parameter --skip_rfc1918, which will completely ignore any RFC1918 source IP addresses and could result in missed activity.
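
The reserved-address behavior can be sketched as follows – our simplification of the logic, reusing the RESERVED_IP_COORDINATES constant name from the script, with a stand-in for the MaxMind lookup:

```python
import ipaddress

RESERVED_IP_COORDINATES = (0.0, 0.0)  # the script's default for reserved addresses

def lookup_coordinates(ip, geoip_lookup=None):
    """Return (lat, lon) for an address, substituting a fixed point for
    RFC1918/private space that has no GeoIP record. `geoip_lookup` stands in
    for a MaxMind database query in the real tool."""
    addr = ipaddress.ip_address(ip)
    if addr.is_private:
        return RESERVED_IP_COORDINATES
    return geoip_lookup(ip) if geoip_lookup else None

lookup_coordinates("10.20.30.40")  # RFC1918 -> (0.0, 0.0)
lookup_coordinates("192.168.1.5")  # RFC1918 -> (0.0, 0.0)
```

Because (0, 0) sits in the Gulf of Guinea, any transition between a private address and a real location will register as a large distance change, which is why editing the constant to your office coordinates can reduce noise.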

Failed Logon and Logoff Data

It may also be useful to include failed logon attempts and logoff records with the log source data to see anomalies related to source information of all VPN activity. At this time, GeoLogonalyzer does not distinguish between successful logons, failed logon attempts, and logoff events. GeoLogonalyzer also does not detect overlapping logon sessions from multiple source IP addresses.

False Positive Factors

Note that the use of VPN or other tunneling services may create false positives. For example, a user may access an application from their home office in Wyoming at 08:00 UTC, connect to a VPN service hosted in Georgia at 08:30 UTC, and access the application again through the VPN service at 09:00 UTC. GeoLogonalyzer would process this application access log and detect that the user account required a FAST travel rate of roughly 1,250 miles per hour which may appear malicious. Establishing a baseline of legitimate authentication patterns is recommended to understand false positives.

Reliance on Open Source Data

GeoLogonalyzer relies on open source data to make cloud hosting provider determinations. These lookups are only as accurate as the available open source data.

Preventing Remote Access Abuse

Understanding that no single analysis method is perfect, the following recommendations can help security teams prevent the abuse of remote access platforms and investigate suspected compromise.

  1. Identify and limit remote access platforms that allow access to sensitive information from the Internet, such as VPN servers, systems with RDP or SSH exposed, third-party applications (e.g., Citrix), intranet sites, and email infrastructure.
  2. Implement a multi-factor authentication solution that utilizes dynamically generated one-time use tokens for all remote access platforms.
  3. Ensure that remote access authentication logs for each identified access platform are recorded, forwarded to a log aggregation utility, and retained for at least one year.
  4. Whitelist IP address ranges that are confirmed as legitimate for remote access users based on baselining or physical location registrations. If whitelisting is not possible, blacklist IP address ranges registered to physical locations or cloud hosting providers that should never legitimately authenticate to your remote access portal.
  5. Utilize either SIEM capabilities or GeoLogonalyzer.py to perform GeoFeasibility analysis of all remote access on a regular frequency to establish a baseline of accounts that legitimately perform unexpected logon activity and identify new anomalies. Investigating anomalies may require contacting the owner of the user account in question. FireEye Helix analyzes live log data for all techniques utilized by GeoLogonalyzer, and more!

Download GeoLogonalyzer today.


Christopher Schmitt, Seth Summersett, Jeff Johns, and Alexander Mulfinger.

TA18-149A: HIDDEN COBRA – Joanap Backdoor Trojan and Brambul Server Message Block Worm

Original release date: May 29, 2018 | Last revised: May 31, 2018

Systems Affected

Network systems


This joint Technical Alert (TA) is the result of analytic efforts between the Department of Homeland Security (DHS) and the Federal Bureau of Investigation (FBI). Working with U.S. government partners, DHS and FBI identified Internet Protocol (IP) addresses and other indicators of compromise (IOCs) associated with two families of malware used by the North Korean government:

  • a remote access tool (RAT), commonly known as Joanap; and
  • a Server Message Block (SMB) worm, commonly known as Brambul.

The U.S. Government refers to malicious cyber activity by the North Korean government as HIDDEN COBRA. For more information on HIDDEN COBRA activity, visit https://www.us-cert.gov/hiddencobra.

FBI has high confidence that HIDDEN COBRA actors are using the IP addresses—listed in this report’s IOC files—to maintain a presence on victims’ networks and enable network exploitation. DHS and FBI are distributing these IP addresses and other IOCs to enable network defense and reduce exposure to any North Korean government malicious cyber activity.

This alert also includes suggested response actions to the IOCs provided, recommended mitigation techniques, and information on how to report incidents. If users or administrators detect activity associated with these malware families, they should immediately flag it, report it to the DHS National Cybersecurity and Communications Integration Center (NCCIC) or the FBI Cyber Watch (CyWatch), and give it the highest priority for enhanced mitigation.

See the following links for a downloadable copy of IOCs:

NCCIC conducted analysis on four malware samples and produced a Malware Analysis Report (MAR). MAR-10135536.3 – RAT/Worm examines the tactics, techniques, and procedures observed in the malware. Visit MAR-10135536.3 – HIDDEN COBRA RAT/Worm for the report and associated IOCs.


According to reporting of trusted third parties, HIDDEN COBRA actors have likely been using both Joanap and Brambul malware since at least 2009 to target multiple victims globally and in the United States—including the media, aerospace, financial, and critical infrastructure sectors. Users and administrators should review the information related to Joanap and Brambul from the Operation Blockbuster Destructive Malware Report [1] in conjunction with the IP addresses listed in the .csv and .stix files provided within this alert. Like many of the families of malware used by HIDDEN COBRA actors, Joanap, Brambul, and other previously reported custom malware tools may be found on compromised network nodes. Each malware tool has different purposes and functionalities.

Joanap malware is a fully functional RAT that is able to receive multiple commands, which can be issued by HIDDEN COBRA actors remotely from a command and control server. Joanap typically infects a system as a file dropped by other HIDDEN COBRA malware, which users unknowingly download when they visit sites compromised by HIDDEN COBRA actors or open malicious email attachments.

During analysis of the infrastructure used by Joanap malware, the U.S. Government identified 87 compromised network nodes. The countries in which the infected IP addresses are registered are as follows:

  • Argentina
  • Belgium
  • Brazil
  • Cambodia
  • China
  • Colombia
  • Egypt
  • India
  • Iran
  • Jordan
  • Pakistan
  • Saudi Arabia
  • Spain
  • Sri Lanka
  • Sweden
  • Taiwan
  • Tunisia

Malware often infects servers and systems without the knowledge of system users and owners. If the malware can establish persistence, it could move laterally through a victim’s network and any connected networks to infect nodes beyond those identified in this alert.

Brambul malware is a brute-force authentication worm that spreads through SMB shares. SMB enables shared access to files between users on a network. Brambul malware typically spreads by using a list of hard-coded login credentials to launch a brute-force password attack against the SMB protocol to gain access to a victim’s networks.

Technical Details


Joanap is a two-stage malware used to establish peer-to-peer communications and to manage botnets designed to enable other operations. Joanap malware provides HIDDEN COBRA actors with the ability to exfiltrate data, drop and run secondary payloads, and initialize proxy communications on a compromised Windows device. Other notable functions include

  • file management,
  • process management,
  • creation and deletion of directories, and
  • node management.

Analysis indicates the malware encodes data using Rivest Cipher 4 (RC4) encryption to protect its communication with HIDDEN COBRA actors. Once installed, the malware creates a log entry within the Windows System Directory in a file named mssscardprv.ax. HIDDEN COBRA actors use this file to capture and store victims’ information such as the host IP address, host name, and the current system time.
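RC4 is a simple stream cipher, which helps explain why it remains popular in commodity malware. For illustration only (this is the textbook algorithm, not the actor's implementation), a minimal Python version looks like this:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Textbook RC4: encryption and decryption are the same operation."""
    # Key-scheduling algorithm (KSA): permute S using the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): XOR keystream with data.
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)
```

Because the cipher is symmetric, an analyst who recovers the hard-coded key from a sample can decrypt captured command-and-control traffic with the same routine.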


Brambul malware is a malicious Windows 32-bit SMB worm that functions as a service dynamic link library file or a portable executable file often dropped and installed onto victims’ networks by dropper malware. When executed, the malware attempts to establish contact with victim systems and IP addresses on victims’ local subnets. If successful, the application attempts to gain unauthorized access via the SMB protocol (ports 139 and 445) by launching brute-force password attacks using a list of embedded passwords. Additionally, the malware generates random IP addresses for further attacks.

Analysts suspect the malware targets insecure or unsecured user accounts and spreads through poorly secured network shares. Once the malware establishes unauthorized access on a victim’s systems, it communicates information about those systems to HIDDEN COBRA actors using malicious email addresses. This information includes the IP address and host name—as well as the username and password—of each victim’s system. HIDDEN COBRA actors can use this information to remotely access a compromised system via the SMB protocol.

Analysis of a newer variant of Brambul malware identified the following built-in functions for remote operations:

  • harvesting system information,
  • accepting command-line arguments,
  • generating and executing a suicide script,
  • propagating across the network using SMB,
  • brute forcing SMB login credentials, and
  • generating Simple Mail Transport Protocol email messages containing target host system information.

Detection and Response

This alert’s IOC files provide HIDDEN COBRA IOCs related to Joanap and Brambul. DHS and FBI recommend that network administrators review the information provided, identify whether any of the provided IP addresses fall within their organizations’ allocated IP address space, and—if found—take necessary measures to remove the malware.

When reviewing network perimeter logs for the IP addresses, organizations may find instances of these IP addresses attempting to connect to their systems. Upon reviewing the traffic from these IP addresses, system owners may find some traffic relates to malicious activity and some traffic relates to legitimate activity.
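Checking whether published IOC addresses fall within an organization's allocated address space is straightforward with Python's standard ipaddress module; the function name and inputs below are illustrative:

```python
import ipaddress

def iocs_in_scope(ioc_ips, org_cidrs):
    """Return the IOC addresses that fall inside any of the
    organization's allocated CIDR blocks."""
    nets = [ipaddress.ip_network(cidr) for cidr in org_cidrs]
    hits = []
    for ip in ioc_ips:
        addr = ipaddress.ip_address(ip)
        if any(addr in net for net in nets):
            hits.append(ip)
    return hits
```

Feeding the .csv IOC list and your address allocations through a check like this quickly narrows the alert's indicators to the ones that warrant the deeper traffic review described above.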


A successful network intrusion can have severe impacts, particularly if the compromise becomes public. Possible impacts include

  • temporary or permanent loss of sensitive or proprietary information,
  • disruption to regular operations,
  • financial losses incurred to restore systems and files, and
  • potential harm to an organization’s reputation.


Mitigation Strategies

DHS recommends that users and administrators use the following best practices as preventive measures to protect their computer networks:

  • Keep operating systems and software up-to-date with the latest patches. Most attacks target vulnerable applications and operating systems. Patching with the latest updates greatly reduces the number of exploitable entry points available to an attacker.
  • Maintain up-to-date antivirus software, and scan all software downloaded from the internet before executing.
  • Restrict users’ abilities (permissions) to install and run unwanted software applications, and apply the principle of least privilege to all systems and services. Restricting these privileges may prevent malware from running or limit its capability to spread through the network.
  • Scan for and remove suspicious email attachments. If a user opens a malicious attachment and enables macros, embedded code will execute the malware on the machine. Enterprises and organizations should consider blocking email messages from suspicious sources that contain attachments. For information on safely handling email attachments, see Using Caution with Email Attachments.
  • Follow safe practices when browsing the web. See Good Security Habits and Safeguarding Your Data for additional details.
  • Disable Microsoft’s File and Printer Sharing service, if not required by the user’s organization. If this service is required, use strong passwords or Active Directory authentication. See Choosing and Protecting Passwords for more information on creating strong passwords.
  • Enable a personal firewall on organization workstations and configure it to deny unsolicited connection requests.

Response to Unauthorized Network Access

Contact DHS or your local FBI office immediately. To report an intrusion and request resources for incident response or technical assistance, contact DHS NCCIC (NCCICCustomerService@hq.dhs.gov or 888-282-0870), FBI through a local field office, or FBI’s Cyber Division (CyWatch@fbi.gov or 855-292-3937).


Revision History

  • May 29, 2018: Initial version
  • May 31, 2018: Uploaded updated STIX and CSV files

This product is provided subject to this Notification and this Privacy & Use policy.

Weekly Cyber Risk Roundup: FBI Advises Home Router Resets

What’s Everyone Talking About? Trending Cybercrime Events

The big news for this week was Cisco’s warning that 500,000 routers had been hacked by Russian hackers in a bid to attack Ukraine. According to CNBC, “Cisco’s Talos cyber intelligence unit said it has high confidence that the Russian government is behind the campaign, dubbed VPNFilter, because the hacking software shares code with malware used in previous cyber attacks that the U.S. government has attributed to Moscow.”

In subsequent reporting, the FBI issued a statement recommending that all users with home or small-business routers turn the device off and back on. The reboot is meant to counter the Fancy Bear-linked malware mentioned above.

Further details are being released as they become available. The scope of the warning: “at least 500,000 in at least 54 countries. The known devices affected by VPNFilter are Linksys, MikroTik, NETGEAR and TP-Link networking equipment in the small and home office (SOHO) space, as well as QNAP network-attached storage (NAS) devices.”

You can read more here in the FBI warning.


Other trending cybercrime events from the week include:

  • State data breach notifications: In October 2017, criminal hackers used a phishing campaign to obtain the credentials for two employee accounts at Worldwide Insurance Services, which may have allowed unauthorized parties to view customers’ private insurance details. In December 2017, a former employee of Muir Medical Group took clients’ personal details with them before their employment ended, potentially leaking personally identifiable information. In March 2018, a contractor for the California Department of Public Health was robbed of documents and a laptop.
  • Altcoin Experienced Second Hack: The alternative cryptocurrency Verge experienced its second hack in recent months. $1.4 million (USD) was stolen in this latest attack, which started as a distributed denial-of-service (DDoS) attack. In the previous incident, the cryptocurrency suffered a 25% loss.
  • Bitcoin Gold Suffers Attack: In an attack similar to the Verge incident above, Bitcoin Gold suffered a 51% attack resulting in the loss of $18 million in Bitcoin Gold. In a 51% attack, also associated with double spending, an attacker who controls a majority of the network’s mining power can rewrite recent blocks and spend the same coins twice.
  • Fourteen Vulnerabilities Found in BMWs: In a recent security test, researchers found fourteen vulnerabilities as they hacked BMW cars. According to the report, “the flaws could be exploited to gain local and remote access to infotainment (a.k.a head unit), the Telematics Control Unit (TCU or TCB) and UDS communication, as well as to gain control of the vehicles’ CAN bus.”
  • App Leaks Passwords in Plaintext: Researchers discovered that two AWS-hosted servers belonging to TeenSafe, an app parents and guardians can use to monitor a child’s phone activity, stored data without password protection. Over 10,000 accounts were exposed.

Cyber Risk Trends From the Past Week

A new report from security researchers this week describes a new kind of banking malware. Researchers are calling the malware Backswap and discovered it attacking Polish banks. According to the report, “We have discovered a new banking malware family that uses an innovative technique to manipulate the browser: instead of using complex process injection methods to monitor browsing activity, the malware hooks key window message loop events in order to inspect values of the window objects for banking activity.”

The malware was first noticed in January 2018, and the first samples were analyzed in March 2018. According to the report, “the banker is distributed through malicious email spam campaigns that carry an attachment of a heavily obfuscated JavaScript downloader from a family commonly known as Nemucod. The spam campaigns are targeting Polish users.” As users see every day, just because a malware strain targets a specific bank or country doesn’t mean it hasn’t started to spread or won’t be turned against other targets later.

You can read more here, as well as grab IOCs related to the Backswap malware.


PrivateVPN review: It’s fast and flexible

  • P2P allowed: Yes
  • Business location: Sweden
  • Number of servers: 100+
  • Number of country locations: 55
  • Cost: $50 per year
  • VPN protocol: OpenVPN
  • Data encryption: AES-128-GCM (server-side negotiation can improve to AES-256)
  • Data authentication: SHA-256 with HMAC
  • Handshake encryption: TLS-ECDHE-RSA-AES256-GCM-SHA384(AEAD)

Every VPN service under the sun promises fast download speeds, but few can actually guarantee them. One service that truly delivers the goods (at least in our tests) is Sweden-based PrivateVPN. This simple and easy-to-use service has something for everyone.
