
Attacks Targeting ICS & OT Assets Grew 2000% Since 2018, Report Reveals

The digital threat landscape is always changing. This year is an excellent (albeit extreme) example. With the help of Dimensional Research, Tripwire found that 58% of IT security professionals were more concerned about the security of their employees’ home networks than they were before the outbreak of coronavirus disease 2019 (COVID-19). Slightly smaller percentages of […]


Monitoring ICS Cyber Operation Tools and Software Exploit Modules To Anticipate Future Threats

There has only been a small number of broadly documented cyber attacks targeting operational technologies (OT) / industrial control systems (ICS) over the last decade. While fewer attacks is clearly a good thing, the lack of an adequate sample size to determine risk thresholds can make it difficult for defenders to understand the threat environment, prioritize security efforts, and justify resource allocation.

To address this problem, FireEye Mandiant Threat Intelligence produces a range of reports for subscription customers that focus on different indicators to predict future threats. Insights from activity on dark web forums, anecdotes from the field, ICS vulnerability research, and proof-of-concept research make it possible to illustrate the threat landscape even with limited incident data. This blog post focuses on one of those source sets: ICS-oriented intrusion and attack tools, which will be referred to together in this post as cyber operation tools.

ICS-oriented cyber operation tools refer to hardware and software that have the capability to either exploit weaknesses in ICS or interact with the equipment in ways that threat actors could use to support intrusions or attacks. For this blog post, we separated exploit modules that are developed to run on top of frameworks such as Metasploit, Core Impact, or Immunity Canvas from other cyber operation tools due to their exceedingly high number.

Cyber Operation Tools Reduce the Level of Specialized Knowledge Attackers Need to Target ICS

As ICS are a distinct sub-domain of information and computer technology, successful intrusions and attacks against these systems often require specialized knowledge, establishing a higher threshold for successful attacks. Since intrusion and attack tools are often developed by someone who already has that expertise, these tools can help threat actors bypass the need to gain some of this expertise themselves, or help them gain the requisite knowledge more quickly. Alternatively, experienced actors may resort to using known tools and exploits to conceal their identity or maximize their budget.


Figure 1: ICS attacker knowledge curve

The development and subsequent adoption of standardized cyber operation tools is a general indication of increasing adversarial capability. Whether these tools were developed by researchers as proof-of-concept or utilized during past incidents, access to them lowers the barrier for a variety of actors to learn and develop future skills or custom attack frameworks. Following this premise, equipment that is vulnerable to exploits using known cyber operation tools becomes low-hanging fruit for all sorts of attackers.

ICS Cyber Operation Tool Classification

Mandiant Intelligence tracks a large number of publicly available ICS-specific cyber operation tools. The term "ICS-specific," as we employ it, does not have a hard-edged definition. While the vast majority of cyber operation tools we track are clear-cut cases, we have, in some instances, considered the intent of the tool's creator(s) and the tool's reasonably foreseeable impact on ICS software and equipment. Note that we excluded tools that are IT-based but may affect OT systems, such as commodity malware or known network utilities. We included only a few exceptions, where we identified specialized adaptations or features that enabled the tool to interact with ICS, such as the case of nmap scripts.

We assigned each tool to at least one of eight different categories or classes, based on functionality.


Table 1: Classes of ICS-specific intrusion and attack tools

While some of the tools included in our list were created as early as 2004, most of the development has taken place during the last 10 years. The majority of the tools are also vendor agnostic, or developed to target products from some of the largest ICS original equipment manufacturers (OEM). Siemens stands out in this area, with 60 percent of the vendor-specific tools potentially targeting its products. Other tools we identified were developed to target products from Schneider Electric, GE, ABB, Digi International, Rockwell Automation, and Wind River Systems.

Figure 2 depicts the number of tools by class. Of note, network discovery tools make up more than a quarter of the tools. We also highlight that in some cases, the software exploitation tools we track host extended repositories of modules to target specific products or vulnerabilities.


Figure 2: ICS-specific intrusion and attack tools by class

Software Exploit Modules

Software exploit modules are the most numerous subcomponents of cyber operation tools given their overall simplicity and accessibility. Most frequently, exploit modules are developed to take advantage of a specific vulnerability and automate the exploitation process. The module is then added to an exploit framework. The framework works as a repository that may contain hundreds of modules for targeting a wide variety of vulnerabilities, networks, and devices. The most popular frameworks include Metasploit, Core Impact, and Immunity Canvas. Also, since 2017, we have identified the development of younger ICS-specific exploit frameworks such as Autosploit, the Industrial Exploitation Framework (ICSSPLOIT), and the Industrial Security Exploitation Framework.

Given the simplicity and accessibility of exploit modules, they are attractive to actors with a variety of skill levels. Even less sophisticated actors may take advantage of an exploit module without completely understanding how a vulnerability works or knowing each of the commands required to exploit it. We note that, although most of the exploit modules we track were likely developed for research and penetration testing, they could also be utilized throughout the attack lifecycle.

Exploit Modules Statistics

Since 2010, Mandiant Intelligence has tracked exploit modules for the three major exploitation frameworks: Metasploit, Core Impact, and Immunity Canvas. We currently track hundreds of ICS-specific exploit modules related to more than 500 total vulnerabilities, 71 percent of which are potential zero-days. The breakdown is depicted in Figure 3. Immunity Canvas currently has the most exploits due in large part to the efforts of Russian security research firm GLEG.


Figure 3: ICS exploit modules by framework

Metasploit framework exploit modules deserve particular attention. Even though it has the fewest number of modules, Metasploit is freely available and broadly used for IT penetration testing, while Core Impact and Immunity Canvas are both commercial tools. This makes Metasploit the most accessible of the three frameworks. However, it means that module development and maintenance are provided by the community, which is likely contributing to the lower number of modules.

It is also worthwhile to examine the number of exploit modules by ICS product vendor. The results of this analysis are depicted in Figure 4, which displays vendors with the highest number of exploit modules (over 10).


Figure 4: Vendors with 10 exploit modules or more

Figure 4 does not necessarily indicate which vendors are the most targeted, but which products have received the most attention from exploit writers. Several factors could contribute to this, including the availability of software to experiment with, general ease of writing an exploit on particular vulnerabilities, or how the vulnerability matches against the expertise of the exploit writers.

Some of the vendors included in the graph have been acquired by other companies; however, we tracked them separately because the vulnerabilities were identified prior to the acquisition. One example of this is Schneider Electric, which acquired 7-Technologies in 2011 and altered the names of the acquired product portfolio. We also highlight that the graph solely counts exploit modules, regardless of the vulnerability exploited. Modules from separate frameworks could target the same vulnerability and would each be counted separately.

ICS Cyber Operation Tools and Software Exploitation Frameworks Bridge Knowledge and Expertise Gaps

ICS-specific cyber operation tools, often released by researchers and security practitioners, are useful assets that help organizations learn about ongoing threats and product vulnerabilities. However, as with anything publicly available, they can also lower the bar for threat actors with an interest in targeting OT networks. Although successful attacks against OT environments will normally require a high level of skill and expertise, the tools and exploit modules discussed in this post are making it easier to bridge the knowledge gap.

Awareness of the proliferation of ICS cyber operation tools should serve as an important risk indicator of the evolving threat landscape. These tools provide defenders with an opportunity to perform risk assessments in test environments and to leverage aggregated data to communicate with and obtain support from company executives. Organizations that do not pay attention to available ICS cyber operation tools risk becoming low-hanging fruit for both sophisticated and inexperienced threat actors exploring new capabilities.

FireEye Intelligence customers have access to the full list and analysis of ICS cyber operation tools and exploit modules. Visit our website to learn more about the FireEye Mandiant Cyber Physical Threat Intelligence subscription.

Ransomware Against the Machine: How Adversaries are Learning to Disrupt Industrial Production by Targeting IT and OT

Since at least 2017, there has been a significant increase in public disclosures of ransomware incidents impacting industrial production and critical infrastructure organizations. Well-known ransomware families like WannaCry, LockerGoga, MegaCortex, Ryuk, Maze, and now SNAKEHOSE (a.k.a. Snake / Ekans), have cost victims across a variety of industry verticals many millions of dollars in ransom and collateral costs. These incidents have also resulted in significant disruptions and delays to the physical processes that enable organizations to produce and deliver goods and services.

While lots of information has been shared about the victims and immediate impacts of industrial sector ransomware distribution operations, the public discourse continues to miss the big picture. As financial crime actors have evolved their tactics from opportunistic to post-compromise ransomware deployment, we have observed an increase in adversaries’ internal reconnaissance that enables them to target systems that are vital to support the chain of production. As a result, ransomware infections—either affecting critical assets in corporate networks or reaching computers in OT networks—often result in the same outcome: insufficient or late supply of end products or services.

Truly understanding the unique nuances of industrial sector ransomware distribution operations requires a combination of skillsets and visibility across both IT and OT systems. Using examples derived from our consulting engagements and threat research, we will explain how the shift to post-compromise ransomware operations is fueling adversaries’ ability to disrupt industrial operations.

Industrial Sector Ransomware Distribution Poses Increasing Risk as Actors Move to Post-Compromise Deployment

The traditional approach to ransomware attacks predominantly relies on a “shotgun” methodology that consists of indiscriminate campaigns spreading malware to encrypt files and data from a variety of victims. Actors following this model will extort victims for an average of $500 to $1,000 USD and hope to receive payments from as many individuals as possible. While early ransomware campaigns adopting this approach were often considered out of scope for OT security, recent campaigns targeting entire industrial and critical infrastructure organizations have moved toward adopting a more operationally complex post-compromise approach.

In post-compromise ransomware incidents, a threat actor may still often rely on broadly distributed malware to obtain their initial access to a victim environment, but once on a network they will focus on gaining privileged access so they can explore the target networks and identify critical systems before deploying the ransomware. This approach also makes it possible for the attacker to disable security processes that would normally be enough to detect known ransomware indicators or behaviors. Actors cast wider nets that may impact critical systems, expanding the scale and effectiveness of their end-stage operations by inflicting maximum pain on the victim. As a result, they are better positioned to negotiate and can often demand much higher ransoms, which are commonly commensurate with the victims’ perceived ability to pay and the value of the ransomed assets themselves. For more information, including technical detail, on similar activity, see our recent blog posts on FIN6 and TEMP.MixMaster.


Figure 1: Comparison of indiscriminate vs. post-compromise ransomware approaches

Historical incidents involving the opportunistic deployment of ransomware have often been limited to impacting individual computers, which occasionally included OT intermediary systems that were either internet-accessible, poorly segmented, or exposed to infected portable media. In 2017, we also observed campaigns such as NotPetya and BadRabbit, where wiper malware with worm-like capabilities was released to disrupt organizations while masquerading as ransomware. While these types of campaigns pose a threat to industrial production, the adoption of post-compromise deployment presents three major twists in the plot.

  • As threat actors tailor their attacks to target specific industries or organizations, companies with high-availability requirements (e.g., public utilities, hospitals, and industrial manufacturing) and perceived abilities to pay ransoms (e.g., higher revenue companies) become prime targets. This represents an expansion of financial crime actors’ targeting of industries that process directly marketable information (e.g., credit card numbers or customer data) to include the monetization of production environments.
  • As threat actors perform internal reconnaissance and move laterally across target networks before deploying ransomware, they are now better positioned to cast wide nets that impact the target’s most critical assets and negotiate from a privileged position.
  • Most importantly, many of the tactics, techniques, and procedures (TTPs) often used by financial actors in the past resemble those employed by highly skilled actors across the initial and middle stages of the attack lifecycle of past OT security incidents. Therefore, financial crime actors are likely capable of pivoting to and deploying ransomware in OT intermediary systems to further disrupt operations.

Organized Financial Crime Actors Have Demonstrated an Ability to Disrupt OT Assets

An actor’s capability to obtain financial benefits from post-compromise ransomware deployment depends on many factors, one of which is the ability to disrupt the systems that are most relevant to the core mission of the victim organization. As a result, we can expect mature actors to gradually broaden their selection from only IT and business processes to also include the OT assets that monitor and control physical processes. This is apparent in ransomware families such as SNAKEHOSE, which was designed to execute its payload only after stopping a series of processes that included some industrial software from vendors such as General Electric and Honeywell. At first glance, the SNAKEHOSE kill list appeared to be specifically tailored to OT environments due to the relatively small number of processes (yet high number of OT-related processes) identified with automated tools for initial triage. However, after manually extracting the list from the function that was terminating the processes, we determined that the kill list utilized by SNAKEHOSE actually targets over 1,000 processes.

In fact, we have observed very similar process kill lists deployed alongside samples from other ransomware families, including LockerGoga, MegaCortex, and Maze. Not surprisingly, all of these code families have been associated with high-profile incidents impacting industrial organizations for the past two years. The earliest kill list containing OT processes we identified was a batch script deployed alongside LockerGoga in January 2019. The list is very similar to those used later in MegaCortex incidents, albeit with notable exceptions, such as an apparent typo on an OT-related process that is not present in our SNAKEHOSE or MegaCortex samples: “proficyclient.exe4”. The absence of this typo in the SNAKEHOSE and MegaCortex samples could indicate that one of these malware authors identified and corrected the error when initially copying the OT-processes from the LockerGoga list, or that the LockerGoga author failed to properly incorporate the processes from some theoretical common source of origin, such as a dark web post.


Figure 2: ‘proficyclient.exe’ spelling in kill lists deployed with LockerGoga (left) and SNAKEHOSE (right)
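For readers who want to reproduce this kind of comparison, the short sketch below diffs two extracted kill lists in the same spirit as our comparison of the LockerGoga, MegaCortex, and SNAKEHOSE lists. The file names are hypothetical placeholders for kill lists you have already extracted (one process name per line); the real lists contain over 1,000 entries each.

    # Minimal sketch: diff two extracted ransomware process kill lists.
    # The file paths below are hypothetical placeholders, not artifacts from our analysis.

    def load_kill_list(path):
        """Load one process name per line, normalized to lowercase."""
        with open(path, encoding="utf-8", errors="replace") as handle:
            return {line.strip().lower() for line in handle if line.strip()}

    lockergoga = load_kill_list("lockergoga_killlist.txt")
    snakehose = load_kill_list("snakehose_killlist.txt")

    # Entries unique to one list surface copy/paste artifacts such as the
    # 'proficyclient.exe4' typo discussed above.
    print("Only in LockerGoga:", sorted(lockergoga - snakehose))
    print("Only in SNAKEHOSE:", sorted(snakehose - lockergoga))
    print("Shared entries:", len(lockergoga & snakehose))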

Regardless of which ransomware family first employed the OT-related processes in a kill list or where the malware authors acquired the list, the seeming ubiquity of this list across malware families suggests that the list itself is more noteworthy than any individual malware family that has implemented it. While the OT processes identified in these lists may simply represent the coincidental output of automated process collection from target environments and not a targeted effort to impact OT, the existence of this list provides financial crime actors opportunities to disrupt OT systems. Furthermore, we expect that as financially motivated threat actors continue to impact industrial sector organizations, become more familiar with OT, and identify dependencies across IT and OT systems, they will develop capabilities—and potentially intent—to disrupt other systems and environments running industrial software products and technology.

Ransomware Deployments in Both IT and OT Systems Have Impacted Industrial Production

As a result of adversaries’ post-compromise strategy and increased awareness of industrial sector targets, ransomware incidents have effectively impacted industrial production regardless of whether the malware was deployed in IT or OT. Ransomware incidents encrypting data from servers and computers in corporate networks have resulted in direct or indirect disruptions to physical production processes overseen by OT networks. This has caused insufficient or late supply of end products or services, representing long-term financial losses in the form of missed business opportunities, costs for incident response, regulatory fines, reputational damage, and sometimes even paid ransoms. In certain sectors, such as utilities and public services, high availability is also critical to societal well-being.

The best-known example of ransomware impacting industrial production due to an IT network infection is Norsk Hydro’s incident from March 2019, where disruptions to Business Process Management Systems (BPMS) forced multiple sites to shut down automation operations. Among other collateral damage, the ransomware interrupted communication between IT systems that are commonly used to manage resources across the production chain. Interruptions to these flows of information, which contained, for example, product inventories, forced employees to identify manual alternatives to handle more than 6,500 stock-keeping units and 4,000 shelves. FireEye Mandiant has responded to at least one similar case where TrickBot was used to deploy Ryuk ransomware at an oil rig manufacturer. While the infection happened only on corporate networks, the biggest business impact was caused by disruptions to Oracle ERP software that drove the company temporarily offline and negatively affected production.

Ransomware may result in similar outcomes when it reaches IT-based assets in OT networks, for example human-machine interfaces (HMIs), supervisory control and data acquisition (SCADA) software, and engineering workstations. Most of this equipment relies on commodity software and standard operating systems that are vulnerable to a variety of IT threats. Based on sensitive sources, Mandiant Intelligence is aware of at least one incident in which an industrial facility suffered a plant shutdown due to a large-scale ransomware attack. The facility's network was improperly segmented, which allowed the malware to propagate from the corporate network into the OT network, where it encrypted servers, HMIs, workstations, and backups. The facility had to reach out to multiple vendors to retrieve backups, many of which were decades old, which delayed complete restoration of production.

As recently as February 2020, the Cybersecurity and Infrastructure Security Agency (CISA) released Alert AA20-049A describing how a post-compromise ransomware incident had affected control and communication assets on the OT network of a natural gas compression facility. Impacts to HMIs, data historians, and polling servers resulted in loss of availability and loss of view for human operators. This prompted an intentional shutdown of operations that lasted two days.

Mitigating the Effects of Ransomware Requires Defenses Across IT and OT

Threat actors deploying ransomware have made rapid advances both in terms of effectiveness and as a criminal business model, imposing high operational costs on victims. We encourage all organizations to evaluate their safety and industrial risks related to ransomware attacks. Note that these recommendations will also help to build resilience in the face of other threats to business operations (e.g., cryptomining malware infections). While every case will differ, we highlight the following recommendations.

For custom services and actionable intelligence in both IT and OT, contact FireEye Mandiant Consulting, Managed Defense, and Threat Intelligence.

  • Conduct tabletop and/or controlled red team exercises to assess the current security posture and ability of your organization to respond to the ransomware threat. Simulate attack scenarios (mainly in non-production environments) to understand how the incident response team can (or cannot) detect, analyze, and recover from such an attack. Revisit recovery requirements based on the exercise results. In general, repeatedly practicing various threat scenarios will improve awareness and ability to respond to real incidents.
  • Review operations, business processes, and workflows to identify assets that are critical to maintaining continuous industrial operations. Whenever possible, introduce redundancy for critical assets with low tolerance to downtime. The right amount and type of redundancy is unique for each organization and can be determined through risk assessments and cost-benefit analyses. Note that such analyses cannot be conducted without involving business process owners and collaborating across IT and OT.
  • Logically segregate primary and redundant assets with either a network-based or host-based firewall and subsequent asset hardening (e.g., disabling services typically used by ransomware for its propagation, like SMB, RDP, and WMI). In addition to creating policies to disable unnecessary peer-to-peer and remote connections, we recommend routine auditing of all systems that potentially host these services and protocols (a minimal auditing sketch follows this list). Note that such architecture is generally more resilient to security incidents.
  • When establishing a rigorous back-up program, special attention should be paid to ensuring the security (integrity) of backups. Critical backups must be kept offline or, at minimum, on a segregated network.
  • Optimize recovery plans in terms of recovery time objective. Introduce required alternative workflows (including manual) for the duration of recovery. This is especially critical for organizations with limited or no redundancy of critical assets. When recovering from backups, harden recovered assets and the entire organization's infrastructure to prevent recurring ransomware infection and propagation.
  • Establish clear ownership and management of OT perimeter protection devices to ensure emergency, enterprise-wide changes are possible. Effective network segmentation must be maintained during containment and active intrusions.
  • Hunt for adversary intrusion activity in intermediary systems, which we define as the networked workstations and servers using standard operating systems and protocols. While these systems are further away from direct control of physical processes, there is a much higher likelihood of attacker presence on them.
  • Note that every organization is different, with unique internal architectures and processes, stakeholder needs, and customer expectations. Therefore, all recommendations should be carefully considered in the context of each individual infrastructure. For instance, proper network segmentation is highly advisable for mitigating the spread of ransomware. However, organizations with limited budgets may instead decide to leverage redundant asset diversification, host-based firewalls, and hardening as an alternative to segregating with hardware firewalls.
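As a starting point for the auditing recommended above, the sketch below checks whether the TCP ports commonly associated with SMB (445), RDP (3389), and the WMI/RPC endpoint mapper (135) answer on a set of hosts. The host list is a placeholder, and the port-to-service mapping is a simplification (WMI in particular also uses dynamically assigned ports), so treat this as a rough first pass rather than a full audit.

    import socket

    # Hypothetical inventory of hosts to audit; replace with your own asset list.
    HOSTS = ["10.0.10.5", "10.0.10.6"]
    PORTS = {445: "SMB", 3389: "RDP", 135: "WMI/RPC endpoint mapper"}

    def port_open(host, port, timeout=1.0):
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for host in HOSTS:
        exposed = [name for port, name in PORTS.items() if port_open(host, port)]
        print(host, "exposes:", ", ".join(exposed) if exposed else "none of the audited services")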

A Totally Tubular Treatise on TRITON and TriStation

Introduction

In December 2017, FireEye's Mandiant discussed an incident response involving the TRITON framework. The TRITON attack and many of the publicly discussed ICS intrusions involved routine techniques where the threat actors used only what was necessary to succeed in their mission. For both INDUSTROYER and TRITON, the attackers moved from the IT network to the OT (operational technology) network through systems that were accessible to both environments. Traditional malware backdoors, Mimikatz distillates, remote desktop sessions, and other well-documented, easily detected attack methods were used throughout these intrusions.

Despite the routine techniques employed to gain access to an OT environment, the threat actors behind the TRITON malware framework invested significant time learning about the Triconex Safety Instrumented System (SIS) controllers and TriStation, a proprietary network communications protocol. This investment, and the purpose Triconex SIS controllers serve, leads Mandiant to assess that the attacker's objective was likely to build the capability to cause physical consequences.

TriStation remains closed source and there is no official public information detailing the structure of the protocol, raising several questions about how the TRITON framework was developed. Did the actor have access to a Triconex controller and TriStation 1131 software suite? When did development first start? How did the threat actor reverse engineer the protocol, and to what extent? What is the protocol structure?

FireEye’s Advanced Practices Team was born to investigate adversary methodologies and to answer these types of questions, so we started with a deeper look at TRITON’s own Python scripts.

Glossary:

  • TRITON – Malware framework designed to operate Triconex SIS controllers via the TriStation protocol.
  • TriStation – UDP network protocol specific to Triconex controllers.
  • TRITON threat actor – The human beings who developed, deployed and/or operated TRITON.

Diving into TRITON's Implementation of TriStation

TriStation is a proprietary network protocol, and there is no public documentation detailing its structure or how to create software applications that use TriStation. The current TriStation UDP/IP protocol is little understood, but it is natively implemented through the TriStation 1131 software suite. TriStation operates over UDP port 1502 and allows for communications between designated masters (PCs with the software that are “engineering workstations”) and slaves (Triconex controllers with special communications modules) over a network.

To us, the Triconex systems, software and associated terminology sound foreign and complicated, and the TriStation protocol is no different. Attempting to understand the protocol from ground zero would take a considerable amount of time and reverse engineering effort – so why not learn from TRITON itself? With the TRITON framework containing TriStation communication functionality, we pursued studying the framework to better understand this mysterious protocol. Work smarter, not harder, amirite?

The TRITON framework has a multitude of functionalities, but we started with the basic components:

  • TS_cnames.pyc # Compiled at: 2017-08-03 10:52:33
  • TsBase.pyc # Compiled at: 2017-08-03 10:52:33
  • TsHi.pyc # Compiled at: 2017-08-04 02:04:01
  • TsLow.pyc # Compiled at: 2017-08-03 10:46:51

TsLow.pyc (Figure 1) contains several pieces of code for error handling, but these also present some cues to the protocol structure.


Figure 1: TsLow.pyc function print_last_error()

In TsLow.pyc’s print_last_error function we see error handling for “TCM Error”. This compares the TriStation packet value at offset 0 with a value in a corresponding array from TS_cnames.pyc (Figure 2), which is largely used as a “dictionary” for the protocol.


Figure 2: TS_cnames.pyc TS_cst array

From this we can infer that offset 0 of the TriStation protocol contains message types. This is supported by an additional function, tcm_result, which declares type, size = struct.unpack('<HH', data_received[0:4]), indicating that the first two bytes should be handled as the integer message type and the next two bytes as the integer size of the TriStation message. This is our first glimpse into what the threat actor(s) understood about the TriStation protocol.

Since there are only 11 defined message types, it really doesn't matter much if the type is one byte or two because the second byte will always be 0x00.
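As a minimal illustration of what the scripts reveal so far, the Python sketch below mirrors the tcm_result header handling described above: the first two bytes are the message type and the next two are the size. The message-type names come from Appendix A; the function name, variable names, and the sample packet are our own illustrative scaffolding, not code taken from TRITON.

    import struct

    # Message type names at offset 0, as recovered from TS_cnames.pyc (see Appendix A).
    MESSAGE_TYPES = {
        1: "Connection Request", 2: "Connection Response",
        3: "Disconnect Request", 4: "Disconnect Response",
        5: "Execution Command", 6: "Ping Command",
        7: "Connection Limit Reached", 8: "Not Connected",
        9: "MPS Are Dead", 10: "Access Denied", 11: "Connection Failed",
    }

    def parse_ts_header(data_received):
        """Mirror tcm_result: two-byte little-endian type followed by a two-byte size."""
        msg_type, size = struct.unpack("<HH", data_received[0:4])
        return MESSAGE_TYPES.get(msg_type, "Unknown (%d)" % msg_type), size

    # Illustrative packet: type 6 (Ping Command), size 0, two dummy trailer bytes.
    print(parse_ts_header(b"\x06\x00\x00\x00\x00\x00"))  # -> ('Ping Command', 0)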

We also have indications that message type 5 is for all Execution Command Requests and Responses, so it is curious to observe that the TRITON developers called this “Command Reply.” (We won’t understand this naming convention until later.)

Next we examine TsLow.pyc’s print_last_error function (Figure 3) to look at “TS Error” and “TS_names.” We begin by looking at the ts_err variable and see that it references ts_result.


Figure 3: TsLow.pyc function print_last_error() with ts_err highlighted

We follow that thread to ts_result, which defines a few variables in the next 10 bytes (Figure 4): dir, cid, cmd, cnt, unk, cks, siz = struct.unpack('<, ts_packet[0:10]). Now things are heating up. What fun. There’s a lot to unpack here, but the most interesting thing is how this piece of the script breaks down 10 bytes from ts_packet into different variables.


Figure 4: ts_result with ts_packet header variables highlighted


Figure 5: tcm_result

Referencing tcm_result (Figure 5) we see that it defines type and size as the first four bytes (offset 0 – 3) and tcm_result returns the packet bytes 4:-2 (offset 4 to the end minus 2, because the last two bytes are the CRC-16 checksum). Now that we know where tcm_result leaves off, we know that the ts_reply “cmd” is a single byte at offset 6, which corresponds to the values in the TS_cnames.pyc TS_names array (Figure 6). The TRITON script also tells us that any integer value over 100 is a likely “command reply.” Sweet.

When looking back at the ts_result packet header definitions, we begin to see some gaps in the TRITON developer's knowledge: dir, cid, cmd, cnt, unk, cks, siz = struct.unpack('<, ts_packet[0:10]). We're clearly speculating based on naming conventions, but we get an impression that offsets 4, 5 and 6 could be "direction", "controller ID" and "command", respectively. Values such as "unk" show that the developer either did not know or did not care to identify this value. We suspect it is a constant, but this value is still unknown to us.


Figure 6: Excerpt TS_cnames.pyc TS_names array, which contain TRITON actor’s notes for execution command function codes
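Pulling these observations together, here is a hedged sketch of how a reply could be classified using only what the TRITON scripts expose: split off the two-byte type, two-byte size, and two-byte CRC-16 trailer the way tcm_result does, read the function code at offset 6, and treat values of 100 or more as command replies. The layout of the remaining header fields is left alone because, as noted above, it is only partially understood; the TS_NAMES excerpt stands in for the full dictionary in Appendix B.

    import struct

    # Excerpt of the TS_names mapping recovered from TS_cnames.pyc (full list in Appendix B).
    TS_NAMES = {55: "Allocate program", 100: "Command rejected", 153: "Allocate program response"}

    def split_tristation(data_received):
        """Mimic tcm_result: 2-byte type, 2-byte size, body, 2-byte CRC-16 trailer."""
        msg_type, size = struct.unpack("<HH", data_received[0:4])
        body, crc = data_received[4:-2], data_received[-2:]
        return msg_type, size, body, crc

    def describe_function_code(ts_packet):
        """Read the execution command function code at offset 6 of the full UDP payload."""
        cmd = ts_packet[6]
        name = TS_NAMES.get(cmd, "<%d>" % cmd)
        # The TRITON scripts treat values of 100 or more as command replies.
        return ("command reply: " if cmd >= 100 else "command request: ") + name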

TriStation Protocol Packet Structure

The TRITON threat actor’s knowledge and reverse engineering effort provides us a better understanding of the protocol. From here we can start to form a more complete picture and document the basic functionality of TriStation. We are primarily interested in message type 5, Execution Command, which best illustrates the overall structure of the protocol. Other, smaller message types will have varying structure.


Figure 7: Sample TriStation "Allocate Program" Execution Command, with color annotation and protocol legend

Corroborating the TriStation Analysis

Minute discrepancies aside, the TriStation structure detailed in Figure 7 is supported by other public analyses. Foremost, researchers from the Coordinated Science Laboratory (CSL) at the University of Illinois at Urbana-Champaign published a 2017 paper titled "Attack Induced Common-Mode Failures on PLC-based Safety System in a Nuclear Power Plant". The CSL team mentions that they used the Triconex System Access Application (TSAA) protocol to reverse engineer elements of the TriStation protocol. TSAA is a protocol developed by the same company as TriStation. Unlike TriStation, the TSAA protocol structure is described within official documentation. CSL assessed that similarities between the two protocols would exist, and they leveraged TSAA to better understand TriStation. The team's overall research and analysis of the general packet structure aligns with our TRITON-sourced packet structure.

There are some awesome blog posts and whitepapers out there that support our findings in one way or another. Writeups by Midnight Blue Labs, Accenture, and US-CERT each explain how the TRITON framework relates to the TriStation protocol in superb detail.

TriStation's Reverse Engineering and TRITON's Development

When TRITON was discovered, we began to wonder how the TRITON actor reverse engineered TriStation and implemented it into the framework. We have a lot of theories, all of which seemed plausible: Did they build, buy, borrow, or steal? Or some combination thereof?

Our initial theory was that the threat actor purchased a Triconex controller and software for their own testing and reverse engineering from the "ground up", although if this was the case we do not believe they had a controller with the exact vulnerable firmware version; otherwise, they would have had fewer problems with TRITON in practice at the victim site. They may have bought or used a demo version of the TriStation 1131 software, allowing them to reverse engineer enough of TriStation for the framework. They may have stolen TriStation Python libraries from ICS companies, subsidiaries, or system integrators and used the stolen material as a base for TriStation and TRITON development. But then again, it is possible that they borrowed TriStation software, Triconex hardware, and Python connectors from a government-owned utility that was using them legitimately.

Looking at the raw TRITON code, some of the comments may appear oddly phrased, but we do get a sense that the developer is clearly using much of the right vernacular and many of the right acronyms, showing familiarity with PLC programming. The TS_cnames.pyc script contains interesting typos such as 'Set lable', 'Alocate network accepted', 'Symbol table ccepted' and 'Set program information reponse'. These appear to be normal human errors and reflect neither poor written English nor laziness in coding. The significant amount of annotation, cascading logic, and robust error handling throughout the code suggests thoughtful development and testing of the framework. This complicates the theory of "ground up" development, so did they base their code on something else?

While learning from the TriStation functionality within TRITON, we continued to explore legitimate TriStation software. We began our search for "TS1131.exe" and hit dead ends sorting through TriStation DLLs until we came across a variety of TriStation utilities in MSI form. We ultimately stumbled across a juicy archive containing "Trilog v4." Upon further inspection, this file installed "TriLog.exe," which the original TRITON executable mimicked, and a couple of supporting DLLs, all of which were timestamped around August 2006.

When we saw the DLL file description "Tricon Communications Interface" and original file name "TricCom.DLL", we knew we were in the right place. With a simple look at the file strings, "BAZINGA!" We struck gold.

File Name: tr1com40.dll
MD5: 069247DF527A96A0E048732CA57E7D3D
Size: 110592
Compile Date: 2006-08-23
File Description: Tricon Communications Interface
Product Name: TricCom Dynamic Link Library
File Version: 4.2.441
Original File Name: TricCom.DLL
Copyright: Copyright © 1993-2006 Triconex Corporation

The tr1com40.DLL is exactly what you would expect to see in a custom application package. It is a library that helps support the communications for a Triconex controller. If you've pored over TRITON as much as we have, the moment you look at strings you can see the obvious overlaps between the legitimate DLL and TRITON's own TS_cnames.pyc.


Figure 8: Strings excerpt from tr1com40.DLL

Each of the execution command "error codes" from TS_cnames.pyc appears in the strings of tr1com40.DLL (Figure 8). We see "An MP has re-educated" and "Invalid Tristation I command". We even see misspelled command strings verbatim, such as "Non-existant data item" and "Alocate network accepted". We also see many of the same unknown values. What is obvious from this discovery is that some of the strings in TRITON are likely based on code used in communications libraries for Trident and Tricon controllers.

In our brief survey of the legitimate Triconex Corporation binaries, we observed a few samples with related string tables.

Pe:dllname   | Compile Date | Reference CPP Strings Code
Lagcom40.dll | 2004/11/19   | $Workfile: LAGSTRS.CPP $ $Modtime: Jul 21 1999 17:17:26 $ $Revision: 1.0
Tr1com40.dll | 2006/08/23   | $Workfile: TR1STRS.CPP $ $Modtime: May 16 2006 09:55:20 $ $Revision: 1.4
Tridcom.dll  | 2008/07/23   | $Workfile: LAGSTRS.CPP $ $Modtime: Jul 21 1999 17:17:26 $ $Revision: 1.0
Triccom.dll  | 2008/07/23   | $Workfile: TR1STRS.CPP $ $Modtime: May 16 2006 09:55:20 $ $Revision: 1.4
Tridcom.dll  | 2010/09/29   | $Workfile: LAGSTRS.CPP $ $Modtime: Jul 21 1999 17:17:26 $ $Revision: 1.0
Tr1com.dll   | 2011/04/27   | $Workfile: TR1STRS.CPP $ $Modtime: May 16 2006 09:55:20 $ $Revision: 1.4
Lagcom.dll   | 2011/04/27   | $Workfile: LAGSTRS.CPP $ $Modtime: Jul 21 1999 17:17:26 $ $Revision: 1.0
Triccom.dll  | 2011/04/27   | $Workfile: TR1STRS.CPP $ $Modtime: May 16 2006 09:55:20 $ $Revision: 1.4

We extracted the CPP string tables in TR1STRS and LAGSTRS and the TS_cnames.pyc TS_names array from TRITON, and compared the 210, 204, and 212 relevant strings from each respective file.
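A simplified sketch of that comparison is below: pull printable ASCII runs out of the DLL, then intersect them with the TS_names entries. The regular expression and the five-character minimum are our own choices, and the TS_NAMES excerpt stands in for the full dictionary recovered from TS_cnames.pyc.

    import re

    def extract_strings(path, min_len=5):
        """Collect printable ASCII runs from a binary, similar to the 'strings' utility."""
        with open(path, "rb") as handle:
            data = handle.read()
        pattern = rb"[\x20-\x7e]{%d,}" % min_len
        return {match.decode("ascii") for match in re.findall(pattern, data)}

    # Excerpt of the TS_cnames.pyc TS_names dictionary (see Appendix B).
    TS_NAMES = {43: "Set lable", 132: "Symbol table ccepted", 138: "Alocate network accepted"}

    dll_strings = extract_strings("tr1com40.dll")
    shared = {name for name in TS_NAMES.values() if name in dll_strings}
    print("Shared strings:", sorted(shared))
    print("Only in TS_names:", sorted(set(TS_NAMES.values()) - dll_strings))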

TS_cnames.pyc TS_names and tr1com40.dll share 202 of 220 combined table strings. The remaining strings are unique to each, as seen here:

TS_cnames.TS_names (2017 pyc) | Tr1com40.dll (2006 CPP)
Go to DOWNLOAD mode           | <200>
Not set                       | <209>
Unk75                         | Bad message from module
Unk76                         | Bad message type
Unk77                         | Bad TMI version number
Unk78                         | Module did not respond
Unk79                         | Open Connection: Invalid SAP %d
Unk81                         | Unsupported message for this TMI version
Unk83                         |
Wrong command                 |

TS_cnames.pyc TS_names and Tridcom.dll (1999 CPP) shared only 151 of 268 combined table strings, showing a much smaller overlap with the seemingly older CPP library. This makes sense based on the context that Tridcom.dll is meant for a Trident controller, not a Tricon controller. It does seem as though Tr1com40.dll and TR1STRS.CPP code was based on older work.

We are not shocked to find that the threat actor reversed legitimate code to bolster development of the TRITON framework. They want to work smarter, not harder, too. But after reverse engineering legitimate software and implementing the basics of TriStation, the threat actors still had an incomplete understanding of the protocol. In TRITON's TS_cnames.pyc we saw "Unk75", "Unk76", "Unk83" and other values that were not present in the tr1com40.DLL strings, indicating that the TRITON threat actor may have explored the protocol and annotated their findings beyond what they reverse engineered from the DLL. The gaps in the TriStation implementation show us why the actors encountered problems interacting with the Triconex controllers when using TRITON in the wild.

You can see more of the Trilog and Triconex DLL files on VirusTotal.

Item Name        | MD5                              | Description
Tr1com40.dll     | 069247df527a96a0e048732ca57e7d3d | Tricom Communications DLL
Data1.cab        | e6a3c93a6d433cbaf6f573b6c09d76c4 | Parent of Tr1com40.dll
Trilog v4.1.360R | 13a3b83ba2c4236ca59aba679941c8a5 | RAR Archive of TriLog
TridCom.dll      | 5c2ed617fdec4779cb33c89082a43100 | Trident Communications DLL

Afterthoughts

Seeing Triconex systems targeted with malicious intent was new to the world six months ago. Moving forward it would be reasonable to anticipate additional frameworks, such as TRITON, designed for usage against other SIS controllers and associated technologies. If Triconex was within scope, we may see similar attacker methodologies affecting the dominant industrial safety technologies.

Basic security measures do little to thwart truly persistent threat actors and monitoring only IT networks is not an ideal situation. Visibility into both the IT and OT environments is critical for detecting the various stages of an ICS intrusion. Simple detection concepts such as baseline deviation can provide insight into abnormal activity.

While the TRITON framework was actively in use, how many traditional ICS “alarms” were set off while the actors tested their exploits and backdoors on the Triconex controller? How many times did the TriStation protocol, as implemented in their Python scripts, fail or cause errors because of non-standard traffic? How many TriStation UDP pings were sent and how many Connection Requests? How did these statistics compare to the baseline for TriStation traffic? There are no answers to these questions for now. We believe that we can identify these anomalies in the long run if we strive for increased visibility into ICS technologies.
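As one small example of what that visibility could look like, the sketch below tallies TriStation message types (the byte at offset 0, per Appendix A) from captured UDP port 1502 payloads and flags anything outside an expected baseline. The baseline set and the sample payloads are placeholders; in practice this would be fed from a network sensor on the OT segment.

    from collections import Counter

    # Placeholder baseline of message types expected during normal engineering activity.
    BASELINE_TYPES = {1, 2, 3, 4, 5, 6}

    def tally_message_types(payloads):
        """Count TriStation message types from raw UDP/1502 payloads (type byte at offset 0)."""
        counts = Counter(payload[0] for payload in payloads if payload)
        unexpected = {t: n for t, n in counts.items() if t not in BASELINE_TYPES}
        return counts, unexpected

    # Hypothetical captured payloads: two Ping Commands, one Connection Request, one odd type.
    captured = [b"\x06\x00\x00\x00", b"\x06\x00\x00\x00", b"\x01\x00\x00\x00", b"\x09\x00\x00\x00"]
    counts, unexpected = tally_message_types(captured)
    print("Counts by type:", dict(counts))
    print("Outside baseline:", unexpected)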

We hope that by holding public discussions about ICS technologies, the infosec community can cultivate closer relationships with ICS vendors and give the world better insight into how attackers move from the IT to the OT space. We want to foster more conversations like this and generally share good techniques for finding evil. Since most ICS attacks involve standard IT intrusions, we should come together to develop and improve guidelines for monitoring the PCs and engineering workstations that bridge the IT and OT networks. We envision a world where attacking or disrupting ICS operations costs the threat actor their cover, their toolkits, their time, and their freedom. It's an ideal world, but something nice to shoot for.

Thanks and Future Work

There is still much to do for TRITON and TriStation. There are many more sub-message types and nuances for parsing out the nitty gritty details, which is hard to do without a controller of our own. And although we’ve published much of what we learned about TriStation here on the blog, our work will continue as we study the protocol further.

Thanks to everyone who did so much public research on TRITON and TriStation. We have cited a few individuals in this blog post, but there is a lot more community-sourced information that gave us clues and leads for our research and testing of the framework and protocol. We also have to acknowledge the research performed by the TRITON attackers. We borrowed a lot of your knowledge about TriStation from the TRITON framework itself.

Finally, remember that we're here to collaborate. We think most of our research is right, but if you notice any errors or omissions, or have ideas for improvements, please spear phish contact: smiller@fireeye.com.


Appendix A: TriStation Message Type Codes

The following table consists of hex values at offset 0 in the TriStation UDP packets and the associated dictionary definitions, extracted verbatim from the TRITON framework in library TS_cnames.pyc.

Value at 0x0 | Message Type
1            | Connection Request
2            | Connection Response
3            | Disconnect Request
4            | Disconnect Response
5            | Execution Command
6            | Ping Command
7            | Connection Limit Reached
8            | Not Connected
9            | MPS Are Dead
10           | Access Denied
11           | Connection Failed

Appendix B: TriStation Execution Command Function Codes

The following table consists of hex values at offset 6 in the TriStation UDP packets and the associated dictionary definitions, extracted verbatim from the TRITON framework in library TS_cnames.pyc.

Value at 0x6 | TS_cnames String
0  | 0: 'Start download all',
1  | 1: 'Start download change',
2  | 2: 'Update configuration',
3  | 3: 'Upload configuration',
4  | 4: 'Set I/O addresses',
5  | 5: 'Allocate network',
6  | 6: 'Load vector table',
7  | 7: 'Set calendar',
8  | 8: 'Get calendar',
9  | 9: 'Set scan time',
A  | 10: 'End download all',
B  | 11: 'End download change',
C  | 12: 'Cancel download change',
D  | 13: 'Attach TRICON',
E  | 14: 'Set I/O address limits',
F  | 15: 'Configure module',
10 | 16: 'Set multiple point values',
11 | 17: 'Enable all points',
12 | 18: 'Upload vector table',
13 | 19: 'Get CP status ',
14 | 20: 'Run program',
15 | 21: 'Halt program',
16 | 22: 'Pause program',
17 | 23: 'Do single scan',
18 | 24: 'Get chassis status',
19 | 25: 'Get minimum scan time',
1A | 26: 'Set node number',
1B | 27: 'Set I/O point values',
1C | 28: 'Get I/O point values',
1D | 29: 'Get MP status',
1E | 30: 'Set retentive values',
1F | 31: 'Adjust clock calendar',
20 | 32: 'Clear module alarms',
21 | 33: 'Get event log',
22 | 34: 'Set SOE block',
23 | 35: 'Record event log',
24 | 36: 'Get SOE data',
25 | 37: 'Enable OVD',
26 | 38: 'Disable OVD',
27 | 39: 'Enable all OVDs',
28 | 40: 'Disable all OVDs',
29 | 41: 'Process MODBUS',
2A | 42: 'Upload network',
2B | 43: 'Set lable',
2C | 44: 'Configure system variables',
2D | 45: 'Deconfigure module',
2E | 46: 'Get system variables',
2F | 47: 'Get module types',
30 | 48: 'Begin conversion table download',
31 | 49: 'Continue conversion table download',
32 | 50: 'End conversion table download',
33 | 51: 'Get conversion table',
34 | 52: 'Set ICM status',
35 | 53: 'Broadcast SOE data available',
36 | 54: 'Get module versions',
37 | 55: 'Allocate program',
38 | 56: 'Allocate function',
39 | 57: 'Clear retentives',
3A | 58: 'Set initial values',
3B | 59: 'Start TS2 program download',
3C | 60: 'Set TS2 data area',
3D | 61: 'Get TS2 data',
3E | 62: 'Set TS2 data',
3F | 63: 'Set program information',
40 | 64: 'Get program information',
41 | 65: 'Upload program',
42 | 66: 'Upload function',
43 | 67: 'Get point groups',
44 | 68: 'Allocate symbol table',
45 | 69: 'Get I/O address',
46 | 70: 'Resend I/O address',
47 | 71: 'Get program timing',
48 | 72: 'Allocate multiple functions',
49 | 73: 'Get node number',
4A | 74: 'Get symbol table',
4B | 75: 'Unk75',
4C | 76: 'Unk76',
4D | 77: 'Unk77',
4E | 78: 'Unk78',
4F | 79: 'Unk79',
50 | 80: 'Go to DOWNLOAD mode',
51 | 81: 'Unk81',
52 | (no entry)
53 | 83: 'Unk83',
54-63 | (no entries)
64 | 100: 'Command rejected',
65 | 101: 'Download all permitted',
66 | 102: 'Download change permitted',
67 | 103: 'Modification accepted',
68 | 104: 'Download cancelled',
69 | 105: 'Program accepted',
6A | 106: 'TRICON attached',
6B | 107: 'I/O addresses set',
6C | 108: 'Get CP status response',
6D | 109: 'Program is running',
6E | 110: 'Program is halted',
6F | 111: 'Program is paused',
70 | 112: 'End of single scan',
71 | 113: 'Get chassis configuration response',
72 | 114: 'Scan period modified',
73 | 115: '<115>',
74 | 116: '<116>',
75 | 117: 'Module configured',
76 | 118: '<118>',
77 | 119: 'Get chassis status response',
78 | 120: 'Vectors response',
79 | 121: 'Get I/O point values response',
7A | 122: 'Calendar changed',
7B | 123: 'Configuration updated',
7C | 124: 'Get minimum scan time response',
7D | 125: '<125>',
7E | 126: 'Node number set',
7F | 127: 'Get MP status response',
80 | 128: 'Retentive values set',
81 | 129: 'SOE block set',
82 | 130: 'Module alarms cleared',
83 | 131: 'Get event log response',
84 | 132: 'Symbol table ccepted',
85 | 133: 'OVD enable accepted',
86 | 134: 'OVD disable accepted',
87 | 135: 'Record event log response',
88 | 136: 'Upload network response',
89 | 137: 'Get SOE data response',
8A | 138: 'Alocate network accepted',
8B | 139: 'Load vector table accepted',
8C | 140: 'Get calendar response',
8D | 141: 'Label set',
8E | 142: 'Get module types response',
8F | 143: 'System variables configured',
90 | 144: 'Module deconfigured',
91 | 145: '<145>',
92 | 146: '<146>',
93 | 147: 'Get conversion table response',
94 | 148: 'ICM print data sent',
95 | 149: 'Set ICM status response',
96 | 150: 'Get system variables response',
97 | 151: 'Get module versions response',
98 | 152: 'Process MODBUS response',
99 | 153: 'Allocate program response',
9A | 154: 'Allocate function response',
9B | 155: 'Clear retentives response',
9C | 156: 'Set initial values response',
9D | 157: 'Set TS2 data area response',
9E | 158: 'Get TS2 data response',
9F | 159: 'Set TS2 data response',
A0 | 160: 'Set program information reponse',
A1 | 161: 'Get program information response',
A2 | 162: 'Upload program response',
A3 | 163: 'Upload function response',
A4 | 164: 'Get point groups response',
A5 | 165: 'Allocate symbol table response',
A6 | 166: 'Program timing response',
A7 | 167: 'Disable points full',
A8 | 168: 'Allocate multiple functions response',
A9 | 169: 'Get node number response',
AA | 170: 'Symbol table response',
AB-C7 | (no entries)
C8 | 200: 'Wrong command',
C9 | 201: 'Load is in progress',
CA | 202: 'Bad clock calendar data',
CB | 203: 'Control program not halted',
CC | 204: 'Control program checksum error',
CD | 205: 'No memory available',
CE | 206: 'Control program not valid',
CF | 207: 'Not loading a control program',
D0 | 208: 'Network is out of range',
D1 | 209: 'Not enough arguments',
D2 | 210: 'A Network is missing',
D3 | 211: 'The download time mismatches',
D4 | 212: 'Key setting prohibits this operation',
D5 | 213: 'Bad control program version',
D6 | 214: 'Command not in correct sequence',
D7 | 215: '<215>',
D8 | 216: 'Bad Index for a module',
D9 | 217: 'Module address is invalid',
DA | 218: '<218>',
DB | 219: '<219>',
DC | 220: 'Bad offset for an I/O point',
DD | 221: 'Invalid point type',
DE | 222: 'Invalid Point Location',
DF | 223: 'Program name is invalid',
E0 | 224: '<224>',
E1 | 225: '<225>',
E2 | 226: '<226>',
E3 | 227: 'Invalid module type',
E4 | 228: '<228>',
E5 | 229: 'Invalid table type',
E6 | 230: '<230>',
E7 | 231: 'Invalid network continuation',
E8 | 232: 'Invalid scan time',
E9 | 233: 'Load is busy',
EA | 234: 'An MP has re-educated',
EB | 235: 'Invalid chassis or slot',
EC | 236: 'Invalid SOE number',
ED | 237: 'Invalid SOE type',
EE | 238: 'Invalid SOE state',
EF | 239: 'The variable is write protected',
F0 | 240: 'Node number mismatch',
F1 | 241: 'Command not allowed',
F2 | 242: 'Invalid sequence number',
F3 | 243: 'Time change on non-master TRICON',
F4 | 244: 'No free Tristation ports',
F5 | 245: 'Invalid Tristation I command',
F6 | 246: 'Invalid TriStation 1131 command',
F7 | 247: 'Only one chassis allowed',
F8 | 248: 'Bad variable address',
F9 | 249: 'Response overflow',
FA | 250: 'Invalid bus',
FB | 251: 'Disable is not allowed',
FC | 252: 'Invalid length',
FD | 253: 'Point cannot be disabled',
FE | 254: 'Too many retentive variables',
FF | 255: 'LOADER_CONNECT',
(n/a) | 256: 'Unknown reject code'