Monthly Archives: April 2019

CARBANAK Week Part Four: The CARBANAK Desktop Video Player

Part One, Part Two and Part Three of CARBANAK Week are behind us. In this final blog post, we dive into one of the more interesting tools that is part of the CARBANAK toolset. The CARBANAK authors wrote their own video player and we happened to come across an interesting video capture from CARBANAK of a network operator preparing for an offensive engagement. Can we replay it?

About the Video Player

The CARBANAK backdoor is capable of recording video of the victim’s desktop. Attackers reportedly viewed recorded desktop videos to gain an understanding of the operational workflow of employees working at targeted banks, allowing them to successfully insert fraudulent transactions that remained undetected by the banks’ verification processes. As mentioned in a previous blog post announcing the arrest of several FIN7 members, the video data file format and the player used to view the videos appeared to be custom written. The video player, shown in Figure 1, and the C2 server for the bots were designed to work together as a pair.


Figure 1: CARBANAK desktop video player

The C2 server wraps video stream data received from a CARBANAK bot in a custom video file format that the video player understands, and writes these video files to a location on disk based on a convention assumed by the video player. The StreamVideo constructor shown in Figure 2 creates a new video file that will be populated with the video capture data received from a CARBANAK bot, prepending a header that includes the signature TAG, timestamp data, and the IP address of the infected host. This code is part of the C2 server project.


Figure 2: carbanak\server\Server\Stream.cs – Code snippet from the C2 server that serializes video data to file

Figure 3 shows the LoadVideo function that is part of the video player project. It validates the file type by looking for the TAG signature, then reads the timestamp values and IP address just as they were written by the C2 server code in Figure 2.


Figure 3: carbanak\server\Player\Video.cs – Player code that loads a video file created by the C2 server
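As a rough illustration of the loader's logic, here is a minimal Python sketch. The signature check mirrors Figure 3, but the field widths and ordering after the signature (two 64-bit timestamps, then a packed IPv4 address) are assumptions for demonstration, not the actual CARBANAK layout.

```python
import socket
import struct

SIGNATURE = b"TAG"  # signature written by the C2 server (Figure 2)

def load_video_header(path):
    """Validate a .frm file and parse its header fields (layout assumed)."""
    with open(path, "rb") as f:
        if f.read(len(SIGNATURE)) != SIGNATURE:
            raise ValueError("not a CARBANAK video file")
        begin_ts, end_ts = struct.unpack("<QQ", f.read(16))  # assumed widths
        ip = socket.inet_ntoa(f.read(4))                     # infected host
    return begin_ts, end_ts, ip
```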

Video files have the extension .frm as shown in Figure 4 and Figure 5. The C2 server’s CreateStreamVideo function shown in Figure 4 formats a file path following a convention defined in the MakeStreamFileName function, and then calls the StreamVideo constructor from Figure 2.


Figure 4: carbanak\server\Server\RecordFromBot.cs – Function in the C2 server that formats a video file name and adds the extension "frm"

The video player code snippet shown in Figure 5 follows the video file path convention, searching all video file directories for files with the .frm extension whose begin and end timestamps fall within the range of the DateTime variable dt.


Figure 5: carbanak\server\Player\Video.cs – Snippet from Player code that searches for video files with "frm" extension
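Continuing the earlier sketch, the search logic amounts to globbing each video directory for .frm files and keeping those whose header timestamps bracket dt (here dt uses the same numeric timestamp representation assumed above):

```python
from pathlib import Path

def find_videos(video_dirs, dt):
    """Return .frm files whose begin/end timestamps contain dt."""
    hits = []
    for d in video_dirs:
        for frm in Path(d).glob("*.frm"):
            begin_ts, end_ts, _ = load_video_header(frm)
            if begin_ts <= dt <= end_ts:
                hits.append(frm)
    return hits
```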

An Interesting Video

We came across several video files, but only some were compatible with this video player. After some analysis, we discovered that there are at least two versions of the video file format: one stores compressed video data, the other raw. After some slight adjustments to the video processing code, both formats are now supported and we can play all the videos.

Figure 6 shows an image from one of these videos in which the person being watched appears to be testing post-exploitation commands and ensuring they remain undetected by certain security monitoring tools.


Figure 6: Screenshot of video playback captured by CARBANAK video capability

The list of commands in the figure centers around persistence, screenshot creation, and launching various payloads. Red teamers often maintain such generic notes and command snippets for accomplishing various tasks like persisting, escalating, laterally moving, etc. An extreme example of this is Ben Clark’s book RTFM. In advance of an operation, it is customary to tailor the file names, registry value names, directories, and other parameters to afford better cover and prevent blue teams from drawing inferences based on methodology. Furthermore, Windows behavior sometimes yields surprises, such as value length limitations, unexpected interactions between payloads and specific persistence mechanisms, and so on. It is in the interest of the attacker to perform a dry run and ensure that unanticipated issues do not jeopardize the access that was gained.

The person being monitored via CARBANAK in this video appears to be a network operator preparing for an attack. This could either be because the operator was testing CARBANAK against their own machine, or because they themselves were being monitored. The CARBANAK builder and other interfaces are never shown, and the operator is seen preparing several publicly available tools and tactics. While purely speculation, it is possible that this was an employee of the front company Combi Security, which we now know was operated by FIN7 to recruit potentially unwitting operators. Furthermore, it could be the case that FIN7 used CARBANAK’s tinymet command to spawn Meterpreter instances and give unwitting operators access to targets using publicly available tools under the false premise of a penetration test.

Conclusion

This final installment concludes our four-part series, lovingly dubbed CARBANAK Week. To recap, we have shared at length many details concerning our experience as reverse engineers who, after spending dozens of hours reverse engineering a large, complex family of malware, happened upon the source code and toolset for the malware. This is something that rarely ever happens!

We hope this week’s lengthy addendum to FireEye’s continued CARBANAK research has been interesting and helpful to the broader security community in examining the functionalities of the framework and some of the design considerations that went into its development.

So far we have received lots of positive feedback about our discovery of the source code and our recent exposé of the CARBANAK ecosystem. It is worth highlighting that much of what we discussed in CARBANAK Week was originally covered in our FireEye Cyber Defense Summit 2018 presentation titled “Hello, Carbanak!”, which is freely available to watch online (a must-see for malware and lederhosen enthusiasts alike). You can expect similar topics and an electrifying array of other malware analysis, incident response, forensic investigation and threat intelligence discussions at FireEye’s upcoming Cyber Defense Summit 2019, held Oct. 7 to Oct. 10, 2019, in Washington, D.C.

CARBANAK Week Part Three: Behind the CARBANAK Backdoor

We covered a lot of ground in Part One and Part Two of our CARBANAK Week blog series. Now let's take a look back at some of our previous analysis and see how it holds up.

In June 2017, we published a blog post sharing novel information about the CARBANAK backdoor, including technical details, intel analysis, and some interesting deductions about its operations we formed from the results of automating analysis of hundreds of CARBANAK samples. Some of these deductions were claims about the toolset and build practices for CARBANAK. Now that we have a snapshot of the source code and toolset, we also have a unique opportunity to revisit these deductions and shine a new light on them.

Was There a Build Tool?

Let’s first take a look at our deduction about a build tool for CARBANAK:

“A build tool is likely being used by these attackers that allows the operator to configure details such as C2 addresses, C2 encryption keys, and a campaign code. This build tool encrypts the binary’s strings with a fresh key for each build.”

We came to this deduction from the following evidence:

“Most of CARBANAK’s strings are encrypted in order to make analysis more difficult. We have observed that the key and the cipher texts for all the encrypted strings are changed for each sample that we have encountered, even amongst samples with the same compile time. The RC2 key used for the HTTP protocol has also been observed to change among samples with the same compile time. These observations paired with the use of campaign codes that must be configured denote the likely existence of a build tool.”

Figure 1 shows three keys used to decode the strings in CARBANAK, each pulled from a different CARBANAK sample.


Figure 1: Decryption keys for strings in CARBANAK are unique for each build

It turns out we were spot-on with this deduction. A build tool was discovered in the CARBANAK source dump, pictured with English translations in Figure 2.


Figure 2: CARBANAK build tool

With this build tool, you specify a set of configuration options along with a template CARBANAK binary, and it bakes the configuration data into the binary to produce the final build for distribution. The Prefix text field allows the operator to specify a campaign code. The Admin host text fields are for specifying C2 addresses, and the Admin password text field is the secret used to derive the RC2 key for encrypting communication over CARBANAK’s pseudo-HTTP protocol. This covers part of our deduction: we now know for a fact that a build tool exists and is used to configure the campaign code and RC2 key for the build, amongst other items. But what about the encoded strings? Since this would be something that happens seamlessly behind the scenes, it makes sense that no evidence of it would be found in the GUI of the build tool. To learn more, we had to go to the source code for both the backdoor and the build tool.

Figure 3 shows a preprocessor identifier named ON_CODE_STRING defined in the CARBANAK backdoor source code that when enabled, defines macros that wrap all strings the programmer wishes to encode in the binary. These functions sandwich the strings to be encoded with the strings “BS” and “ES”. Figure 4 shows a small snippet of code from the header file of the build tool source code defining BEG_ENCODE_STRING as “BS” and END_ENCODE_STRING as “ES”. The build tool searches the template binary for these “BS” and “ES” markers, extracts the strings between them, encodes them with a randomly generated key, and replaces the strings in the binary with the encoded strings. We came across an executable named bot.dll that happened to be one of the template binaries to be used by the build tool. Running strings on this binary revealed that most meaningful strings that were specific to the workings of the CARBANAK backdoor were, in fact, sandwiched between “BS” and “ES”, as shown in Figure 5.


Figure 3: ON_CODE_STRING parameter enables easy string wrapper macros to prepare strings for encoding by build tool


Figure 4: builder.h macros for encoded string markers


Figure 5: Encoded string markers in template CARBANAK binary
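A minimal sketch of the builder's string-encoding pass might look like the following. The markers match builder.h, but the encoding itself is a stand-in: single-byte XOR with a random key is used purely for illustration, and each replacement is padded to the original length so the binary's layout is preserved.

```python
import os
import re

BEG_ENCODE_STRING = b"BS"  # markers from builder.h (Figure 4)
END_ENCODE_STRING = b"ES"

def encode_template(template: bytes) -> bytes:
    """Find BS...ES spans in a template binary and encode their contents."""
    key = os.urandom(1)[0]  # fresh key for each build

    def encode(match):
        encoded = bytes(b ^ key for b in match.group(1))
        return encoded.ljust(len(match.group(0)), b"\x00")  # keep offsets

    pattern = re.escape(BEG_ENCODE_STRING) + b"(.*?)" + re.escape(END_ENCODE_STRING)
    return re.sub(pattern, encode, template, flags=re.DOTALL)
```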

Operators’ Access To Source Code

Let’s look at two more related deductions from our blog post:

“Based upon the information we have observed, we believe that at least some of the operators of CARBANAK either have access to the source code directly with knowledge on how to modify it or have a close relationship to the developer(s).”

“Some of the operators may be compiling their own builds of the backdoor independently.”

The first deduction was based on the following evidence:

“Despite the likelihood of a build tool, we have found 57 unique compile times in our sample set, with some of the compile times being quite close in proximity. For example, on May 20, 2014, two builds were compiled approximately four hours apart and were configured to use the same C2 servers. Again, on July 30, 2015, two builds were compiled approximately 12 hours apart.”

To investigate further, we performed a diff of two CARBANAK samples with very close compile times to see what, if anything, was changed in the code. Figure 6 shows one such difference.


Figure 6: Minor differences between two closely compiled CARBANAK samples

The POSLogMonitorThread function is only executed in Sample A, while the blizkoThread function is only executed in Sample B (Blizko is a Russian funds transfer service, similar to PayPal). The POSLogMonitorThread function monitors for changes made to log files for specific point of sale software and sends parsed data to the C2 server. The blizkoThread function determines whether the user of the computer is a Blizko customer by searching for specific values in the registry. With knowledge of these slight differences, we searched the source code and discovered once again that preprocessor parameters were put to use. Figure 7 shows how this function will change depending on which of three compile-time parameters are enabled.


Figure 7: Preprocessor parameters determine which functionality will be included in a template binary

This is not definitive proof that operators had access to the source code, but it certainly makes it much more plausible. The operators would not need to have any programming knowledge in order to fine tune their builds to meet their needs for specific targets, just simple guidance on how to add and remove preprocessor parameters in Visual Studio.

Evidence for the second deduction was found by looking at the binary C2 protocol implementation and how it has evolved over time. From our previous blog post:

“This protocol has undergone several changes over the years, each version building upon the previous version in some way. These changes were likely introduced to render existing network signatures ineffective and to make signature creation more difficult.”

Five versions of the binary C2 protocol were discovered amongst our sample set, as shown in Figure 8. The figure shows the earliest compile time at which each protocol version appeared in our sample set. Each new version improved the security and complexity of the protocol.


Figure 8: Binary C2 protocol evolution shown through binary compilation times

If the CARBANAK project was centrally located and only the template binaries were delivered to the operators, it would be expected that sample compile times should fall in line with the evolution of the binary protocol. Except for one sample that implements what we call “version 3” of the protocol, this is how our timeline looks. A probable explanation for the date not lining up for version 3 is that our sample set was not wide enough to include the first sample of this version. This is not the only case we found of an outdated protocol being implemented in a sample; Figure 9 shows another example of this.


Figure 9: CARBANAK sample using outdated version of binary protocol

In this example, a CARBANAK sample found in the wild was using protocol version 4 when a newer version had already been available for at least two months. This would be unlikely if the source code were kept in a single, central location. The rapid-fire fine-tuning of template binaries using preprocessor parameters, combined with several samples of CARBANAK in the wild implementing outdated versions of the protocol, indicates that the CARBANAK project is distributed to operators rather than kept centrally.

Names of Previously Unidentified Commands

The source code revealed the names of commands that were previously unidentified. In fact, it also revealed commands that were altogether absent from the samples we previously blogged about because the functionality was disabled. Table 1 shows the commands whose names were newly discovered in the CARBANAK source code, along with a summary of our analysis from the blog post.

| Hash | Prior FireEye Analysis | Name |
|---|---|---|
| 0x749D968 | (absent) | msgbox |
| 0x6FD593 | (absent) | ifobs |
| 0xB22A5A7 | Add/update klgconfig | updklgcfg |
| 0x4ACAFC3 | Upload files to the C2 server | findfiles |
| 0xB0603B4 | Download and execute shellcode | tinymet |

Table 1: Command hashes previously not identified by name, along with description from prior FireEye analysis

The msgbox command was commented out altogether in the CARBANAK source code, and is strictly for debugging, so it never appeared in public analyses. Likewise, the ifobs command did not appear in the samples we analyzed and publicly documented, but likely for a different reason. The source code in Figure 10 shows the table of commands that CARBANAK understands, and the ifobs command (0x6FD593) is surrounded by an #ifdef, preventing the ifobs code from being compiled into the backdoor unless the ON_IFOBS preprocessor parameter is enabled.


Figure 10: Table of commands from CARBANAK tasking code

One of the more interesting commands, however, is tinymet, because it illustrates how source code can be both helpful and misleading.

The tinymet Command and Associated Payload

At the time of our initial CARBANAK analysis, we indicated that command 0xB0603B4 (whose name was unknown at the time) could execute shellcode. The source code reveals that the command (whose actual name is tinymet) was intended to execute a very specific piece of shellcode. Figure 11 shows an abbreviated listing of the code for handling the tinymet command, with line numbers in yellow and selected lines hidden (in gray) to show the code in a more compact format.


Figure 11: Abbreviated tinymet code listing

The comment starting on line 1672 indicates:

```
tinymet command
Command format: tinymet {ip:port | plugin_name} [plugin_name]
Retrieve meterpreter from specified address and launch in memory
```

On line 1710, the tinymet command handler uses the single-byte XOR key 0x50 to decode the shellcode. Of note, on line 1734 the command handler allocates five extra bytes and line 1739 hard-codes a five-byte mov instruction into that space. It populates the 32-bit immediate operand of the mov instruction with the socket handle number for the server connection that it retrieved the shellcode from. The implied destination operand for this mov instruction is the edi register.
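For clarity, here is that preamble construction expressed as a short Python sketch; 0xBF is the x86 opcode for mov edi, imm32, so the five bytes are the opcode followed by the little-endian socket handle:

```python
import struct

def make_mov_edi_preamble(socket_handle: int) -> bytes:
    """Build the 5-byte 'mov edi, imm32' the tinymet handler prepends
    so the shellcode receives the C2 socket handle in edi."""
    return b"\xBF" + struct.pack("<I", socket_handle)
```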

Our analysis of the tinymet command ended here, until the binary file named met.plug was discovered. The hex dump in Figure 12 shows the end of this file.


Figure 12: Hex dump of met.plug

The end of the file is misaligned by five missing bytes, corresponding to the dynamically assembled mov edi preamble in the tasking source code. However, the single-byte XOR key 0x50 that was found in the source code did not succeed in decoding this file. After some confusion and further analysis, we realized that the first 27 bytes of this file are a shellcode decoder that looks very similar to Metasploit’s call4_dword_xor encoder. Figure 13 shows the shellcode decoder and the beginning of the encoded metsrv.dll. The XOR key the shellcode uses is 0xEF47A2D0, which fits with how the five-byte mov edi instruction, the decoder, and the adjacent metsrv.dll will be laid out in memory.


Figure 13: Shellcode decoder

Decoding yielded a copy of metsrv.dll starting at offset 0x1b. When shellcode execution exits the decoder loop, it executes Metasploit’s executable DOS header.
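A sketch of that decode step in Python follows; how the real decoder stub handles a trailing partial DWORD is an assumption here:

```python
import struct

def decode_metsrv(blob: bytes, key: int = 0xEF47A2D0) -> bytes:
    """DWORD-wise XOR decode of met.plug past its 27-byte (0x1b) stub."""
    body = blob[0x1B:]
    out = bytearray()
    for i in range(0, len(body), 4):
        dword = body[i:i + 4].ljust(4, b"\x00")  # pad the final chunk
        out += struct.pack("<I", struct.unpack("<I", dword)[0] ^ key)
    return bytes(out[:len(body)])
```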

Ironically, possessing source code biased our binary analysis in the wrong direction, suggesting a single-byte XOR key when there was really a 27-byte decoder preamble using a four-byte XOR key. Furthermore, the command name tinymet suggested that the TinyMet Meterpreter stager was involved. This may have been the case at one point, but the source code comments and binary files suggest that the developers and operators have moved on to simply downloading Meterpreter directly without changing the name of the command.

Conclusion

Having access to the source code and toolset for CARBANAK provided us with a unique opportunity to revisit our previous analysis. We were able to fill in some missing analysis and context, validate our deductions in some cases, and provide further evidence in other cases, strengthening our confidence in them but not completely proving them true. This exercise proves that even without access to the source code, with a large enough sample set and enough analysis, accurate deductions can be reached that go beyond the source code. It also illustrates, such as in the case of the tinymet command, that sometimes, without the proper context, you simply cannot see the full and clear purpose of a given piece of code. But some source code is also inconsistent with the accompanying binaries. If Bruce Lee had been a malware analyst, he might have said that source code is like a finger pointing away to the moon; don’t concentrate on the finger, or you will miss all that binary ground truth. Source code can provide immensely rich context, but analysts must be cautious not to misapply that context to binary or forensic artifacts.

In the next and final blog post, we share details on an interesting tool that is part of the CARBANAK kit: a video player designed to play back desktop recordings captured by the backdoor.

CARBANAK Week Part Two: Continuing the CARBANAK Source Code Analysis

FireEye has observed the certificate most recently being served on the following IPs (Table 4):

| IP | Hostname | Last Seen |
|---|---|---|
| 104.193.252.151:443 | vds2.system-host[.]net | 2019-04-26T14:49:12 |
| 185.180.196.35:443 | customer.clientshostname[.]com | 2019-04-24T07:44:30 |
| 213.227.155.8:443 |  | 2019-04-24T04:33:52 |
| 94.156.133.69:443 |  | 2018-11-15T10:27:07 |
| 185.174.172.241:443 | vds9992.hyperhost[.]name | 2019-04-27T13:24:36 |
| 109.230.199.227:443 |  | 2019-04-27T13:24:36 |

Table 4: Recent Test Company certificate use

While these IPs have not been observed in any CARBANAK activity, this may be an indication of a common developer or a shared toolkit used for testing various malware. Several of these IPs have been observed hosting Cobalt Strike BEACON payloads and METERPRETER listeners. Virtual Private Server (VPS) IPs may change hands frequently and additional malicious activity hosted on these IPs, even in close time proximity, may not be associated with the same users.

I also parsed an unprotected private key from the source code dump. Figure 4 and Table 5 show the private key parameters at a glance and in detail, respectively.


Figure 4: Parsed 512-bit private key

| Field | Value |
|---|---|
| bType | 7 |
| bVersion | 2 |
| aiKeyAlg | 0xA400 (CALG_RSA_KEYX) – RSA public key exchange algorithm |
| Magic | RSA2 |
| Bitlen | 512 |
| PubExp | 65537 |
| Modulus | 0B CA 8A 13 FD 91 E4 72 80 F9 5F EE 38 BC 2E ED 20 5D 54 03 02 AE D6 90 4B 6A 6F AE 7E 06 3E 8C EA A8 15 46 9F 3E 14 20 86 43 6F 87 BF AE 47 C8 57 F5 1F D0 B7 27 42 0E D1 51 37 65 16 E4 93 CB |
| P | 8B 01 8F 7D 1D A2 34 AE CA B6 22 EE 41 4A B9 2C E0 05 FA D0 35 B2 BF 9C E6 7C 6E 65 AC AE 17 EA |
| Q | 81 69 AB 3D D7 01 55 7A F8 EE 3C A2 78 A5 1E B1 9A 3B 83 EC 2F F1 F7 13 D8 1A B3 DE DF 24 A1 DE |
| Dp | B5 C7 AE 0F 46 E9 02 FB 4E A2 A5 36 7F 2E ED A4 9E 2B 0E 57 F3 DB 11 66 13 5E 01 94 13 34 10 CB |
| Dq | 81 AC 0D 20 14 E9 5C BF 4B 08 54 D3 74 C4 57 EA C3 9D 66 C9 2E 0A 19 EA C1 A3 78 30 44 52 B2 9F |
| Iq | C2 D2 55 32 5E 7D 66 4C 8B 7F 02 82 0B 35 45 18 24 76 09 2B 56 71 C6 63 C4 C5 87 AD ED 51 DA 2A |
| D | 01 6A F3 FA 6A F7 34 83 75 C6 94 EB 77 F1 C7 BB 7C 68 28 70 4D FB 6A 67 03 AE E2 D8 8B E9 E8 E0 2A 0F FB 39 13 BD 1B 46 6A D9 98 EA A6 3E 63 A8 2F A3 BD B3 E5 D6 85 98 4D 1C 06 2A AD 76 07 49 |

Table 5: Private key parameters

I found a value named PUBLIC_KEY defined in a configuration header, with comments indicating it was for debugging purposes. The parsed values are shown in Table 6.

| Field | Value |
|---|---|
| bType | 6 |
| bVersion | 2 |
| aiKeyAlg | 0xA400 (CALG_RSA_KEYX) – RSA public key exchange algorithm |
| Magic | RSA1 |
| Bitlen | 512 |
| PubExp | 65537 |
| Modulus | 0B CA 8A 13 FD 91 E4 72 80 F9 5F EE 38 BC 2E ED 20 5D 54 03 02 AE D6 90 4B 6A 6F AE 7E 06 3E 8C EA A8 15 46 9F 3E 14 20 86 43 6F 87 BF AE 47 C8 57 F5 1F D0 B7 27 42 0E D1 51 37 65 16 E4 93 CB |

Table 6: Key parameters for PUBLIC_KEY defined in configuration header

Network Based Indicators

The source code and binaries contained multiple Network-Based Indicators (NBIs) having significant overlap with CARBANAK backdoor activity and FIN7 operations previously observed and documented by FireEye. Table 7 shows these indicators along with the associated FireEye public documentation. This includes the status of each NBI as it was encountered (active in source code, commented out, or compiled into a binary). Domain names are de-fanged to prevent accidental resolution or interaction by browsers, chat clients, etc.

| NBI | Status | Threat Group Association |
|---|---|---|
| comixed[.]org | Commented out | Earlier CARBANAK activity |
| 194.146.180[.]40 | Commented out | Earlier CARBANAK activity |
| aaaabbbbccccc[.]org | Active |  |
| stats10-google[.]com | Commented out | FIN7 |
| 192.168.0[.]100:700 | Active |  |
| 80.84.49[.]50:443 | Commented out |  |
| 52.11.125[.]44:443 | Commented out |  |
| 85.25.84[.]223 | Commented out |  |
| qwqreererwere[.]com | Active |  |
| akamai-technologies[.]org | Commented out | Earlier CARBANAK activity |
| 192.168.0[.]100:700 | Active |  |
| 37.1.212[.]100:700 | Commented out |  |
| 188.138.98[.]105:710 | Commented out | Earlier CARBANAK activity |
| hhklhlkhkjhjkjk[.]org | Compiled |  |
| 192.168.0[.]100:700 | Compiled |  |
| aaa.stage.4463714.news.meteonovosti[.]info | Compiled | DNS infrastructure overlap with later FIN7 associated POWERSOURCE activity |
| 193.203.48[.]23:800 | Active | Earlier CARBANAK activity |

Table 7: NBIs and previously observed activity

Four of these TCP endpoints (80.84.49[.]50:443, 52.11.125[.]44:443, 85.25.84[.]223, and 37.1.212[.]100:700) were new to me, although some have been documented elsewhere.

Conclusion

Our analysis of this source code dump confirmed it was CARBANAK and turned up a few new and interesting data points. We were able to notify vendors about disclosures that specifically targeted their security suites. The previously documented NBIs, Windows API function resolution, backdoor command hash values, usage of Windows cabinet file APIs, and other artifacts associated with CARBANAK all match, and as they say, if the shoe fits, wear it. Interestingly though, the project itself isn’t called CARBANAK or even Anunak as the information security community has come to call it based on the string artifacts found within the malware. The authors mainly refer to the malware as “bot” in the Visual Studio project, filenames, source code comments, output binaries, user interfaces, and manuals.

The breadth and depth of this analysis was a departure from the usual requests we receive on the FLARE team. The journey included learning some Russian, searching through a hundred thousand lines of code for new information, and analyzing a few dozen binaries. In the end, I’m thankful I had the opportunity to take this request.

In the next post, Tom Bennett takes the reins to provide a retrospective on his and Barry Vengerik’s previous analysis in light of the source code. Part Four of CARBANAK Week is available as well.

CARBANAK Week Part One: A Rare Occurrence

It is very unusual for FLARE to analyze a prolifically used, privately developed backdoor, only to later have the source code and operator tools fall into our laps. Yet this is the extraordinary circumstance that sets the stage for CARBANAK Week, a four-part blog series that commences with this post.

CARBANAK is one of the most full-featured backdoors around. It was used to perpetrate millions of dollars in financial crimes, largely by the group we track as FIN7. In 2017, Tom Bennett and Barry Vengerik published Behind the CARBANAK Backdoor, which was the product of a deep and broad analysis of CARBANAK samples and FIN7 activity across several years. On the heels of that publication, our colleague Nick Carr uncovered a pair of RAR archives containing CARBANAK source code, builders, and other tools (both available in VirusTotal: kb3r1p and apwmie).

FLARE malware analysis requests are typically limited to a few dozen files at most. But the CARBANAK source code was 20MB comprising 755 files, with 39 binaries and 100,000 lines of code. Our goal was to find threat intelligence we missed in our previous analyses. How does an analyst respond to a request with such breadth and open-ended scope? And what did we find?

My friend Tom Bennett and I spoke about this briefly in our 2018 FireEye Cyber Defense Summit talk, Hello, Carbanak! In this blog series, we will expound at length and share a written retrospective on the inferences drawn in our previous public analysis based on binary code reverse engineering. In this first part, I’ll discuss Russian language concerns, translated graphical user interfaces of CARBANAK tools, and anti-analysis tactics as seen from a source code perspective. We will also explain an interesting twist where analyzing the source code surprisingly proved to be just as difficult as analyzing the binary, if not more. There’s a lot here; buckle up!

File Encoding and Language Considerations

The objective of this analysis was to discover threat intelligence gaps and better protect our customers. To begin, I wanted to assemble a cross-reference of source code files and concepts of specific interest.

Reading the source code entailed two steps: displaying the files in the correct encoding, and learning enough Russian to be dangerous. Figure 1 shows CARBANAK source code in a text editor that is unaware of the correct encoding.


Figure 1: File without proper decoding

Two good file encoding guesses are UTF-8 and code page 1251 (Cyrillic). The files were mostly code page 1251 as shown in Figure 2.


Figure 2: Code Page 1251 (Cyrillic) source code

Figure 2 is a C++ header file defining error values involved in backdoor command execution. Most identifiers were in English, but some were not particularly descriptive. Ergo, the second and more difficult step was learning some Russian to benefit from the context offered by the source code comments.

FLARE has fluent Russian speakers, but I took it upon myself to minimize my use of other analysts’ time. To this end, I wrote a script to tear through files and create a prioritized vocabulary list. The script, which is available in the FireEye vocab_scraper GitHub repository, walks source directories finding all character sequences outside the printable lower ASCII range: decimal values 32 (the space character) through 126 (the tilde character “~”) inclusive. The script adds each word to a Python defaultdict and increments its count. Finally, the script orders this dictionary by frequency of occurrence and dumps it to a file.
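The following is a simplified re-creation of that approach, not the published script itself; the real vocab_scraper differs in its handling of word boundaries and output format:

```python
import os
from collections import defaultdict

def scrape_vocab(root):
    """Collect byte runs outside printable lower ASCII (0x20-0x7E),
    decode them as code page 1251, and rank by frequency."""
    counts = defaultdict(int)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            with open(os.path.join(dirpath, name), "rb") as f:
                data = f.read()
            run = bytearray()
            for byte in data:
                if 32 <= byte <= 126:  # printable ASCII terminates a run
                    if run:
                        counts[run.decode("cp1251", "replace")] += 1
                        run = bytearray()
                else:
                    run.append(byte)
            if run:  # flush a trailing run at end of file
                counts[run.decode("cp1251", "replace")] += 1
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
```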

The result was a 3,400+ word vocabulary list, partially shown in Figure 3.


Figure 3: Top 19 Cyrillic character sequences from the CARBANAK source code

I spent several hours on Russian language learning websites to study the pronunciation of Cyrillic characters and Russian words. Then, I looked up the top 600+ words and created a small dictionary. I added Russian language input to an analysis VM and used Microsoft’s on-screen keyboard (osk.exe) to navigate the Cyrillic keyboard layout and look up definitions.

One helpful effect of learning to pronounce Cyrillic characters was my newfound recognition of English loan words (words that are borrowed from English and transliterated to Cyrillic). My small vocabulary allowed me to read many comments without looking anything up. Table 1 shows a short sampling of some of the English loan words I encountered.

| Cyrillic | English Phonetic | English | Occurrences | Rank |
|---|---|---|---|---|
| Файл | f ah y L | file | 224 | 5 |
| сервер | s e r v e r | server | 145 | 13 |
| адрес | a d r e s | address | 52 | 134 |
| команд | k o m a n d | command | 110+ | 27 |
| бота | b o t a | bot | 130 | 32 |
| плагин | p l ah g ee n | plugin | 116 | 39 |
| сервис | s e r v ee s | service | 70 | 46 |
| процесс | p r o ts e s s | process | ~130 | 63 |

Table 1: Sampling of English loan words in the CARBANAK source code

Aside from source code comments, understanding how to read and type in Cyrillic came in handy for translating the CARBANAK graphical user interfaces I found in the source code dump. Figure 4 shows a Command and Control (C2) user interface for CARBANAK that I translated.


Figure 4: Translated C2 graphical user interface

These user interfaces included video management and playback applications as shown in Figure 5 and Figure 6 respectively. Tom will share some interesting work he did with these in a subsequent part of this blog series.


Figure 5: Translated video management application user interface


Figure 6: Translated video playback application user interface

Figure 7 shows the backdoor builder that was contained within the RAR archive of operator tools.


Figure 7: Translated backdoor builder application user interface

The operator RAR archive also contained an operator’s manual explaining the semantics of all the backdoor commands. Figure 8 shows the first few commands in this manual, both in Russian and English (translated).


Figure 8: Operator manual (left: original Russian; right: translated to English)

Down the Rabbit Hole: When Having Source Code Does Not Help

In simpler backdoors, a single function evaluates the command ID received from the C2 server and dispatches control to the correct function to carry out the command. For example, a backdoor might ask its C2 server for a command and receive a response bearing the command ID 0x67. The dispatch function in the backdoor will check the command ID against several different values, including 0x67, which as an example might call a function to shovel a reverse shell to the C2 server. Figure 9 shows a control flow graph of such a function as viewed in IDA Pro. Each block of code checks against a command ID and either passes control to the appropriate command handling code, or moves on to check for the next command ID.


Figure 9: A control flow graph of a simple command handling function
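In pseudocode terms, such a dispatcher is nothing more than a flat comparison chain or lookup table. The command IDs and handlers in this sketch are hypothetical:

```python
def reverse_shell(args):    # hypothetical handler
    pass

def take_screenshot(args):  # hypothetical handler
    pass

def dispatch(command_id, args):
    """Flat dispatch in the style of Figure 9: match the ID received
    from the C2 server and hand control to the corresponding handler."""
    handlers = {
        0x67: reverse_shell,
        0x68: take_screenshot,
    }
    handler = handlers.get(command_id)
    if handler is not None:
        handler(args)
```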

In this regard, CARBANAK is an entirely different beast. It utilizes a Windows mechanism called named pipes as a means of communication and coordination across all the threads, processes, and plugins under the backdoor’s control. When the CARBANAK tasking component receives a command, it forwards the command over a named pipe where it travels through several different functions that process the message, possibly writing it to one or more additional named pipes, until it arrives at its destination where the specified command is finally handled. Command handlers may even specify their own named pipe to request more data from the C2 server. When the C2 server returns the data, CARBANAK writes the result to this auxiliary named pipe and a callback function is triggered to handle the response data asynchronously. CARBANAK’s named pipe-based tasking component is flexible enough to control both inherent command handlers and plugins. It also allows for the possibility of a local client to dispatch commands to CARBANAK without the use of a network. In fact, not only did we write such a client to aid in analysis and testing, but such a client, named botcmd.exe, was also present in the source dump.

Tom’s Perspective

Analyzing this command-handling mechanism within CARBANAK from a binary perspective was certainly challenging. It required maintaining tabs for many different views into the disassembly, and a sort of textual map of command ids and named pipe names to describe the journey of an inbound command through the various pipes and functions before arriving at its destination. Figure 10 shows the control flow graphs for seven of the named pipe message handling functions. While it was difficult to analyze this from a binary reverse engineering perspective, having compiled code combined with the features that a good disassembler such as IDA Pro provides made it less harrowing than Mike’s experience. The binary perspective saved me from having to search across several source files and deal with ambiguous function names. The disassembler features allowed me to easily follow cross-references for functions and global variables and to open multiple, related views into the code.


Figure 10: Control flow graphs for the named pipe message handling functions

Mike’s Perspective

Having source code sounds like cheat-mode for malware analysis. Indeed, source code contains much information that is lost through the compilation and linking process. Even so, CARBANAK’s tasking component (for handling commands sent by the C2 server) serves as a counter-example. Depending on the C2 protocol used and the command being processed, control flow may take divergent paths through different functions only to converge again later and accomplish the same command. Analysis required bouncing around between almost 20 functions in 5 files, often backtracking to recover information about function pointers and parameters that were passed in from as many as 18 layers back. Analysis also entailed resolving matters of C++ class inheritance, scope ambiguity, overloaded functions, and control flow termination upon named pipe usage. The overall effect was that this was difficult to analyze, even in source code.

I only embarked on this top-to-bottom journey once, to search for any surprises. The effort gave me an appreciation for the baroque machinery the authors constructed either for the sake of obfuscation or flexibility. I felt like this was done at least in part to obscure relationships and hinder timely analysis.

Anti-Analysis Mechanisms in Source Code

CARBANAK’s executable code is filled with logic that pushes hexadecimal numbers to the same function, followed by an indirect call against the returned value. This is easily recognizable as obfuscated function import resolution, wherein CARBANAK uses a simple string hash known as PJW (named after its author, P.J. Weinberger) to locate Windows API functions without disclosing their names. A Python implementation of the PJW hash is shown in Figure 11 for reference.

```python
def pjw_hash(s):
    ctr = 0
    for i in range(len(s)):
        ctr = 0xffffffff & ((ctr << 4) + ord(s[i]))
        if ctr & 0xf0000000:
            ctr = (((ctr & 0xf0000000) >> 24) ^ ctr) & 0x0fffffff
    return ctr
```

Figure 11: PJW hash
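As a sanity check, this implementation reproduces the hash value for PathFindFileNameA discussed below:

```python
assert pjw_hash("PathFindFileNameA") == 0xE3685D1
```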

This is used several hundred times in CARBANAK samples and impedes understanding of the malware’s functionality. Fortunately, reversers can use the flare-ida scripts to annotate the obfuscated imports, as shown in Figure 12.


Figure 12: Obfuscated import resolution annotated with FLARE's shellcode hash search

The CARBANAK authors achieved this obfuscated import resolution throughout their backdoor with relative ease using C preprocessor macros and a pre-compilation source code scanning step to calculate function hashes. Figure 13 shows the definition of the relevant API macro and associated machinery.


Figure 13: API macro for import resolution

The API macro allows the author to type API(SHLWAPI, PathFindFileNameA)(…) and have it replaced with GetApiAddrFunc(SHLWAPI, hashPathFindFileNameA)(…). SHLWAPI is a symbolic macro defined to be the constant 3, and hashPathFindFileNameA is the string hash value 0xE3685D1 as observed in the disassembly. But how was the hash defined?

The CARBANAK source code has a utility (unimaginatively named tool) that scans source code for invocations of the API macro to build a header file defining string hashes for all the Windows API function names encountered in the entire codebase. Figure 14 shows the source code for this utility along with its output file, api_funcs_hash.h.


Figure 14: Source code and output from string hash utility
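A compact Python sketch of the same idea, reusing the pjw_hash implementation from Figure 11; the real utility's exact output format is an assumption here:

```python
import re

def build_hash_header(source_text: str) -> str:
    """Scan source for API(MODULE, Function) invocations and emit
    #define lines in the spirit of api_funcs_hash.h."""
    calls = sorted(set(re.findall(r"\bAPI\(\s*(\w+)\s*,\s*(\w+)\s*\)",
                                  source_text)))
    return "\n".join(f"#define hash{func} 0x{pjw_hash(func):X}"
                     for _module, func in calls)
```

Feeding it a line such as API(SHLWAPI, PathFindFileNameA) would yield #define hashPathFindFileNameA 0xE3685D1, matching the value observed in the disassembly.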

When I reverse engineer obfuscated malware, I can’t help but theorize about how the authors implement their obfuscations. The CARBANAK source code gives another data point into how malware authors wield the powerful C preprocessor, along with custom code scanning and code generation tools, to obfuscate without imposing an undue burden on developers. This might provide perspective on what to expect from malware authors in the future, and it may help identify units of potential code reuse in future projects as well as rate their significance. It would be trivial to apply this technique to new projects, but with the source code being on VirusTotal, this level of code sharing may not represent shared authorship. The source code is also instructive as to why the malware pushes an integer as well as a hash to resolve functions: the integer is an index into an array of module handles that are opened in advance and associated with these pre-defined integers.

Conclusion

The CARBANAK source code is illustrative of how these malware authors addressed some of the practical concerns of obfuscation. Both the tasking code and the Windows API resolution system represent significant investments in throwing malware analysts off the scent of this backdoor. Check out Part Two of this series for a round-up of antivirus evasions, exploits, secrets, key material, authorship artifacts, and network-based indicators. Part Three and Part Four are available now as well!

FLASHMINGO: The FireEye Open Source Automatic Analysis Tool for Flash

Adobe Flash is one of the most exploited software components of the last decade. Its complexity and ubiquity make it an obvious target for attackers. Public sources list more than one thousand CVEs being assigned to the Flash Player alone since 2005. Almost nine hundred of these vulnerabilities have a Common Vulnerability Scoring System (CVSS) score of nine or higher.

After more than a decade of playing cat and mouse with the attackers, Adobe is finally deprecating Flash in 2020. To the security community this move is not a surprise since all major browsers have already dropped support for Flash.

A common misconception exists that Flash is already a thing of the past; however, history has shown us that legacy technologies linger for quite a long time. If organizations do not phase Flash out in time, the security threat may grow beyond Flash's end of life due to a lack of security patches.

As malware analysts on the FLARE team, we still see Flash exploits within malware samples. We must find a compromise between the need to analyse Flash samples and the correct amount of resources to be spent on a declining product. To this end we developed FLASHMINGO, a framework to automate the analysis of SWF files. FLASHMINGO enables analysts to triage suspicious Flash samples and investigate them further with minimal effort. It integrates into various analysis workflows as a stand-alone application or can be used as a powerful library. Users can easily extend the tool's functionality via custom Python plug-ins.

Background: SWF and ActionScript3

Before we dive into the inner workings of FLASHMINGO, let’s learn about the Flash architecture. Flash’s SWF files are composed of chunks, called tags, each implementing a specific piece of functionality. Tags are completely independent of one another, allowing for compatibility with older versions of Flash: if a tag is not supported, the software simply ignores it. The main source of security issues revolves around SWF’s scripting language, ActionScript3 (AS3). This scripting language is compiled into bytecode and placed within a Do ActionScript ByteCode (DoABC) tag. If a SWF file contains a DoABC tag, the bytecode is extracted and executed by a proprietary stack-based virtual machine (VM), known as AVM2 in the case of AS3, shipped within Adobe’s Flash Player. The design of the AVM2 was based on the Java VM, and it was similarly plagued by memory corruption and logic issues that allowed malicious AS3 bytecode to execute native code in the context of the Flash Player. In the few cases where the root cause of past vulnerabilities was not in the AVM2, ActionScript code was still necessary to put the system in a state suitable for reliable exploitation, for example by grooming the heap before triggering a memory corruption. For these reasons, FLASHMINGO focuses on the analysis of AS3 bytecode.

Tool Architecture

FLASHMINGO leverages the open source SWIFFAS library to do the heavy lifting of parsing Flash files. All binary data and bytecode are parsed and stored in a large object named SWFObject. This object contains all the information about the SWF relevant to our analysis: a list of tags, information about all methods, strings, constants and embedded binary data, to name a few. It is essentially a representation of the SWF file in an easily queryable format.

FLASHMINGO is a collection of plug-ins that operate on the SWFObject and extract interesting information. Figure 1 shows the relationship between FLASHMINGO, its plug-ins, and the SWFObject.


Figure 1: High level software structure

Several useful plug-ins covering a wide range of common analysis are already included with FLASHMINGO, including:

  • Find suspicious method names. Many samples contain method names used during development, like “run_shell” or “find_virtualprotect”. This plug-in flags samples with methods containing suspicious substrings.
  • Find suspicious constants. The presence of certain constant values in the bytecode may point to malicious or suspicious code. For example, code containing the constant value 0x5A4D may be shellcode searching for an MZ header.
  • Find suspicious loops. Malicious activity often happens within loops. This includes encoding, decoding, and heap spraying. This plug-in flags methods containing loops with interesting operations such as XOR or bitwise AND. It is a simple heuristic that effectively detects most encoding and decoding operations, and otherwise interesting code to further analyse.
  • Retrieve all embedded binary data.
  • A decompiler plug-in that uses the FFDEC Flash Decompiler. This decompiler engine, written in Java, can be used as a stand-alone library. Since FLASHMINGO is written in Python, using this plug-in requires Jython to interoperate between these two languages.

Extending FLASHMINGO With Your Own Plug-ins

FLASHMINGO is very easy to extend. Every plug-in is located in its own directory under the plug-ins directory. At start-up FLASHMINGO searches all plug-in directories for a manifest file (explained later in the post) and registers the plug-in if it is marked as active.

To accelerate development a template plug-in is provided. To add your own plug-in, copy the template directory, rename it, and edit its manifest and code. The template plug-in’s manifest, written in YAML, is shown below:

```
# This is a template for easy development
name: Template
active: no
description: copy this to kickstart development
returns: nothing

```

The most important parameters in this file are: name and active. The name parameter is used internally by FLASHMINGO to refer to it. The active parameter is a Boolean value (yes or no) indicating whether this plug-in should be active or not. By default, all plug-ins (except the template) are active, but there may be cases where a user would want to deactivate a plug-in. The parameters description and returns are simple strings to display documentation to the user. Finally, plug-in manifests are parsed once at program start. Adding new plug-ins or enabling/disabling plug-ins requires restarting FLASHMINGO.

Now for the actual code implementing the business logic. The file plugin.py contains a class named Plugin; the only thing that is needed is to implement its run method. Each plug-in receives an instance of a SWFObject as a parameter. The code will interact with this object and return data in a custom format, defined by the user. This way, the user's plug-ins can be written to produce data that can be directly ingested by their infrastructure.
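As a rough template, a plug-in's skeleton looks something like the following sketch; the constructor signature and attribute access are assumptions, and only run, swf, and ml mirror names described in this post:

```python
class Plugin:
    """Template plug-in skeleton (structure assumed for illustration)."""

    def __init__(self, swf=None, ml=None):
        self.swf = swf  # SWFObject handed in by FLASHMINGO
        self.ml = ml    # plug-in logging facility

    def run(self):
        # Query the SWFObject and return results in any format the
        # user's infrastructure can ingest; a list of tags, say.
        return self.swf.tags  # attribute name assumed
```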

Let's see how easy it is to create plug-ins by walking through one that is included, named binary_data. This plugin returns all embedded data in a SWF file by default. If the user specifies an optional parameter pattern then the plug-in searches for matches of that byte sequence within the embedded data, returning a dictionary of embedded data and the offset at which the pattern was found.

First, we define the optional argument pattern to be supplied by the user. Afterwards, we implement a custom run method and all other code needed to support it.
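A sketch of the plug-in follows; only run, ml, swf, and _inspect_binary_data are names taken from the surrounding description, and the rest (including how embedded data is exposed on the SWFObject) are placeholders:

```python
class Plugin:
    def __init__(self, swf=None, ml=None, pattern=None):
        self.swf = swf          # SWFObject supplied by FLASHMINGO
        self.ml = ml            # logging facility
        self.pattern = pattern  # optional byte sequence to search for

    def run(self):
        self.ml.info("Searching embedded binary data")
        if self.pattern is None:
            return self.swf.binary_data  # attribute name assumed
        return self._inspect_binary_data(self.pattern)

    def _inspect_binary_data(self, pattern):
        """Map each embedded blob to the offset where pattern occurs."""
        hits = {}
        for name, data in self.swf.binary_data.items():  # assumed mapping
            offset = data.find(pattern)
            if offset != -1:
                hits[name] = offset
        return hits
```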

This is a simple but useful plug-in and illustrates how to interact with FLASHMINGO. The plug-in has a logging facility accessible through the property “ml”. By default it logs to FLASHMINGO’s main logger; if unspecified, it falls back to a log file within the plug-in’s directory. The custom run method extracts information from the SWF’s embedded data with the help of the custom _inspect_binary_data method. Note the source of this binary data: it is read from a property named “swf”. This is the SWFObject passed to the plug-in as an argument, as mentioned previously. More complex analysis can be performed on the SWF file contents by interacting with this swf object. Our repository contains documentation for all available methods of a SWFObject.

Conclusion

Even though Flash is set to reach its end of life at the end of 2020 and most of the development community has moved away from it a long time ago, we predict that we’ll see Flash being used as an infection vector for a while. Legacy technologies are juicy targets for attackers due to the lack of security updates. FLASHMINGO provides malware analysts a flexible framework to quickly deal with these pesky Flash samples without getting bogged down in the intricacies of the execution environment and file format.

Find the FLASHMINGO tool on the FireEye public GitHub Repository.

Churning Out Machine Learning Models: Handling Changes in Model Predictions

Introduction

Machine learning (ML) is playing an increasingly important role in cyber security. Here at FireEye, we employ ML for a variety of tasks such as antivirus, malicious PowerShell detection, and correlating threat actor behavior. While many people think that a data scientist’s job is finished when a model is built, the truth is that cyber threats change constantly and so must our models. The initial training is only the start of the process, and ML model maintenance creates a large amount of technical debt. Google provides a helpful introduction to this topic in their paper “Machine Learning: The High-Interest Credit Card of Technical Debt.” A key concept from the paper is the principle of CACE: change anything, change everything. Because ML models deliberately find nonlinear dependencies between input data, small changes in our data can create cascading effects on model accuracy and on downstream systems that consume those model predictions. This creates an inherent conflict in cyber security modeling: (1) we need to update models over time to adjust to current threats, and (2) changing models can lead to unpredictable outcomes that we need to mitigate.

Ideally, when we update a model, the only changes in model outputs are improvements, e.g. fixes to previous errors. Both false negatives (missing malicious activity) and false positives (alerts on benign activity) have significant impact and should be minimized. Since no ML model is perfect, we mitigate mistakes with orthogonal approaches: whitelists and blacklists, external intelligence feeds, rule-based systems, etc. Combining with other information also provides context for alerts that may not otherwise be present. However, CACE! These integrated systems can suffer unintended side effects from a model update. Even when the overall model accuracy has increased, individual changes in model output are not guaranteed to be improvements. The introduction of new false negatives or false positives in an updated model, called churn, creates the potential for new vulnerabilities and negative interactions with cyber security infrastructure that consumes model output. In this article, we discuss churn, how it creates technical debt when considering the larger cyber security product, and methods to reduce it.

Prediction Churn

Whenever we retrain our cyber security-focused ML models, we need to be able to calculate and control for churn. Formally, prediction churn is defined as the expected percent difference between two different models’ predictions (note that prediction churn is not the same as customer churn, the loss of customers over time, which is the more common usage of the term in business analytics). It was originally defined by Cormier et al. for a variety of applications. For cyber security applications, we are often concerned with just those differences where the newer model performs worse than the older model. Let’s define bad churn, when retraining a classifier, as the percentage of misclassified samples in the test set which the original model correctly classified.
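As a concrete sketch (not code from the original study), bad churn can be computed directly from two models' predictions on a common test set:

```python
import numpy as np

def bad_churn(y_true, pred_old, pred_new):
    """Percentage of test samples the new model misclassifies
    that the old model classified correctly."""
    y_true, pred_old, pred_new = map(np.asarray, (y_true, pred_old, pred_new))
    new_mistakes = (pred_new != y_true) & (pred_old == y_true)
    return 100.0 * new_mistakes.mean()
```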

Churn is often a surprising and non-intuitive concept. After all, if the accuracy of our new model is better than the accuracy of our old model, what’s the problem? Consider the simple linear classification problem of malicious red squares and benign blue circles in Figure 1. The original model, A, makes three misclassifications while the newer model, B, makes only two errors. B is the more accurate model. Note, however, that B introduces a new mistake in the lower right corner, misclassifying a red square as benign. That square was correctly classified by model A and represents an instance of bad churn. Clearly, it’s possible to reduce the overall error rate while introducing a small number of new errors which did not exist in the older model.


Figure 1: Two linear classifiers with errors highlighted in orange. The original classifier A has lower accuracy than B. However, B introduces a new error in the bottom right corner.

Practically, churn introduces two problems in our models. First, bad churn may require changes to whitelist/blacklists used in conjunction with ML models. As we previously discussed, these are used to handle the small but inevitable number of incorrect classifications. Testing on large repositories of data is necessary to catch such changes and update associated whitelists and blacklists. Second, churn may create issues for other ML models or rule-based systems which rely on the output of the ML model. For example, consider a hypothetical system which evaluates URLs using both a ML model and a noisy blacklist. The system generates an alert if

  • P(URL = ‘malicious’) > 0.9 or
  • P(URL = ‘malicious’) > 0.5 and the URL is on the blacklist

Suppose that after retraining, the distribution of P(URL = ‘malicious’) changes and all .com domains receive a higher score. The alert rules may then need to be readjusted to maintain the required overall accuracy of the combined system. Ultimately, finding ways of reducing churn minimizes this kind of technical debt.

Experimental Setup

We’re going to explore churn and churn reduction techniques using EMBER, an open source malware classification data set. It consists of 1.1 million PE files first seen in 2017, along with their labels and features. The objective is to classify the files as either goodware or malware. For our purposes we need to construct not one model, but two, in order to calculate the churn between models. We have split the data set into three pieces:

  1. January through August is used as training data
  2. September and October are used to simulate running the model in production and retraining (test 1 in Figure 2).
  3. November and December are used to evaluate the models from step 1 and 2 (test 2 in Figure 2).


Figure 2: A comparison of our experimental setup versus the original EMBER data split. EMBER has a ten-month training set and a two-month test set. Our setup splits the data into three sets to simulate model training, then retraining while keeping an independent data set for final evaluation.

Figure 2 shows our data split and how it compares to the original EMBER data split. We have built a LightGBM classifier on the training data, which we’ll refer to as the baseline model. To simulate production testing, we run the baseline model on test 1 and record the FPs and FNs. Then, we retrain our model using both the training data and the FPs/FNs from test 1. We’ll refer to this model as the standard retrain. This is a reasonably realistic simulation of actual production data collection and model retraining. Finally, both the baseline model and the standard retrain are evaluated on test 2. The standard retrain has a higher accuracy than the baseline on test 2: 99.33% vs. 99.10%, respectively. However, the retrain model makes 246 misclassifications that the baseline did not, i.e. 0.12% bad churn.
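A condensed sketch of that procedure with LightGBM's scikit-learn API is shown below; the random arrays are stand-ins for the EMBER feature matrices and labels described above:

```python
import lightgbm as lgb
import numpy as np

# Stand-ins for the EMBER train / test 1 splits.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(1000, 10)), rng.integers(0, 2, 1000)
X_test1, y_test1 = rng.normal(size=(500, 10)), rng.integers(0, 2, 500)

# Baseline model trained on the January-August data.
baseline = lgb.LGBMClassifier(n_estimators=1000).fit(X_train, y_train)

# Simulated production run on test 1: collect the FPs and FNs...
wrong = baseline.predict(X_test1) != y_test1

# ...and fold them back into the training set for the standard retrain.
X_retrain = np.vstack([X_train, X_test1[wrong]])
y_retrain = np.concatenate([y_train, y_test1[wrong]])
standard = lgb.LGBMClassifier(n_estimators=1100).fit(X_retrain, y_retrain)
```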

Incremental Learning

Since our rationale for retraining is that cyber security threats change over time, e.g. concept drift, it’s a natural suggestion to use techniques like incremental learning to handle retraining. In incremental learning we take new data to learn new concepts without forgetting (all) previously learned concepts. That also suggests that an incrementally trained model may not have as much churn, as the concepts learned in the baseline model still exist in the new model. Not all ML models support incremental learning, but linear and logistic regression, neural networks, and some decision trees do. Other ML models can be modified to implement incremental learning. For our experiment, we incrementally trained the baseline LightGBM model by augmenting the training data with FPs and FNs from test 1 and then trained an additional 100 trees on top of the baseline model (for a total of 1,100 trees). Unlike the baseline model we use regularization (L2 parameter of 1.0); using no regularization resulted in overfitting to the new points. The incremental model has a bad churn of 0.05% (113 samples total) and 99.34% accuracy on test 2. Another interesting metric is the model’s performance on the new training data; how many of the baseline FPs and FNs from test 1 does the new model fix? The incrementally trained model correctly classifies 84% of the previous incorrect classifications. In a very broad sense, incrementally training on a previous model’s mistake provides a “patch” for the “bugs” of the old model.
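Continuing the sketch above, LightGBM supports this style of retraining through its init_model parameter, which continues boosting from an existing model:

```python
import lightgbm as lgb

# Train 100 additional trees on top of the 1,000-tree baseline, with L2
# regularization to avoid overfitting the newly added FPs/FNs.
incremental = lgb.train(
    {"objective": "binary", "lambda_l2": 1.0},
    lgb.Dataset(X_retrain, label=y_retrain),
    num_boost_round=100,
    init_model=baseline.booster_,  # start from the baseline model
)
```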

Churn-Aware Learning

Incremental approaches only work if the features of the original and new model are identical. If new features are added, say to improve model accuracy, then alternative methods are required. If we desire both accuracy and low churn, the most straightforward solution is to include both requirements when training. That’s the approach taken by Cormier et al., where samples receive different weights during training in such a way as to minimize churn. We have made a few deviations in our approach: (1) we are interested in reducing bad churn (churn involving new misclassifications) as opposed to all churn, and (2) we would like to avoid the extreme memory requirements of the original method. In a similar manner to Cormier et al., we reduce the weight, i.e. importance, of previously misclassified samples during training of the new model. In practice, the model treats repeating the previous model’s mistakes as cheaper than making new ones. Our weighting scheme gives all samples correctly classified by the original model a weight of one, while all other samples receive a weight of w = α − β · |0.5 − P_old(x_i)|, where P_old(x_i) is the output of the old model on sample x_i, and α and β are adjustable hyperparameters. We train this reduced churn operator (RCOP) model using an α of 0.9, a β of 0.6, and the same training data as the incremental model. RCOP produces 0.09% bad churn and 99.38% accuracy on test 2.
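A minimal sketch of that weighting scheme, assuming p_old holds the old model's probability outputs on the training samples and correct flags the samples it classified correctly (both names are ours); the resulting weights can be passed to LightGBM via the Dataset weight parameter:

import numpy as np

def rcop_weights(p_old, correct, alpha=0.9, beta=0.6):
    # Weight 1 for samples the old model got right;
    # w = alpha - beta * |0.5 - P_old(x_i)| otherwise.
    return np.where(correct, 1.0, alpha - beta * np.abs(0.5 - p_old))

# Example: a confidently wrong sample (p_old far from 0.5) gets a smaller
# weight than a borderline one, so repeating the old mistake stays cheap.
p_old = np.array([0.95, 0.55, 0.20])
correct = np.array([True, False, False])
print(rcop_weights(p_old, correct))  # [1.0, 0.87, 0.72]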

Results

Figure 3 shows both accuracy and bad churn of each model on test set 2. We compare the baseline model, the standard model retrain, the incrementally learned model and the RCOP model.


Figure 3: Bad churn versus accuracy on test set 2.

Table 1 summarizes each of these approaches, discussed in detail above.

Name              | Trained on                               | Method                                                   | Total # of trees
Baseline          | train                                    | LightGBM                                                 | 1,000
Standard retrain  | train + FPs/FNs from baseline on test 1  | LightGBM                                                 | 1,100
Incremental model | train + FPs/FNs from baseline on test 1  | Trained 100 new trees, starting from the baseline model  | 1,100
RCOP              | train + FPs/FNs from baseline on test 1  | LightGBM with altered sample weights                     | 1,100

Table 1: A description of the models tested

The baseline model has 100 fewer trees than the other models, which could explain its comparatively lower accuracy. However, increasing the baseline’s tree count resulted in an accuracy gain of less than 0.001%, so the improvement of the non-baseline methods is due to the differences in training data and methods. Both incremental training and RCOP work as expected, producing less churn than the standard retrain while still improving accuracy over the baseline. In general, increasing accuracy correlates with increasing bad churn: there is no free lunch. Accuracy increases because the decision boundary changes, and the greater the improvement, the more the boundary moves. It seems reasonable that larger decision boundary changes correlate with more bad churn, although we see no theoretical justification for why that must always be the case.

Unexpectedly, both the incremental model and RCOP produce more accurate models with less churn than the standard retrain. We would have assumed that, given their additional constraints, both models would trade accuracy for lower churn. The most direct comparison is RCOP versus the standard retrain: the two use identical data sets and model parameters, differing only in the weights associated with each sample. RCOP reduces the weight of samples the baseline model classified incorrectly, and that reduction appears responsible for the improvement in accuracy. A possible explanation for this behavior is mislabeled training data. Multiple authors have suggested identifying and removing points with label noise, often using the misclassifications of a previously trained model to find them. Our scheme, which down-weights those points rather than removing them, is not dissimilar to those noise reduction approaches, which could explain the accuracy improvement.

Conclusion

ML models face an inherent tension: not retraining leaves them vulnerable to new classes of threats, while retraining causes churn and can reintroduce old misclassifications. In this blog post, we discussed two approaches to modifying ML model training in order to reduce churn: incremental model training and churn-aware learning. Both prove effective on the EMBER malware classification data set, reducing bad churn while simultaneously improving accuracy. Finally, we also demonstrated the novel result that reducing churn on a data set with label noise can yield a more accurate model. Overall, these approaches provide low-technical-debt solutions for updating models, allowing data scientists and machine learning engineers to keep their models up to date against the latest cyber threats at minimal cost. At FireEye, our data scientists work closely with FireEye Labs detection analysts to quickly identify misclassifications and use these techniques to reduce the impact of churn on our customers.

Cyber Security: Three Parts Art, One Part Science

As I reflect upon my almost 40 years as a cyber security professional, I think of the many instances where the basic tenets of cyber security—those we think have common understanding—require a lot of additional explanation. For example, what is a vulnerability assessment? If five cyber professionals are sitting around a table discussing this question, you will end up with seven or eight answers. One will say that a vulnerability assessment is vulnerability scanning only. Another will say an assessment is much bigger than scanning, and addresses ethical hacking and internal security testing. Another will say that it is a passive review of policies and controls. All are correct in some form, but the answer really depends on the requirements or criteria you are trying to achieve. And it also depends on the skills and experience of the risk owner, auditor, or assessor. Is your head spinning yet? I know mine is! Hence the “three parts art.”

There is quite a bit of subjectivity in the cyber security business. One auditor will look at evidence and agree you are in compliance; another will say you are not. If you are going to protect sensitive information, do you encrypt it, obfuscate it, or segment it off and place it behind very tight identification and access controls before allowing users to access the data? Yes. As we advise our client base, it is essential that we have all the context necessary to make good risk-based decisions and recommendations.

Let’s talk about Connection’s artistic methodology. We start with a canvas that has the core components of cyber security: protection, detection, and reaction. By addressing each of these three pillars in a comprehensive way, we ensure a full conversation about how people, process, and technology work together to provide a comprehensive risk strategy.

Protection:

People
Users understand threat and risk, and know what role they play in the protection strategy. For example, if you see something, say something. Don’t let someone surf in behind you through a badge check entry. And don’t think about trying to shut off your endpoint anti-virus or firewall.

Process
Policies are established, documented, and socialized. For example, personal laptops should never be connected to the corporate network. Also, don’t send sensitive information to your personal email account so you can work from home.

Technology
Some examples of the barriers used to deter attackers and breaches are edge security with firewalls, intrusion detection and prevention, sandboxing, and advanced threat detection.

Detection:

The mean time to identify an active incident in a network is 197 days. The mean time to contain an incident is 69 days.

People
Incident response teams need to be identified and trained, and all employees need to be trained on the concept of “if you see something, say something.” Detection is a proactive process.

Process
What happens when an alert occurs? Who sees it? What is the documented process for taking action?

Technology
What is in place to ensure you are detecting malicious activity? Is it configured to ignore noise and only alert you of a real event? Will it help you bring that 197-day mean time to detection way down?

Reaction:

People
What happens when an event occurs? Who responds? How do you recover? Does everyone understand their role? Do you War Game to ensure you are prepared WHEN an incident occurs?

Process
What is the documented process to reduce the Kill Chain—the mean time to detect and contain—from 69 days to 69 minutes? Do you have a Business Continuity and Disaster Recovery Plan to ensure the ability to react to a natural disaster or a significant cyber breach such as ransomware, DDoS, or—dare I say it—a pandemic?

Technology
What cyber security consoles have been deployed that allow quick access to patch a system, change a firewall rule, a switch ACL, or a policy setting at an endpoint, or track a security incident through the triage process?

All of these things are important to create a comprehensive InfoSec Program. The science is the technology that will help you build a layered, defense-in-depth approach. The art is how to assess the threat, define and document the risk, and create a strategy that allows you to manage your cyber risk as it applies to your environment, users, systems, applications, data, customers, supply chain, third-party support partners, and business processes.

More Art: Are You a Risk Avoider or Risk Transference Expert?

A better way to state that is, “Do you avoid all risk responsibility or do you give your risk responsibility to someone else?” Hint: I don’t believe in risk avoidance or risk transference.

Yes, there is an art to risk management. There is also science if you use, for example, the Carnegie Mellon risk tools. But a good risk owner and manager documents risk, prioritizes it by criticality, turns it into a risk register or roadmap plan, remediates what is necessary, and accepts what is reasonable from a business and cyber security perspective. Oh, by the way, those same five cyber security professionals we talked about earlier? They have 17 definitions of risk.

As we wrap up this conversation, let’s talk about the importance of selecting a risk framework. It’s kind of like going to a baseball game: the program helps you know the players and the stats. What framework will you pick? Do you paint in watercolors or oils? Are you a National Institute of Standards and Technology (NIST) artist, an International Organization for Standardization (ISO) artist, or have you developed your own framework like the Nardone puzzle chart? I developed that chart several years ago when I was the CTO/CSO of the Commonwealth of Massachusetts. It has been artistically enhanced over the years to incorporate more security components, but it is loosely based on the NIST 800-53 and ISO 27001 standards.

When it comes to selecting a security framework as a CISO, I lean towards the NIST Cyber Security Framework (CSF) pictured below. This framework is comprehensive, and provides a scoring model that allows risk owners to measure and target what risk level they believe they need to achieve based on their business model, threat profile, and risk tolerance. It has five functional focus areas. The ISO 27001 framework is also a very solid and frequently used model. Both of these frameworks can result in a Certificate of Attestation demonstrating adherence to the standard. Many commercial corporations do an annual ISO 27001 assessment for that very reason. More and more are leaning towards the NIST CSF, especially commercial corporations doing work with the government.

The art in cyber security is in the interpretation of the rules, standards, and requirements that are primarily based on a foundation in science in some form. The more experience one has in the cyber security industry, the more effective the art becomes. As a last thought, keep in mind that Connection’s Technology Solutions Group Security Practice has over 150 years of cyber security expertise on tap to apply to that art.


Troubleshooting NSM Virtualization Problems with Linux and VirtualBox

I spent a chunk of the day troubleshooting a network security monitoring (NSM) problem. I thought I would share the problem and my investigation in the hopes that it might help others. The specifics are probably less important than the general approach.

It began with ja3. You may know ja3 as a set of Zeek scripts developed by the Salesforce engineering team to profile client and server TLS parameters.

I was reviewing Zeek logs captured by my Corelight appliance and by one of my lab sensors running Security Onion. I had coverage of the same endpoint in both sensors.

I noticed that the SO Zeek logs did not have ja3 hashes in the ssl.log entries. Both sensors did have ja3s hashes. My first thought was that SO was misconfigured somehow to not record ja3 hashes. I quickly dismissed that, because it made no sense. Besides, verifying that intuition required me to start troubleshooting near the top of the software stack.

I decided to start at the bottom, or close to the bottom. I had a sinking suspicion that, for some reason, Zeek was only seeing traffic sent from remote systems, and not traffic originating from my network. That would account for the creation of ja3s hashes, for traffic sent by remote systems, but not ja3 hashes, as Zeek was not seeing traffic sent by local clients.

I was running SO in VirtualBox 6.0.4 on Ubuntu 18.04. I started sniffing TCP network traffic on the SO monitoring interface using Tcpdump. As I feared, it didn't look right. I ran a new capture with filters for ICMP and a remote IP address. On another system I tried pinging the remote IP address. Sure enough, I only saw ICMP echo replies, and no ICMP echoes. Oddly, I also saw doubles and triples of some of the ICMP echo replies. That worried me, because unpredictable behavior like that could indicate some sort of software problem.

My next step was to "get under" the VM guest and determine if the VM host could see traffic properly. I ran Tcpdump on the Ubuntu 18.04 host on the monitoring interface and repeated my ICMP tests. It saw everything properly. That meant I did not need to bother checking the switch span port that was feeding traffic to the VirtualBox system.

It seemed I had a problem somewhere between the VM host and guest. On the same VM host I was also running an instance of RockNSM. I ran my ICMP tests on the RockNSM VM and, sadly, I got the same one-sided traffic as seen on SO.

Now I was worried. If the problem had only been present in SO, then I could fix SO. If the problem was present in both SO and RockNSM, then it had to be with VirtualBox -- and I might not be able to fix it.

I reviewed my configurations in VirtualBox, ensuring that the "Promiscuous Mode" under the Advanced options was set to "Allow All". At this point I worried that there was a bug in VirtualBox. I did some Google searches and reviewed some forum posts, but I did not see anyone reporting issues with sniffing traffic inside VMs. Still, my use case might have been weird enough to not have been reported.

I decided to try a different approach. I wondered if running VirtualBox with elevated privileges might make a difference. I did not want to take ownership of my user VMs, so I decided to install a new VM and run it with elevated privileges.

Let me stop here to note that I am breaking one of the rules of troubleshooting. I'm introducing two new variables, when I should have introduced only one. I should have built a new VM but run it with the same user privileges with which I was running the existing VMs.

I decided to install a minimal edition of Debian 9, with VirtualBox running via sudo. When I started the VM and sniffed traffic on the monitoring port, lo and behold, my ICMP tests revealed both sides of the traffic as I had hoped. Unfortunately, from this I erroneously concluded that running VirtualBox with elevated privileges was the answer to my problems.

I took ownership of the SO VM in my elevated VirtualBox session, started it, and performed my ICMP tests. Womp womp. Still broken.

I realized I needed to separate the two variables that I had entangled, so I stopped VirtualBox, and changed ownership of the Debian 9 VM to my user account. I then ran VirtualBox with user privileges, started the Debian 9 VM, and ran my ICMP tests. Success again! Apparently elevated privileges had nothing to do with my problem.

By now I was glad I had not posted anything to any user forums describing my problem and asking for help. There was something about the monitoring interface configurations in both SO and RockNSM that prevented the guests from seeing both sides of the traffic (and produced those weird doubles and triples).

I started my SO VM again and looked at the script that configured the interfaces. I commented out all the entries below the management interface as shown below.

$ cat /etc/network/interfaces

# This configuration was created by the Security Onion setup script.
#
# The original network interface configuration file was backed up to:
# /etc/network/interfaces.bak.
#
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# loopback network interface
auto lo
iface lo inet loopback

# Management network interface
auto enp0s3
iface enp0s3 inet static
  address 192.168.40.76
  gateway 192.168.40.1
  netmask 255.255.255.0
  dns-nameservers 192.168.40.1
  dns-domain localdomain

#auto enp0s8
#iface enp0s8 inet manual
#  up ip link set $IFACE promisc on arp off up
#  down ip link set $IFACE promisc off down
#  post-up ethtool -G $IFACE rx 4096; for i in rx tx sg tso ufo gso gro lro; do ethtool -K $IFACE $i off; done
#  post-up echo 1 > /proc/sys/net/ipv6/conf/$IFACE/disable_ipv6

#auto enp0s9
#iface enp0s9 inet manual
#  up ip link set $IFACE promisc on arp off up
#  down ip link set $IFACE promisc off down
#  post-up ethtool -G $IFACE rx 4096; for i in rx tx sg tso ufo gso gro lro; do ethtool -K $IFACE $i off; done
#  post-up echo 1 > /proc/sys/net/ipv6/conf/$IFACE/disable_ipv6

I rebooted the system and brought the enp0s8 interface up manually using this command:

$ sudo ip link set enp0s8 promisc on arp off up

Fingers crossed, I ran my ICMP sniffing tests, and voila, I saw what I needed -- traffic in both directions, without doubles or triples no less.

So, there appears to be some sort of problem with the way SO and RockNSM set parameters for their monitoring interfaces, at least as far as they interact with VirtualBox 6.0.4 on Ubuntu 18.04. You can see in the network script that SO disables a bunch of NIC options. I imagine one or more of them is the culprit, but I didn't have time to work through them individually.
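If you do have time to work through them individually, a rough sketch like the following can help. It is Python calling ethtool via subprocess, assumes root and the enp0s8 interface from the script above, and re-enables the offload options one at a time so you can re-run the ICMP test after each change:

import subprocess

# Offload features the SO script turns off; re-enable one at a time and
# repeat the ICMP sniffing test after each change to isolate the culprit.
features = ["rx", "tx", "sg", "tso", "ufo", "gso", "gro", "lro"]

for feat in features:
    # check=False because some features (e.g. ufo) may be unsupported.
    subprocess.run(["ethtool", "-K", "enp0s8", feat, "on"], check=False)
    input(f"Re-enabled {feat}; run the ICMP test, then press Enter.")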

I tried taking a look at the network script in RockNSM, but it runs CentOS, and I'll be darned if I can figure out where to look. I'm sure it's there somewhere, but I didn't have the time to find it.

The moral of the story is that I should have immediately checked after installation that both SO and RockNSM were seeing both sides of the traffic I expected them to see. I had taken that for granted for many previous deployments, but something broke recently and I don't know exactly what. My workaround will hopefully hold for now, but I need to take a closer look at the NIC options because I may have introduced another fault.
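For what it's worth, that post-install sanity check can be scripted. The sketch below uses Python and Scapy rather than tcpdump (the interface name and test host are placeholders from my environment); while pinging the test host from another system, it counts echo requests and replies seen on the monitoring interface:

from collections import Counter
from scapy.all import sniff, ICMP  # requires root

counts = Counter()

def tally(pkt):
    if ICMP in pkt:
        # type 8 = echo request, type 0 = echo reply
        counts["request" if pkt[ICMP].type == 8 else "reply"] += 1

# Ping the test host from another machine while this runs.
sniff(iface="enp0s8", filter="icmp and host 192.168.40.99",
      prn=tally, store=False, timeout=30)

# Roughly equal counts mean both directions are visible; zero requests
# reproduces the one-sided traffic problem described above.
print(counts)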

A second moral is to be careful of changing two or more variables when troubleshooting. When you do that you might fix a problem, but not know what change fixed the issue.

Finding Weaknesses Before the Attackers Do

This blog post originally appeared as an article in M-Trends 2019.

FireEye Mandiant red team consultants perform objectives-based assessments that emulate real cyber attacks by advanced and nation-state attackers across the entire attack lifecycle, blending into environments and observing how employees interact with their workstations and applications. Assessments like this help organizations identify weaknesses in their current detection and response procedures so they can update their existing security programs to better deal with modern threats.

A financial services firm engaged a Mandiant red team to evaluate the effectiveness of its information security team’s detection, prevention and response capabilities. The key objectives of this engagement were to accomplish the following actions without detection:

  • Compromise Active Directory (AD): Gain domain administrator privileges within the client’s Microsoft Windows AD environment.
  • Access financial applications: Gain access to applications and servers containing financial transfer data and account management functionality.
  • Bypass RSA Multi-Factor Authentication (MFA): Bypass MFA to access sensitive applications, such as the client’s payment management system.
  • Access ATM environment: Identify and access ATMs in a segmented portion of the internal network.

Initial Compromise

Based on Mandiant’s investigative experience, social engineering has become the most common and efficient initial attack vector used by advanced attackers. For this engagement, the red team used a phone-based social engineering scenario to circumvent email detection capabilities and avoid the residual evidence that is often left behind by a phishing email.

While performing open-source intelligence (OSINT) reconnaissance of the client’s Internet-facing infrastructure, the red team discovered an Outlook Web App login portal hosted at https://owa.customer.example. The red team registered a look-alike domain (https://owacustomer.example) and cloned the client’s login portal (Figure 1).


Figure 1: Cloned Outlook Web Portal

After the OWA portal was cloned, the red team identified IT helpdesk and employee phone numbers through further OSINT. Once these phone numbers were gathered, the red team used a publicly available online service to call the employees while spoofing the phone number of the IT helpdesk.

Mandiant consultants posed as helpdesk technicians and informed employees that their email inboxes had been migrated to a new company server. To complete the “migration,” the employee would have to log into the cloned OWA portal. To avoid suspicion, employees were immediately redirected to the legitimate OWA portal once they authenticated. Using this campaign, the red team captured credentials from eight employees, which could be used to establish a foothold in the client’s internal network.

Establishing a Foothold

Although the client’s virtual private network (VPN) and Citrix web portals implemented MFA that required users to provide a password and RSA token code, the red team found a single-factor bring-your-own-device (BYOD) portal (Figure 2).


Figure 2: Single factor mobile device management portal

Using stolen domain credentials, the red team logged into the BYOD web portal to attempt enrollment of an Android phone for CUSTOMER\user0. While the red team could view user settings, they were unable to add a new device. To bypass this restriction, the consultants downloaded the IBM MaaS360 Android app and logged in via their phone. The device configuration process installed the client’s VPN certificate (Figure 3), which was automatically imported to the Cisco AnyConnect app—also installed on the phone.


Figure 3: Setting up mobile device management

After launching the AnyConnect app, the red team confirmed the phone received an IP address on the client’s VPN. Using a generic tethering app from the Google Play store, the red team then tethered a laptop to the phone to access the client’s internal network.

Escalating Privileges

Once connected to the internal network, the red team used the Windows “runas” command to launch PowerShell as CUSTOMER\user0 and perform a “Kerberoast” attack. Kerberoasting abuses legitimate features of Active Directory to retrieve service accounts’ ticket-granting service (TGS) tickets and brute-force accounts with weak passwords.

To perform the attack, the red team queried an Active Directory domain controller for all accounts with a service principal name (SPN). The typical Kerberoast attack would then request a TGS for the SPN of each associated user account. While Kerberos ticket requests are common, the default Kerberoast attack tooling generates an increased volume of requests, which is anomalous and could be flagged as suspicious. Using a keyword search for terms such as “Admin,” “SVC,” and “SQL,” the consultants identified 18 potentially high-value accounts. To avoid detection, the red team retrieved tickets only for this targeted subset of accounts and inserted random delays between each request. The Kerberos tickets for these accounts were then uploaded to a Mandiant password-cracking server, which successfully brute-forced the passwords of 4 of the 18 accounts within 2.5 hours.
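To illustrate the targeting step, here is a short sketch using Python's ldap3 library. The hostnames, credentials, and the request_tgs placeholder are hypothetical, and this is not the red team's actual tooling:

import random
import time
from ldap3 import Server, Connection, NTLM, SUBTREE

# Hypothetical domain controller and low-privilege credentials.
conn = Connection(Server("dc01.customer.example"),
                  user="CUSTOMER\\user0", password="<password>",
                  authentication=NTLM)
conn.bind()

# Every user account with a service principal name is a Kerberoast candidate.
conn.search("dc=customer,dc=example",
            "(&(objectCategory=user)(servicePrincipalName=*))",
            search_scope=SUBTREE,
            attributes=["sAMAccountName", "servicePrincipalName"])

# Narrow to likely high-value accounts, mirroring the keyword search above.
keywords = ("admin", "svc", "sql")
targets = [e for e in conn.entries
           if any(k in str(e.sAMAccountName).lower() for k in keywords)]

for entry in targets:
    # request_tgs() is a placeholder for an actual TGS request (e.g. via
    # impacket); the random delay avoids an anomalous burst of requests.
    # request_tgs(str(entry.servicePrincipalName))
    time.sleep(random.uniform(30, 120))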

The red team then compiled a list of Active Directory group memberships for the cracked accounts, uncovering several groups that followed the naming scheme {ComputerName}_Administrators. The red team confirmed the accounts possessed local administrator privileges on the specified computers by performing a remote directory listing of \\{ComputerName}\C$. The red team also executed commands on the systems using PowerShell Remoting to gather information about logged-on users and running software. After reviewing this data, the red team identified an endpoint detection and response (EDR) agent capable of in-memory detections that were likely to identify and alert on the execution of suspicious command-line arguments and parent/child process heuristics associated with credential theft.

To avoid detection, the red team created LSASS process memory dumps by using a custom utility executed via WMI. The red team retrieved the LSASS dump files over SMB and extracted cleartext passwords and NTLM hashes using Mimikatz. The red team performed this process on 10 unique systems identified to potentially have active privileged user sessions. From one of these 10 systems, the red team successfully obtained credentials for a member of the Domain Administrators group.

With access to this Domain Administrator account, the red team gained full administrative rights for all systems and users in the customer’s domain. This privileged account was then used to focus on accessing several high-priority applications and network segments to demonstrate the risk of such an attack on critical customer assets.

Accessing High-Value Objectives

For this phase, the client identified their RSA MFA systems, ATM network and high-value financial applications as three critical objectives for the Mandiant red team to target.

Targeting Financial Applications

The red team began this phase by querying Active Directory data for hostnames related to the objectives and found multiple servers and databases that included references to the key financial application. The red team reviewed the files and documentation on the financial application web servers and found an authentication log indicating the following users had accessed the application:

  • CUSTOMER\user1
  • CUSTOMER\user2
  • CUSTOMER\user3
  • CUSTOMER\user4

The red team navigated to the financial application’s web interface (Figure 4) and found that authentication required an “RSA passcode,” clearly indicating access required an MFA token.


Figure 4: Financial application login portal

Bypassing Multi-Factor Authentication

The red team targeted the client’s RSA MFA implementation by searching network file shares for configuration files and IT documentation. In one file share (Figure 5), the red team discovered software migration log files that revealed the hostnames of three RSA servers.


Figure 5: RSA migration logs from \\CUSTOMER-FS01\Software

Next, the red team focused on identifying the user who installed the RSA authentication module. The red team performed a directory listing of the C:\Users and C:\data folders of the RSA servers, finding CUSTOMER\CUSTOMER_ADMIN10 had logged in the same day the RSA agent installer was downloaded. Using these indicators, the red team targeted CUSTOMER\CUSTOMER_ADMIN10 as a potential RSA administrator.


Figure 6: Directory listing output

By reviewing user details, the red team identified that the CUSTOMER\CUSTOMER_ADMIN10 account was actually the privileged account for the corresponding standard user account CUSTOMER\user103. The red team then used PowerView, an open source PowerShell tool, to identify systems in the environment where CUSTOMER\user103 was logged in or had recently logged in (Figure 7).


Figure 7: Running the PowerView Invoke-UserHunter command

The red team harvested credentials from the LSASS memory of 10.1.33.133 and successfully obtained the cleartext password for CUSTOMER\user103 (Figure 8).


Figure 8: Mimikatz output

The red team used the credential for CUSTOMER\user103 to login, without MFA, to the web front-end of the RSA security console with administrative rights (Figure 9).


Figure 9: RSA console

Many organizations have audit procedures to monitor for the creation of new RSA tokens, so the red team decided the stealthiest approach would be to provision an emergency tokencode. However, since the client was using software tokens, the emergency tokens still required a user’s RSA SecurID PIN. The red team decided to target individual users of the financial application and attempt to discover an RSA PIN stored on their workstation.

While the red team knew which users could access the financial application, they did not know the system assigned to each user. To identify these systems, the red team targeted the users through their inboxes. The red team set a malicious Outlook homepage for the financial application user CUSTOMER\user1 through MAPI over HTTP using the Ruler utility. This ensured that whenever the user reopened Outlook on their system, a backdoor would launch.

Once CUSTOMER\user1 had re-launched Outlook and their workstation was compromised, the red team began enumerating installed programs on the system and identified that the target user used KeePass, a common password vaulting solution.

The red team performed an attack against KeePass to retrieve the contents of the password database without the master password by adding a malicious event trigger to the KeePass configuration file (Figure 10). With this trigger, the next time the user opened KeePass, a comma-separated values (CSV) file containing all passwords in the KeePass database was created, and the red team was able to retrieve the export from the user’s roaming profile.


Figure 10: Malicious configuration file

One of the entries in the resulting CSV file was login credentials for the financial application, which included not only the application password, but also the user’s RSA SecurID PIN. With this information the red team possessed all the credentials needed to access the financial application.

The red team logged into the RSA Security Console as CUSTOMER\user103 and navigated to the user record for CUSTOMER\user1. The red team then generated an online emergency access token (Figure 11). The token was configured so that the next time CUSTOMER\user1 authenticated with their legitimate RSA SecurID PIN + tokencode, the emergency access code would be disabled. This was done to remain covert and mitigate any impact to the user’s ability to conduct business.


Figure 11: Emergency access token

The red team then successfully authenticated to the financial application with the emergency access token (Figure 12).


Figure 12: Financial application accessed with emergency access token

Accessing ATMs

The red team’s final objective was to access the ATM environment, located on a separate network segment from the primary corporate domain. First, the red team prepared a list of high-value users by querying the member list of potentially relevant groups such as ATM_Administrators. The red team then searched all accessible systems for recent logins by these targeted accounts and dumped their passwords from memory.

After obtaining a password for ATM administrator CUSTOMER\ADMIN02, the red team logged into the client’s internal Citrix portal to access the employee’s desktop. The red team reviewed the administrator’s documentation and determined the client’s ATMs could be accessed through a server named JUMPHOST01, which connected the corporate and ATM network segments. The red team also found a bookmark saved in Internet Explorer for “ATM Management.” While this link could not be accessed directly from the Citrix desktop, the red team determined it would likely be accessible from JUMPHOST01.

The jump server enforced MFA for users attempting to RDP into the system, so the red team used a previously compromised domain administrator account, CUSTOMER\ADMIN01, to execute a payload on JUMPHOST01 through WMI. WMI does not support MFA, so the red team was able to establish a connection between JUMPHOST01 and the red team’s CnC server, create a SOCKS proxy, and access the ATM Management application without an RSA PIN. The red team successfully authenticated to the ATM Management application and could then dispense money, add local administrators, install new software, and execute commands with SYSTEM privileges on all ATM machines (Figure 13).


Figure 13: Executing commands on ATMs as SYSTEM

Takeaways: Multi-Factor Authentication, Password Policy and Account Segmentation

Multi-Factor Authentication

Mandiant experts have seen a significant uptick in the number of clients securing their VPN or remote access infrastructure with MFA. However, there is frequently a lack of MFA for applications being accessed from within the internal corporate network. Therefore, FireEye recommends that customers enforce MFA for all externally accessible login portals and for any sensitive internal applications.

Password Policy

During this engagement, the red team compromised four privileged service accounts due to the use of weak passwords that could be quickly brute-forced. FireEye recommends that customers enforce strong password practices for all accounts. Customers should enforce a minimum of 20-character passwords for service accounts. When possible, customers should also use Microsoft Managed Service Accounts (MSAs) or enterprise password vaulting solutions to manage privileged users.

Account Segmentation

Once the red team obtained initial access to the environment, they were able to escalate privileges in the domain quickly due to a lack of account segmentation. FireEye recommends customers follow the “principle of least-privilege” when provisioning accounts. Accounts should be separated by role so normal users, administrative users and domain administrators are all unique accounts even if a single employee needs one of each. 

Normal user accounts should not be given local administrator access without a documented business requirement. Workstation administrators should not be allowed to log in to servers and vice versa. Finally, domain administrators should only be permitted to log in to domain controllers, and server administrators should not have access to those systems. By segmenting accounts in this way, customers can greatly increase the difficulty of an attacker escalating privileges or moving laterally from a single compromised account.

Conclusion

As demonstrated in this case study, the Mandiant red team was able to gain a foothold in the client’s environment, obtain full administrative control of the company domain and compromise all critical business applications without any software or operating system exploits. Instead, the red team focused on identifying system misconfigurations, conducting social engineering attacks and using the client’s internal tools and documentation. The red team was able to achieve their objectives due to the configuration of the client’s MFA, service account password policy and account segmentation.

Exploitation of Critical Cisco ASA Vulnerability

The ACSC has become aware of a change in the threat situation surrounding the recently announced Cisco ASA critical remote code execution vulnerability. Proof of concept code is now available which results in a denial of service condition on targeted vulnerable devices. Cisco first released a security advisory on 29 January detailing the vulnerability and affected devices but has since identified additional attack vectors and released additional, more comprehensive patches.