"I am all about useful tools. One of my mottos is 'the right tool for the right job.'" –Martha Stewart
If your "right job" involves wrangling computer networks, figuring out how to do digital things effectively and efficiently, or diagnosing why digital things aren't working as they're supposed to, you've got your hands full. Not only does your job evolve incredibly quickly, becoming ever more complex, but whatever tools you use need frequent updating or replacing to keep pace. That's what we're here for: to help in your quest for the right tools.
We've done several roundups of free network tools in the past, and since the last one, technology has, if anything, sped up even more. To help you keep up, we've compiled a new shortlist of seven of the most useful tools that you should add to your toolbox.
Researched and written by Donny Maasland and Rindert Kramer
During penetration tests we sometimes encounter servers running software that uses sensitive information as part of the underlying process, such as Microsoft’s System Center Orchestrator.
According to Microsoft, Orchestrator is a workflow management solution for data centers and can be used to automate the creation, monitoring and deployment of resources in your environment [1]. This blog post covers the encryption aspect of Orchestrator and new tools to decrypt sensitive information that is stored in the Orchestrator database.
Orchestrator, variables, encryption and SQL
In Orchestrator, it is possible to create variables that can be used in runbooks. One of the possibilities is to store credentials in these variables. These variables can then be used to authenticate with other systems. Runbooks can use these variables to create an authenticated session towards the target system and run all the steps that are defined in the runbook in the context of the credentials that are specified in the variable.
Sensitive information, such as passwords, can be protected by using encrypted variables. The contents of these variables are stored encrypted in the database when they are created and are decrypted when they are used in runbooks. The picture below displays the dialog to create an encrypted variable.
Orchestrator uses the internal encryption functionality of Microsoft SQL Server [2] (MSSQL). The decryption keys are stored in the SYS database and have to be loaded into the SQL session in order to decrypt data.
To decrypt the data, we need the encrypted content first. The following query returns the encrypted content:
SELECT VARIABLES.value, objects.Name FROM VARIABLES INNER JOIN OBJECTS ON OBJECTS.UniqueID = VARIABLES.UniqueID;
If there are secret variables stored in the database, this will result in encrypted data, such as:
The data between the \`d.T.~De/ markers is the data we are interested in, which leaves us with the following string:
Please note that the \`d.T.~De/ value might differ per data type.
Since this data is encrypted, the decryption key needs to be loaded into the SQL session. To do this, we open an SQL session and run the following query:
OPEN SYMMETRIC KEY ORCHESTRATOR_SYM_KEY DECRYPTION BY ASYMMETRIC KEY ORCHESTRATOR_ASYM_KEY;
This will load the decryption key into the SQL session.
Now we run this string through the decryptbykey function in MSSQL [3] to decrypt the content with the encryption key that was loaded earlier in the SQL session. If successful, this will result in a varbinary object that we need to convert to nvarchar for human-readable output.
The complete SQL query will look like this:
SELECT convert(nvarchar, decryptbykey(0x00F04DA615688A4C96C2891105226AE90100000059A187C285E8AC6C1090F48D0BFD2775165F9558EAE37729DA43BE92AD133CF697D2C5CC1E6E27E534754099780A0362C794C95F3747A1E65E869D2D43EC3597));
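When decrypting many variables, the query text itself can be generated per ciphertext. The helper below is a hypothetical sketch that only formats the T-SQL shown above; it assumes the symmetric key has already been opened in the session with the OPEN SYMMETRIC KEY statement, and executing the result still requires a live MSSQL session:

```python
def build_decrypt_query(hex_ciphertext):
    """Build the DECRYPTBYKEY query for one encrypted variable.

    Illustrative helper, not part of the Fox-IT script: it only assembles
    the query string for a hex-encoded ciphertext (without the 0x prefix).
    """
    if not hex_ciphertext or not all(
        c in "0123456789abcdefABCDEF" for c in hex_ciphertext
    ):
        raise ValueError("ciphertext must be a non-empty hex string")
    return "SELECT convert(nvarchar, decryptbykey(0x{}));".format(hex_ciphertext)
```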
Executing this query will return the unencrypted value of the variable, as can be seen in the following screenshot.
Automating the process
Fox-IT created a script that automates this process. The script queries the secrets and decrypts them automatically. The result of the script can be seen in the screenshot below.
The script can be downloaded from Fox-IT’s GitHub repository: https://github.com/fox-it/Decrypt-OrchestratorSecretVariables
Fox-IT also wrote a Metasploit post module to run this script through a Meterpreter session.
The Metasploit module supports integrated login. Optionally, it is possible to use MSSQL authentication by specifying the username and password parameters. By default, the script will use the ‘Orchestrator’ database, but it is also possible to specify another database with the database parameter. Fox-IT submitted a pull request to add this module to Metasploit, so hopefully the module will be available soon. The pull request can be found here: https://github.com/rapid7/metasploit-framework/pull/9929
During the RSA 2018 conference, Lastline launched Breach Defender, a new solution to facilitate the analysis of suspicious anomalies in monitored networks. As part of our internal product QA leading up to any release, we often coordinate with our partners to carry out tests on real data. During our most recent iteration, we happened to detect a port scan within the network of one of our customers (you can see a screenshot of the UI in Figure 1; the orange node represents the event). Normally we tend to gloss over port scans (though we still generate an informational event), as they are often used as part of network security policy to identify hosts running unexpected services. Overall, they are often part of the background noise, and most commonly they are just used to decorate network activity maps.
Not Your Typical Port Scan
What was unusual in this instance was some additional suspicious activity related to rogue and malformed FTP connections (see the “Suspicious Network Interaction/FTP Based Covert Data Channel” node in Figure 1, click to enlarge). Although quite an old protocol, FTP is still frequently used to exfiltrate data (see the HawkEye keylogger for example). However, a malformed FTP connection can simply be caused by a poorly implemented client. We quickly ruled out this possibility as soon as we noticed how the events were clearly overlapping and involving the very same internal host that had launched the port scan. As visible from the graph, the very same external hosts were also the target/destination of both the port scan and the malformed FTP connection.
It definitely looked like a local host was actively looking for a way to exfiltrate data.
Analyzing the Traffic
It was time to analyze the traffic in a bit more detail. As we dug deeper into the information at our disposal, more and more suspicious inconsistencies surfaced.
First, as displayed in Figure 2 (click to enlarge), our heuristics flagged the hosts as running multiple operating systems. The heuristics build upon network indicators such as user agents or remote endpoints to infer information on the software configuration of each host. The fact that the very same host appears as running two different mobile operating systems (iOS and Android) is unusual and suggests that at least some of the network activities are spoofed. For instance, an iOS application may be hardcoding an Android user agent in its HTTP requests.
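The gist of such a heuristic can be sketched in a few lines. This is a simplified illustration, not Lastline's implementation; the patterns and the input shape are assumptions:

```python
import re
from collections import defaultdict

# Illustrative sketch: infer an OS family from each observed User-Agent
# and flag hosts that appear to run more than one mobile OS at once.
# The patterns below are simplified assumptions, not production rules.
OS_PATTERNS = {
    "ios": re.compile(r"iPhone|iPad|iOS"),
    "android": re.compile(r"Android"),
}

def conflicting_hosts(events):
    """events: iterable of (host_ip, user_agent) pairs.

    Returns the set of hosts whose traffic matched more than one OS
    family, a hint that at least some of their activity is spoofed.
    """
    observed = defaultdict(set)
    for host, user_agent in events:
        for os_name, pattern in OS_PATTERNS.items():
            if pattern.search(user_agent):
                observed[host].add(os_name)
    return {host for host, families in observed.items() if len(families) > 1}
```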
Second, the FTP control connection was attempting to store and retrieve the very same file (/home/ftp/db.txt). Note that the username and password appear blank in Figure 2: the raw data contains random binary characters in those fields, which our UI has sanitized. Why would a malicious client want to store and retrieve the same file? Also, the two commands, for uploading and downloading, are being issued at approximately the same time.
Overall, it felt like something was trying hard to make it look like a legitimate FTP interaction, so we started to suspect we were dealing with something very different. Maybe a clumsy attempt to update a shared resource, thereby registering a new infected machine?
To collect further details related to the FTP connections, we queried our backend and sought to select all connections on port 21 outgoing from the internal host that was under investigation. We found 129 connection attempts (to 129 distinct IPs). Of these, only 13 were successful. Every successful connection translated to similar FTP transactions simultaneously attempting to upload and download a resource with the same name.
A quick check on some of the server IPs revealed that they were still responsive. However, attempting to use a normal FTP client to connect led to strange results: the server responses did not match the commands issued by the client. So rather than using a standard client, we switched to a transport level client (the Linux utility netcat) and attempted to deliver manual commands to the server. We managed to replicate the interaction we saw in Figure 2 using netcat. However, when we tried to introduce some variations, it became obvious that the FTP server dialogue, apparently legitimate, was completely scripted: no matter what input the client provided, the server responses were deterministic and “staged.” Figure 3 shows where we “netcat” into the server and type a bunch of random strings, after which the server replies as if the commands were valid.
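The garbage-command experiment generalizes into a simple probe. The sketch below is illustrative, with `send` standing in as a hypothetical callable wrapping the raw netcat-style dialogue (write one line, return the server's reply):

```python
import random
import string

def looks_scripted(send, probes=3):
    """Send random garbage "commands" and check whether the server still
    answers with positive FTP reply codes (2xx/3xx). A well-behaved FTP
    server should reject unknown commands with a 5xx reply, so a server
    that "accepts" everything is likely running a staged dialogue.
    """
    for _ in range(probes):
        garbage = "".join(random.choices(string.ascii_uppercase, k=8))
        reply = send(garbage)
        if len(reply) < 3 or not reply[:3].isdigit() or reply[0] not in "23":
            return False  # garbage was rejected: server behaves normally
    return True  # every garbage line was "accepted": dialogue looks staged
```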
Apparently, the client and server somehow “emulated” an FTP control channel to establish a seemingly legitimate bidirectional connection over the data channel. Once again, this behavior seemed to be indicative of an infected host trying to reach out to a C&C server using a stealthy connection.
From the perspective of C&C activity, the attempt to store and retrieve the same file via the STOR and RETR commands suddenly opens a potentially reasonable explanation. Passive mode FTP transfers dynamically open data channels on separate network flows, where the server port is dynamically decided by the server. If a stateful firewall is present in the network, it will need to support this by reacting to the control channel interactions and open the associated ports accordingly to allow the transfer. A store and retrieve on the same passive channel can then become an attempt to fool a stateful firewall into allowing bidirectional communication on the port opened by the passive mode.
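To see why a stateful firewall must track the control channel, consider how a passive-mode reply is parsed: the server advertises the data endpoint inside the 227 reply, and the firewall has to open exactly that port. A minimal sketch (hypothetical helper, standard PASV encoding):

```python
import re

def pasv_endpoint(reply):
    """Parse a passive-mode reply such as
    '227 Entering Passive Mode (192,168,1,10,19,136)' and return the
    (ip, port) tuple a stateful firewall would have to open. The last
    two numbers encode the data port as p1 * 256 + p2.
    """
    match = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    if not match:
        raise ValueError("not a PASV reply")
    numbers = [int(n) for n in match.groups()]
    ip = ".".join(str(n) for n in numbers[:4])
    port = numbers[4] * 256 + numbers[5]
    return ip, port
```

A firewall that trusts this reply and opens the advertised port for both directions is exactly what the store-and-retrieve trick exploits.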
The FTP traffic was not the only anomaly. Through the Breach Defender user interface in Figure 1 we could pivot to the web requests established during the same time frame by the same host (see Figure 4, click to enlarge). We further correlated the extracted web requests with those available in our backend, giving us a total of 293 connection attempts (towards 293 distinct IPs), of which only 15 were successful.
As shown in Figure 4, the requests were limited to three different hostnames: 8v9m[.]com, www.bing[.]com, and www.intercom[.]com. All web requests were POST, and besides those directed to 8v9m[.]com (which were using a constant and specific path and user agent), each connection was accessing a different resource, each time spoofing the user agent. Not a single DNS resolution was performed for the last two domains. Indeed, despite the HTTP headers indicating connections towards these hosts, the endpoints involved in the interaction were not associated in any way to the hosted infrastructure of these domains.
8v9m[.]com:
- Path: /ClientApi
- User-Agent: Go-http-client/1.1
- Response code: 200

www.bing[.]com:
- Path: 6-char strings (e.g., /r7y9sp, /uhmq3a, /tm5qwn)
- User-Agent: Mozilla/5.0 (iPhone; CPU iPhone OS 10_2_1 like Mac OS X)
- User-Agent: Mozilla/5.0 (Linux; Android 7.0; SM-G9550 Build/NRD90M)
- Response code: mostly 4xx

www.intercom[.]com:
- Path: 6-char strings (e.g., /ye4zkv, /8yakfu, /qzgp6c)
- User-Agent: Mozilla/5.0 (iPhone; CPU iPhone OS 10_2_1 like Mac OS X)
- User-Agent: Mozilla/5.0 (Linux; Android 7.0; SM-G9550 Build/NRD90M)
- Response code: mostly 4xx
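The missing DNS resolutions suggest a simple cross-check between HTTP Host headers and observed lookups. A minimal sketch, where both input shapes are assumptions about the available telemetry:

```python
def spoofed_host_headers(http_requests, resolved_domains):
    """Flag HTTP requests whose Host header never appeared in the client's
    DNS traffic. Illustrative helper, not a product feature:
    http_requests is an iterable of (dst_ip, host_header) pairs, and
    resolved_domains is the set of domains the client actually looked up.
    """
    return {host for _dst, host in http_requests if host not in resolved_domains}
```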
Solving the Mystery
Summarizing the evidence collected so far: we seem to be dealing with something that emulates passive FTP transfers, uploads and downloads data across the generated FTP channels, and generates very suspicious HTTP POST requests. This behavior seems clearly deceptive, and the use of these mechanisms for C&C data exfiltration seems a logical conclusion. But how to move the investigation forward?
We proceeded by gauging the extent of this behavior and started searching for other endpoints connecting to the same hosts (see Figure 5, click to enlarge). It turned out that our original local host was not an isolated case: many other local hosts were exhibiting the very same traffic dynamics, collectively contacting several thousand external IPs, often belonging to the same CIDR blocks. This is when we started considering whether the actual culprit was instead a legitimate application. We searched for the domain names extracted when sifting through all the web requests and, as detailed in a public forum here, we were indeed on the right path. The network footprint matches the behavior of a known VPN client (X-VPN), famous for punching holes through corporate firewalls to evade restrictive local network policies.
The first thing such a client does is connect to a set of IPs on ports assigned to common protocols. This is done to find online and reachable servers (which eventually triggered our port scan alert). The reason why the client abuses the FTP protocol by establishing connections resembling C&C channels is twofold: first, even corporate firewalls often allow connections to the FTP control port 21 (most likely for legacy reasons); and second, unlike normal file transfers, the resulting data channels can be established in either direction, allowing bidirectional dialogue-like interactions.
If FTP connections are filtered or dropped, then the client tries several other protocols, including HTTP, fully explaining the web requests directed to the very same hosts. To further evade advanced policy filtering (for example denying specific operating systems and devices) the client goes even further and spoofs the “Host” and “User-Agent” header fields, a fact we saw in Figure 2.
We were amazed by the rather creative ways modern VPN clients attempt to punch holes through corporate firewalls and establish connections regardless of corporate policy. The high volume of data points generated by these connection attempts clearly shows why tracing network events and producing insights from a corporate network can be quite a challenge even for a trained network engineer, and even when the network is free of malicious activity.
On the other hand, with the right tools in hand, we have also demonstrated that it is indeed possible to pivot easily across multiple information domains and use that information to differentiate security incidents from mere network anomalies. As we showed in this blog post, increased visibility into network events can often reveal organizational policy violations, such as the presence of unexpected or unwanted tools, a common effect of BYOD policies that are only partially enforced.