Daily Archives: January 11, 2019

That’s a Wrap! Read the Top Technology Takeaways From CES 2019

The sun has finally set on The International Consumer Electronics Show (CES) in Las Vegas. Every year, practically everyone in the consumer electronics industry comes from all over to show off the latest and greatest cutting-edge innovations in technology. From flying taxis and self-driving suitcases to robots that will fold your laundry, CES 2019 did not disappoint. Here are some of my main takeaways from the event:

5G is the future

It seems that anyone and everyone who attended the event was talking about 5G. However, there wasn’t exactly a definitive answer to when the service would be available to consumers. According to Forbes, 5G stands for the fifth generation of cellular wireless transmission. And while many companies at CES discussed 5G, the number of products that are actually capable of tapping into the network is minimal. This doesn’t mean we shouldn’t get excited about 5G. The faster connection, speed, and responsiveness of the 5G network will help enable IoT, autonomous driving, and technology that hasn’t even been invented yet.

Gaming gets an upgrade

Gamers everywhere are sure to enjoy the exciting new gadgets that launched this year. From wireless charging grips for the Nintendo Switch to curved monitors for better peripheral vision, tech companies across the board seemed to be creating products to better the gaming experience. In addition to products that are enhancing gamers’ capabilities, we also saw gaming products that are bringing the digital world closer to reality. For example, Holoride partnered with Disney and Audi to create a Guardians of the Galaxy virtual reality (VR) experience for car passengers that mimics the movements of the vehicle.

Optimized IoT devices, AI-driven assistants

This year’s event was filled with new smart home and health IoT technology. Although smart home technology made a big splash at last year’s show, CES 2019 focused on bringing more integrated smart home products to consumers. For example, the AtmosControl touch panel acts as a simplified universal remote so consumers can control all of their gadgets from a single interface. We also saw the Bowflex Intelligent Max, a platform that allows consumers to download an app to complete Bowflex’s fitness assessment and adjust their workout plan based on the results.

Voice assistants seemed to dominate this year’s show, as well. Google and Amazon upped the ante with their use of improved AI technology for the Google Assistant and Amazon Alexa. Not only has Google brought Google Assistant to Google Maps, but they’ve also created a Google Assistant Interpreter Mode that works in more than 20 languages. Not to be shown up, Amazon announced some pretty intriguing Alexa-enabled products as well, including the Ring Door View Cam, a smart shower system called U by Moen, and the Numi 2.0 Intelligent Toilet.

The takeoff of autonomous vehicles

Not only did AI guide new innovations in IoT device technology, but it also paved the way for some futuristic upgrades to vehicles. Mercedes showcased their self-driving car called the Vision Urbanetic, an AI-powered concept vehicle that can hold up to 12 people. BMW created a rider-less motorcycle designed to gather data on how to make motorcycles safer on the road. And we can’t forget about Uber’s futuristic flying taxi, the Bell Nexus, created in partnership with Bell and expected to take flight in 2020.

Cybersecurity’s role in the evolving technological landscape

At McAfee, we understand the importance of securing all of these newfangled IoT gadgets that make their way into consumers’ homes. To do this, we announced the launch of Secure Home Platform voice commands for the Google Assistant, allowing users to keep track of their entire network through one interface.

To reflect the upgrades in gaming technology, we also launched the beta mode of McAfee Gamer Security. Many antivirus solutions are notorious for slowing down PCs, which can really hinder the gaming experience. This security solution, designed for PC gamers, provides a light but mighty layer of protection that optimizes users’ computing resources.

If there’s one thing we took away from this year’s event, it’s that technological innovations won’t be slowing down any time soon. With all of these new advancements and greater connectivity comes the need for increased cybersecurity protection. All in all, CES 2019 showed us that as software and hardware continue to improve and develop, cybersecurity will also adapt to the needs of everyday consumers.

Stay on top of the latest consumer and mobile security threats by following @McAfee_Home on Twitter, listening to our podcast Hackable?, and ‘Liking’ us on Facebook.

The post That’s a Wrap! Read the Top Technology Takeaways From CES 2019 appeared first on McAfee Blogs.

Moving to marcoramilli.com

After more than 10 years on this amazing platform, I have decided to move to a professional blogging platform. I've reached hundreds of thousands of awesome professionals, with thousands of readers per day. I need a more sophisticated platform, one able to manage content and graphically flexible enough to host my new content on cybersecurity.

I've set up a simple client-side meta-redirect so that your browser automatically redirects to my new domain (https://marcoramilli.com). If your plugins block my "redirector," please visit www.marcoramilli.com for fresh new content. If you follow via a feed reader or email, please update your feeds/email to my new address.

See you at my new web corner, and thank you for following me!

PHA Family Highlights: Zen and its cousins

Posted by Lukasz Siewierski, Android Security & Privacy Team
Google Play Protect detects Potentially Harmful Applications (PHAs), which it defines as any mobile app that poses a potential security risk to users or to user data (commonly referred to as "malware"), in a variety of ways, such as static analysis, dynamic analysis, and machine learning. While our systems are great at automatically detecting and protecting against PHAs, we believe the best security comes from the combination of automated scanning and skilled human review.
With this blog series we will be sharing our research analysis with the research and broader security community, starting with the PHA family Zen. Zen uses root permissions on a device to automatically enable a service that creates fake Google accounts. These accounts are created by abusing accessibility services. Zen apps gain root permissions from a rooting trojan in their infection chain. In this blog post, we do not differentiate between the rooting component and the component that abuses root: we refer to them interchangeably as Zen. We also describe apps that we think come from the same author or group of authors. All of the PHAs mentioned in this blog post were detected and removed by Google Play Protect.


Uncovering PHAs takes a lot of detective work, and unraveling the mystery of how they're possibly connected to other apps takes even more. PHA authors usually try to hide their tracks, so attribution is difficult. Sometimes, we can attribute different apps to the same author based on small, unique pieces of evidence that suggest similarity, such as a repetition of an exceptionally rare code snippet, asset, or a particular string in the debug logs. Every once in a while, authors leave behind a trace that allows us to attribute not only similar apps, but also multiple different PHA families, to the same group or person.
However, the actual timeline of the creation of different variants is unclear. In April 2013, we saw the first sample, which made heavy use of dynamic code loading (i.e., fetching executable code from remote sources after the initial app is installed). Dynamic code loading makes it impossible to state what kind of PHA it was. This sample displayed ads from various sources. More recent variants blend rooting capabilities and click fraud. As rooting exploits on Android become less prevalent and lucrative, PHA authors adapt their abuse or monetization strategy to focus on tactics like click fraud.
This post doesn't follow the chronological evolution of Zen, but instead covers relevant samples from least to most complex.

Apps with a custom-made advertisement SDK

The simplest PHA from the author's portfolio used a specially crafted advertisement SDK to create a proxy for all ads-related network traffic. By proxying all requests through a custom server, the real source of ads is opaque. This example shows one possible implementation of this technique.

This approach allows the authors to combine ads from third-party advertising networks with ads they created for their own apps. It may even allow them to sell ad space directly to application developers. The advertisement SDK also collects statistics about clicks and impressions to make it easier to track revenue. Selling the ad traffic directly or displaying ads from other sources in a very large volume can provide direct profit to the app author from the advertisers.
We have seen two types of apps that use this custom-made SDK. The first are games of very low quality that mimic the experience of popular mobile games. While the counterfeit games claim to provide similar functionality to the popular apps, they are simply used to display ads through a custom advertisement SDK.
The second type of apps reveals an evolution in the author's tactics. Instead of implementing very basic gameplay, the authors pirated and repackaged the original game in their app and bundled with it their advertisement SDK. The only noticeable difference is the game has more ads, including ads on the very first screen.
In all cases, the ads are used to convince users to install other apps from different developer accounts, but written by the same group. Those apps use the same techniques to monetize their actions.

Click fraud apps

The authors' tactics evolved from advertisement spam to real PHA (Click Fraud). Click fraud PHAs simulate user clicks on ads instead of simply displaying ads and waiting for users to click them. This allows the PHA authors to monetize their apps more effectively than through regular advertising. This behavior negatively impacts advertisement networks and their clients because advertising budget is spent without acquiring real customers, and impacts user experience by consuming their data plan resources.
The click fraud PHA requests a URL to the advertising network directly instead of proxying it through an additional SDK. The command & control server (C&C server) returns the URL to click along with a very long list of additional parameters in JSON format. After rendering the ad on the screen, the app tries to identify the part of the advertisement website to click. If that part is found, the app loads Javascript snippets from the JSON parameters to click a button or other HTML element, simulating a real user click. Because a user interacting with an ad often leads to a higher chance of the user purchasing something, ad networks often "pay per click" to developers who host their ads. Therefore, by simulating fraudulent clicks, these developers are making money without requiring a user to click on an advertisement.
This example code shows a JSON reply returned by the C&C server. It has been shortened for brevity.
"data": [{
"id": "107",
"url": "<ayud_url>",
"click_type": "2",
"keywords_js": [{
"keyword": "<a class=\"show_hide btnnext\"",
"js": "javascript:window:document.getElementsByClassName(\"show_hide btnnext\")[0].click();",
"keyword": "value=\"Subscribe\" id=\"sub-click\"",
"js": "javascript:window:document.getElementById(\"sub-click\").click();"
Based on this JSON reply, the app looks for an HTML snippet that corresponds to the active element (show_hide btnnext) and, if found, the Javascript snippet tries to perform a click() method on it.
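This matching logic can be sketched in a few lines of Python (the helper name and the sample reply below are illustrative, not the PHA's actual code): scan the rendered ad HTML for the first configured keyword and return the JavaScript snippet the client would inject.

```python
def pick_click_js(reply, ad_html):
    # Walk the C&C reply: for each ad entry, find the first keyword that
    # appears in the rendered ad HTML and return its click-simulation JS.
    for entry in reply["data"]:
        for kw in entry.get("keywords_js", []):
            if kw["keyword"] in ad_html:
                return kw["js"]
    return None  # no known element on this ad page

# Sample reply mirroring the structure shown above (shortened).
reply = {"data": [{"id": "107", "keywords_js": [
    {"keyword": 'value="Subscribe" id="sub-click"',
     "js": 'javascript:window:document.getElementById("sub-click").click();'},
]}]}
print(pick_click_js(reply, '<input type="submit" value="Subscribe" id="sub-click">'))
```

A real client would then evaluate the returned snippet inside the ad's WebView.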

Rooting trojans

The Zen authors have also created a rooting trojan. Using a publicly available rooting framework, the PHA attempts to root devices and gain persistence on them by reinstalling itself on the system partition of rooted device. Installing apps on the system partition makes it harder for the user to remove the app.
This technique only works for unpatched devices running Android 4.3 or lower. Devices running Android 4.4 and higher are protected by Verified Boot.
Zen's rooting trojan apps target a specific device model with a very specific system image. After achieving root access, the app tries to replace the framework.jar file on the system partition. Replacing framework.jar allows the app to intercept and modify the behavior of the Android standard API. In particular, these apps try to add an additional method called statistics() into the Activity class. When inserted, this method runs every time any Activity object in any Android app is created. This happens all the time in regular Android apps, as Activity is one of the fundamental Android UI elements. The only purpose of this method is to connect to the C&C server.

The Zen trojan

After achieving persistence, the trojan downloads additional payloads, including another trojan called Zen. Zen requires root to work correctly on the Android operating system.
The Zen trojan uses its root privileges to turn on accessibility service (a service used to allow Android users with disabilities to use their devices) for itself by writing to a system-wide setting value enabled_accessibility_services. Zen doesn't even check for the root privilege: it just assumes it has it. This leads us to believe that Zen is just part of a larger infection chain. The trojan implements three accessibility services directed at different Android API levels and uses these accessibility services, chosen by checking the operating system version, to create new Google accounts. This is done by opening the Google account creation process and parsing the current view. The app then clicks the appropriate buttons, scrollbars, and other UI elements to go through account sign-up without user intervention.
During the account sign-up process, Google may flag the account creation attempt as suspicious and prompt the app to solve a CAPTCHA. To get around this, the app then uses its root privilege to inject code into the Setup Wizard, extract the CAPTCHA image, and sends it to a remote server to try to solve the CAPTCHA. It is unclear if the remote server is capable of solving the CAPTCHA image automatically or if this is done manually by a human in the background. After the server returns the solution, the app enters it into the appropriate text field to complete the CAPTCHA challenge.
The Zen trojan does not implement any kind of obfuscation except for one string that is encoded using Base64 encoding. It's one of the strings - "How you'll sign in" - that it looks for during the account creation process. The code snippet below shows part of the screen parsing process.
if (!title.containsKey("Enter the code")) {
    if (!title.containsKey("Basic information")) {
        if (!title.containsKey(new String(android.util.Base64.decode("SG93IHlvdeKAmWxsIHNpZ24gaW4=".getBytes(), 0)))) {
            if (!title.containsKey("Create password")) {
                if (!title.containsKey("Add phone number")) {
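The single obfuscated string can be checked directly; note that the decoded text contains a U+2019 right single quotation mark rather than an ASCII apostrophe, matching the string shown on Google's account creation screen:

```python
import base64

# Decode the one Base64-obfuscated string from the snippet above.
decoded = base64.b64decode("SG93IHlvdeKAmWxsIHNpZ24gaW4=").decode("utf-8")
print(decoded)  # How you’ll sign in
```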

Apart from injecting code to read the CAPTCHA, the app also injects its own code into the system_server process, which requires root privileges. This indicates that the app tries to hide from anti-PHA systems that look for a specific app process name or that lack the ability to scan the memory of the system_server process.
The app also creates hooks to prevent the phone from rebooting or going to sleep and to prevent the user from pressing hardware buttons during the account creation process. These hooks are created using root access and custom native code called Lmt_INJECT, although the injection algorithm itself is well known.
First, the app has to turn off SELinux protection. Then the app finds a process id value for the process it wants to inject with code. This is done using a series of syscalls as outlined below. The "source process" refers to the Zen trojan running as root, while the "target process" refers to the process to which the code is injected and [pid] refers to the target process pid value.
  1. The source process checks the mapping between a process id and a process name. This is done by reading the /proc/[pid]/cmdline file.
    This very first step fails on Android 7.0 and higher, even with root permission. The /proc filesystem is now mounted with the hidepid=2 parameter, which means that a process cannot access another process's /proc/[pid] directory.
  2. A ptrace_attach syscall is called. This allows the source process to trace the target.
  3. The source process looks at its own memory to calculate the offset between the beginning of the libc library and the mmap address.
  4. The source process reads /proc/[pid]/maps to find where libc is located in the target process memory. By adding the previously calculated offset, it can get the address of the mmap function in the target process memory.
  5. The source process tries to determine the location of dlopen, dlsym, and dlclose functions in the target process. It uses the same technique as it used to determine the offset to the mmap function.
  6. The source process writes the native shellcode into the memory region allocated by mmap. Additionally, it also writes addresses of dlopen, dlsym, and dlclose into the same region, so that they can be used by the shellcode. Shellcode simply uses dlopen to open a .so file within the target process and then dlsym to find a symbol in that file and run it.
  7. The source process changes the registers in the target process so that PC register points directly to the shellcode. This is done using the ptrace syscall.
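Step 4 of the sequence above can be sketched in Python (Linux-only, and a simplification of what the trojan does in native code; field layout per proc(5)):

```python
import os

def libc_base(pid):
    # Return the start address of the first executable libc mapping in a
    # process, parsed from /proc/[pid]/maps (Linux only). Each maps line is:
    # address perms offset dev inode pathname
    with open(f"/proc/{pid}/maps") as maps:
        for line in maps:
            fields = line.split()
            if len(fields) >= 6 and "libc" in fields[-1] and "x" in fields[1]:
                return int(fields[0].split("-")[0], 16)
    return None

# Harmless demonstration against our own process (None off-Linux).
base = libc_base(os.getpid()) if os.path.exists("/proc/self/maps") else None
```

The injector repeats the same parse on its own maps and on the target's, and subtracts the two libc bases to translate function addresses between the processes.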
This diagram illustrates the whole process.


PHA authors go to great lengths to come up with increasingly clever ways to monetize their apps.
Zen family PHA authors exhibit a wide range of techniques, from simply inserting an advertising SDK to a sophisticated trojan. The app that resulted in the largest number of affected users was the click fraud version, which was installed over 170,000 times at its peak in February 2018. The most affected countries were India, Brazil, and Indonesia. In most cases, these click fraud apps were uninstalled by the users, probably due to the low quality of the apps.
If Google Play Protect detects one of these apps, it will show a warning to users.
We are constantly on the lookout for new threats and we are expanding our protections. Every device with Google Play includes Google Play Protect and all apps on Google Play are automatically and periodically scanned by our solutions.
You can check the status of Google Play Protect on your device:
  1. Open your Android device's Google Play Store app.
  2. Tap Menu>Play Protect.
  3. Look for information about the status of your device.

Hashes of samples

Type Package name SHA256 digest
Custom ads com.targetshoot.zombieapocalypse.sniper.zombieshootinggame 5d98d8a7a012a858f0fa4cf8d2ed3d5a82937b1a98ea2703d440307c63c6c928
Click fraud com.counterterrorist.cs.elite.combat.shootinggame 84672fb2f228ec749d3c3c1cb168a1c31f544970fd29136bea2a5b2cefac6d04
Rooting trojan com.android.world.news bd233c1f5c477b0cc15d7f84392dab3a7a598243efa3154304327ff4580ae213
Zen trojan com.lmt.register eb12cd65589cbc6f9d3563576c304273cb6a78072b0c20a155a0951370476d8d

Threat Actor “Cold River”: Network Traffic Analysis and a Deep Dive on Agent Drable

Executive Summary

While reviewing some network anomalies, we recently uncovered Cold River, a sophisticated threat actor making malicious use of DNS tunneling for command and control activities. We have been able to decode the raw traffic in command and control, find sophisticated lure documents used in the campaign, connect other previously unknown samples, and associate a number of legitimate organizations whose infrastructure is referenced and used in the campaign.

The campaign targets Middle Eastern organizations, largely in Lebanon and the United Arab Emirates, though Indian and Canadian companies with interests in those countries are also targeted. There are new TTPs used in this attack – for example, Agent_Drable leverages the Django python framework for its command and control infrastructure, the technical details of which are outlined later in the blog.

We are not sure which threat actor, or proxy of a threat actor, is behind the campaign. The campaign uses previously undiscovered toolcraft, and we speculate that the use of right-to-left languages influenced the hardcoded name "Agent_Drable" in the implant. The name references the 2007 conflict between the Lebanese army and militants at the "Nahr Elbard" Palestinian refugee camp ("Nahr Elbard" being a transliteration of Nahr el-Bared). The English translation of Nahr Elbard is "Cold River."

In short, “Cold River” is a sophisticated threat that utilizes DNS subdomain hijacking, certificate spoofing, and covert tunneled command and control traffic in combination with complex and convincing lure documents and custom implants.

Note: the campaign described in this blog post has also been covered by Talos and CERT-OPMD, whereas the underpinning DNS hijacking attacks have recently been described in detail by FireEye in this article.

MalDoc Droppers

Two malicious Word documents were found, differing only in the decoy content (same VBA macro, same payload). The first sample is an empty but nonetheless weaponized document (Figure 1).

Figure 1: Screenshot of the weaponized empty document, (sha1: 1f007ab17b62cca88a5681f02089ab33adc10eec)


The second is a legitimate HR document from the Suncor company to which the attackers added the malicious payload and VBA macro (Figure 2).

Figure 2: Screenshot of the HR document from Suncor, (sha1: 9ea865e000e3e15cec15efc466801bb181ba40a1)


While gathering open-source intelligence about the callback domain 0ffice36o[.]com, we found a reference on Twitter to a potentially linked document (see Figure 3), although that document did not contain the same payload. The person behind this Twitter account may have attached the wrong document.

Figure 3: Tweet referencing a third document: https://twitter.com/KorbenD_Intel/status/1053037793012781061

The timestamps listed in Table 1 tend to confirm the hypothesis that the Suncor document is a legitimate document which was weaponized: the creation date is old enough, and the last save matches the timeframe of the campaign. The empty document is most likely the one used to test the macro or to deliver the payload in an environment not related to Suncor.

SHA1 Description Creation Time Last Saved Time
1f007ab17b62cca88a5681f02089ab33adc10eec Empty doc 2018-10-05 07:10:00 2018-10-15 02:59:00
9ea865e000e3e15cec15efc466801bb181ba40a1 Suncor decoy 2012-06-07 18:25:00 2018-10-15 22:22:00

Table 1: Malicious documents and related metadata.

For a more global timeline of the documents and their payloads, please refer to Figure 4.

Behavior Analysis

The VBA macro is basic but efficient. It is split into two components, one executing when the document is opened and the other when the document is closed. The actual payload is not stored directly in the VBA code, but instead hidden in a form within the document.

When opening the Suncor document, the user must enable macro execution to actually see its content, which makes the macro-activation prompt appear legitimate to an average user. The only additional obfuscation is the use of string concatenation, such as "t" & "mp", "Microsoft.XML" & "DOM", "userp" & "rofile", etc.

The malicious macro contains some basic anti-sandboxing code, checking to see if a mouse is available on the computer using the API Application.MouseAvailable. Overall, the logic of the macro is the following:

At document opening:

  • Check if Environ("userprofile")\.oracleServices\svshost_serv.exe exists.
  • If yes, stop. If no, continue.
  • Create the directory Environ("userprofile")\.oracleServices if it does not exist.
  • Fetch the base64 encoded payload stored in UserForm1.Label1.Caption.
  • Decode and write it into Environ("userprofile")\.oracleServices\svshost_serv.doc.
  • Reveal the document content.

At document close:

  • Rename the dropped “svshost_serv.doc” file as “svshost_serv.exe”.
  • Create a scheduled task that runs the EXE file every minute, named “chrome updater”.

One last interesting thing to note is that the part of the code setting up the scheduled task was copied from an online resource.

Payloads and CnC Communication

We found two related payloads, shown in Table 2. The main difference between the two payloads is that one of them has some event logging capabilities, making it easier to determine the actual intention of the implant; most likely it was an early development or debug version. The sample actually packaged inside the Suncor documents was stripped of this functionality.

SHA1 Description Compilation Timestamp
1c1fbda6ffc4d19be63a630bd2483f3d2f7aa1f5 Payload with logs information 2018-09-03 16:57:26 UTC
1022620da25db2497dc237adedb53755e6b859e3 Payload without logs information 2018-09-15 02:31:15 UTC

Table 2: the Agent_Drable payloads.

One interesting string found inside the binary is "AgentDrable.exe". This name is written in the DLL Name entry of the Export directory inside the PE header, and it reappears in different parts of this campaign, such as the infrastructure configuration, so we can assume with confidence that this is the name the threat actor gave the implant. There is very little evidence referencing AgentDrable outside of recent submissions to a few analysis portals. One hypothesis is that it is the name "Elbard" reversed.

The compilation timestamps of the two samples are interesting as well. One has to be fully aware that timestamps can easily be falsified; however, these appear in multiple places across the binaries (Debug directory, File header) and are consistent with the other events of the campaign. We placed all the dropper and payload timestamps in Figure 4.

Figure 4: Note that the creation timestamp of WORD_1 is omitted, being way further back in time (2012).

One interesting fact is that the compilation timestamp of the dropped sample without logs matches the last save time of the two Word documents in which it was embedded, meaning the attackers likely compiled the final version of their implant and immediately weaponized the documents for delivery.

Both malicious documents were submitted to VirusTotal from Lebanon just a few days later. Overall this timeline provides a coherent story and suggests that none of the timestamps were altered by the attackers. This completes the overview of the campaign deployment; we will provide additional insight as we compare this with the evolution of the attackers' command and control infrastructure.

Dropped Executable – Behavior Analysis

The primary function of the dropped payload is to operate as a reconnaissance tool. There are no advanced functionalities implemented inside the binary (no screen capture or keylogger, for example). This file’s main functions are:

  • Running commands from CnC and returning the output
  • File download and execution
  • File exfiltration

One IP and one domain name are hardcoded inside the binary, as well as a user agent:

  • 0ffice36o[.]com (clearly mimicking the legitimate office365[.]com)
  • 185.161.211[.]72
  • Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko

The implant has two main ways of communicating with a CnC: (1) DNS requests, and (2) HTTP(S) GET/POST. The first execution defaults to DNS communication; then, based on the commands received, it may switch to HTTP.

The binary is executed every minute thanks to a scheduled task created by the malicious document. At the beginning of every run, it creates the following subdirectories if they do not exist:

  • .\Apps
  • .\Uploads
  • .\Downloads

Directories “Uploads” and “Downloads” act exactly according to their name. Any executable located in the “Apps” directory will run each time the dropper implant is executed.

All the configuration data is handled using JSON and the cJSON library. Key names are generic, using one or two letters (‘a’, ‘m’, ‘ul’, …) but we managed to get a comprehensive listing as shown in Table 3. The configuration is stored in the file “Configure.txt” and retrieved at the beginning of every execution.

Parameter Name Comment
a Execution mode (DNS/HTTP)
m Max query length, used to split long DNS queries into multiple shorter ones
f Phase
c DNS counter
h Home path, where the subdirectories and config file are created
u HTTP CnC resource path
s HTTP CnC IP address
d DNS CnC domain
p HTTP CnC port number
l Connection type, HTTP or HTTPS
i Victim ID (2 chars)
k Custom base64 alphabet

Table 3: JSON configuration parameters (list not exhaustive).
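Expanding those one-letter keys makes a captured Configure.txt much easier to read. A sketch, with the mapping reconstructed from Table 3 (names are our own labels, and the sample config is illustrative):

```python
import json

# One-letter config keys -> readable names, per Table 3 (partial list).
KEY_NAMES = {
    "a": "execution_mode",   "m": "max_query_length", "f": "phase",
    "c": "dns_counter",      "h": "home_path",        "u": "http_resource_path",
    "s": "http_cnc_ip",      "d": "dns_cnc_domain",   "p": "http_cnc_port",
    "l": "connection_type",  "i": "victim_id",        "k": "custom_b64_alphabet",
}

def readable_config(raw_json):
    # Unknown keys are passed through unchanged.
    return {KEY_NAMES.get(k, k): v for k, v in json.loads(raw_json).items()}

cfg = readable_config('{"a": "dns", "d": "0ffice36o.com", "i": "Fy"}')
print(cfg)
```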

In order to communicate with the DNS CnC, the sample performs DNS queries of specially crafted subdomains. For example, here are some DNS queries from different victims:

  • crzugfdhsmrqgq4hy000.0ffice36o[.]com
  • gyc3gfmhomrqgq4hy.0ffice36o[.]com
  • svg4gf2ugmrqgq4hy.0ffice36o[.]com
  • Hnahgfmg4mrqgq4hy.0ffice36o[.]com
  • 6ghzGF2UGMD4JI2VOR2TGVKEUTKF.0ffice36o[.]com

The subdomains follow a specific schema: they are made of 4 random alphanumeric characters followed by a base32-encoded payload. When applied to the domains listed above, we get:

Subdomain Plain text
crzugfdhsmrqgq4hy000 1Fy2048|
gyc3gfmhomrqgq4hy 1Xw2048|
svg4gf2ugmrqgq4hy 1uC2048|
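The decoding can be reproduced with a short script (a sketch; labels carrying trailing non-base32 padding, like the first example above, would need extra stripping):

```python
import base64

def decode_subdomain(label):
    # Skip the 4 random leading characters, then base32-decode the rest.
    # On the wire the label is lowercase and unpadded, so restore both.
    payload = label[4:].upper()
    payload += "=" * (-len(payload) % 8)  # restore RFC 4648 padding
    return base64.b32decode(payload).decode("ascii")

print(decode_subdomain("gyc3gfmhomrqgq4hy"))  # 1Xw2048|
print(decode_subdomain("svg4gf2ugmrqgq4hy"))  # 1uC2048|
```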

The first three plaintexts differ only by two letters: Fy / Xw / uC. This is an ID generated by the sample that allows the CnC to identify the source of a request. It is derived from the username and/or hostname, and thus stays consistent between implant executions. The same ID is used during HTTP communications.

While in DNS mode, the implant communicates with the CnC exclusively through these crafted subdomains and gets its command by interpreting the IP addresses returned. The HTTP communication mode is a bit more advanced: requests and answers from the implant respectively use GET and POST methods. By default, the sample builds the URL http://[CNC_IP]/[RESOURCE_PATH]?id=[ID] where:

Parameter Default Value Note
CNC_IP 185.161.211[.]72 This IP can be updated
RESOURCE_PATH /index.html This path can be updated
ID Fy This ID is constant for a given infection

The hardcoded CnC IP stored in the binaries was offline at the time of the analysis. We were able to find another active CnC hosted at 185.20.184[.]138. Figure 5 shows what the page looks like when accessed through a web browser.

Figure 5: The fake Wikipedia page.

The CnC commands are hidden inside HTML comments or within specific tags and encoded using a custom base64 alphabet. Below is an excerpt of the source code of the page, showing the encoded data.

Once decoded, they give the following JSON object from which the commands are extracted:

These commands show the typical steps that an attacker would take to perform host reconnaissance before proceeding with the intrusion. The full list of tags containing instructions or commands is in Table 4.

Tag Description
<!--[DATA]--> Base64 encoded JSON content
<link href="[DATA]"> Resource path from which a download must occur
<form action="[DATA]" Resource path on which the POST answers should be performed

Table 4: List of the tags that are extracted from the page.
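Extracting those three tag types from a fetched page is straightforward; a sketch (the regexes and the sample page are illustrative, and decoding with the custom base64 alphabet is left out):

```python
import re

def extract_tags(html):
    # Pull out the three instruction-bearing tags listed in Table 4.
    return {
        "comment_data": re.findall(r"<!--(.*?)-->", html, re.S),
        "download_paths": re.findall(r'<link href="([^"]+)"', html),
        "post_paths": re.findall(r'<form action="([^"]+)"', html),
    }

page = '<!--ZXhhbXBsZQ==--><link href="/static/a.bin"><form action="/Client/Upload">'
tags = extract_tags(page)
print(tags)
```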

The HTTP CnC is powered by a Django framework with debug mode activated. Thanks to that misconfiguration, it is possible to gather some additional pieces of information that can be used to map their whole infrastructure. Table 5 lists all the endpoints available.

Path Description
/index.html (GET) Retrieves commands and generic conf params
/Client/Login (GET) Retrieves the custom b64 alphabet used to encode data
/Client/Upload (POST) Upload exfiltrated data or command results
^\.well\-known\/acme\-challenge\/(?P<path>.*)$ Used to generate Let’s Encrypt certificates

Table 5: List of all available endpoints.

Besides all the resource paths, the debug mode leaked all of the environment variables and some Django internal settings. The most interesting values are listed in Tables 6 and 7 (the full list is available upon request):

Var Name Value Comment
PWD /root/relayHttps Interesting directory name
PATH_INFO /static/backup.zip Password protected backup of the database
SHELL /usr/bin/zsh
SSH_CLIENT 194.9.177[.]22 53190 22 Leaked IP of their VPN server

Table 6: Environment variables leaked due to a misconfigured Django instance.

Var Name Value Comment
LOGIN_URL  /accounts/login/
MAGIC_WORD microsoft Unknown
DATABASES /root/relayHttps/db.sqlite3
SERVER_URL https://185.20.184[.]157 Leaked IP, unknown usage

Table 7: Settings leaked due to a misconfigured Django instance.

Once again we find a mention of the “drable” moniker, this time as part of one of the queries used to fetch data from the underlying database:

SELECT COUNT(*) AS "__count" FROM "Client_drable"
WHERE "Client_drable"."relay_id" = %s


Thanks to the data leaked by the CnC and additional passive DNS data, we were able to identify, with high confidence, multiple hosts belonging to the campaign infrastructure. Interestingly, they are all part of the same autonomous system, Serverius N (AS 50673), and hosted by Deltahost. Furthermore, all the domain names were registered through NameSilo.

IP Description
185.161.211[.]72 Hardcoded HTTP CnC, not used at the time of the analysis.
185.20.187[.]8 Mostly used to generate Let’s Encrypt certificates. Port 443 still answers with memail.mea.com[.]lb. Port 444 has a “GlobalSign” certificate of memail.mea.com[.]lb.
185.20.184[.]138 Live HTTP CnC. Ports 80 and 443 return interesting Django debug info.
185.20.184[.]157 Unknown usage. Basic-authentication-protected HTTPS page on port 7070; the certificate CN is “kerteros”. Port 8083 hosts a webserver, but only returns a blank page.
185.161.211[.]79 Hosted the HR phishing domains hr-suncor[.]com and hr-wipro[.]com, which now redirect to the legitimate websites.
194.9.177[.]22 Openconnect VPN used to reach the HTTP CnC.

By correlating these IP addresses with DNS resolutions (See timeline in Appendix A), we identified three domains that were most likely used to deliver the weaponized first stage documents:

  • hr-suncor[.]com
  • hr-wipro[.]com
  • files-sender[.]com

These similar-looking domain names match well with the Suncor document template used in the attack. We have not yet found any specific document linked to Wipro. We also found suspicious DNS resolutions from AE and LB government domain names pointing towards 185.20.187[.]8 for short periods of time (~1 day each).

By cross-referencing this data with certificate generation records available on https://crt.sh, we conclude that the attackers managed to take over the DNS entries of these domains and generated multiple Let’s Encrypt certificates, allowing them to transparently intercept any TLS exchange.

Domain Certificate Redirection Dates
memail.mea.com[.]lb https://crt.sh/?id=923463758 2018-11-06
webmail.finance.gov[.]lb https://crt.sh/?id=922787406 2018-11-06
mail.apc.gov[.]ae https://crt.sh/?id=782678542 2018-09-23
mail.mgov[.]ae https://crt.sh/?id=750443611 2018-09-15
adpvpn.adpolice.gov[.]ae https://crt.sh/?id=741047630 2018-09-12
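The crt.sh lookups above can be reproduced through the site's JSON export. In the sketch below, the query-URL format is crt.sh's public one, while the sample record is a simplified, hypothetical excerpt of what the export returns:

```python
import urllib.parse

# Build a crt.sh JSON export query for a domain, then filter the returned
# records for Let's Encrypt issuances. The sample record is a simplified,
# hypothetical excerpt of crt.sh's JSON output format.
def crtsh_query_url(name: str) -> str:
    """crt.sh JSON export URL for a certificate name search."""
    return "https://crt.sh/?q={}&output=json".format(urllib.parse.quote(name))

def lets_encrypt_entries(records: list) -> list:
    """Keep only Let's Encrypt-issued certificates, oldest first."""
    hits = [r for r in records if "Let's Encrypt" in r.get("issuer_name", "")]
    return sorted(hits, key=lambda r: r["not_before"])

sample = [{"issuer_name": "C=US, O=Let's Encrypt, CN=Let's Encrypt Authority X3",
           "name_value": "mail.mgov.ae", "not_before": "2018-09-15T00:00:00"}]
print(crtsh_query_url("mail.mgov.ae"))  # https://crt.sh/?q=mail.mgov.ae&output=json
```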


In summary, Cold River is a sophisticated threat actor making malicious use of DNS tunneling for command-and-control activities, compelling lure documents, and previously unknown implants. The campaign targets Middle Eastern organizations, largely in Lebanon and the United Arab Emirates, though Indian and Canadian companies with interests in those countries may also have been targeted.

Cold River highlights the importance of detection diversity and contextualized threat intelligence. Without correlating Behavioral Intelligence and Network Traffic Analysis, the full scope of Cold River’s capabilities would go unseen, exposing victims to additional risk.

Indicators of Compromise

Droppers (maldocs)
9ea865e000e3e15cec15efc466801bb181ba40a1 (Suncor document)

1022620da25db2497dc237adedb53755e6b859e3 (Document Payload)
1c1fbda6ffc4d19be63a630bd2483f3d2f7aa1f5 (Writes logs)

IP addresses

Domain names

Certificates domain names

Generated certificates

User agent
Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko

Filesystem artifacts

Scheduled task
Name: “chrome updater”
Description: “chromium updater v 37.5.0”
Interval: 1 minute
Execution: “%userprofile%\.oracleServices\svshost_serv.exe”

Appendix A: DNS Resolution Timeline


1 https://www.experts-exchange.com/articles/11591/VBScript-and-Task-Scheduler-2-0-Creating-Scheduled-Tasks.html

The post Threat Actor “Cold River”: Network Traffic Analysis and a Deep Dive on Agent Drable appeared first on Lastline.

The Shifting Risk Profile in Serverless Architecture

Technology is as diverse and advanced as ever, but as tech evolves, so must the way we secure it from potential threats. Serverless architecture, e.g. AWS Lambda, is no exception. As adoption of this technology has rapidly grown, the way we approach securing it has had to shift. To dive into that shift, let’s explore the past and present of serverless architecture’s risk profile and the resulting implications for security.


For the first generation of cloud applications, we implemented “traditional” approaches to security. Often, this meant taking the familiar “Model-View-Controller” approach to initially segment the application, and sometimes we even had the foresight to apply business-logic separation to further secure it.

But our cloud security model was not truly “cloud-native.”  That’s because our application security mechanisms assumed that traffic functioned in a specific way, with specific resources. Plus, our ability to inspect and secure that model relied on an intimate knowledge of how the application worked, and the full control of security resources between its layers. In short, we assumed full control of how the application layers were segmented, thus replicating our data center security in the cloud, giving up some of the economics and scale of the cloud in the process.

Figure 2. Simplified cloud application architecture separated by individual functions.


Now, when it comes to the latest generation of cloud applications, most leverage Platform-as-a-Service (PaaS) functions as an invaluable aid in the quest to reduce time-to-market. Essentially, this means getting back to the original value proposition for making the move to cloud in the first place.

And many leaders in the space are already making major headway on this reduction. Take Microsoft, which cited a 67% reduction in time-to-market for its customer Quest Software using Microsoft Azure services. Then there’s Oracle, which identified a 50% reduction in time-to-market for its customer HEP Group using Oracle Cloud Platform services.

However, for applications built with Platform-as-a-Service, we have to think about risk differently. We must ask ourselves — how do we secure the application when many of the layers between the “blocks” of serverless functions are under cloud service provider (CSP) control and not your own?

Fortunately, there are a few things we can do. We can start by making the architecture of the application a cornerstone of information security. From there, we must ask ourselves: do the elements relate to each other in a well-understood, well-modeled way? Have we considered how they can be induced to go wrong? Given that our instrumentation is our source of truth, we need to ensure that we’re always in the know when something does go wrong, which can be achieved through a combination of CSP and third-party tools.

Additionally, we need to look at how code is checked and deployed at scale, and look for opportunities to perform side-by-side testing. Plus, we must always remember that DevOps, without answering basic security questions, can unwittingly give away data in any release.

It can be hard to shoot a moving target. But if security strategy can keep pace with the shifting risk profile of serverless architecture, we can reap the benefits of cloud applications without worry. Then, serverless architecture will remain both seamless and secure.

The post The Shifting Risk Profile in Serverless Architecture appeared first on McAfee Blogs.

Key Takeaways From SANS Report: Secure DevOps 2018: Fact or Fiction?

DevOps, with its focus on speed and incremental development, is changing the application security landscape. We’ve talked about this change a lot in the past couple years, and how security should fit into this picture. Now SANS is taking a look at how security actually is fitting into this DevOps picture in practice. In a recent survey, the sixth in a series of annual studies by SANS on security practices in software development, SANS for the first time explicitly focuses on DevOps.

They looked at how security fits into DevOps, where security risks are and how they are being managed, and the top success factors in implementing a Secure DevOps program.

The survey responses reveal both best practices and challenges of integrating security into DevOps. We include a few noteworthy points here.

Rate of security assessments is increasing

Organizations are increasing the rate of security assessments to keep pace with the new rate of software delivery.

Almost half (47 percent) of survey respondents report that their organizations are continuously deploying at least some apps directly to production. At the same time, the number of organizations assessing or testing the security of business-critical applications more than once per month has increased from 13 percent in 2017 to 24 percent in 2018, and those testing daily and continuously have almost doubled over the same period.

This is good news considering what our own recent research revealed about the security implications of frequently scanning code. Data collected for our most recent State of Software Security report found a very strong correlation between how many times a year an organization scans and how quickly it addresses its vulnerabilities. Our data found that when apps are tested fewer than three times a year, flaws persist more than 3.5x longer than when organizations can bump that up to seven to 12 scans annually. Once organizations are scanning more than 300 times per year, they’re able to shorten flaw persistence 11.5x compared to applications that are only scanned one to three times per year.

Training on secure coding is key in DevOps

The survey asked respondents which application security tools, practices, or techniques they find most useful, and security training for engineers came out on top. Considering that, in a DevOps model, developers take ownership of security assessments with the security team taking on more of an oversight role, this response makes a lot of sense. As DevOps takes hold and security shifts further left in the development cycle, developers will need a solid understanding both of how to avoid introducing security vulnerabilities and of how to efficiently remediate those that are found. We’ve seen this idea play out among our customer base: those that take advantage of eLearning on secure coding for development teams see a 19 percent improvement in fix rate over those that do not.

Fix rate is a management problem

According to 65 percent of the SANS survey respondents, corrective actions for found vulnerabilities are solely in the hands of developers. According to SANS, “This helps explain why vulnerabilities don’t always get fixed: Developers are forced into a difficult situation, under conflicting pressures to deliver changes quickly and cheaply, while also being held responsible for fixing vulnerabilities and other bugs.”

We’ve seen evidence of this trend in our own research. Our latest State of Software Security report found that vulnerabilities remain unaddressed for significant amounts of time: more than 70 percent of all flaws remain open one month after discovery, and nearly 55 percent remain three months after discovery. One in four high and very high severity flaws are not addressed within 290 days of discovery.

What’s the solution to this “fixing” problem? Our VP of program management, Pejman Pourmousa, discussed this issue in a recent blog post. He emphasizes that although developers need to own security testing in a DevOps model, the security team can’t completely opt-out of the process; they play an important role in providing the guidance and support the development team needs in order to fix what they find. Part of that guidance stems from constructing smart policies. He stresses that application security policies should detail not only how often teams need to scan, and what scanning techniques to use, but also how long they have to fix certain flaws based on severity/criticality. In addition, security teams should build in remediation time between scans. Just scanning multiple times a day and pulling results into a tracking system is not useful if no one has the bandwidth to fix anything. You are better off setting a realistic scanning schedule (once a day) so developers have time to fix what they find. You can increase scan frequency as you become more secure and are passing policy on a regular basis.

Barriers, and enablers, of secure DevOps are not just technology

We’ve found that application security success lies just as much, if not more, with people than with technology, and this survey found the same.

The survey respondents reported that their biggest barriers to secure DevOps include a shortage of skills, inadequate budgets, poor prioritization, lack of management buy-in, and the crushing weight of accumulated technical and security debt.

The top three factors they reported as contributing to secure DevOps success were:

Get survey report

Get the full survey results and analysis in the SANS report, Secure DevOps: Fact or Fiction?

Kubernetes: Kubelet API containerLogs endpoint

How to get the information that kube-hunter reports for an open /containerLogs endpoint.

| LOCATION | CATEGORY | VULNERABILITY | DESCRIPTION | EVIDENCE |
| | Information Disclosure | Exposed Container Logs | Output logs from a running container are using the exposed /containerLogs endpoint | |

First step: grab the output from /runningpods/, as in the example below:

You'll need the namespace, pod name and container name.

Thus given the below runningpods output:


turns into:

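The transformation described above can be sketched as follows; the port and JSON field names are typical kubelet defaults assumed for illustration, not taken from a specific cluster:

```python
import ssl
import urllib.request

# Turn a kubelet /runningpods/ response into /containerLogs requests.
# Port 10250 (TLS, usually a self-signed cert) is the standard kubelet API
# port; the read-only variant is commonly 10255 over plain HTTP.
def container_log_paths(runningpods: dict) -> list:
    """Build /containerLogs/<namespace>/<pod>/<container> paths."""
    paths = []
    for pod in runningpods.get("items", []):
        ns, name = pod["metadata"]["namespace"], pod["metadata"]["name"]
        for container in pod["spec"]["containers"]:
            paths.append("/containerLogs/{}/{}/{}".format(ns, name, container["name"]))
    return paths

def fetch(host: str, path: str, port: int = 10250) -> str:
    """GET a kubelet endpoint, ignoring the self-signed certificate."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    url = "https://{}:{}{}".format(host, port, path)
    with urllib.request.urlopen(url, context=ctx) as resp:
        return resp.read().decode()
```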

Kubernetes: Kubernetes Dashboard

Tesla was famously hacked for leaving this open. It’s pretty rare to find it exposed externally now, but it’s useful to know what it is and what you can do with it.

Usually found on port 30000

kube-hunter finding for it:

| LOCATION | CATEGORY | VULNERABILITY | DESCRIPTION | EVIDENCE |
| | Remote Code Execution | Dashboard Exposed | All operations on the cluster are exposed | nodes: pach-okta |

Why do you care? The dashboard has access to all pods and secrets within the cluster, so rather than using command-line tools to get secrets or run code, you can just do it in a web browser.

Screenshots of what it looks like:
viewing secrets




Women in identity management: 4 newcomers to watch

Digital Identity – just the phrase leaves you thinking this must be important; after all, our identity is about who we are and what we do. Digital identity is a big technology space too. It encompasses a variety of sectors including verification-as-a-service, consumer identity and access management (CIAM), cloud (SaaS) identity, transaction authentication, and the newest entrant – self-sovereign identity. The financial value of the identity space is massive. Identity verification-as-a-service alone has been predicted by McKinsey to be worth $20 billion by 2022.
