Monthly Archives: February 2017

Introspection on a Recent Downward Spiral

Alrighty... now that my RSA summary post is out of the way, let's get into a deeply personal post about how absolutely horrible of a week I had at RSA. Actually, that's not fair. The first half of the week was ok, but some truly horrible human beings targeted me (on social media) on Wednesday of that week, and it drove me straight down into a major depressive crash that left me reeling for days (well, frankly, through to today still).

I've written about my struggles with depression in the past, and so in the name of continued transparency and hope that my stories can help others, I wanted to relate this fairly recent tale.

If you can't understand or identify with this story, I'm sorry, but that's on you.

The Holy Trinity: Health, Career, and Relationships

This story really starts before the RSA conference. 2016 was an up-and-down year for a variety of reasons, but overall my health had been ok as I was able to re-establish a regular exercise routine. My weight was higher throughout the year (a negative), due in large part to ending 2015 with a major flu bug and sinusitis that lingered for several months. Frankly, even today, I'm worn out and not as resilient as I think I should be.

At any rate, I was doing ok in the health department until November when I traveled to Austin, TX, to speak at a small event. The night before I was supposed to speak I ended up eating something bad (I suspect a pickled jalapeño plucked from a jar on the table of a BBQ place) and contracting food poisoning. I got no sleep and was unable to eat all day, so speaking at 4pm after all of that to an audience of 5 (or fewer) was... not good. This led into more travel in early December such that by the middle of the month, I was sick. Two weeks of vacation on the road (sick), and suffice to say, by the time 2017 rolled around, I was completely worn out. Once health falls, poor diet routines tend to fall into place as I caffeinate to be functional during the day, which negatively impacts sleep, which negatively impacts weight, which creates the negative, reinforcing cycle around which everything else starts to circle and devolve.

Suffice to say, one of the three pillars had fallen, and as is common for me the past few years (ever since getting pneumonia in June 2014), the road back is slow and requires a lot of willpower. From a mental health perspective, once health falls, the danger is real that a depressive episode may approach if anything else takes a hit. Enter the career/work angle...

I'm not going to say a lot about this, but suffice to say, there's been a lot of personal job stress. Such an occurrence has been a trigger for me in the past, because - like so many people - a lot of my personal identity is wrapped around the work that I do. For the rare person reading this post who doesn't know, I work in the cybersecurity space, which is already beset with far above average burnout rates, which means the conditions are already tilted against success, happiness, and mental well-being. Add in my career history that's been so incredibly adverse and challenging, and the picture quickly shapes up that I can very quickly start feeling like I'm nothing more than a waste of space. After all, if work isn't fulfilling, and if I don't feel like I'm doing anything meaningful with my life, then it translates into feeling like I am meaningless. Don't argue, don't comment, don't provide some response about "no, man, you matter." It's not about rationality in this context, it's about how I feel at my core, which tends to be incredibly dark when the wheels fall off and the downward spiral commences.

To sum up, all of this describes the conditions going into RSA week. I was feeling fat, I was feeling tired, I was feeling incredibly undervalued and worthless at work... which set the conditions for what happened next, which was the sense of loss of the third pillar of relationships.

(many) People Suck

I'm not by nature a misanthrope, but I've started to become one over the years, because at the end of the day, a lot of people are miserable, awful, and just downright mean. I unfortunately experienced all of this first-hand during RSA week (all day Wednesday, to be precise - literally starting around midnight, early in the morning). What I found is that there are lots of hateful, evil people in the world who love nothing more than to shit all over everyone; especially people with whom they think they disagree. The best/worst part of this is that they're willing to shit all over people for things you may have never said, but which were (falsely) ascribed to you.

In the cab home from our company RSA party late Tuesday (aka early Wednesday) I made the mistake of responding to someone's tweet (on the Twitters). A person who is apparently a major figure in the "women in science" movement (a true dyed-in-the-wool hard core feminist in all the worst connotations) had shared an article about getting more women into science (a worthy goal), but I felt the tone was very anti-male, which I view as being anti-helpful in many ways. So, I replied in what I thought was a very neutral, thoughtful manner, along the lines of "I think this is great, but we need to be mindful not to be inclusive via exclusion." I later added "Building one group up by tearing another down is not a net positive." as well as "When the oppressed becomes the oppressor, you still have oppression, which is not truly beneficial to everyone."

It was appalling the degree and amount of raw, vile vitriol leveled at me for what I had viewed as thoughtful, respectful, constructive comments. Moreover, these comments were spewed at me literally all day Wednesday, to the tune of hundreds of tweets attacking me, calling me names and declaring things about me (clearly I'm such a product of "white male privilege," what with having grown up in a predominantly white rural community in a single-income academic household where we typically lived paycheck to paycheck and were consistently among the lowest social ranks). In some ways it was infuriating, but the constant onslaught of negativity and ad hominem attacks also took a severe toll on me in that I was already feeling crappy, and the NOP slide (so to speak) hit hard, driving me straight into the ground.

Even Small Things Amount to Piling-On

For those unfamiliar with the RSA Conference, Wednesday night during RSA week is historically an evening filled with corporate sponsored parties/receptions. As the event has grown, this has quickly become an overloaded evening of frivolity. Except this year I literally received no invitations. It was surreal. When I was with Gartner, it was all I could do to find a free moment. Even post-Gartner, as a buyer, there were myriad invitations. However, this year? Nothing. It was beyond strange, and by the time I realized it, pretty much all the parties were fully booked.

I figured, at worst, I could just tag along with people to a couple events, have a little fun, call it an early night. Sounded ok in theory. Right up until I got ditched twice in 30 minutes (by different people), and the tailspin started. Add onto this that I'd been trying to meet up with a couple of dear friends in particular, to no avail (busy schedules). And, because of work-related issues, I ended up with far too much unscheduled down time during the week (a rant for another day). But, for someone teetering on an emotional collapse, this became a rather big deal.

The biggest disaster of the night was when my phone got smacked out of my hand causing it to fly and smash against something (in the dark). When I retrieved it, I found the screen was now non-functional... which was highly problematic considering it was the only computing device that I'd brought with me for the week. I had no laptop, no back-up phone, nothing. I was terrified! I immediately felt cut off (from the world abusing me). I was already in emotional freefall, and now was completely offline and unavailable in case anyone did try to reach out. Panic ensued. It was late at night and I had to wait until morning.

All of these things (and many more) piled onto a bad day and rapidly accelerated a downward spiral. By Thursday morning I was exhausted and disconsolate. The only reason I got out of bed was the drive to replace my phone. I dragged myself to the Verizon Wireless store, only to find out they didn't open for another hour. I went to the office, only to find out that we don't actually have *any* phones (not even a polycom!). I was able to use one of the conference room computers to look up info for phone replacement, and then when a coworker arrived in the office, I borrowed her phone to call VzW to get details on my options. I then headed to the store a little before opening time (still ended up 4th in line) and quickly picked up a replacement device (which I subsequently hated and replaced once I got home). A couple hours after that and I finally was back online to a reasonable degree. But... the damage was done... and I was just ready to be done, too...

All of these things might strike you as trivial or insignificant, but you have to understand things in context. Already down due to ongoing health issues. Dragged/driven down even further by work issues. And then to have the social stuff go completely sideways? The spiral into the black hole was a rapid descent, and the recovery less than trivial. Imagine falling into a hole, and as you try to climb out, the ground falls away and you collapse into a deeper hole. And then everything starts to fall in on you... as you fall deeper into the hole, the darker it gets, but gravity also increases, crushing you, making it harder to breathe, not to mention being buried, buried, buried... you feel like there's no way out... you feel like there's no air to breathe... you feel crushed... that is what it felt like...

This is my RSA story. It could have been an ok week overall, but the bottom quickly fell out of it. There really were several potential positives (plus a few negatives), but it was hard to recognize them given Wednesday's NOP slide to disaster.

How am I doing today? If I'm being honest, no better than so-so. Including travel, I logged 101 hours Sun-Sat for RSA week. I was exhausted last week and am simply not recovered. I don't feel like my health or diet are in a good place yet. Work is still very stressful and I'm just not in a good place there. I'm in fact incredibly frustrated with work/career stuff right now. It's hands-down the single most vexing and depressing thing to me (I feel like a failure. I'm literally on my 3rd post-Gartner job in 2 years). It's really hard to bounce back when the pillars continue to remain shattered. Things don't feel right, and that makes everything more difficult.

But... if there's good news, it's that there are positives to be found, if I let myself see them. I do see the patterns, and I recognize changes I need to make to interdict those bad patterns. At least, to do so where I have actual control. But, it's really not an easy thing to do, and it's very difficult not to see and feel the dark cloud as it shrouds everything else. In the meantime, I do my best to soldier on, and try very hard to make better choices, such as around diet and exercise - asserting some degree of conscious choice and control where I can. Really, that's about all that one can do...

Here's to hoping 2017 turns around!

RSA USA 2017 In Review

Now that I've had a week to recover from the annual infosec circus event to end all circus events, I figured it's a good time to attempt being reflective and proffer my thoughts on the event, themes, what I saw, etc, etc, etc.

For starters, holy moly, 43,000+ people?!?!?!?!?! I mean... good grief... the event was about a quarter of that a decade ago. If you've never been to RSA, or if you only started attending in the last couple years, then it's really hard to describe to you how dramatic the change has been since ~2010 when the numbers started growing like this (to be fair, year-over-year growth from 2016 to 2017 wasn't all that huge).

With that... let's drill into my key highlights...

Size Matters

Why do people like me go to RSA? Because it's the one week in the year where I can see almost every single vendor in the industry, as well as see people I know and like who I otherwise would never get to see in person (aka "networking"). It truly is an enormous event, and it has definitely passed the threshold of being overwhelming. Several people I've known for years did not make the trip this year, and I suspect this will become a trend, but in the meantime, in many ways it's a "must attend" event.

The down-side to an event this large, and something I learned back in my Gartner days, is that - as someone with nearly 2 decades of industry experience - this is not an event where you're going to find much great content. Talks must, out of necessity, be tuned to the median audience, which means looking backward at what was cutting-edge 5-10 years ago. Sad, but true. There's simply not much room for cutting-edge thinking or discussion at the event anymore.

Soooo... why go back? Again, so long as there's business development and networking benefit, it is an essential event, but it's also very costly. Hotel pricing alone makes this an increasingly difficult prospect. For as much as we're spending on hotels each year, I could very likely visit friends in 3-4 different parts of the country and break even on travel costs. It's also increasingly a lot of noise, and much harder to sift value from that noise. I truly believe RSA is nearing the point where they'll have to either break the event into multiple events (kind of like 3 weeks of SxSW), or they'll at least need to move to a different model where you're attending a conference within a conference (similar to "schools" within a large university). As it stands today, it's simply too easy to get lost in the shuffle and derive diminishing value.

Automation Nearing the Mainstream

We've been talking about security automation and orchestration for several years now, but it's often been with only a handful of examples, and generally quite forward-looking. We're just now finally reaching the point where the automation message is being picked up in the mainstream and more expansive examples are emerging.

One thing I noticed this year is that "automation" was prevalent in many booths. There are now at least a dozen vendors purportedly in the space (up from the days of it being Invotas (FireEye) and Phantom). No, I can't remember any names, but suffice to say, it's a growing list. Also, separately, I've noticed that orgs like Chef and Puppet have also made an attempt to expand their automation appeal to security (not to mention ServiceNow doing the same).

The point here is this: The mainstream consensus is finally starting to catch up with the reality that we will never be able to scale human resources fast enough to successfully address the rapidly changing threat landscape. Thus, we absolutely must automate as much as possible. We don't need SOC analysts staring at screens, pushing buttons when a color changes from green to red. That can be automated. Instead, we need to think about these processes and make smart decisions about when and where a human actually needs to be in the loop. This is our future, which we should eagerly embrace because it then frees us up to do much more interesting and exciting things.

DevSecOps/Secure DevOps

Since we're talking about automation, it's only natural to pivot briefly into DevOps/DevSecOps/Secure DevOps. This year's Monday event on DevSecOps was ok, albeit highly repetitive. However, initial attendance was strong, and feedback has reportedly been good (the schedule got a bit foobar, so attendance declined after lunch, c'est la vie).

Here's what's important: Companies are continuing to reinvent how they operate, and DevOps is the underlying model. As such, we need to push hard to ensure that Dev and Ops teams have security responsibilities in their assigned duties, and that they are held accountable accordingly. A DevOps co-worker recently complained about this "DevSecOps" thing, and I pointed out that the entire reason for it is as a kludge because security has once again been left behind, and neither Dev nor Ops has taken on (or been assigned) security responsibilities, nor are they being held accountable for poor security decisions. THIS IS A CULTURAL FAILING THAT AFFECTS ALMOST EVERY SINGLE COMPANY AROUND.

In DevOps, the norm is always to point to "gold standard" examples like Netflix, Facebook, Etsy, etc. However, what people oftentimes forget in looking at these orgs is that, for the most part, they started out doing DevOps from the early days. There was very little need for cultural transformation because they were already operating in a DevOps manner. For companies that have been around for much, much, much longer, there will be internal opposition and institutional inertia that will slow down transformations. It's imperative that these cultural attributes be supplanted, aggressively if necessary, in order to remove barriers to change. DevOps provides an amazing template for operating an agile, efficient, effective organization... but only if companies fundamentally change how they function, including cultural transformation.

AI, ML, and Big Data Lies

If we were to take all the marketing at face value, then we'd be led to believe that the machines are thinking for themselves and we're a mere small step away from becoming part of The Matrix. Thankfully, that's not really true at all. The majority of companies claiming "AI" today are really being misleading and disingenuous. The simple fact is the majority of products are still based on heuristics or machine learning (ML) - sometimes both.

Heuristics is the traditional pattern matching we've seen for decades upon decades upon decades. Your traditional AV or IDS "solution"? It's primarily based on heuristically matching patterns and signatures to detect "a known bad thing." These are ok, but in the grand scheme they're providing little lasting value.

ML has emerged as an alternative, wherein rather than looking for patterns, we instead model environments or behaviors, and then alert based on either matching or deviating from the models (sometimes both!). The ML approach is actually quite promising, though it's premised on the ability to actually create a discrete model of an environment or behavior. It is also imperative that ML engines constantly rebuild the models to account for changes in an environment or behavior (for example, imagine building a model of your diet starting in mid-October and running through the end of the year, and then trying to apply that same model to your diet Jan-Mar after you've made major life changes, perhaps as part of a New Year's Resolution).
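To make that diet example concrete, here's a toy sketch of the model-then-alert-on-deviation pattern. The numbers and the naive mean/stdev "model" are purely illustrative - nothing like what a real product does - but the failure mode is the same: a baseline learned under one behavior flags the new behavior as anomalous until the model is rebuilt.

```python
import statistics

class BaselineModel:
    """Toy behavioral model: learn mean/stdev of a metric, alert on deviation."""
    def __init__(self, threshold=3.0):
        self.threshold = threshold
        self.mean = None
        self.stdev = None

    def fit(self, samples):
        # "Training" is just summarizing the observed behavior.
        self.mean = statistics.mean(samples)
        self.stdev = statistics.stdev(samples)

    def is_anomaly(self, value):
        # Alert when an observation deviates too far from the learned baseline.
        return abs(value - self.mean) > self.threshold * self.stdev

# Model daily calories from the Oct-Dec holiday baseline...
model = BaselineModel()
model.fit([2900, 3100, 3000, 2950, 3050, 3000])

# ...then a post-New Year's diet trips the alert until the model is rebuilt.
print(model.is_anomaly(1800))  # → True
print(model.is_anomaly(3020))  # → False
```

The point being: the alert on 1800 isn't "wrong", it's just stale - which is why constant model rebuilding matters.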

There is a lot of hope in AI, ML, et al, and I think for good reason. Frankly, ML gives us a lot of value when applied to reasonably discrete environments (e.g., containers), and thus I think we'll continue to see great growth and success in this space. I expect that computing environments will also continue to evolve and grow to make modeling of them that much easier. I think there's much promise.

As for AI itself, we'll have to wait and see, but I suspect we're a good decade+ away from true examples of real-world applications. However, that said, if you're in a lower-level role (analyst, basic infrastructure config, etc.), then now is a good time to invest in training/education to improve your skills to raise yourself up to a higher-level job that will be less easily threatened by AI+automation. As I noted above, we really do not need SOC analysts staring at screens clicking buttons according to a set process. Machines can already do that today. Thus there's no job security in it. Instead, become the person who builds and trains these automation tools, or be the higher-level "fixer" who is activated once automation has done all the base enumeration and examination. The world is changing rapidly, and will look quite different in a decade.

The Threat Is Real / Ignore the FUD

One of my favorite tunes from last year is Megadeth's "The Threat Is Real" as it's really quite an appropriate phrase. Hacks are succeeding every day. Breaches are so commonplace that the mainstream media has all but lost interest in reporting on them. Incidents are inevitable. And, yet, in some ways they needn't be so inevitable; at least, not to the degree and severity we continually see. Whether it be massive DDoS attacks built on the back of woefully insecure IoT devices or sizable holes in cloud CDN infrastructure a la Cloudbleed, there are a lot of holes, a lot of bugs, and a lot of undertrained people, all of which will lead to bad days.

That said, we also need to be incredibly mindful and diligent to avoid the FUD. There's too much FUD. It's like running around telling us that "we're all gonna die" as if we don't all accept this as an inevitability. Come on, folks, let's get out of that red mental state (fear/panic/anger) and apply some rational thought. There are tons of things we can be doing to prepare and protect our organizations, our customers/clients, and our resources. We just need to take a deep breath, settle down, and execute.

What should we do? Well, interestingly, it's not all that strange a list. First and foremost, Basic Security Hygiene, which I wrote about while at Gartner nearly 2.5 years ago. Things like robust IAM (centralized, processized, monitored), vuln and patch mgmt, and applying consistent, secure standards for infrastructure and development are all great starting points. Beyond that, it comes down to taking the time to understand your environment and exposures, and investing in tools and techniques that will produce measurable results (measurement is key!!!). A progressive security awareness program can be critical to educating and incentivizing people to make better decisions, and really reflects the overall imperative to transform the business and its underlying culture. We can absolutely make things better, but it requires effort and thoughtfulness.

*whew* Ok... so, there you have it... my thoughts from RSA 2017. All told, it was a so-so week for me personally, but I'll definitely be back for one more year. TBD after that. It's really quite the circus these days. This year was especially difficult with how spread out things were (Moscone South has a major construction project underway, so the Marriott Marquis was enlisted). The wifi and mobile signals in the Marquis dungeon were nonexistent, which was painful. Also painful was the 4 spread out venues for Codebreaker's Bash on Thursday evening. It didn't work. Because people were spread all over, it was difficult to casually run into folks I was hoping to see. Hopefully next year they'll revert to a large single venue (I really, really, really enjoyed the Bash at AT&T Park, though many folks complained about it). Finding a venue for 43k+ people has to be incredibly challenging. Of course, so is finding a hotel room each year, so, ya know, there's that, too. Ha.

Hope you find this interesting/useful! Until next time...

MS16-155 – Important: Security Update for .NET Framework (3205640) – Version: 2.1

Severity Rating: Important
Revision Note: V2.1 (February 23, 2017): Revised bulletin to announce a detection logic change to Monthly Rollup Release KB3205403 and Monthly Rollup Release KB3205404. This is an informational change only. Customers who have already successfully updated their systems do not need to take any action.
Summary: This security update resolves a vulnerability in Microsoft .NET 4.6.2 Framework’s Data Provider for SQL Server. A security vulnerability exists in Microsoft .NET Framework 4.6.2 that could allow an attacker to access information that is defended by the Always Encrypted feature.

Hack Naked News #112 – February 21, 2017

A lone hacker breaches 60 universities and federal agencies, Yahoo loses $350 million from breaches, more bug bounty programs for porn sites, and is your child a hacker? Jason Wood of Paladin Security joins us to talk about smart city technology that could make military bases more secure!

BSidesSF CTF wrap-up


While this is technically a CTF writeup, like I frequently do, this one is going to be a bit backwards: this is for a CTF I ran, instead of one I played! I've gotta say, it's been a little while since I played in a CTF, but I had a really good time running the BSidesSF CTF! I just wanted to thank the other organizers - in alphabetical order - @bmenrigh, @cornflakesavage, @itsc0rg1, and @matir. I couldn't have done it without you folks!

BSidesSF CTF was a capture-the-flag challenge that ran in parallel with BSides San Francisco. It was designed to be easy/intermediate level, but we definitely had a few hair-pulling challenges.

The goal of this post is to explain a little bit of the motivation behind the challenges I wrote, and to give basic solutions. It's not going to have a step-by-step walkthrough of each challenge - though you might find that in the writeups list - but, rather, I'll cover what I intended to teach, and some interesting (to me :) ) trivia.

If you want to see the source of the challenges, our notes, and mostly everything else we generated as part of creating this CTF, you can find them here:

  • Original sourcecode on github
  • Google Drive notes (note that that's not the complete set of notes - some stuff (like comments from our meetings, brainstorming docs, etc) is a little too private, and contains ideas for future challenges :) )

Part of my goal for releasing all of our source + planning documents + deployment files is to a) show others how a CTF can be run, and b) encourage other CTF developers to follow suit and release their stuff!

As of this writing, the scoreboard and challenges are still online. We plan to keep them around for a couple more days before finally shutting them down.


The rest of my team can most definitely confirm this: I'm not an infrastructure kinda guy. I was happy to write challenges, and relied on others for infrastructure bits. The only thing I did was write a Dockerfile for each of my challenges.

As such, I'll defer to my team on this part. I'm hoping that others on my team will post more details about the configurations, which I'll share on my Twitter feed. You can also find all the Dockerfiles and deployment scripts on our Github repository.

What I do know is, we used:

  • Google's CTF Scoreboard running on AppEngine for our scoreboard
  • Dockerfiles for each challenge that had an online component, and Docker for testing
  • docker-compose for testing
  • Kubernetes for deployment
  • Google Container Engine for running all of that in The Cloud

As I said, all the configurations are on Github. The infrastructure worked great, though: we had absolutely no traffic or load problems, and only very minor other problems.

I'm also super excited that Google graciously sponsored all of our Google Cloud expenses! The CTF weekend cost us roughly $500 - $600, and as of now we've spent a little over $800.


Just a few numbers:

  • We had 728 teams register
  • We had 531 teams score at least one point
  • We had 354 teams score at least 100 points
  • We had 23 teams submit at least one on-site flag (presumably, that many teams played on-site)

Also, the top-10 teams were:

  • dcua :: 6773
  • OpenToAll :: 5178
  • scryptos :: 5093
  • Dragon Sector :: 4877
  • Antichat :: 4877
  • p4 :: 4777
  • khack40 :: 4677
  • squareroots :: 4643
  • ASIS :: 4427
  • Ox002147 :: 4397

The top-10 teams on-site were:

  • OpenToAll :: 5178
  • ▣ :: 3548
  • hash_slinging_hackers :: 3278
  • NeverTry :: 2912
  • 0x41434142 :: 2668
  • DevOps Solution :: 1823
  • Shadow Cats :: 1532
  • HOW BOU DAH :: 1448
  • Newbie :: 762
  • CTYS :: 694

The full list can be found on our page.

On-site challenges

We had three on-site challenges (none of them created by me):

on-sight [1]

This was a one-point challenge designed simply to determine who's eligible for on-site prizes. We had a flag taped to the wall. Not super interesting. :)

(Speaking of prizes, I want to give a shout out to Synack for providing some prizes, and in particular for working with us on a fairly complex set-up for dealing with said prizes. :) )

Shared Secrets [250]

The Shared Secrets challenge was a last-minute idea. We wanted more on-site challenges, and others on the CTF organizers team came up with using Shamir's Secret Sharing scheme. We posted QR codes containing pieces of a secret around the venue.

It was a "3 of 6" scheme, so only three were actually needed to get the secret.

The quotes on top of each image try to push people towards either "Shamir" or "ACM 22(11)". My favourite was, "Hi, hi, howdy, howdy, hi, hi! While everyone is minus, you could call me multiply", which is a line from a Shamir (the rapper) song. I did not determine if Shamir the rapper and Shamir the cryptographer were the same person. :)
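If you're curious how a "3 of 6" scheme actually works, it's just polynomial interpolation over a finite field. Here's a minimal illustrative sketch (the prime, the integer secret, and the share encoding are my own choices for the example, not what we actually deployed on the QR codes):

```python
import random

P = 2**127 - 1  # a Mersenne prime, comfortably larger than a short secret

def split(secret, k=3, n=6):
    """Split an integer secret into n shares; any k of them reconstruct it."""
    # Random degree-(k-1) polynomial with the secret as the constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 over GF(P) recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(424242)
print(reconstruct(shares[:3]))  # → 424242 (any three shares will do)
```

With fewer than three shares, the polynomial is underdetermined and every possible secret is equally consistent - that's the whole trick.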

Locker [150]

Locker is really cool! We basically set up a padlock with an Arduino and a receipt printer. After successfully picking the lock, you'd get a one-time-use flag printed out by the printer.

(We had some problems with submitting the flag early-on, because we forgot to build the database for the one-time-use flags, but got that resolved quickly!)

@bmenrigh developed the lock post, which detected the lock opening, and @matir developed the software for the receipt printer.

My challenges

I'm not going to go over others' challenges, other than the on-site ones I already covered, since I don't have the insight to make comments on them. However, I do want to cover all of my challenges. Not a ton of detail, but enough to understand the context. I'll likely blog about a couple of them specifically later.

I probably don't need to say it, but: challenge spoilers coming!

'easy' challenges [10-40]

I wrote a series of what I called 'easy' challenges. They don't really have a trick to them, but teach a fundamental concept necessary to do CTFs. They're also a teaching tool that I plan to use for years to come. :)

easy [10] - a couldn't-be-easier reversing challenge. Asks for a password then prints out a flag. You can get both the password and the flag by running strings on the binary.
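If you've never thought about what strings actually does, it's just scanning for runs of printable bytes. A rough Python equivalent (the sample blob and flag format here are made up for illustration) looks like:

```python
import re

def strings(data: bytes, min_len: int = 4):
    """Extract runs of printable-ASCII bytes, like the Unix `strings` tool."""
    return [m.group().decode() for m in
            re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data)]

# Run it over the challenge binary and look for the flag format:
blob = b"\x7fELF\x00\x00FLAG:this_is_a_fake_example\x00\x01xy"
print(strings(blob))  # → ['FLAG:this_is_a_fake_example']
```

Runs shorter than min_len (like "ELF" above) are skipped, which is exactly how the real tool cuts down on binary noise.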

easyauth [30] - a web challenge that sets a cookie, and tells you it's setting a cookie. The cookie is simply 'username=guest'. If you change the cookie to 'username=administrator', you're given the flag. This is to force people to learn how to edit cookies in their browser.
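If you'd rather not fiddle with browser dev tools, the same edit is trivial from code. This sketch uses Python's http.cookies purely to show the manipulation - the cookie header it produces is what you'd replay (e.g. via curl -b or a urllib Request header) against the challenge:

```python
from http.cookies import SimpleCookie

# The server's Set-Cookie header tells you everything you need:
jar = SimpleCookie("username=guest")
jar["username"] = "administrator"  # the entire "exploit"

# Serialize it back out as the value for a Cookie: request header.
cookie_header = jar["username"].OutputString()
print(cookie_header)  # → username=administrator
```

The lesson of the challenge is exactly this: a cookie is just client-controlled text, and anything the server trusts in it without verification is attacker-controlled.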

easyshell [30] and easyshell64 [30] - these are both simple programs where you can send it shellcode, and they run it. It requires the player to figure out what shellcode is and how to use it (eg, from msfvenom or an online shellcode database). There's both a 32- and a 64-bit version, as well.

easyshell and easyshell64 are also good ways to test shellcode, and a place where people can grab libc binaries, if needed.

And finally, easycap [40] is a simple packet capture, where a flag is sent across the network one packet at a time. I didn't keep my generator, but it's essentially a ruby script that would do a s.send() on each byte of a string.
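The original Ruby generator is lost, but the idea is easy to reconstruct. Here's a rough Python equivalent (the flag value is made up) that makes one send() call per byte over a loopback socket and reassembles the flag on the other end - in the real challenge you'd capture this traffic and read the bytes out of the pcap:

```python
import socket
import threading

FLAG = b"FLAG:one_byte_per_packet"

def server(listener, out):
    conn, _ = listener.accept()
    while True:
        b = conn.recv(1)  # read the stream back one byte at a time
        if not b:
            break
        out.append(b)
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
listener.listen(1)
received = []
t = threading.Thread(target=server, args=(listener, received))
t.start()

client = socket.socket()
client.connect(listener.getsockname())
for byte in FLAG:
    client.send(bytes([byte]))  # one send() call per byte, like the original
client.close()
t.join()

print(b"".join(received))  # → b'FLAG:one_byte_per_packet'
```

(Caveat: TCP is free to coalesce those sends into fewer packets on the wire, so a faithful generator would disable Nagle or add small delays; the reassembly logic is the same either way.)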

skipper [75] and skipper2 [200]

Now, we're starting to get into some of the levels that require some amount of specialized knowledge. I wrote skipper and skipper2 for an internal company CTF a long time ago, and have kept them around as useful teaching tools.

One of the first thing I ever did in reverse engineering was write a registration bypass for some icon-maker program on 16-bit DOS using the command and some dumb luck. Something where you had to find the "Sorry, your registration code is invalid" message and bypass it. I wanted to simulate this, and that's where these came from.

With skipper, you can bypass the checks by just changing the program counter ($eip or $rip) or nop'ing out the checks. skipper2, however, incorporates the results from the checks into the final flag, so they can't be skipped quite so easily. Rather, you have to stop before each check and load the proper value into memory to get the flag. This simulates situations I've legitimately run into while writing keygens.

hashecute [100]

When I originally conceived of hashecute, I had imagined it being fairly difficult. The idea is, you can send any shellcode you want to the server, but you have to prepend the MD5 of the shellcode to it, and the prepended shellcode runs as well. That's gotta be hard, right? Making an MD5 that's executable??

Except it's not, really. You just need to make sure your checksum starts with a short jump to the end of the checksum (or to a NOP sled if you want to do it even faster!). The simplest example is \xeb\x0e, a two-byte jmp over the remaining 14 bytes of the MD5 (there are practically infinite others, such as the conditional short jumps when the right flags happen to be set). And it's really easy to arrange that by just appending crap to the end of the shellcode until the hash cooperates: you can see that in my solution.
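That brute force can be sketched in a few lines (my own reconstruction, not the contest solution). With two fixed digest bytes, it takes about 65,536 tries on average:

```python
import hashlib
import os

def hashecute(shellcode: bytes) -> bytes:
    # Append random junk until the MD5 of the whole body begins with
    # EB 0E -- a short jmp over the remaining 14 bytes of the digest --
    # then prepend that digest. Executing the result runs the digest
    # first, which immediately jumps into the shellcode.
    while True:
        body = shellcode + os.urandom(8)
        digest = hashlib.md5(body).digest()
        if digest[:2] == b"\xeb\x0e":
            return digest + body
```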

It does, however, teach a little critical thinking to somebody who might not be super accustomed to dealing with machine code, so I intend to continue using this one as a teaching tool. :)

b-64-b-tuff [100]

b-64-b-tuff has the dual honour of both having the stupidest name and being the biggest waste of my own time. :)

So, I came up with the idea of writing this challenge during a conversation with a friend: I said that I know people have written shellcode encoders for unicode and other stuff, but nobody had ever written one for Base64. We should make that a challenge!

So I spent a couple minutes writing the challenge. It's mostly just Base64 code from StackOverflow or something, and the rest is the same skeleton as easyshell/easyshell64.

Then I spent a few hours writing a pure Base64 shellcode encoder. I intend to do a future blog 100% about that process, because I think it's actually a kind of interesting problem. I eventually got to the point where it worked perfectly, and I was happy that I could prove that this was, indeed, solvable! So I gave it a stupid name and sent out my PR.

That's when I think @matir said, "isn't Base64 just a superset of alphanumeric?".

Yes. Yes it is. I could have used any off-the-shelf alphanumeric shellcode encoder such as msfvenom. D'OH!

But, the process was really interesting, and I do plan to write about it, so it's not a total loss. And I know at least one player did the same (hi @Grazfather! [he graciously shared his code where he encoded it all by hand]), so I feel good about that :-D

in-plain-sight [100]

I like to joke that I only write challenges to drive traffic to my blog. This is sort of the opposite: it rewards teams that read my blog. :)

A few months ago, while writing the delphi-status challenge (more on that one later), I realized that when encrypting data using a padding oracle, the last block can be arbitrarily chosen! I wrote about it in an off-handed sort of way at that time.

Shortly after, I realized that it could make a neat CTF challenge, and thus was born in-plain-sight.

It's kind of a silly little challenge. Like one of those puzzles you get in riddle books. The ciphertext was literally the string "HiddenCiphertext", which I tell you in the description, but of course you probably wouldn't notice that. When you do, it's a groaner. :)

Fun story: I had a guy from the team OpenToAll bring up the blog before we released the challenge, and mention how he was looking for a challenge involving plaintext ciphertext. I had to resist laughing, because I knew it was coming!

i-am-the-shortest [200]

This was a silly little level, which once again forces people to get shellcode. You're allowed to send up to 5 bytes of shellcode to the server, where the flag is loaded into memory, and the server executes it.

Obviously, 5 bytes isn't enough to do a proper syscall, so you have to be creative. It's more of a puzzle challenge than anything.

The trick is, I used a bunch of in-line assembly when developing the challenge (see the original source, it isn't pretty!) that ensures that the registers are basically set up to make a syscall - all you have to do is move esi (a pointer to the flag) into ecx. I later discovered that you can "link" variables to specific registers in gcc.

The intended method was for people to send \xcc for the shellcode (or similar) and to investigate the registers, determining what the state was, and then to use shellcode along the lines of xchg esi, ecx / int 0x80. And that's what most solvers I talked to did.

One fun thing: eax (which is the syscall number when a syscall is made) is set to len(shellcode) (the return value of read()). Since sys_write, the syscall you want to make, is number 4, you can easily trigger it by sending 4 bytes. If you send 5 bytes, it makes the wrong call.

Several of the solutions I saw had a dec eax instruction in them, however! The irony is, you only need that instruction because you have it. If you had just left it off, eax would already be 4!
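Putting it together, the intended payload fits in exactly four bytes. This is my sketch of it (opcode bytes assembled by hand; the fd/count registers are assumed to be preloaded by the binary, as described above):

```python
# 87 f1    xchg esi, ecx   ; flag pointer into ecx (sys_write's buffer arg)
# cd 80    int 0x80        ; eax is already read()'s return value
payload = b"\x87\xf1\xcd\x80"

# Sending exactly these 4 bytes means read() returned 4,
# so eax == 4 == sys_write when int 0x80 fires.
assert len(payload) == 4
```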

delphi-status [250]

delphi-status was another of those levels where I spent way more time on the solution than on the challenge.

It seems common enough to see tools to decrypt data using a padding oracle, but not super common to see challenges where you have to encrypt data with a padding oracle. So I decided to create a challenge where you have to encrypt arbitrary data!

The original goal was to make somebody write a padding oracle encryptor tool for me. That seemed like a good idea!

But, I wanted to make sure this was do-able, and I was just generally curious, so I wrote it myself. Then I updated my tool Poracle to support encryption, and wrote a blog about it. If there wasn't a tool available that could encrypt arbitrary data with a padding oracle, I was going to hold back on releasing the code. But tools do exist, so I just released mine.

It turns out, there was a simpler solution: when the data is only one block long, you can xor the known plaintext out (via the IV) and xor arbitrary plaintext in. I don't have exact details, but I know it works. Basically, it's a classic stream-cipher-style attack.

And that just demonstrates the Cryptographic Doom Principle :)
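The arithmetic behind that one-block trick can be sketched like this (names and values here are illustrative, not from the original solver). CBC decrypts the first block as P1 = D(C1) XOR IV, so if you know P1, a forged IV makes the same ciphertext block decrypt to anything you want:

```python
def forge_iv(iv: bytes, known_pt: bytes, target_pt: bytes) -> bytes:
    # IV' = IV XOR P1 XOR P_target; then D(C1) XOR IV' == P_target.
    return bytes(i ^ k ^ t for i, k, t in zip(iv, known_pt, target_pt))
```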

ximage [300]

ximage might be my favourite level. Some time ago - possibly years - I was chatting with a friend, and steganography came up. I wondered if it was possible to create an image where the very pixels were executable!?

I went home wondering if that was possible, and started trying to think of 3-byte NOP-equivalent instructions. I managed to think of a large number of workable combinations, including ones that modified registers I don't care about, plus combinations of 1- and 2-byte NOP-equivalents. By the end, I could reasonably do most colours in an image, including black (though it was slightly greenish) and white. You can find the code here.

(I got totally nerdsniped while writing this, and just spent a couple days trying to find every 3-byte NOP equivalent to see how much I can improve this!)

Originally, I just made the image data executable, so you'd have to ignore the header and run the image body. Eventually, I noticed that the bitmap magic, 'BM', was effectively inc edx / dec ebp, which is a NOP as far as I'm concerned. That's followed by a 4-byte file-length field. I changed the first two bytes of that length on every image to \xeb\x32, which is effectively a jump to the end of the header. That also caused weird errors when reading the image, which I was totally fine with leaving as a hint.
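The offsets work out nicely (assuming the conventional 54-byte BMP header: a 14-byte file header plus a 40-byte info header):

```python
# 'B' = 0x42 = inc edx, 'M' = 0x4D = dec ebp -- both harmless here.
assert ord("B") == 0x42 and ord("M") == 0x4D

# EB 32 at offset 2 is jmp +0x32: the next instruction would be at
# offset 4, and 4 + 0x32 = 54 -- the first byte past the header,
# i.e. right where the pixel data begins.
assert 4 + 0x32 == 54
```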

So what you have is an image that's effectively shellcode; it can be loaded into memory and run. A steganographic method that has probably never been done. :)

beez-fight [350]

beez-fight was an item-duplication vulnerability that was modeled after a similar vulnerability in Diablo 2. I had a friend a lonnnng time ago who discovered a vulnerability in Diablo 2, where when you sold an item it was copied through a buffer, and that buffer could be sold again. I was trying to think of a similar vulnerability, where a buffer wasn't cleared correctly.

I started by writing a simple game engine. While I was creating items, locations, monsters, etc., I didn't really think about how the game was going to be played - browser? A binary I distribute? netcat? Distributing a binary can be fun, because the player has to reverse engineer the protocol. But netcat is easier! The problem is, the vulnerability has to be a bit more subtle in netcat, because I can't depend on a numbered buffer - what you see is what you get!

Eventually, I came upon the idea of equip/unequip being problematic. Not clearing the buffer properly!

Something I see far too much in real life is code that checks if an object exists in a different way in different places. So I decided to replicate that - I had both an item that's NULL-able, and a flag :is_equipped. When you tried to use an item, it would check if the :is_equipped flag is set. But when you unequipped it, it checked if the item was NULL, which never actually happened (unequipping it only toggled the flag). As a result, you could unequip the item multiple times and duplicate it!
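A Python stand-in for that buggy logic (the real game wasn't this code, but the shape of the bug is the same): equip() sets both the item and the flag, while unequip() checks the item reference, which is never cleared, and only toggles the flag.

```python
class Player:
    def __init__(self):
        self.equipped = None      # the item object, or None
        self.is_equipped = False  # separate "is it equipped?" flag
        self.inventory = []

    def equip(self, item):
        self.equipped = item
        self.is_equipped = True

    def unequip(self):
        # Bug: this should check is_equipped; the item reference is
        # never set back to None, so this check always passes.
        if self.equipped is not None:
            self.inventory.append(self.equipped)
            self.is_equipped = False
```

Calling unequip() repeatedly puts a fresh copy of the item in the inventory each time, which is the duplication.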

Once that was done, the rest was easy: make a game that's too difficult to reasonably survive, and put a flag in the store that's worth a lot of gold. The only reasonable way to get the flag is to duplicate an item a bunch, then sell it to buy the flag.

I think I got the most positive feedback on this challenge; people seem to enjoy game hacking!

vhash + vhash-fixed [450]

This is a challenge that @bmenrigh and I came up with, designed to be quite difficult. It was vhash, and, later, vhash-fixed - but we'll get to that. :)

It all dates back to a conversation I had with @joswr1ght about a SANS Holiday Hack Challenge level I was designing. I suggested using a hash-extension vulnerability, and he said we couldn't, because of hash_extender, recklessly written by yours truly, ruining hash extension vulnerabilities forever!

I found that funny, and mentioned it to @bmenrigh. We decided to make our own novel hashing algorithm that's vulnerable to an extension attack. We decided to make it extra hard by not giving out the source! Players would have to reverse engineer the algorithm in order to implement the extension attack. PERFECT! Nobody knows as well as I do how difficult it can be to create a new hash extension attack. :)

Now, this is where it gets a bit fun. I agreed to write the front-end if he wrote the back-end. The front-end was almost exactly easyauth, except the cookie was signed. We decided to use an md5sum-like interface, which was a bit awkward in PHP, but that was fine. I wrote and tested everything with md5sum, and then awaited the vhash binary.

When he sent it, I assumed vhash was a drop-in replacement without thinking too much about it. I updated the hash binary, and could log in just fine, and that was it.

When the challenge came out, the first solve happened in only a couple minutes. That doesn't seem possible! I managed to get in touch with the solver, and he said that he just changed the cookie and ignored the hash. Oh no! Our only big mess-up!

After investigation, we discovered that the agreed md5sum-like interface meant, to @bmenrigh, that the data would come on stdin, and to me it meant that the file would be passed as a parameter. So, we were hashing the empty string every time. Oops!
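In other words, every cookie was signed with the hash of the empty string, a constant any player could reproduce. With plain MD5 (for illustration; the real service used the custom vhash binary, same effect):

```python
import hashlib

# The signature the broken front-end effectively used for every cookie:
EMPTY_DIGEST = hashlib.md5(b"").hexdigest()
```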

Luckily, we found it, fixed it, and rolled out an updated version shortly after. The original challenge became an easy 450-pointer for anybody who bothered to try, and the real challenge was only solved by a few, as intended.

dnscap [500]

dnscap is simply a packet capture from dnscat2, running in unencrypted mode, over a laggy connection (coincidentally, I'm writing this writeup at the same bar where I wrote the original challenge!). Over dnscat2, I sent a .png file that contains the dnscat2 logo, as well as the flag. Product placement, anyone?

I assumed it would be fairly difficult to disentangle the packets going through, which is why we gave it a high point value. Ultimately, it was easier than we'd expected, and people were able to solve it fairly quickly.

nibbler [666]

And finally, my old friend nibbler.

At some point in the past few months, I had the realization: nibbles (the snake game for QBasic where I learned to program) sounds like nibble (a 4-bit value). I forget where it came from exactly, but I had the idea to build a nibbles-clone with a vulnerability that you'd have to exploit by collecting the 'fruit' at the right time.

I originally stored the scores in an array, and each 'fruit' would be worth between 00 and FF points. You'd have to overflow the stack and build an exploit by gathering fruit with the snake. You'll notice that the name I ask for at the start uses read() - that's so it can contain NUL bytes, letting you build a ROP chain in your name.

I realized that picking values between 00 and FF would take FOREVER, and wanted to get back to the original idea: nibbles! But I couldn't think of a way to make it realistic while only collecting 4-bit values.

Eventually, I decided to drop the premise of performing an exploit, and instead, just let the user write shellcode that is run directly. As a result, it went from a pwn to a programming challenge, but I didn't re-categorize it, largely because we don't have programming challenges.

It ended up being difficult, but solvable! One of my favourite writeups is here; I HIGHLY recommend reading it. My favourite part is that he named the snakes and drew some damn sexy images!

I just want to give a shout out to the poor soul, who I won't name here, who solved this level BY HAND but didn't cat the flag file fast enough. We shouldn't have had the 10-second timeout, but we did. As a result, he didn't get the flag. I'm so sorry. :(

Fun fact: @bmenrigh was confident enough that this level was impossible to solve that he made me a large bet that fewer than 2 people would solve it. Because we had 9 solvers, I won a lot of alcohol! :)


Hopefully you enjoyed hearing a little about the BSidesSF CTF challenges I wrote! I really enjoyed writing them, and then seeing people working on solving them!

On some of the challenges, I tried to teach something (or have a teachable lesson, something I can use when I teach). On some, I tried to make something pretty difficult. On some, I fell somewhere in between. But there's one thing they have in common: I tried to make my own challenges as easy as possible to test and validate. :)

Wanna Get Away – General's Password

I see this was posted 3 months ago to YouTube, but it's new to me.


(Embedded video: watch it on YouTube.)

This being blogging, let's over-analyze.

The General’s password is ihatemyjob1.

Not a bad password. A passphrase is easy to remember and easy to type.
No doubt he should have capitalized the "I". Most systems can handle spaces, which would add some length. Putting an "@" in for "a" and a "0" in for "o" would add some complexity. If the password file is compromised, this wouldn't be enough to prevent cracking the hash, but it's good for a day-to-day logon. For accounts where a password safe can ease login, a random password would be better. But that doesn't work for every account.

The General’s password is echoed to the screen. Typical security controls require that your password not be displayed on the screen; it should be replaced by asterisks. The General would also have been better off entering it himself rather than telling a subordinate the password. He could have temporarily turned off the computer's output to the big screen to prevent the room from seeing the password.

In pressure situations, it's easy to take actions that compromise our security. This is the type of feeling that phishers and fraudsters often try to create, so that you just act without thinking about whether what you are doing makes sense.

Yes, it’s just a funny commercial. But it can also be used as a teachable moment. Hopefully without sucking all the fun out of the commercial.

The post Wanna Get Away – General's Password appeared first on Roger's Information Security Blog.

Introducing Cyber Portfolio Management

At RSA’17, I spoke on “Security Leadership Lessons from the Dark Side.”

Leading a security program is hard. Fortunately, we can learn a great deal from Sith lords, including Darth Vader and how he managed security strategy for the Empire. Managing a distributed portfolio is hard when rebel scum and Jedi knights interfere with your every move. But that doesn’t mean that you have to throw the CEO into a reactor core. “Better ways you will learn, mmmm?”

In the talk, I discussed how “security people are from Mars and business people are from Wheaton,” and how to overcome the communication challenges associated with that.

RSA has posted audio with slides, and you can take a listen at the link above. If you prefer the written word, I have a small ebook on Cyber Portfolio Management, a new paradigm for driving effective security programs. But I designed the talk to be the most entertaining intro to the subject.

Later this week, I’ll be sharing the first draft of that book with people who subscribe to my “Adam’s New Thing” mailing list. Adam’s New Thing is my announcement list for people who hate such things. I guarantee that you’ll get fewer than 13 messages a year.

Lastly, I want to acknowledge that at BSides San Francisco 2012, Kellman Meghu made the point that “they’re having a pretty good risk management discussion,” and that inspired the way I kicked off this talk.

5 open source security tools too good to ignore

Open source is a wonderful thing. A significant chunk of today’s enterprise IT and personal technology depends on open source software. But even while open source software is widely used in networking, operating systems, and virtualization, enterprise security platforms still tend to be proprietary and vendor-locked. Fortunately, that’s changing. 

If you haven’t been looking to open source to help address your security needs, it’s a shame—you’re missing out on a growing number of freely available tools for protecting your networks, hosts, and data. The best part is, many of these tools come from active projects backed by well-known sources you can trust, such as leading security companies and major cloud operators. And many have been tested in the biggest and most challenging environments you can imagine. 


Part I. Russian APT – APT28 collection of samples including OSX XAgent

This post is for all of you Russian malware lovers/haters. Analyze it all to your heart's content. Prove or disprove Russian hacking in general or DNC hacking in particular, find that "400 lb hacker", or nail another country altogether. You can also have fun and exercise your malware analysis skills without any political agenda.

The post contains malware samples analyzed in the APT28 reports linked below. I will post APT29 and others later.

Read about groups and types of targeted threats here: Mitre ATT&CK

List of References (and samples mentioned) listed from oldest to newest:

  1. APT28_2011-09_Telus_Trojan.Win32.Sofacy.A
  2. APT28_2014-08_MhtMS12-27_Prevenity
  3. APT28_2014-10_Fireeye_A_Window_into_Russia_Cyber_Esp.Operations
  4. APT28_2014-10_Telus_Coreshell.A
  5. APT28_2014-10_TrendMicro Operation Pawn StormUsing Decoys to Evade Detection
  6. APT28_2015-07_Digital Attack on German Parliament
  7. APT28_2015-07_ESET_Sednit_meet_Hacking
  8. APT28_2015-07_Telus_Trojan-Downloader.Win32.Sofacy.B
  9. APT28_2015-09_Root9_APT28_Technical_Followup
  10. APT28_2015-09_SFecure_Sofacy-recycles-carberp-and-metasploit-code
  11. APT28_2015-10_New Adobe Flash Zero-Day Used in Pawn Storm
  12. APT28_2015-10_Root9_APT28_targets Financial Markets
  13. APT28_2015-12_Bitdefender_In-depth_analysis_of_APT28–The_Political_Cyber-Espionage
  14. APT28_2015-12_Kaspersky_Sofacy APT hits high profile targets
  15. APT28_2015_06_Microsoft_Security_Intelligence_Report_V19
  16. APT28_2016-02_PaloAlto_Fysbis Sofacy Linux Backdoor
  17. APT29_2016-06_Crowdstrike_Bears in the Midst Intrusion into the Democratic National Committee << DNC (NOTE: this is APT29)
  18. APT28_2016-07_Invincea_Tunnel of Gov DNC Hack and the Russian XTunnel
  19. APT28_2016-10_ESET_Observing the Comings and Goings
  20. APT28_2016-10_ESET_Sednit A Mysterious Downloader
  21. APT28_2016-10_ESET_Sednit Approaching the Target
  22. APT28_2016-10_Sekoia_Rootkit analysisUse case on HideDRV
  23. APT28_2017-02_Bitdefender_OSX_XAgent  << OSX XAgent


Download sets (matching the research listed above). Email me if you need the password.
          Download all files/folders listed (72MB)

Sample list

Parent Folder | File Name (SHA1) | MD5 Checksum | SHA256 Checksum
APT28_2014-10_Fireeye_A_Window_into_Russia_Cyber_Esp.OperationsE2450DFFA675C61AA43077B25B12851A910EEEB6_ coreshell.dll_9eebfebe3987fec3c395594dc57a0c4ce6d09ce32cc62b6f17279204fac1771a6eb35077bb79471115e8dfed2c86cd75
APT28APT28_2014-10_TrendMicro Operation Pawn Storm
APT28_2014-10_TrendMicro Operation Pawn Storm0A3E6607D5E9C59C712106C355962B11DA2902FC_Case2_S.vbs_exe_db9edafbadd71c7a3a0f0aec1b216a92b3d624c4287795a7fbddd617f57705153d30f5f4c4d2d1fec349ac2812c3a8a0
APT28_2014-10_TrendMicro Operation Pawn Storm0E12C8AB9B89B6EB6BAF16C4B3BBF9530067963F_Case2_Military CooperationDecoy.doc_7fcf20302404f644fb07fe9d4fe9ac8477166146463b9124e075f3a7925075f969974e32746c78d022ba99f578b9f0bb
APT28_2014-10_TrendMicro Operation Pawn Storm14BEEB0FC5C8C887D0435009730B6370BF94BC93_Case5Payload2_netids.dll_35717cd78ce713067a5037286cf91c3e1b3dd8aaafd750aa85185dc52672b26d67d662796847d7cbb01a35b565e74d35
APT28_2014-10_TrendMicro Operation Pawn Storm3814EEC8C45FC4313A9C7F65CE882A7899CF0405_Case4_NetIds.dll_a24552843b9fedd7d0084e1eb1dd6e35966660738c9e3ec103c2f8fe361c8ac20647cacaa5153197fa1917e9da99082e
APT28_2014-10_TrendMicro Operation Pawn Storm4B8806FE8E0CB49E4AA5D8F87766415A2DB1E9A9_Case2dropper_cryptmodule.exe_41e14894f4ad9494e0359ee5bb3d9745684f4b9ea61e14a15e82cac25076c5afe2d30e3dad7ce0b1b375b24d81135c37
APT28_2014-10_TrendMicro Operation Pawn Storm550ABD71650BAEA05A0071C4E084A803CB413C31_Case2_skype.exe_7276d1dab1125f59604252159e0c529c81f0f5fcb3cb8a63e8a3713b4107b89d888cb722cb6c7586c7fcdb45f5310174
APT28_2014-10_TrendMicro Operation Pawn Storm55318328511961EC339DFDDCA0443068DCCE9CD2_Case3_conhost.dll_f1704aaf08cd66a2ac6cf8810c9e07c274bdd9c250b0f4f27c0ecfeca967f53b35265c785d67406cc5e981a807d741bd
APT28_2014-10_TrendMicro Operation Pawn Storm5A452E7248A8D3745EF53CF2B1F3D7D8479546B9_Case3_netui.dll_keylogaa3e6af90c144112a1ad0c19bdf873ff4536650c9c5e5e1bb57d9bedf7f9a543d6f09addf857f0d802fb64e437b6844a
APT28_2014-10_TrendMicro Operation Pawn Storm6ADA11C71A5176A82A8898680ED1EAA4E79B9BC3_Case1_Letter to IAEA.pdf_decoy76d3eb8c2bed4f2588e22b8d0984af86b0f1f553a847f3244f434541edbf26904e2de18cca8db8f861ea33bb70942b61
APT28_2014-10_TrendMicro Operation Pawn Storm6B875661A74C4673AE6EE89ACC5CB6927CA5FD0D_Case2Payload2_ netids.dll_42bc93c0caddf07fce919d126a6e378f9392776d6d8e697468ab671b43dce2b7baf97057b53bd3517ecd77a081eff67d
APT28_2014-10_TrendMicro Operation Pawn Storm72CFD996957BDE06A02B0ADB2D66D8AA9C25BF37_Case1_saver.scr_ed7f6260dec470e81dafb0e63bafb5ae7313eaf95a8a8b4c206b9afe306e7c0675a21999921a71a5a16456894571d21d
APT28_2014-10_TrendMicro Operation Pawn Storm78D28072FDABF0B5AAC5E8F337DC768D07B63E1E_Case5_IDF_Spokesperson_Terror_Attack_011012.doc_1ac15db72e6d4440f0b4f710a516b1650cccb9d951ba888c0c37bb0977fbb3682c09f9df1b537eede5a1601e744a01ad
APT28_2014-10_TrendMicro Operation Pawn Storm7FBB5A2E46FACD3EE0C945F324414210C2199FFB_Case5payload_saver.scr_c16b07f7590a8620a8f0f687b0bd8bd8cb630234494f2424d8e158c6471f0b6d0643abbdf2f3e378bc2f68c9e7bca9eb
APT28_2014-10_TrendMicro Operation Pawn Storm88F7E271E54C127912DB4DB49E37D93AEA8A49C9_Case3_download_msmvs.exe_66f368cab3d5e64475a91f636c87af15e8ac9acc6fa3283276bbb77cff2b54d963066659b65e48cd8803a2007839af25
APT28_2014-10_TrendMicro Operation Pawn Storm8DEF0A554F19134A5DB3D2AE949F9500CE3DD2CE_Case6_dropper_filee.dll_16a6c56ba458ec718b4e9bc8f9f10785ce554d57333bdbccebb5e2e8d16a304947981e48ea2a5cc3d5f4ced7c1f56df3
APT28_2014-10_TrendMicro Operation Pawn Storm956D1A36055C903CB570890DA69DEABAACB5A18A_Case2_International Military.rtf_d994b9780b69f611284e22033e435edb342e1f591ab45fcca6cee7f5da118a99dce463e222c03511c3f1288ac2cf82c8
APT28_2014-10_TrendMicro Operation Pawn Storm9C622B39521183DD71ED2A174031CA159BEB6479_Case3_conhost.dll__d4e99548832b6999f00e8d223c6fabbdd5debe5d88e76a409b9bc3f69a02a7497d333934d66f6aaa30eb22e45b81a9ab
APT28_2014-10_TrendMicro Operation Pawn StormA8551397E1F1A2C0148E6EADCB56FA35EE6009CA_Case6_Coreshell.dll_48656a93f9ba39410763a2196aabc67fc8087186a215553d2f95c68c03398e17e67517553f6e9a8adc906faa51bce946
APT28_2014-10_TrendMicro Operation Pawn StormA90921C182CB90807102EF402719EE8060910345_Case4_APEC Media list 2013 Part1.xls_aeebfc9eb9031e423797a5af1985242de8d3f1e4e0d7c19e195d92be5cb6b3617a0496554c892e93b66a75c411745c05
APT28_2014-10_TrendMicro Operation Pawn StormAC6B465A13370F87CF57929B7CFD1E45C3694585_Case4Payload_dw20.t_e1554b931affb3cd2edc90bc580280785ab8ef93fdeaac9af258845ab52c24d31140c8fffc5fdcf465529c8e00c508ac
APT28_2014-10_TrendMicro Operation Pawn StormB3098F99DB1F80E27AEC0C9A5A625AEDAAB5899A_APEC Media list 2013 Part2.xls_decoybebb3675cfa4adaba7822cc8c39f55bf8fc4fe966ef4e7ecf635283a6fa6bacd8586ee8f0d4d39c6faffd49d60b01cb9
APT28_2014-10_TrendMicro Operation Pawn StormBC58A8550C53689C8148B021C917FB4AEEC62AC1_Case5Payload_install.exe_c43edb579e43aaeb6f0c0703f84e43f77dd063acdfb00509b3b06718b39ae53e2ff2fc080094145ce138abb1f2253de4
APT28_2014-10_TrendMicro Operation Pawn StormC5CE5B7D10ACCB04A4E45C3A4DCF10D16B192E2F_Case1Payload_netids.dll_85c80d01661f88ec556579e772a5a3db461f5340f9ea47344f86bb7302fbaaa0567605134ec880eef34fa9b40926eb70
APT28_2014-10_TrendMicro Operation Pawn StormD0AA4F3229FCD9A57E9E4F08860F3CC48C983ADDml.rtfa24d2f5258f8a0c3bddd1b5636b0ec57992caa9e8de503fb304f97d1ab0b92202d2efb0d1353d19ce7bec512faf76491
APT28_2014-10_TrendMicro Operation Pawn StormDAE7FAA1725DB8192AD711D759B13F8195A18821_Case6_MH17.doc_decoy388594cd1bef96121be291880b22041aadf344f12633ab0738d25e38f40c6adc9199467838ec14428413b1264b1bf540
APT28_2014-10_TrendMicro Operation Pawn StormE338A57C35A4732BBB5F738E2387C1671A002BCB_Case6_advstoreshell.dll_d7a625779df56d874871bb632f3e310611097a7a3336e0ab124fa921b94e3d51c4e9e4424e140e96127bfcf1c10ef110
APT28_2014-10_TrendMicro Operation Pawn StormF542C5F9259274D94360013D14FFBECC43AAE552_Case5Decoy_IDF_Spokesperson_Terror_Attack_011012.doc_77aa465744061b4b725f73848aebdff691f750f422fd3ff361fabca02901830ef3f6e5829f6e8db9c1f518a1a3cac08c
APT28_2014-10_TrendMicro Operation Pawn Stormwp-operation-pawn-storm.pdfce254486b02be740488c0ab3278956fd9b8495ff1d023e3ae7aed799f02d9cf24422a38dfb9ed37c0bdc65da55b4ee42
APT28APT28_2015-07_Digital Attack on German Parliament
APT28_2015-07_Digital Attack on German Parliament0450AAF8ED309CA6BAF303837701B5B23AAC6F05_servicehost.dll_800af1c9d341b846a856a1e686be6a3e566ab945f61be016bfd9e83cc1b64f783b9b8deb891e6d504d3442bc8281b092
APT28_2015-07_Digital Attack on German ParliamentCDEEA936331FCDD8158C876E9D23539F8976C305_exe_5e70a5c47c6b59dae7faf0f2d62b28b3730a0e3daf0b54f065bdd2ca427fbe10e8d4e28646a5dc40cbcfb15e1702ed9a
APT28_2015-07_Digital Attack on German ParliamentDigital Attack on German Parliament_ Investigative Report on the Hack of the Left Party Infrastructure in Bundestag _ netzpolitik.pdf28d4cc2a378633e0ad6f3306cc067c43e83e2185f9e1a5dbc550914dcbc7a4d0f8b30a577ddb4cd8a0f36ac024a68aa0
APT28_2015-07_Digital Attack on German ParliamentF46F84E53263A33E266AAE520CB2C1BD0A73354E_winexesvc.exe_77e7fb6b56c3ece4ef4e93b6dc608be05130f600cd9a9cdc82d4bad938b20cbd2f699aadb76e7f3f1a93602330d9997d
APT28APT28_2015-10_New Adobe Flash Zero-Day Used in Pawn Storm
APT28_2015-10_New Adobe Flash Zero-Day Used in Pawn Storm2DF498F32D8BAD89D0D6D30275C19127763D5568763D5568.swf_6ca857721be6fff26b10867c99bd8c80b4064721d911e9606edf366173325945f9e940e489101e7d0747103c0e905126
APT28_2015-10_New Adobe Flash Zero-Day Used in Pawn StormA5FCA59A2FAE0A12512336CA1B78F857AFC06445AFC06445_ mgswizap.dll_f1d3447a2bff56646478b0adb7d0451c5a414a39851c4e22d4f9383211dfc080e16e2caffd90fa06dcbe51d11fdb0d6c
APT28APT28_2015-10_Root9_APT28_targets Financial Markets
APT28_2015-10_Root9_APT28_targets Financial Markets0450AAF8ED309CA6BAF303837701B5B23AAC6F05_servicehost.dll_800af1c9d341b846a856a1e686be6a3e566ab945f61be016bfd9e83cc1b64f783b9b8deb891e6d504d3442bc8281b092
APT28_2015-10_Root9_APT28_targets Financial MarketsF325970FD24BB088F1BEFDAE5788152329E26BF3_SupUpNvidia.exe_0369620eb139c3875a62e36bb7abdae8b1f2d461856bb6f2760785ee1af1a33c71f84986edf7322d3e9bd974ca95f92d
APT28APT28_2015-12_Kaspersky_Sofacy APT hits high profile targets
APT28_2015-12_Kaspersky_Sofacy APT hits high profile targets1A4F39C0262822B0623213B8ED3F56DEE0117CD5_tf394kv.dll_8c4d896957c36ec4abeb07b2802268b96cd30c85dd8a64ca529c6eab98a757fb326de639a39b597414d5340285ba91c6
APT28_2015-12_Kaspersky_Sofacy APT hits high profile targets314EF7909CA0ED3A744D2F59AB5AC8B8AE259319.dll_(4.3)AZZYimplants-USBStealerf6f88caf49a3e32174387cacfa144a89e917166adf6e1135444f327d8fff6ec6c6a8606d65dda4e24c2f416d23b69d45
APT28_2015-12_Kaspersky_Sofacy APT hits high profile targets3E2E245B635B04F006A0044388BD968DF9C3238C_IGFSRVC.dll_USBStealerce151285e8f0e7b2b90162ba171a4b904e4606313c423b681e11110ca5ed3a2b2632ec6c556b7ab9642372ae709555f3
APT28_2015-12_Kaspersky_Sofacy APT hits high profile targets776C04A10BDEEC9C10F51632A589E2C52AABDF48_USBGuard.exe_8cb08140ddb00ac373d29d37657a03cc690b483751b890d487bb63712e5e79fca3903a5623f22416db29a0193dc10527
APT28_2015-12_Kaspersky_Sofacy APT hits high profile targetsAF86743852CC9DF557B62485715AF4C6D73644D3_AZZY4.3installerc3ae4a37094ecfe95c2badecf40bf5bb67ecc3b8c6057090c7982883e8d9d0389a8a8f6e8b00f9e9b73c45b008241322
APT28_2015-12_Kaspersky_Sofacy APT hits high profile targetsC78FCAE030A66F388BF8CEA569422F5A79B7B96C_tmpdt.tmp_(4.3)AZZYimplantce8b99df8642c065b6af43fde1f786a31bab1a3e0e501d3c14652ecf60870e483ed4e90e500987c35489f17a44fef26c
APT28_2015-12_Kaspersky_Sofacy APT hits high profile targetsE251B3EB1449F7016DF78D113571BEA57F92FC36c_servicehost.dll_USBStealer8b238931a7f64fddcad3057a96855f6c92dcb0d8394d0df1064e68d90cd90a6ae5863e91f194cbaac85ec21c202f581f
APT28_2015-12_Kaspersky_Sofacy APT hits high profile targetsE3B7704D4C887B40A9802E0695BAE379358F3BA0_Stand-aloneAZZYbackdoora96f4b8ac7aa9dbf4624424b7602d4f7a9dc96d45702538c2086a749ba2fb467ba8d8b603e513bdef62a024dfeb124cb
APT28_2015-12_Kaspersky_Sofacy APT hits high profile targetsF325970FD24BB088F1BEFDAE5788152329E26BF3_SupUpNvidia.exe_USBStealer0369620eb139c3875a62e36bb7abdae8b1f2d461856bb6f2760785ee1af1a33c71f84986edf7322d3e9bd974ca95f92d
APT28APT28_2016-02_PaloAlto_Fysbis Sofacy Linux Backdoor
APT28_2016-02_PaloAlto_Fysbis Sofacy Linux Backdoor9444D2B29C6401BC7C2D14F071B11EC9014AE040_Fysbis_elf_364ff454dcf00420cff13a57bcb784678bca0031f3b691421cb15f9c6e71ce193355d2d8cf2b190438b6962761d0c6bb
APT28_2016-02_PaloAlto_Fysbis Sofacy Linux BackdoorA Look Into Fysbis_ Sofacy’s Linux Backdoor - Palo Alto Networks Blog.pdf9a6b771c934415f74a203e0dfab9edbe1b6c3e6ef673f14536ff8d7c2bf18f9358a9a7f8962a24e2255f54ac451af86c
APT28_2016-02_PaloAlto_Fysbis Sofacy Linux BackdoorECDDA7ACA5C805E5BE6E0AB2017592439DE7E32C_ksysdefd_elfe107c5c84ded6cd9391aede7f04d64c8fd8b2ea9a2e8a67e4cb3904b49c789d57ed9b1ce5bebfe54fe3d98214d6a0f61
APT28_2016-02_PaloAlto_Fysbis Sofacy Linux BackdoorF080E509C988A9578862665B4FCF1E4BF8D77C3E075b6695ab63f36af65f7ffd45cccd3902c7cf55fd5c5809ce2dce56085ba43795f2480423a4256537bfdfda0df85592
APT29 APT29_2016-06_Crowdstrike_Bears in the Midst Intrusion into the Democratic National Committee
APT29_2016-06_Crowdstrike_Bears in the Midst Intrusion into the Democratic National Committee0B3852AE641DF8ADA629E245747062F889B26659.exe_cc9e6578a47182a941a478b276320e06fd39d2837b30e7233bc54598ff51bdc2f8c418fa5b94dea2cadb24cf40f395e5
APT29_2016-06_Crowdstrike_Bears in the Midst Intrusion into the Democratic National Committee74C190CD0C42304720C686D50F8184AC3FADDBE9.exe_19172b9210295518ca52e93a29cfe8f440ae43b7d6c413becc92b07076fa128b875c8dbb4da7c036639eccf5a9fc784f
APT29_2016-06_Crowdstrike_Bears in the Midst Intrusion into the Democratic National CommitteeBears in the Midst_ Intrusion into the Democratic National Committee ».pdfdd5e31f9d323e6c3e09e367e6bd0e7b12d815b11f3b916bdc27b049402f5f1c024cffe2318a4f27ebfa3b8a9fffe2880
APT29_2016-06_Crowdstrike_Bears in the Midst Intrusion into the Democratic National CommitteeCB872EDD1F532C10D0167C99530A65C4D4532A1E.exe_ce227ae503e166b77bf46b6c8f5ee4dab101cd29e18a515753409ae86ce68a4cedbe0d640d385eb24b9bbb69cf8186ae
APT29_2016-06_Crowdstrike_Bears in the Midst Intrusion into the Democratic National CommitteeE2B98C594961AAE731B0CCEE5F9607080EC57197_pagemgr.exe_004b55a66b3a86a1ce0a0b9b69b959766c1bce76f4d2358656132b6b1d471571820688ccdbaca0d86d0ca082b9390536
APT29_2016-06_Crowdstrike_Bears in the Midst Intrusion into the Democratic National CommitteeF09780BA9EB7F7426F93126BC198292F5106424B_VmUpgradeHelper.exe_9e7053a4b6c9081220a694ec93211b4e4845761c9bed0563d0aa83613311191e075a9b58861e80392914d61a21bad976
APT28APT28_2016-07_Invincea_Tunnel of Gov DNC Hack and the Russian XTunnel
APT28_2016-07_Invincea_Tunnel of Gov DNC Hack and the Russian XTunnelE2101519714F8A4056A9DE18443BC6E8A1F1B977_PortMapClient.exe_ad44a7c5e18e9958dda66ccfc406cd44b81b10bdf4f29347979ea8a1715cbfc560e3452ba9fffcc33cd19a3dc47083a4
APT28_2016-07_Invincea_Tunnel of Gov DNC Hack and the Russian XTunnelF09780BA9EB7F7426F93126BC198292F5106424B_VmUpgradeHelper.exe_9e7053a4b6c9081220a694ec93211b4e4845761c9bed0563d0aa83613311191e075a9b58861e80392914d61a21bad976
APT28_2016-07_Invincea_Tunnel of Gov DNC Hack and the Russian XTunnelTunnel of Gov_ DNC Hack and the Russian XTunnel _ Invincea.pdfb1b88f78c2f4393d437da4ce743ac5e8fb0cb4527efc48c90a2cd3e9e46ce59eaa280c85c50d7b680c98bb159c27881d
APT28APT28_2016-10_ESET_Observing the Comings and Goings
APT28_2016-10_ESET_Observing the Comings and Goingseset-sednit-part-2.pdfc3c278991ad051fbace1e2f3a4c20998f9ed13d5aa43c74287a936bf52772080fc26b5c62a805e19abceb20ef08ea5ff
APT28_2016-10_ESET_Observing the Comings and GoingsSedreco-dropper
APT28_2016-10_ESET_Observing the Comings and GoingsSedreco_payload
APT28_2016-10_ESET_Observing the Comings and GoingsXAgent-LIN
APT28_2016-10_ESET_Observing the Comings and GoingsXAgent-WIN
APT28_2016-10_ESET_Observing the Comings and GoingsXtunnel
APT28APT28_2016-10_ESET_Sednit A Mysterious Downloader
APT28_2016-10_ESET_Sednit A Mysterious Downloader1CC2B6B208B7687763659AEB5DCB76C5C2FBBF26.scr_006b418307c534754f055436a91848aa6507caba5835cad645ae80a081b98284032e286d97dabb98bbfeb76c3d51a094
APT28_2016-10_ESET_Sednit A Mysterious Downloader49ACBA812894444C634B034962D46F986E0257CF.exe_23ae20329174d44ebc8dbfa9891c62603e23201e6c52470e73a92af2ded12e6a5d1ad39538f41e762ca1c4b8d93c6d8d
APT28_2016-10_ESET_Sednit A Mysterious Downloader4C9C7C4FD83EDAF7EC80687A7A957826DE038DD7.exe_0eefeaf2fb78ebc49e7beba505da273d6ccc375923a00571dffca613a036f77a9fc1ee22d1fddffb90ab7adfbb6b75f1
APT28_2016-10_ESET_Sednit A Mysterious Downloader4F92D364CE871C1AEBBF3C5D2445C296EF535632.exe_9227678b90869c5a67a05defcaf21dfb79a508ba42247ddf92accbf5987b1ffc7ba20cd11806d332979d8a8fe85abb04
APT28_2016-10_ESET_Sednit A Mysterious Downloader516EC3584073A1C05C0D909B8B6C15ECB10933F1.exe_607a7401962eaf78b93676c9f5ca6a26ecd2c8e79554f226b69bed7357f61c75f1f1a42f1010d7baa72abe661a6c0587
APT28_2016-10_ESET_Sednit A Mysterious Downloader593D0EB95227E41D299659842395E76B55AA048D.exe_6cd2c953102792b738664d69ce41e080a13aa88c32eb020071c2c92f5364fd98f6dead7bcf71320731f05cd0a34a59db
APT28_2016-10_ESET_Sednit A Mysterious Downloader593D0EB95227E41D299659842395E76B55AA048D_dll_6cd2c953102792b738664d69ce41e080a13aa88c32eb020071c2c92f5364fd98f6dead7bcf71320731f05cd0a34a59db
APT28_2016-10_ESET_Sednit A Mysterious Downloader5C132AE63E3B41F7B2385740B9109B473856A6A5.dll_94ebc9ef5565f98b1aa1e97c6d35c2e0cfc60d5db3bfb4ec462d5e4bd5222f04d7383d2c1aec1dc2a23e3c74a166a93d
APT28_2016-10_ESET_Sednit A Mysterious Downloader5FC4D555CA7E0536D18043977602D421A6FD65F9.exe_81d9649612b05829476854bde71b8c3f1faf645c2b43cd78cc70df6bcbcd95e38f19d16ca2101de0b6a8fc31cac24c37
APT28_2016-10_ESET_Sednit A Mysterious Downloader669A02E330F5AFC55A3775C4C6959B3F9E9965CF.exe_a0f212fd0f103ca8beaf8362f74903a2a50cb9ce1f01ea335c95870484903734ba9cd732e7b3db16cd962878bac3a767
APT28_2016-10_ESET_Sednit A Mysterious Downloader6CAA48CD9532DA4CABD6994F62B8211AB9672D9E_bk.exe_9df2ddb2631ff5439c34f80ace40cd29f18fe2853ef0d4898085cc5581ae35b83fc6d1c46563dbc8da1b79ef9ef678eb
APT28_2016-10_ESET_Sednit A Mysterious Downloader7394EA20C3D510C938EF83A2D0195B767CD99ED7_x32.dll_d70f4e9d55698f69c5f63b1a2e1507eb471fbdc52b501dfe6275a32f89a8a6b02a2aa9a0e70937f5de610b4185334668
APT28_2016-10_ESET_Sednit A Mysterious Downloader9F3AB8779F2B81CAE83F62245AFB124266765939.exe_3430bf72d2694e428a73c84d5ac4a4b9b1900cb7d1216d1dbc19b4c6c8567d48215148034a41913cc6e59958445aebde
APT28_2016-10_ESET_Sednit A Mysterious DownloaderE8ACA4B0CFE509783A34FF908287F98CAB968D9E.exe_991ffdbf860756a4589164de26dd7ccf44e8d3ffa0989176e62b8462b3d14ad38ede5f859fd3d5eb387050f751080aa2
APT28_2016-10_ESET_Sednit A Mysterious DownloaderEE788901CD804965F1CD00A0AFC713C8623430C4.exe_93c589e9eaf3272bc0349d605b85c566f9c0303d07800ed7cba1394cd326bbe8f49c7c5e0e062be59a9749f6c51c6e69
APT28_2016-10_ESET_Sednit A Mysterious DownloaderEE788901CD804965F1CD00A0AFC713C8623430C46.exe_93c589e9eaf3272bc0349d605b85c566f9c0303d07800ed7cba1394cd326bbe8f49c7c5e0e062be59a9749f6c51c6e69
APT28_2016-10_ESET_Sednit A Mysterious Downloadereset-sednit-part3.pdfa7b4e01335aac544a12c6f88aab80cd92c7a60963b94b6fc924abdcb19da4d32f35c86cdfe2277b0081cd02c72435b48
APT28APT28_2016-10_ESET_Sednit Approaching the Target
APT28_2016-10_ESET_Sednit Approaching the Target015425010BD4CF9D511F7FCD0FC17FC17C23EEC1c2a0344a2bbb29d9b56d378386afcbed63d0b28114f6277b901132bc1cc1f541a594ee72f27d95653c54e1b73382a5f6
APT28_2016-10_ESET_Sednit Approaching the Target0F7893E2647A7204DBF4B72E50678545573C3A1035283c2e60a3cba6734f4f98c443d11fda43d39c749c121e99bba00ce809ca63794df3f704e7ad4077094abde4cf2a73
APT28_2016-10_ESET_Sednit Approaching the Target10686CC4E46CF3FFBDEB71DD565329A80787C439d7c471729bc124babf32945eb5706eb6bc8fec92eee715e77c762693f1ae2bbcd6a3f3127f1226a847a8efdc272e2cbc
APT28_2016-10_ESET_Sednit Approaching the Target17661A04B4B150A6F70AFDABE3FD9839CC56BEE8a579d53a1d29684de6d2c0cbabd525c56562e2ac60afa314cd463f771fcfb8be70f947f6e2b314b0c48187eebb33dd82
APT28_2016-10_ESET_Sednit Approaching the Target21835AAFE6D46840BB697E8B0D4AAC06DEC44F5B211b7100fd799e9eaabeb13cfa4462313d13f2e5b241168005425b15410556bcf26d04078da6b2ef42bc0c2be7654bf8
APT28_2016-10_ESET_Sednit Approaching the Target2663EB655918C598BE1B2231D7C018D8350A0EF9540e4a7a28ca1514e53c2564993d8d8731dd3e3c05fabbfeafbcb7f5616dba30bbb2b1fc77dba6f0250a2c3270c0dd6b
APT28_2016-10_ESET_Sednit Approaching the Target2C86A6D6E9915A7F38D119888EDE60B38AB1D69D56e011137b9678f1fcc54f9372198bae69d5123a277dc1f618be5edcc95938a0df148c856d2e1231a07e2743bd683e01
APT28_2016-10_ESET_Sednit Approaching the Target351C3762BE9948D01034C69ACED97628099A90B083cf67a5d2e68f9c00fbbe6d7d9203bf853dbbba09e2463c45c0ad913d15d67d15792d888f81b4908b2216859342aa04
APT28_2016-10_ESET_Sednit Approaching the Target3956CFE34566BA8805F9B1FE0D2639606A404CD4dffb22a1a6a757443ab403d61e760f0c0356f5fa9907ea060a7d6964e65f019896deb1c7e303b7ba04da1458dc73a842
APT28_2016-10_ESET_Sednit Approaching the Target4D5E923351F52A9D5C94EE90E6A00E6FCED733EF6159c094a663a171efd531b23a46716de00eaf295a28f5497dbb5cb8f647537b6e55dd66613505389c24e658d150972c
APT28_2016-10_ESET_Sednit Approaching the Target4FAE67D3988DA117608A7548D9029CADDBFB3EBFc6a80316ea97218df11e11125337233ab0b3f0d6e6c593e2a2046833080574f98566c48a1eda865b2e110cd41bf31a31
APT28_2016-10_ESET_Sednit Approaching the Target51B0E3CD6360D50424BF776B3CD673DD45FD0F97973e0c922eb07aad530d8a1de19c77557c4101caf833aa9025fec4f04a637c049c929459ad3e4023ba27ac72bde7638d
APT28_2016-10_ESET_Sednit Approaching the Target51E42368639D593D0AE2968BD2849DC20735C071dfc836e035cb6c43ce26ed870f61d7e813468ebe5d47d57d62777043c80784cbf475fb2de1df4546a307807bd2376b45
APT28_2016-10_ESET_Sednit Approaching the Target5C3E709517F41FEBF03109FA9D597F2CCC495956ac75fd7d79e64384b9c4053b37e5623f0ac7b666814fd016b3d21d7812f4a272104511f90ca666fa13e9fb6cefa603c7
APT28_2016-10_ESET_Sednit Approaching the Target63D1D33E7418DAF200DC4660FC9A59492DDD50D92d4eaa0331abbc6d867f5f979b2c890db4f755c91c2790f4ab9bac4ee60725132323e13a2688f3d8939ae9ed4793d014
APT28_2016-10_ESET_Sednit Approaching the Target69D8CA2A02241A1F88A525617CF18971C99FB63Bed601bbd4dd0e267afb0be840cb27c904c52957270e63efa4b81a1c6551c706b82951f019b682219096e67182a727eab
APT28_2016-10_ESET_Sednit Approaching the Target6FB3FD8C2580C84314B14510944700144A9E31DFf7ee38ca49cd4ae35824ce5738b6e58763911ebce691c4b7c9582f37f63f6f439d2ce56e992bfbdcf812132512e753eb
APT28_2016-10_ESET_Sednit Approaching the Target80DCA565807FA69A75A7DD278CEF1DAAEE34236E9863f1efc5274b3d449b5b7467819d280abda721c4f1ca626f5d8bd2ce186aa98b197ca68d53e81cf152c32230345071
APT28_2016-10_ESET_Sednit Approaching the Target842B0759B5796979877A2BAC82A33500163DED67291af793767f5c5f2dc9c6d44f1bfb59f50791f9909c542e4abb5e3f760c896995758a832b0699c23ca54b579a9f2108
APT28_2016-10_ESET_Sednit Approaching the Target8F99774926B2E0BF85E5147AACA8BBBBCC5F1D48c2988e3e4f70d5901b234ff1c1363dcc69940a20ab9abb31a03fcefe6de92a16ed474bbdff3288498851afc12a834261
APT28_2016-10_ESET_Sednit Approaching the Target90C3B756B1BB849CBA80994D445E96A9872D0CF521d63e99ed7dcd8baec74e6ce65c9ef3dfa8a85e26c07a348a854130c652dcc6d29b203ee230ce0603c83d9f11bbcacc
APT28_2016-10_ESET_Sednit Approaching the Target99F927F97838EB47C1D59500EE9155ADB55B806A07c8a0a792a5447daf08ac32d1e283e88f0674cb85f28b2619a6e0ddc74ce71e92ce4c3162056ef65ff2777104d20109
APT28_2016-10_ESET_Sednit Approaching the Target9FC43E32C887B7697BF6D6933E9859D29581EAD0a3c757af9e7a9a60e235d08d54740fbcbf28267386a010197a50b65f24e815aa527f2adbc53c609d2b2a4f999a639413
APT28_2016-10_ESET_Sednit Approaching the TargetA43EF43F3C3DB76A4A9CA8F40F7B2C89888F03997c2b1de614a9664103b6ff7f3d73f83dc2551c4e6521ac72982cb952503a2e6f016356e02ee31dea36c713141d4f3785
APT28_2016-10_ESET_Sednit Approaching the TargetA5FCA59A2FAE0A12512336CA1B78F857AFC06445f1d3447a2bff56646478b0adb7d0451c5a414a39851c4e22d4f9383211dfc080e16e2caffd90fa06dcbe51d11fdb0d6c
APT28_2016-10_ESET_Sednit Approaching the TargetA857BCCF4CC5C15B60667ECD865112999E1E56BA0c334645a4c12513020aaabc3b78ef9fe1b1143c0003c6905227df37d40aacbaecc2be8b9d86547650fe11bd47ca6989
APT28_2016-10_ESET_Sednit Approaching the TargetB4A515EF9DE037F18D96B9B0E48271180F5725B7afe09fb5a2b97f9e119f70292092604ed93f22d46090bfc19ef51963a781eeb864390c66d9347e86e03bba25a1fc29c5
APT28_2016-10_ESET_Sednit Approaching the TargetB7788AF2EF073D7B3FB84086496896E7404E625Eeda061c497ba73441994a30e36f55b1db1800cb1d4b755e05b0fca251b8c6da96bb85f8042f2d755b7f607cbeef58db8
APT28_2016-10_ESET_Sednit Approaching the TargetB8AABE12502F7D55AE332905ACEE80A10E3BC39991381cd82cdd5f52bbc7b30d34cb8d831a09ce8a9210d2530d6ce1d59bfae2ac617ac89558cdcdcac15392d176e70c8d
APT28_2016-10_ESET_Sednit Approaching the TargetC1EAE93785C9CB917CFB260D3ABF6432C6FDAF4D732fbf0a4ceb10e9a2254af59ae4f8806236a1bdd76ed90659a36f58b3e073623c34c6436d26413c8eca95f3266cc6fc
APT28_2016-10_ESET_Sednit Approaching the TargetC2E8C584D5401952AF4F1DB08CF4B6016874DDAC078755389b98d17788eb5148e23109a654c4ce98970a44f92be748ebda9fcfb7b30e08d98491e7735be6dd287189cea3
APT28_2016-10_ESET_Sednit Approaching the TargetC345A85C01360F2833752A253A5094FF421FC8391219318522fa28252368f58f36820ac2fbd5c2cf1c1f17402cc313fe3266b097a46e08f48b971570ef4667fbfd6b7301
APT28_2016-10_ESET_Sednit Approaching the TargetD3AA282B390A5CB29D15A97E0A046305038DBEFE18efc091b431c39d3e59be445429a7bceae782130b06d95f3373ff7d5c0977a8019960bdf80614c1aa7e324dc350428a
APT28_2016-10_ESET_Sednit Approaching the TargetD85E44D386315B0258847495BE1711450AC02D9Fc4ffab85d84b494e1c450819a0e9c7db500fa112a204b6abb365101013a17749ce83403c30cd37f7c6f94e693c2d492f
APT28_2016-10_ESET_Sednit Approaching the TargetD9989A46D590EBC792F14AA6FEC30560DFE931B18b031fce1d0c38d6b4c68d52b2764c7e4bcd11142d5b9f96730715905152a645a1bf487921dd65618c354281512a4ae7
APT28_2016-10_ESET_Sednit Approaching the TargetE5FB715A1C70402774EE2C518FB0E4E9CD3FDCFF072c692783c67ea56da9de0a53a60d11c431ae04c79ade56e1902094acf51e5bf6b54d65363dfa239d59f31c27989fde
APT28_2016-10_ESET_Sednit Approaching the TargetE742B917D3EF41992E67389CD2FE2AAB0F9ACE5B7764499bb1c4720d0f1d302f15be792c63047199037892f66dc083420e2fc60655a770756848c1f07adc2eb7d4a385d0
APT28_2016-10_ESET_Sednit Approaching the TargetED9F3E5E889D281437B945993C6C2A80C60FDEDC2dfc90375a09459033d430d046216d22261b0a5912965ea95b8ae02aae1e761a61f9ad3a9fb85ef781e62013d6a21368
APT28_2016-10_ESET_Sednit Approaching the TargetF024DBAB65198467C2B832DE9724CB70E24AF0DD7b1bfd7c1866040e8f618fe67b93bea5df47a939809f925475bc19804319652635848b8f346fb7dfd8c95c620595fe9f
APT28_2016-10_ESET_Sednit Approaching the TargetF3D50C1F7D5F322C1A1F9A72FF122CAC990881EE77089c094c0f2c15898ff0f021945148eb6620442c3ab327f3ccff1cc6d63d6ffe7729186f7e8ac1dbbbfddd971528f0
APT28_2016-10_ESET_Sednit Approaching the TargetF7608EF62A45822E9300D390064E667028B75DEA75f71713a429589e87cf2656107d2bfcb6fff95a74f9847f1a4282b38f148d80e4684d9c35d9ae79fad813d5dc0fd7a9
APT28_2016-10_ESET_Sednit Approaching the Targeteset-sednit-part1.pdfbae0221feefb37e6b81f5ca893864743b31b27aa0808aea5b0e8823ecb07402c0c2bbf6818a22457e146c97f685162b4
APT28APT28_2016-10_Sekoia_Rootkit analysisUse case on HideDRV
APT28_2016-10_Sekoia_Rootkit analysisUse case on HideDRV83E54CB97644DE7084126E702937F8C3A2486A2F_fsflt.sys_f8c8f6456c5a52ef24aa426e6b1216854bfe2216ee63657312af1b2507c8f2bf362fdf1d63c88faba397e880c2e39430
APT28_2016-10_Sekoia_Rootkit analysisUse case on HideDRV9F3AB8779F2B81CAE83F62245AFB124266765939_fsflt.13430bf72d2694e428a73c84d5ac4a4b9b1900cb7d1216d1dbc19b4c6c8567d48215148034a41913cc6e59958445aebde

Toolsmith Release Advisory: Sysmon v6 for Securitay

Sysmon just keeps getting better.
I'm thrilled to mention that @markrussinovich and @mxatone have released Sysmon v6.
When I first discussed Sysmon v2 two years ago it offered users seven event types.
Oh, how it's grown in the last two years, now with 19 events, plus an error event.
From Mark's RSA presentation we see the current listing with the three new v6 events highlighted.
Sysmon Events

"This release of Sysmon, a background monitor that records activity to the event log for use in security incident detection and forensics, introduces an option that displays event schema, adds an event for Sysmon configuration changes, interprets and displays registry paths in their common format, and adds named pipe create and connection events."

Mark's presentation includes his basic event recommendations for running Sysmon optimally.
Basic Event Recommendations
Basic Event Recommendations (Cont)

I strongly suggest you deploy using these recommendations.
A great way to get started is to use a Sysmon configuration template. Again, as Mark discussed at RSA, consider @SwiftOnSecurity's sysmonconfig-export.xml via GitHub. While there are a number of templates on GitHub, this one has "virtually every line commented and sections are marked with explanations, so it should also function as a tutorial for Sysmon and a guide to critical monitoring areas in Windows systems." Running Sysmon with it is as easy as:
sysmon.exe -accepteula -i sysmonconfig-export.xml

As a quick example of Sysmon's capabilities, and why you should always run it everywhere, consider the following driver installation scenario. While this is a non-malicious scenario that DFIR practitioners (rather than miscreants) will appreciate, the detection behavior resembles what you would see from kernel-based malware.
I fired up WinPMEM, the kernel mode driver for gaining access to physical memory included with Rekall, as follows:
Upon navigating to Applications and Services Logs/Microsoft/Windows/Sysmon/Operational in the Windows Event Viewer, I retrieved the expected event:

Event ID 6: Driver loaded
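Once events are flowing, harvesting these records at scale is straightforward. As a quick sketch (the export command and the record below are illustrative only; field names follow the Sysmon schema), you can export the channel with wevtutil and filter for Event ID 6 in Python:

```python
import xml.etree.ElementTree as ET

# Illustrative export, e.g. produced by:
#   wevtutil qe Microsoft-Windows-Sysmon/Operational /f:xml /e:Events > sysmon.xml
SAMPLE = """<Events>
  <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
    <System><EventID>6</EventID></System>
    <EventData>
      <Data Name="ImageLoaded">C:\\Users\\test\\winpmem.sys</Data>
      <Data Name="Signed">false</Data>
    </EventData>
  </Event>
</Events>"""

NS = "{http://schemas.microsoft.com/win/2004/08/events/event}"

def driver_loads(xml_text):
    """Return (ImageLoaded, Signed) pairs for Sysmon Event ID 6 records."""
    hits = []
    for event in ET.fromstring(xml_text).iter(NS + "Event"):
        if event.findtext(NS + "System/" + NS + "EventID") != "6":
            continue
        data = {d.get("Name"): d.text for d in event.iter(NS + "Data")}
        hits.append((data.get("ImageLoaded"), data.get("Signed")))
    return hits

print(driver_loads(SAMPLE))
```

An unsigned driver load like this one is exactly the kind of record worth alerting on.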
The best way to leave you to scramble off and deploy Sysmon broadly is with Mark's Best Practices and Tips, again from his RSA presentation.

Go forth and deploy!
Cheers...until next time.

Hack Naked News #111 – February 14, 2017

Microsoft delays Patch Tuesday, WordPress continues to fail at failing, Valve eradicates a Steam bug, ransomware that makes you do terrible things, and more. Jason Wood of Paladin Security joins us to talk about a father and son who created access to a supercomputer via voice commands!

Attacking the Windows NVIDIA Driver

Posted by Oliver Chang

Modern graphics drivers are complicated and present a large, promising attack surface for EoPs and sandbox escapes from processes that have access to the GPU (e.g. the Chrome GPU process). In this blog post we'll take a look at attacking the NVIDIA kernel mode Windows drivers, and a few of the bugs that I found. I did this research as part of a 20% project with Project Zero, during which a total of 16 vulnerabilities were discovered.

Kernel WDDM interfaces

The kernel mode component of a graphics driver is referred to as the display miniport driver. Microsoft’s documentation has a nice diagram that summarises the relationship between the various components:


In the DriverEntry() for display miniport drivers, a DRIVER_INITIALIZATION_DATA structure is populated with callbacks to the vendor implementations of functions that actually interact with the hardware, which is passed to dxgkrnl.sys (DirectX subsystem) via DxgkInitialize(). These callbacks can either be called by the DirectX kernel subsystem, or in some cases get called directly from user mode code.


A well known entry point for potential vulnerabilities here is the DxgkDdiEscape interface. This can be called straight from user mode, and accepts arbitrary data that is parsed and handled in a vendor specific way (essentially an IOCTL). For the rest of this post, we’ll use the term “escape” to denote a particular command that’s supported by the DxgkDdiEscape function.

NVIDIA has a whopping ~400 escapes here at the time of writing, so this was where I spent most of my time (the necessity of many of these being in the kernel is questionable):

// (names of these structs are made up by me)
// Represents a group of escape codes
struct NvEscapeRecord {
  DWORD action_num;
  DWORD expected_magic;
  void *handler_func;
  NvEscapeRecordInfo *info;
  _QWORD num_codes;
};

// Information about a specific escape code.
struct NvEscapeCodeInfo {
  DWORD code;
  DWORD unknown;
  _QWORD expected_size;
  WORD unknown_1;
};

NVIDIA implements their private data (pPrivateDriverData in the DXGKARG_ESCAPE struct) for each escape as a header followed by data. The header has the following format:

struct NvEscapeHeader {
  DWORD magic;
  WORD unknown_4;
  WORD unknown_6;
  DWORD size;
  DWORD magic2;
  DWORD code;
  DWORD unknown[7];
};

These escapes are identified by a 32-bit code (the first member of the NvEscapeCodeInfo struct above), and are grouped by their most significant byte (1 through 9).

There is some validation being done before each escape code is handled. In particular, each NvEscapeCodeInfo contains the expected size of the escape data following the header. This is validated against the size in the NvEscapeHeader, which itself is validated against the PrivateDriverDataSize field given to DxgkDdiEscape. However, it’s possible for the expected size to be 0 (usually when the escape data is expected to be variable sized) which means that the escape handler is responsible for doing its own validation. This has led to some bugs (1, 2).
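To make the validation chain concrete, here is a toy Python model of the logic described above (all names and the header length are my own inventions, mirroring the made-up structs, not NVIDIA's actual code):

```python
def validate_escape(expected_size, header_size, private_driver_data_size,
                    header_len=32):
    """Toy model of the escape size checks described above."""
    # dxgkrnl side: the header plus its declared payload must fit inside
    # the buffer described by PrivateDriverDataSize.
    if header_len + header_size > private_driver_data_size:
        return False
    # Driver side: non-zero expected sizes are enforced centrally...
    if expected_size != 0 and header_size != expected_size:
        return False
    # ...but expected_size == 0 means "variable sized", so the individual
    # escape handler must validate on its own, which is where bugs crept in.
    return True

assert validate_escape(16, 16, 48)      # fixed-size escape, sizes agree
assert not validate_escape(16, 8, 48)   # fixed-size escape, size mismatch
assert validate_escape(0, 8, 48)        # variable-sized: centrally "valid"
```

The third case is the dangerous one: the central checks pass, and safety now depends entirely on each handler's own diligence.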

Most of the vulnerabilities found (13 in total) in escape handlers were very basic mistakes, such as writing to user provided pointers blindly, disclosing uninitialised kernel memory to user mode, and incorrect bounds checking. There were also numerous issues that I noticed (e.g. OOB reads) that I didn’t report because they didn’t seem exploitable.


Another interesting entry point is the DxgkDdiSubmitBufferVirtual function, which is newly introduced in Windows 10 and WDDM 2.0 to support GPU virtual memory (deprecating the old DxgkDdiSubmitBuffer/DxgkDdiRender functions). This function is fairly complicated, and also accepts vendor specific data from the user mode driver for each command submitted. One bug was found here.


There are a few other WDDM functions that accept vendor-specific data, but nothing of interest was found in those after a quick review.

Exposed devices

NVIDIA also exposes some additional devices that can be opened by any user:

  • \\.\NvAdminDevice which appears to be used for NVAPI. A lot of the ioctl handlers seem to call into DxgkDdiEscape.
  • \\.\UVMLite{Controller,Process*}, likely related to NVIDIA’s “unified memory”. 1 bug was found here.
  • \\.\NvStreamKms, installed by default as part of GeForce Experience, but you can opt out during installation. It’s not exactly clear why this particular driver is necessary. 1 bug was found here also.

More interesting bugs

Most of the bugs I found were discovered through manual reversing and analysis, along with some custom IDA scripts. I also ended up writing a fuzzer, which was surprisingly successful given how simple it was.

While most of the bugs were rather boring (simple cases of missing validation), there were a few that were a bit more interesting.


The NvStreamKms.sys driver registers a process creation notification callback using the PsSetCreateProcessNotifyRoutineEx function. This callback checks whether newly created processes match image names that were previously set by sending IOCTLs.

This creation notification routine contained a bug:

(Simplified decompiled output)

wchar_t Dst[BUF_SIZE];

if ( cur->image_names_count > 0 ) {
  // info_ is the PPS_CREATE_NOTIFY_INFO that is passed to the routine.
  image_filename = info_->ImageFileName;
  buf = image_filename->Buffer;
  if ( buf ) {
    filename_length = 0i64;
    num_chars = image_filename->Length / 2;
    // Look for the filename by scanning backwards for a backslash.
    if ( num_chars ) {
      while ( buf[num_chars - filename_length - 1] != '\\' ) {
        ++filename_length;
        if ( filename_length >= num_chars )
          goto DO_COPY;
      }
    }
DO_COPY:
    buf += num_chars - filename_length;
    wcscpy_s(Dst, filename_length, buf);
    Dst[filename_length] = 0;
  }
}
This routine extracts the image name from the ImageFileName member of PS_CREATE_NOTIFY_INFO by scanning backwards for a backslash ('\'). The result is then copied to a stack buffer (Dst) using wcscpy_s, but the length passed is the length of the calculated name, not the length of the destination buffer.

Even though Dst is a fixed size buffer, this isn’t a straightforward overflow. Its size is bigger than 255 wchars, and for most Windows filesystems path components cannot be greater than 255 characters. Scanning for backslash is also valid for most cases because ImageFileName is a canonicalised path.

It is, however, possible to pass a UNC path that keeps forward slash ('/') as the path separator after being canonicalised (credit to James Forshaw for pointing me to this). This means we can get a filename of the form "aaa/bbb/ccc/..." and cause an overflow.

For example: CreateProcessW(L"\\\\?\\UNC\\\\DavWWWRoot\\aaaa/bbbb/cccc/blah.exe", …)
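The broken scan is easy to simulate outside the kernel. A toy Python model (my own simplification, equivalent to the backwards backslash scan in the decompiled code above):

```python
def scanned_filename(path):
    """Everything after the last backslash, like the vulnerable scan."""
    return path[path.rfind("\\") + 1:]

# A normal canonicalised path: the component fits the 255-char assumption.
normal = "C:\\Windows\\System32\\calc.exe"

# A UNC/WebDAV-style path that keeps '/' separators after canonicalisation:
# the whole slash-separated tail counts as one "filename".
unc = "\\\\server\\DavWWWRoot\\" + "/".join(["a" * 100] * 5) + "/blah.exe"

assert scanned_filename(normal) == "calc.exe"
assert len(scanned_filename(unc)) > 255  # overflows the fixed-size Dst buffer
```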

Another interesting note is that the wcslwr call following the bad copy doesn't actually limit the contents of the overflow (the only requirement is valid UTF-16). Since the calculated filename_length doesn't include the null terminator, wcscpy_s decides that the destination is too small and clears the destination string by writing a null byte at the beginning (after first copying the contents up to filename_length, so the overflow still happens). This means the wcslwr is useless, because this wcscpy_s call and that part of the code never worked to begin with.

Exploiting this is trivial, as the driver is not compiled with stack cookies (hacking like it’s 1999). A local privilege escalation exploit is attached in the original issue that sets up a fake WebDAV server to exploit the vulnerability (ROP, pivot stack to user buffer, ROP again to allocate rwx mem containing shellcode and jump to it).

Incorrect validation in UVMLiteController

NVIDIA’s driver also exposes a device at \\.\UVMLiteController that can be opened by any user (including from the sandboxed Chrome GPU process). The IOCTL handlers for this device write results directly to Irp->UserBuffer, which is the output pointer passed to DeviceIoControl (Microsoft’s documentation says not to do this). The IO control codes specify METHOD_BUFFERED, which means that the Windows kernel checks that the address range provided is writable by the user before passing it off to the driver.

However, these handlers lacked bounds checking for the output buffer, which means that a user mode context could pass a length of 0 with any arbitrary address (which passes the ProbeForWrite check) to result in a limited write-what-where (the “what” here is limited to some specific values: including 32-bit 0xffff, 32-bit 0x1f, 32-bit 0, and 8-bit 0).
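The flaw is easier to see as a toy model of the check (invented address range; real ProbeForWrite semantics are more involved, but zero-length probes succeed in the same way):

```python
USER_RANGE = (0x10000, 0x7FFFFFFEFFFF)  # invented user-mode address range

def probe_for_write(address, length):
    """Toy stand-in for the kernel's writability check."""
    if length == 0:
        return True  # an empty range is vacuously "writable"
    return USER_RANGE[0] <= address and address + length <= USER_RANGE[1]

KERNEL_ADDR = 0xFFFFF80000000000
assert not probe_for_write(KERNEL_ADDR, 4)  # real output buffers are rejected
assert probe_for_write(KERNEL_ADDR, 0)      # length 0 sails through
# ...and the handler then writes its fixed-size result (e.g. a 32-bit 0xffff)
# through the pointer anyway: the limited write-what-where described above.
```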

A simple privilege escalation exploit is attached in the original issue.

Remote attack vector?

Given the quantity of bugs discovered, I investigated whether any of them could be reached from a completely remote context, without having to compromise a sandboxed process first (e.g. through WebGL in a browser, or through video acceleration).

Luckily, this didn’t appear to be the case. This wasn’t too surprising, given that the vulnerable APIs here are very low level and only reached after going through many layers (for Chrome, libANGLE -> Direct3D runtime and user mode driver -> kernel mode driver), and generally called with valid arguments constructed in the user mode driver.

NVIDIA’s response

The nature of the bugs found showed that NVIDIA has a lot of work to do. Their drivers contained a lot of code which probably shouldn’t be in the kernel, and most of the bugs discovered were very basic mistakes. One of their drivers (NvStreamKms.sys) also lacks very basic mitigations (stack cookies) even today.

However, their response was mostly quick and positive. Most bugs were fixed well under the deadline, and it seems that they've been finding some bugs on their own internally. They also indicated that they've been working on re-architecting their kernel drivers for security, but weren't ready to share any concrete details.


Timeline

  • First bug reported to NVIDIA.
  • 6 of the bugs reported were fixed silently in the 372.90 release. Discussed patch gap issues with NVIDIA.
  • Patch released that includes a fix for the rest (all 14) of the bugs that were reported at the time (375.93).
  • Public bulletin released, and P0 bugs derestricted.
  • Realised that issue 911 wasn't fixed properly. Notified NVIDIA.
  • Fix for issue 911 released along with a bulletin.
  • Final two bugs fixed.

Patch gap

NVIDIA’s first patch, which included fixes for 6 of the bugs I reported, did not include a public bulletin (the release notes mention “security updates”). They had planned to release public details a month after the patch was released. We noticed this and let them know that we didn’t consider this good practice, as an attacker can reverse the patch to find the vulnerabilities before the public is made aware of the details, given this large window.

While the first 6 bugs fixed did not have details released for more than 30 days, the remaining 8 at the time had a patch released 5 days before the first bulletin was released. It looks like NVIDIA has been trying to reduce this gap, but based on recent bulletins it appears to be inconsistent.


Given the large attack surface exposed by graphics drivers in the kernel and the generally lower quality of third party code, it appears to be a very rich target for finding sandbox escapes and EoP vulnerabilities. GPU vendors should try to limit this by moving as much attack surface as they can out of the kernel.


The SANS Internet Storm Center did a diary article on PacketTotal, which you can find here. PacketTotal is a free site where you upload a pcap (up to 50 MB) and the site analyzes it, giving you a console view that includes malicious or suspicious activity as well as a breakout of HTTP, DNS, and other protocols. It also gives you a nice timeline graph showing the packets as they interact, which is really nice. Lastly, if you like graphs, you get an analytics page showing a breakout of stats on the traffic. You can find it at, yes,

VirusTotal += Webroot

We welcome the Webroot scanner to VirusTotal. This is a machine learning engine from the US. In the words of the company:

"Webroot SecureAnywhere Business Endpoint Protection is a cloud-driven anti-malware solution and was the first next generation solution to offer a full replacement to conventional AV when launched in 2011.
Rather than rely on static signatures to identify malicious files and process, Webroot uses real-time monitoring and analysis of the events occurring within a device. Then, by using the extensive resources of cloud-based computing, threat and behavioral intelligence, Webroot is able to predict with negligible false positives any signs of malicious behavior. Windows PE files submitted to VirusTotal will be processed by the Webroot PE Scanner, non-PE files will not be scanned.”

Webroot has expressed its commitment to follow the recommendations of AMTSO and, in compliance with our policy, facilitates this review by MRG Effitas, an AMTSO-member tester.

Does Reliable Real Time Detection Demand Prevention?

Chris Sanders started a poll on Twitter asking "Would you rather get a real-time alert with partial context immediately, or a full context alert delayed by 30 mins?" I answered by saying I would prefer full context delayed by 30 minutes. I also replied with the text at left, from my first book The Tao of Network Security Monitoring (2004). It's titled "Real Time Isn't Always the Best Time."

Dustin Webber then asked "if you have [indicators of compromise] IOC that merit 'real-time' notification then you should be in the business of prevention. Right?"

Long ago I decided to not have extended conversations over Twitter, as well as to not try to compress complex thoughts into 140 characters -- hence this post!

There is a difference, in my mind, between high-fidelity matching (using the vernacular from my newest book, The Practice of Network Security Monitoring, 50% off now with code RSAREADING) and prevention.

To Dustin's point, I agree that if it is possible to generate a match (or "alert," etc.) with 100% accuracy (or possibly near 100%, depending on the severity of the problematic event), i.e., with no chance or almost no chance of a false positive, then it is certainly worth seeking a preventive action for that problematic event. To use a phrase from the last decade, "if you can detect it, why can't you prevent it?"

However, there are likely cases where zero- or low-false positive events do not have corresponding preventive actions. Two come to mind.

First, although you can reliably detect a problem, you may not be able to do anything about it. The security team may lack the authority, or technical capability, to implement a preventive action.

Second, although you can reliably detect a problem, you may not want to do anything about it. The security team may desire to instead watch an intruder until such time that containment or incident mitigation is required.

This, then, is my answer to Dustin's question!

Crypt0l0cker Revival!

A couple of days ago a colleague of mine gave me a "brand new" piece of malicious content delivered through a single HTML page. The page had been sent to an email inbox as part of a larger attack campaign. I found that vector particularly fun, so I'd like to share some of the steps of a personal investigation path, taken not for professional purposes but just for fun.

At first sight the HTML page looks like the following image.

Figure1: Attack Vector. A simple HTML page

A white-background HTML page with a single line of text on it saying: "print this document please". But what document? Honestly, this is one of the ugliest "fake emails" I have ever seen. But let's move on and see what it really carries. Opening the HTML content with a simple editor, we see a suspicious obfuscated JavaScript. We are facing a first obfuscation stage.

Figure2: Obfuscated First Stage

Since JavaScript is an interpreted language, it is not hard to understand its behavior; indeed, after some rounds of "substitutions" and "concatenations" it is easy to get the following clear-text result, showing the end of the first stage.

Figure3: Clear Text First Stage
That script creates an additional "script tag" on the current document by injecting an external script from: "". The injected script is invoked with the following code signature: "saveAs(blob, 'image.js');", with 2 arguments:
  1. blob: the raw content of "big_encoded_data" (please refer to Figure3)
  2. image.js: the name under which the file is saved
In order to better understand what the function saveAs(blob, 'image.js') does, we need to analyze the external FileServer.js. The entry point of the external script is the function "saveAs(arg1, arg2)", which is defined as follows:

Figure4: FileServer.js Original Entry

saveAs(blob, name) is a simple wrapper function that hands off to the FileServer constructor, which is defined as follows:

Figure5: FileServer.js constructor

The script saves the "blob" content to the temporary folder, giving it a specific name (image.js in our case). As you might notice from the script content ("Apple do not allow, see "), if the victim opens the file with Safari/Mail the attack vector has no effect, since Safari/Mail does not allow the script to be triggered on the "" event. This is why I didn't see any file when I opened the infected HTML content. Back in the original script (Figure3), we see the saveAs function called on page load, so the resulting image.js is saved in the temporary local folder; in the case of email clients, it is triggered as soon as it is saved! So let's move on to our next stage: the big_encoded_data variable (Figure3), which is going to be saved as the image.js file. big_encoded_data carries a first obfuscation layer: the downloader is encoded in base64. Once decoded from base64 and beautified, the result looks like the following image.
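This decoding step is easy to reproduce offline. Below is a minimal Python sketch; the variable name big_encoded_data matches the one in the page, but the sample payload here is a hypothetical stand-in for the real (much longer) blob:

```python
import base64

# Hypothetical stand-in for the real "big_encoded_data" blob found in the page.
# In the sample, this string is thousands of characters long.
big_encoded_data = base64.b64encode(b"var url = 'hxxp://evil.example/payload';").decode()

# Decode exactly as the dropper does before saving the result as image.js.
decoded = base64.b64decode(big_encoded_data)

# The decoded bytes are the (still obfuscated) second-stage downloader script.
print(decoded.decode("utf-8"))
```

Running this against the real blob yields the still-obfuscated second stage shown in Figure6.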

Figure6: Stage 2 base64 decoded obfuscated downloader

The downloader is still obfuscated, this time by a large number of simple variables holding array-string lookups. It took me almost 45 minutes to decode the entire second-stage downloader. The resulting downloader is shown in the following image.
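This style of obfuscation, where every literal string is hidden behind an array lookup, can be unwound mechanically rather than by hand. The following Python sketch illustrates the idea; the array names and fragment contents are invented for illustration, not taken from the sample:

```python
import re

# Hypothetical second-stage fragment: every literal is an array lookup.
arrays = {
    "a": ["WScript", "Shell", "cmd.exe /c "],
    "b": ["GetSpecialFolder", "GetTempName"],
}
obfuscated = 'c = new ActiveXObject(a[0] + "." + a[1]); f = b[0]; g = b[1];'

def deobfuscate(src: str) -> str:
    """Replace every name[index] lookup with its literal string value."""
    def resolve(match):
        name, idx = match.group(1), int(match.group(2))
        return '"%s"' % arrays[name][idx]
    return re.sub(r"\b([ab])\[(\d+)\]", resolve, src)

print(deobfuscate(obfuscated))
```

Applying a pass like this repeatedly, then folding adjacent string concatenations, is essentially the manual process that took the 45 minutes described above.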

Figure7: Second Stage Downloader
A first check on the fileSystem API and on the Element Type is super interesting (at least to me): we are analyzing an attack targeting a specific file system, Windows native. The deobfuscated downloader grabs a file from "" and saves it to a temporary directory. Using ActiveXObject (Windows native), it saves the file and runs it through the command line c["run"]("cmd.exe /c " + f + g, "0");, where f holds the temporary folder, f = b["GetSpecialFolder"]("2");, and g holds the temporary name, g = b["GetTempName"]();.

This is the end of the second stage downloader.

The downloaded file is a PE executable, itself packed. Fortunately, the packer used is the PiMP stub by Nullsoft: a quite famous installer used by several software houses.

Figure8: NullSoft Installer

The PiMP installer takes .dlls and runs them as the resulting software. The embedded resources are compressed into its own body with a well-known algorithm: 7-Zip. Kation.DLL is the only DLL included in the dropped file, and so it is the DLL run by the PiMP installer. Kation imports ADVAPI32.DLL and KERNEL32.DLL, as you can see from Figure9. ADVAPI32 is a core Microsoft library which includes the Microsoft encryption functions such as EncryptFileA, EncryptFileW, and so on. It's not hard to guess a new ransomware infection from those API calls.

Figure9: Usage of Encryption Libraries

From a static analysis perspective it becomes clear that some of the used strings are dynamically allocated. For example, in sub_10001170 (frame 0xBC) several UTF-16 strings are involved in a decryption loop, showing the control flow passing to Etymology.Vs (Figure11).

Figure10: Setting the running pointer

Figure11: Decoding Functions

The real behavior is hidden in the Etymology.Vs encrypted file, which is included in the PiMP package as well. Running the infected sample discloses its real behavior, shown in Figure12.

Figure12: Ransom Request

Here we go: we have just discovered a brand new Crypt0L0cker! It asks for bitcoin (Figure13), of course. Looking at network communications, a Domain Name Generator Algorithm (DNGA) [wow, that sounds new for Crypt0L0cker!] fires up as soon as the dropped file is executed. It looks for valid registered subdomains belonging to  Until a valid Command and Control answers, the CryptoLocker client hides itself and performs simple DNS queries as follows:


The process of contacting the Command and Control in order to exchange keys and notify the attacker can be very time consuming; in some of my runs it took up to 2 hours, depending on the Command and Control available at the time. It would be very nice to have extra time to reverse the DNGA, but unfortunately my weekend time is running out.
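Although I did not finish reversing the real DNGA, the general shape of such algorithms can be sketched. The seed, alphabet, and hash construction below are purely hypothetical illustrations, not the recovered algorithm; they only show how a client can deterministically generate 6-character candidate labels (matching the .?????? pattern listed in the IoCs) and walk them until one resolves:

```python
import hashlib

def candidate_subdomains(seed: str, day: int, count: int = 10):
    """Deterministically derive 6-character candidate DNS labels.

    Purely illustrative: the real Crypt0L0cker algorithm was not recovered.
    """
    labels = []
    for i in range(count):
        digest = hashlib.sha256(f"{seed}:{day}:{i}".encode()).hexdigest()
        # Keep only letters so the label is a valid DNS name, then truncate.
        letters = [c for c in digest if c.isalpha()]
        labels.append("".join(letters)[:6])
    return labels

# The client would try each candidate until one resolves to a live C2.
for label in candidate_subdomains("hypothetical-seed", day=42):
    print(label + ".example")   # i.e., query <label>.<base-dns> via DNS
```

Because both client and attacker can run the same deterministic function, the attacker only needs to register one of the upcoming candidates to re-establish contact, which is why these queries keep firing until a valid Command and Control answers.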

Figure13: Ransom Request Web Page

The development language is French, and many pieces of code remind me of the "gaming world". The main Command and Control domain is registered in Moscow (RU) and the registrant is "privacy protected".

Results for Target:
Created Date :2017-02-07T12:37:10Z
Updated Date :2017-02-08T10:38:54Z
Results for Target:
Created Date :2017-02-07T12:37:10Z
Updated Date :2017-02-08T10:38:54Z

The ransom page (available at the following link) is registered through EPAG Domain Services GmbH (Bonn, Germany) and is written in French:

OK, let's have some brand new IoCs:

Malicious hashes:

Malicious urls:

- base dns:

.?????? (6 characters)


Enjoy your new IoCs.

Guest Post: Bamm Visscher on Detection

Yesterday my friend Bamm Visscher published a series of Tweets on detection. I thought readers might like to digest it as a lightly edited blog post. Here, then, is the first ever (as far as I can remember) guest post on TaoSecurity Blog. Enjoy.

When you receive new [threat] intel and apply it in your detection environment, keep in mind all three analysis opportunities: RealTime, Batch, and Hunting.

If your initial intelligence analysis produces high context and quality details, it's a ripe candidate for RealTime detection.

If analysts can quickly and accurately process events generated by the [RealTime] signature, it's a good sign the indicator should be part of RealTime detection. If an analyst struggles to determine if a [RealTime alert] has detected malicious activity, it's likely NOT appropriate for RealTime detection.

If [the threat] intelligence contains limited context and/or details, try leveraging Batch Analysis with scheduled data reports as a better detection technique. Use Batch Analysis to develop better context (both positive and negative hits) to identify better signatures for RealTime detection.

Hunting is the soft science of detection, and best done with a team of diverse skills. Intelligence, content development, and detection should all work together. Don't fear getting skunked on your hunting trips. Keep investing time. The rewards are cumulative. Be sure to pass Hunting rewards into Batch Analysis and RealTime detection operations in the form of improved context.

The biggest mistake organizations make is not placing emphasis outside of RealTime detection, and "shoe-horning" [threat] intelligence into RealTime operations. So called "Atomic Indicators" tend to be the biggest violator of shoe-horning. Atomic indicators are easy to script into signature driven detection devices, but leave an analyst wondering what he is looking at and for.

Do not underestimate the NEGATIVE impact of GOOD [threat] intelligence inappropriately placed into RealTime operations! Mountains of indiscernible events will lead to analyst fatigue and increase the risk of a good analyst missing a real incident.

Are my passwords freely available on the Internet? Four actions that can minimize damage

Frequently we hear of large data breaches from email, social networking, news, and other types of websites of which we are members. Many of us may have been prompted by the site owner to change our password when the site suffered a breach, and may even have received a breach notification email.

It would however be useful to have a service which could tell us, anytime we wished, whether our passwords were available in plain text online. The good news is that security blogger Troy Hunt has set up a site   Here you can enter your email id (a common login credential) and find out whether the corresponding password was exposed on breached sites. The bad news is that it covers only data breaches where the hacker has dumped the compromised list of passwords on paste sites such as Pastebin. This represents a small fraction of the passwords exposed, and in all probability allowed a window of time for the hacker to gain access to your account before the breach was uncovered. It also allows anyone (friend, foe, bully, ex-partner, relative, competitor, or colleague) who knows your email id to check for the password, and selectively target you.
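One safer way to run such a check yourself, without ever handing the plain-text password to a third party, is to compare hashes against a locally downloaded breach corpus. The sketch below is only an illustration of that idea (the sample corpus is invented), not a description of how Troy Hunt's service works internally:

```python
import hashlib

# Hypothetical local corpus of breached password hashes (SHA-1, upper-case hex),
# e.g. loaded from a downloaded dump file with one hash per line.
breached_hashes = {
    hashlib.sha1(b"password123").hexdigest().upper(),
    hashlib.sha1(b"letmein").hexdigest().upper(),
}

def is_breached(password: str) -> bool:
    """Return True if the password's SHA-1 hash appears in the breach corpus."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    return digest in breached_hashes

print(is_breached("password123"))
print(is_breached("correct horse battery staple"))
```

Working with hashes means the plain-text password never leaves your machine, which avoids the targeting risk described above.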

My advice to all Cybercitizens in general, but more specifically after you discover that your password has been exposed, is to:

1.    Never reuse that exposed password, and never reuse a password on multiple sites. A single exposure can have a cascading effect on the compromise of your online assets. If you have used the same password on multiple sites, quickly change the password on all of them.
2.    Use two-factor authentication, which a large majority of sites offer, to limit the use of disclosed passwords.
3.    Change your passwords once every 3 months to limit the exposure window. In large dumps the hacker may take time to target your account, and if you have changed your password by then, you get lucky.
4.    Quickly change passwords once you are aware that there has been a breach.

Bejtlich Books Explained

A reader asked me to explain the differences between two of my books. I decided to write a public response.

If you visit the TaoSecurity Books page, you will see two different types of books. The first type involves books which list me as author or co-author. The second involves books to which I have contributed a chapter, section, or foreword.

This post will only discuss books which list me as author or co-author.

In July 2004 I published The Tao of Network Security Monitoring: Beyond Intrusion Detection. This book was the result of everything I had learned since 1997-98 regarding detecting and responding to intruders, primarily using network-centric means. It is the most complete examination of NSM philosophy available. I am particularly happy with the NSM history appendix. It cites and summarizes influential computer security papers over the four decade history of NSM to that point.

The main problem with the Tao is that certain details of specific software versions are very outdated. Established software like Tcpdump, Argus, and Sguil function much the same way, and the core NSM data types remain timeless. You would not be able to use the Bro chapter with modern Bro versions, for example. Still, I recommend anyone serious about NSM read the Tao.

The introduction describes the Tao using these words:

Part I offers an introduction to Network Security Monitoring, an operational framework for the collection, analysis, and escalation of indications and warnings (I&W) to detect and respond to intrusions.   Part I begins with an analysis of the terms and theory held by NSM practitioners.  The first chapter discusses the security process and defines words like security, risk, and threat.  It also makes assumptions about the intruder and his prey that set the stage for NSM operations.  The second chapter addresses NSM directly, explaining why NSM is not implemented by modern NIDS' alone.  The third chapter focuses on deployment considerations, such as how to access traffic using hubs, taps, SPAN ports, or inline devices.  

Part II begins an exploration of the NSM “product, process, people” triad.  Chapter 4 is a case study called the “reference intrusion model.”  This is an incident explained from the point of view of an omniscient observer.  During this intrusion, the victim collected full content data in two locations.  We will use those two trace files while explaining the tools discussed in Part II.  Following the reference intrusion model, I devote chapters to each of the four types of data which must be collected to perform network security monitoring – full content, session, statistical, and alert data.  Each chapter describes open source tools tested on the FreeBSD operating system and available on other UNIX derivatives.  Part II also includes a look at tools to manipulate and modify traffic.  Featured in Part II are little-discussed NIDS' like Bro and Prelude, and the first true open source NSM suite, Sguil.

Part III continues the NSM triad by discussing processes.  If analysts don’t know how to handle events, they’re likely to ignore them.  I provide best practices in one chapter, and follow with a second chapter explicitly for technical managers.  That material explains how to conduct emergency NSM in an incident response scenario, how to evaluate monitoring vendors, and how to deploy a NSM architecture.

Part IV is intended for analysts and their supervisors.  Entry level and intermediate analysts frequently wonder how to move to the next level of their profession.  I offer some guidance for the five topics with which a security professional should be proficient: weapons and tactics, telecommunications, system administration, scripting and programming, and management and policy.  The next three chapters offer case studies, showing analysts how to apply NSM principles to intrusions and related scenarios.

Part V is the offensive counterpart to the defensive aspects of Parts II, III, and IV.  I discuss how to attack products, processes, and people.  The first chapter examines tools to generate arbitrary packets, manipulate traffic, conduct reconnaissance, and exploit flaws in Cisco, Solaris, and Microsoft targets.  In a second chapter I rely on my experience performing detection and response to show how intruders attack the mindset and procedures upon which analysts rely.

An epilogue on the future of NSM follows Part V.  The appendices feature several TCP/IP protocol header charts and explanations.  I also wrote an intellectual history of network security, with abstracts of some of the most important papers written during the last twenty-five years.  Please take the time to at least skim this appendix.  You'll see that many of the “revolutionary ideas” heralded in the press were in some cases proposed decades ago.

The Tao lists as 832 pages. I planned to write 10 more chapters, but my publisher and I realized that we needed to get the Tao out the door. ("Real artists ship.") I wanted to address ways to watch traffic leaving the enterprise in order to identify intruders, rather than concentrating on inbound traffic, as was popular in the 1990s and 2000s. Therefore, I wrote Extrusion Detection: Security Monitoring for Internal Intrusions.

I've called the Tao "the Constitution" and Extrusion "the Bill of Rights." These two books were written in 2004-2005, so they are tightly coupled in terms of language and methodology. Because Extrusion is tied more closely with data types, and less with specific software, I think it has aged better in this respect.

The introduction describes Extrusion using these words:

Part I mixes theory with architectural considerations.  Chapter 1 is a recap of the major theories, tools, and techniques from The Tao.  It is important for readers to understand that NSM has a specific technical meaning and that NSM is not the same process as intrusion detection.  Chapter 2 describes the architectural requirements for designing a network best suited to control, detect, and respond to intrusions.  Because this chapter is not written with specific tools in mind, security architects can implement their desired solutions regardless of the remainder of the book.  Chapter 3 explains the theory of extrusion detection and sets the stage for the remainder of the book.  Chapter 4 describes how to gain visibility to internal traffic.  Part I concludes with Chapter 5, original material by Ken Meyers explaining how internal network design can enhance the control and detection of internal threats.

Part II is aimed at security analysts and operators; it is traffic-oriented and requires basic understanding of TCP/IP and packet analysis.  Chapter 6 offers a method of dissecting session and full content data to unearth unauthorized activity.  Chapter 7 offers guidance on responding to intrusions, from a network-centric perspective.  Chapter 8 concludes Part II by demonstrating principles of network forensics.

Part III collects case studies of interest to all types of security professionals.  Chapter 9 applies the lessons of Chapter 6 and explains how an internal bot net was discovered using Traffic Threat Assessment.  Chapter 10 features analysis of IRC bot nets, contributed by LURHQ analyst Michael Heiser. 

An epilogue points to future developments.  The first appendix, Appendix A, describes how to install Argus and NetFlow collection tools to capture session data.  Appendix B explains how to install a minimal Snort deployment in an emergency.  Appendix C, by Tenable Network Security founder Ron Gula, examines the variety of host and vulnerability enumeration techniques available in commercial and open source tools.  The book concludes with Appendix D, where Red Cliff Consulting expert Rohyt Belani offers guidance on internal host enumeration using open source tools.

At the same time I was writing Tao and Extrusion, I was collaborating with my friends and colleagues Keith Jones and Curtis Rose on a third book, Real Digital Forensics: Computer Security and Incident Response. This was a ground-breaking effort, published in October 2005. What made this book so interesting is that Keith, Curtis and I created workstations running live software, compromised each one, and then provided forensic evidence for readers on a companion DVD. 

This had never been done in book form, and after surviving the process we understood why! The legal issues alone were enough to almost make us abandon the effort. Microsoft did not want us to "distribute" a forensic image of a Windows system, so we had to zero out key Windows binaries to satisfy their lawyers. 

The primary weakness of the book in 2017 is that operating systems have evolved, and many more forensics books have been written. It continues to be an interesting exercise to examine the forensic practices advocated by the book to see how they apply to more modern situations.

This review of the book includes a summary of the contents:

• Live incident response (collecting and analyzing volatile and nonvolatile data; 72 pp.)
• Collecting and analyzing network-based data (live network surveillance and data analysis; 87 pp.)
• Forensic duplication of various devices using commercial and open source tools (43 pp.)
• Basic media analysis (deleted data recovery, metadata, hash analysis, “carving”/signature analysis, keyword searching, web browsing history, email, and registry analyses; 96 pp.)
• Unknown tool/binary analysis (180 pp.)
• Creating the “ultimate response CD” (response toolkit creation; 31 pp.)
• Mobile device and removable media forensics (79 pp.)
• On-line-based forensics (tracing emails and domain name ownership; 30 pp.)
• Introduction to Perl scripting (12 pp.)

After those three titles, I was done with writing for a while. However, in 2012 I taught a class for Black Hat in Abu Dhabi. I realized many of the students lacked the fundamental understanding of how networks operated and how network security monitoring could help them detect and respond to intrusions. I decided to write a book that would explain NSM from the ground up. While I assumed the reader would have familiarity with computing and some security concepts, I did not try to write the book for existing security experts.

The result was The Practice of Network Security Monitoring: Understanding Incident Detection and Response. If you are new to NSM, this is the first book you should buy and read. It is a very popular title and it distills my philosophy and practice into the most digestible form, in 376 pages.

The main drawback of the book is the integration of Security Onion coverage. SO is a wonderful open source suite, partly because it is kept so current. That makes it difficult for a print book to track changes in the software installation and configuration options. While you can still use PNSM to install and use SO, you are better off relying on Doug Burks' excellent online documentation. 

PNSM is an awesome resource for learning how to use SO and other tools to detect and respond to intrusions. I am particularly pleased with chapter 9, on NSM operations. It is a joke that I often tell people to "read chapter 9" when anyone asks me about CIRTs.

The introduction describes PNSM using these words:

Part I, “Getting Started,” introduces NSM and how to think about sensor placement.

• Chapter 1, “Network Security Monitoring Rationale,” explains why NSM matters, to help you gain the support needed to deploy NSM in your environment.
• Chapter 2, “Collecting Network Traffic: Access, Storage, and Management,”addresses the challenges and solutions surrounding physical access to network traffic.

Part II, “Security Onion Deployment,” focuses on installing SO on hardware and configuring SO effectively.

• Chapter 3, “Stand-alone NSM Deployment and Installation,” introduces SO and explains how to install the software on spare hardware to gain initial NSM capability at low or no cost.
• Chapter 4, “Distributed Deployment,” extends Chapter 3 to describe how to install a dispersed SO system.
• Chapter 5, “SO Platform Housekeeping,” discusses maintenance activities for keeping your SO installation running smoothly. 

Part III, “Tools,” describes key software shipped with SO and how to use these applications.

• Chapter 6, “Command Line Packet Analysis Tools,” explains the key features of Tcpdump, Tshark, Dumpcap, and Argus in SO.
• Chapter 7, “Graphical Packet Analysis Tools,” adds GUI-based software to the mix, describing Wireshark, Xplico, and NetworkMiner.
• Chapter 8, “NSM Consoles,” shows how NSM suites, like Sguil, Squert, Snorby, and ELSA, enable detection and response workflows.

Part IV, “NSM in Action,” discusses how to use NSM processes and data to detect and respond to intrusions.

• Chapter 9, “NSM Operations,” shares my experience building and leading a global computer incident response team (CIRT).
• Chapter 10, “Server-side Compromise,” is the first NSM case study, wherein you’ll learn how to apply NSM principles to identify and validate the compromise of an Internet-facing application.
• Chapter 11, “Client-side Compromise,” is the second NSM case study, offering an example of a user being victimized by a client-side attack.
• Chapter 12, “Extending SO,” concludes the main text with coverage of tools and techniques to expand SO’s capabilities.
• Chapter 13, “Proxies and Checksums,” concludes the main text by addressing two challenges to conducting NSM.

The Conclusion offers a few thoughts on the future of NSM, especially with respect to cloud environments. 

The Appendix, “SO Scripts and Configuration,” includes information from SO developer Doug Burks on core SO configuration files and control scripts.

I hope this post helps explain the different books I've written, as well as their applicability to modern security scenarios.

VirusTotal += Endgame

We welcome the Endgame scanner to VirusTotal. This is a machine learning engine from the US. In the words of the company:

"Endgame is a leading endpoint security platform that enables enterprises to close the protection gap against advanced attacks as well as detect and eliminate entrenched adversaries. Endgame's endpoint security platform leverages a series of layered defenses to prevent, detect and respond to threats through a unified endpoint agent. The IOC-independent platform covers the entire kill chain, leveraging machine learning and behavioral techniques to uncover, in real-time, unique attacks that evade traditional defenses and respond precisely without disrupting normal business operations. The malware detection and prevention capability, integrated in VirusTotal today, represents a key element in this layered defense. The machine learning model exposed in VirusTotal detects never-before-seen malware with high efficacy in an extremely lightweight implementation."

Endgame has expressed its commitment to follow the recommendations of AMTSO and, in compliance with our policy, facilitates this review by SE Labs, an AMTSO-member tester.

Lifting the (Hyper) Visor: Bypassing Samsung’s Real-Time Kernel Protection

Posted by Gal Beniamini, Project Zero

Traditionally, the operating system’s kernel is the last security boundary standing between an attacker and full control over a target system. As such, additional care must be taken in order to ensure the integrity of the kernel. First, when a system boots, the integrity of its key components, including that of the operating system’s kernel, must be verified. This is achieved on Android by the verified boot chain. However, simply booting an authenticated kernel is insufficient—what about maintaining the integrity of the kernel while the system is executing?

Imagine a scenario where an attacker is able to find and exploit a vulnerability in the operating system’s kernel. Using such a vulnerability, the attacker may attempt to subvert the integrity of the kernel itself, either by modifying the contents of its code, or by introducing new attacker-controlled code and running it within the context of the operating system. Even more subtly, the attacker may choose to modify the data structures used by the operating system in order to alter its behaviour (for example, by granting excessive rights to select processes). As the kernel is in charge of managing all memory translations, including its own, there is no mechanism in place preventing an attacker within the same context from doing so.

However, in keeping with the concept of “defence in depth”, additional layers may be added in order to safeguard the kernel against such would-be attackers. If stacked correctly, these layers may be designed in such a way which either severely limits or simply prevents an attacker from subverting the kernel’s integrity.

In the Android ecosystem, Samsung provides a security hypervisor which aims to tackle the problem of ensuring the integrity of the kernel during runtime. The hypervisor, dubbed “Real-Time Kernel Protection” (RKP), was introduced as part of Samsung KNOX. In this blog post we’ll take an in-depth look at the inner workings of RKP and present multiple vulnerabilities which allowed attackers to subvert each of RKP’s security mechanisms. We’ll also see how the design of RKP could be fortified in order to prevent future attacks of this nature, making exploitation of RKP much harder.

As always, all the vulnerabilities in this article have been disclosed to Samsung, and the fixes have been made available in the January SMR.

I would like to note that in addition to addressing the reported issues, the Samsung KNOX team has been extremely helpful and open to discussion. This dialogue helped ensure that the issues were diagnosed correctly and the root causes identified. Moreover, the KNOX team has reviewed this article in advance, and have provided key insights into future improvements planned for RKP based on this research.

I would especially like to thank Tomislav Suchan from the Samsung KNOX team for helping address every single query I had and for providing deep insightful responses. Tomislav’s hard work ensured that all the issues were addressed correctly and fully, leaving no stone unturned.

HYP 101

Before we can start exploring the architecture of RKP, we first need a basic understanding of the virtualisation extensions on ARMv8. In the ARMv8 architecture, a new concept of exception levels was introduced. Generally, discrete components run under different exception levels - the more privileged the component, the higher its exception level.


In this blog post we’ll only focus on exception levels within the “Normal World”. Within this context, EL0 represents user-mode processes running on Android, EL1 represents Android’s Linux kernel, and EL2 (also known as “HYP” mode) represents the RKP hypervisor.

Recall that when user-mode processes (EL0) wish to interact with the operating system’s kernel (EL1), they must do so by issuing “Supervisor Calls” (SVCs), triggering exceptions which are then handled by the kernel. Much in the same way, interactions with the hypervisor (EL2) are performed by issuing “Hypervisor Calls” (HVCs).

Additionally, the hypervisor may control key operations that are performed within the kernel, by using the “Hypervisor Configuration Register” (HCR). This register governs the virtualisation features that enable EL2 to interact with code running in EL1. For example, setting certain bits in the HCR will cause the hypervisor to trap specific operations which would normally be handled by EL1, enabling the hypervisor to choose whether to allow or disallow the requested operation.

Lastly, the hypervisor is able to implement an additional layer of memory translation, called a “stage 2 translation”. Instead of using the regular model where the operating system’s translation table maps between virtual addresses (VAs) and physical addresses (PAs), the translation process is split in two.

First, the EL1 translation tables are used in order to map a given VA to an intermediate physical address (IPA) - this is called a “stage 1 translation”. In the process, the access controls present in the translation are also applied, including access permission (AP) bits, execute never (XN) and privileged execute never (PXN).

Then, the resulting IPA is translated to a PA by performing a “stage 2 translation”. This mapping is performed by using a translation table which is accessible to EL2, and is inaccessible to code running in EL1. By using this 2-stage translation regime, the hypervisor is able to prevent access to certain key regions of physical memory, which may contain sensitive data that should be kept secret from EL1.


Creating a Research Platform

As we just saw in our “HYP 101” lesson, communicating with EL2 explicitly is done by issuing HVCs. Unlike SVCs which may be freely issued by code running in EL0, HVCs can only be triggered by code running in EL1. Since RKP runs in EL2 and exposes the vast majority of its functionality by means of commands which can be triggered from HVCs, we first need a platform from which we are able to send arbitrary HVCs.

Fortunately, in a recent blog post, we already covered an exploit that allowed us to elevate privileges into the context of system_server. This means that all that’s left before we can start investigating RKP and interacting with EL2 is to find an additional vulnerability that allows escalation from an already privileged context (such as system_server) to the context of the kernel.

Luckily, simply surveying the attack surface exposed to such privileged contexts revealed a vast amount of relatively straightforward vulnerabilities, any of which could be used to gain some foothold in EL1. For the purpose of this research, I’ve decided to exploit the most convenient of these: a simple stack overflow in a sysfs entry, which could be used to gain arbitrary control over the stack contents for a kernel thread. Once we have control over the stack’s contents, we can construct a ROP payload that prepares arguments for a function call in the kernel, calls that function, and returns the results back to user-space.

[Figure: exploitation flow - a stack overflow in a sysfs entry leading to a ROP payload which calls a kernel function and returns the results to user-space]

In order to ease exploitation, we can wrap the entire process of creating a ROP stack which calls a kernel function and returns the results to user-space, into a single function, which we’ll call “execute_in_kernel”. Combined with our shellcode wrapper, which converts normal-looking C code to shellcode that can be injected into system_server, we are now able to freely construct and run code which is able to invoke kernel functions on demand.
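
As a rough illustration of the idea only - the gadget addresses and stack layout below are entirely hypothetical, and real AArch64 chains use register-loading gadgets rather than x86-style pops - such a primitive might pack its ROP stack along these lines:

```python
import struct

# Hypothetical gadget addresses - placeholders for illustration only.
LOAD_X0_RET = 0xFFFFFFC000123456   # gadget that loads x0 from the stack
RET_TO_USER = 0xFFFFFFC000ABCDEF   # stub that copies results back to user-space

def build_rop_stack(kernel_func, arg0):
    """Pack a fake stack: load arg0, 'call' kernel_func, return results."""
    chain = [LOAD_X0_RET, arg0, kernel_func, RET_TO_USER]
    return b"".join(struct.pack("<Q", entry) for entry in chain)

stack = build_rop_stack(0xFFFFFFC000200000, 0x1337)
assert len(stack) == 4 * 8   # four 64-bit little-endian stack slots
```

The value of wrapping this in a single function is that every kernel API becomes callable as if it were a local function, which is exactly what makes the research platform convenient.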

[Figure: the research platform - shellcode injected into system_server, combined with execute_in_kernel to invoke kernel functions]

Putting it all together, we can start investigating and interacting with RKP using this robust research platform. The rest of the research detailed in this blog post was conducted on a fully updated Galaxy S7 Edge (SM-G935F, XXS1APG3, Exynos chipset), using this exact framework in order to inject code into system_server using the first exploit, and then run code in the kernel using the second exploit.

Finally, now that we’ve laid down all the needed foundations, let’s get cracking!

Mitigation #1 - KASLR

With the introduction of KNOX v2.6, Samsung devices implement Kernel Address Space Layout Randomisation (KASLR). This security feature introduces a random “offset”, generated each time the device boots, by which the base address of the kernel is shifted. Normally, the kernel is loaded into a fixed physical address, which corresponds to a fixed virtual address in the VAS of the kernel. By introducing KASLR, all the kernel’s memory, including its code, is shifted by this randomised offset (also known as a “slide”).

While KASLR may be a valid mitigation against remote attackers aiming to exploit the kernel, it is very hard to implement in a robust way against local attackers. In fact, there has been some very interesting recent research on the subject which manages to defeat KASLR without requiring any software bug (e.g., by observing timing differences).

While those attacks are quite interesting in their own right, it should be noted that bypassing KASLR can often be achieved much more easily. Recall that the entire kernel is shifted by a single “slide” value - this means that leaking any pointer in the kernel which resides at a known offset from the kernel’s base address would allow us to easily calculate the slide’s value.

The Linux kernel does include mechanisms intended to prevent the leakage of such pointers to user-space. One such mitigation is enforced by ensuring that every time a pointer’s value is written by the kernel, it is printed using a special format specifier: “%pK”. Then, depending on the value of kptr_restrict, the kernel may anonymise the printed pointer. In all Android devices that I’ve encountered, kptr_restrict is configured correctly, indeed ensuring the “%pK” pointers are anonymised.

Be that as it may, all we need is to find a single pointer which a kernel developer neglected to anonymise. In Samsung’s case, this turned out to be rather amusing… The pm_qos debugfs entry, which is readable by system_server, included the following code snippet responsible for outputting the entry’s contents:

static void pm_qos_debug_show_one(struct seq_file *s, struct pm_qos_object *qos)
{
   struct plist_node *p;
   unsigned long flags;

   spin_lock_irqsave(&pm_qos_lock, flags);

   seq_printf(s, "%s\n", qos->name);
   seq_printf(s, "   default value: %d\n", qos->constraints->default_value);
   seq_printf(s, "   target value: %d\n", qos->constraints->target_value);
   seq_printf(s, "   requests:\n");
   plist_for_each(p, &qos->constraints->list)
       seq_printf(s, "      %pk(%s:%d): %d\n",
                     container_of(p, struct pm_qos_request, node),
                     (container_of(p, struct pm_qos_request, node))->func,
                     (container_of(p, struct pm_qos_request, node))->line,
                     p->prio);

   spin_unlock_irqrestore(&pm_qos_lock, flags);
}

Unfortunately, the anonymisation format specifier is case sensitive… Using a lowercase “k”, as in the code above, causes the pointer to be output without the anonymisation offered by “%pK” (perhaps this serves as a good example of how fragile KASLR is). Regardless, this allows us to simply read the contents of pm_qos and, since the leaked pointer resides at a known offset from the kernel’s base address, subtract that offset to recover the value of the KASLR slide.
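
Concretely, the slide recovery is a single subtraction (the addresses below are hypothetical placeholders, not the real symbol values):

```python
# The address the leaked object occupies in the non-randomised kernel image
# (hypothetical value - in practice it is read from the matching kernel binary).
STATIC_ADDR = 0xFFFFFFC000F00000

def kaslr_slide(leaked_ptr):
    # The entire kernel shifts by one slide, so a single leak reveals it.
    return leaked_ptr - STATIC_ADDR

slide = kaslr_slide(0xFFFFFFC001100000)   # pointer read from pm_qos
assert slide == 0x200000
# Any other kernel symbol can now be rebased by adding the same slide.
```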

Mitigation #2 - Loading Arbitrary Kernel Code

Preventing the allocation of new kernel code is one of the main mitigations enforced by RKP. In addition, RKP aims to protect all existing kernel code against modification. These mitigations are achieved by enforcing the following set of rules:

  1. All pages, with the exception of the kernel’s code, are marked as “Privileged Execute Never” (PXN)
  2. Kernel data pages are never marked executable
  3. Kernel code pages are never marked writable
  4. All kernel code pages are marked as read-only in the stage 2 translation table
  5. All memory translation entries (PGDs, PMDs and PTEs) are marked as read-only for EL1

While these rules appear to be quite robust, how can we be sure that they are being enforced correctly? Admittedly, the rules are laid out nicely in the RKP documentation, but that’s not a strong enough guarantee...

Instead of exercising trust, let’s start by challenging the first assertion; namely, that with the exception of the kernel’s code, all other pages are marked as PXN. We can check this assertion by looking at the stage 1 translation tables in EL1. ARMv8 supports the use of two translation tables in EL1, TTBR0_EL1 and TTBR1_EL1. TTBR0_EL1 is used to hold the mappings for user-space’s VAS, while TTBR1_EL1 holds the kernel’s global mappings.

In order to analyse the contents of the EL1 stage 1 translation table used by the kernel, we’ll need to first locate the physical address of the translation table itself. Once we find the translation table, we can use our execute_in_kernel primitive in order to iteratively execute a “read gadget” in the kernel, allowing us to read out the contents of the translation table.

There is one tiny snag, though - how will we be able to retrieve the location of the translation table? To do so, we’ll need to find a gadget which allows us to read TTBR1_EL1 without causing any adverse effects in the kernel.

Unfortunately, combing over the kernel’s code reveals a depressing fact - it seems as though such gadgets are quite rare. While there are some functions that do read TTBR1_EL1, they also perform additional operations, resulting in unwanted side effects. In contrast, RKP’s code segments seem to be rife with such gadgets - in fact, RKP contains small gadgets to read and write nearly every single control register belonging to EL1.

Perhaps we could somehow use this fact to our advantage? Digging deeper into the kernel’s code (init/main.c) reveals that rather perplexingly, on Exynos devices (as opposed to Qualcomm-based devices) RKP is bootstrapped by the EL1 kernel. This means that instead of booting EL2 directly from EL3, it seems that EL1 is booted first, and only then performs some operations in order to bootstrap EL2.

This bootstrapping is achieved by embedding the entire binary containing RKP’s code in the EL1 kernel’s code segment. Then, once the kernel boots, it copies the RKP binary to a predefined physical range and transitions to TrustZone in order to bootstrap and initialise RKP.

[Figure: RKP bootstrap flow - the EL1 kernel copies the embedded RKP binary to a predefined physical range and transitions to TrustZone to initialise EL2]

By embedding the RKP binary within the kernel’s text segment, it becomes a part of the memory range executable from EL1. This allows us to leverage all of the gadgets in the embedded RKP binary - making life that much easier.

Equipped with this new knowledge, we can now create a small program which reads the location of the stage 1 translation table using the gadgets from the RKP binary directly in EL1, and subsequently dumps and parses the table’s contents. Since we are interested in bypassing the code loading mitigations enforced by RKP, we’ll focus on the physical memory ranges containing the Linux kernel. After writing and running this program, we are faced with the following output:

[256] L1 table [PXNTable: 0, APTable: 0]
 [  0] 0x080000000-0x080200000 [PXN: 0, UXN: 1, AP: 0]
 [  1] 0x080200000-0x080400000 [PXN: 0, UXN: 1, AP: 0]
 [  2] 0x080400000-0x080600000 [PXN: 0, UXN: 1, AP: 0]
 [  3] 0x080600000-0x080800000 [PXN: 0, UXN: 1, AP: 0]
 [  4] 0x080800000-0x080a00000 [PXN: 0, UXN: 1, AP: 0]
 [  5] 0x080a00000-0x080c00000 [PXN: 0, UXN: 1, AP: 0]
 [  6] 0x080c00000-0x080e00000 [PXN: 0, UXN: 1, AP: 0]
 [  7] 0x080e00000-0x081000000 [PXN: 0, UXN: 1, AP: 0]
 [  8] 0x081000000-0x081200000 [PXN: 0, UXN: 1, AP: 0]
 [  9] 0x081200000-0x081400000 [PXN: 0, UXN: 1, AP: 0]
 [ 10] 0x081400000-0x081600000 [PXN: 1, UXN: 1, AP: 0]

As we can see above, the entire physical memory range [0x80000000, 0x81400000] is mapped in the stage 1 translation table using block descriptors, each of which translates a 2MB range of memory. We can also see that, as expected, this range is marked UXN and non-PXN - therefore EL1 is allowed to execute memory in these ranges, while EL0 is prohibited from doing so. However, much more surprisingly, the entire range is marked with access permission (AP) bit values of “00”. Let’s consult the ARM VMSA to see what these values indicate:

[Figure: ARM VMSA - stage 1 access permission (AP) encodings; AP=00 denotes read/write access from EL1 and no access from EL0]

Aha - so in fact this means that these memory ranges are also readable and writable from EL1! Combining all this together, we reach the conclusion that the entire physical range of [0x80000000, 0x81400000] is mapped as RWX in the stage 1 translation table.

This still doesn’t mean we can modify the kernel’s code. Remember, RKP enforces the stage 2 memory translation as well. These memory ranges could well be restricted in the stage 2 translation in order to prevent attackers from gaining write access to them.

After some reversing, we find that RKP’s initial stage 2 translation table is in fact embedded in the RKP binary itself. This allows us to extract its contents and to analyse it in detail, similar to our previous work on the stage 1 translation table.

[Figure: RKP’s initial stage 2 translation table, embedded in the RKP binary]

I’ve written a python script which analyses a given binary blob according to the stage 2 translation table’s format specified in the ARM VMSA. Next, we can use this script in order to discover the memory protections enforced by RKP on the kernel’s physical address range:

0x80000000-0x80200000: S2AP=11, XN=0
0x80200000-0x80400000: S2AP=11, XN=0
0x80400000-0x80600000: S2AP=11, XN=0
0x80600000-0x80800000: S2AP=11, XN=0
0x80800000-0x80a00000: S2AP=11, XN=0
0x80a00000-0x80c00000: S2AP=11, XN=0
0x80c00000-0x80e00000: S2AP=11, XN=0
0x80e00000-0x81000000: S2AP=11, XN=0
0x81000000-0x81200000: S2AP=11, XN=0
0x81200000-0x81400000: S2AP=11, XN=0
0x81400000-0x81600000: S2AP=11, XN=0

First of all, we can see that the stage 2 translation table used by RKP maps every IPA to the same PA. As such, we can safely ignore the existence of IPAs for the remainder of this blog post and focus on PAs instead.

More importantly, however, we can see that our memory range of interest is not marked as XN, as is expected. After all, the kernel should be executable by EL1. But bafflingly, the entire range is marked with the stage 2 access permission (S2AP) bits set to “11”. Once again, let’s consult the ARM VMSA:

[Figure: ARM VMSA - stage 2 access permission (S2AP) encodings; S2AP=11 denotes read/write access]

So this seems a little odd… Does this mean that the entire kernel’s code range is marked as RWX in both the stage 1 and the stage 2 translation table? That doesn’t seem to add up. Indeed, trying to write to memory addresses containing EL1 kernel code results in a translation fault, so we’re definitely missing something here.
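
The corresponding stage 2 fields can be sketched the same way - again only the S2AP and XN bits of a stage 2 block descriptor:

```python
def parse_stage2_attrs(desc):
    """Extract the stage 2 fields discussed above from a 64-bit descriptor."""
    return {
        "S2AP": (desc >> 6) & 0b11,  # stage 2 access permissions at bits [7:6]
        "XN":   (desc >> 54) & 1,    # stage 2 Execute Never (bit 54)
    }

# S2AP=11 with XN=0, as in the dump above, grants EL1 read, write and
# execute access at the second stage as well.
assert parse_stage2_attrs(0b11 << 6) == {"S2AP": 3, "XN": 0}
```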

Ah, but wait! The stage 2 translation tables that we’ve analysed above are simply the initial translation tables which are used when RKP boots. Perhaps after the EL1 kernel finishes its initialisation it will somehow request RKP to modify these mappings in order to protect its own memory ranges.

Indeed, looking once again at the kernel’s initialisation routines, we can see that shortly after booting, the EL1 kernel calls into RKP:

static void rkp_init(void)
{
    rkp_init_t init;
    init.magic = RKP_INIT_MAGIC;
    init.vmalloc_start = VMALLOC_START;
    init.vmalloc_end = (u64)high_memory;
    init.init_mm_pgd = (u64)__pa(swapper_pg_dir);
    init.id_map_pgd = (u64)__pa(idmap_pg_dir);
    init.rkp_pgt_bitmap = (u64)__pa(rkp_pgt_bitmap);
    init.rkp_map_bitmap = (u64)__pa(rkp_map_bitmap);
    init.rkp_pgt_bitmap_size = RKP_PGT_BITMAP_LEN;
    init.zero_pg_addr = page_to_phys(empty_zero_page);
    init._text = (u64)_text;
    init._etext = (u64)_etext;
    if (!vmm_extra_mem) {
        printk(KERN_ERR "Disable RKP: Failed to allocate extra mem\n");
        return;
    }
    init.extra_memory_addr = __pa(vmm_extra_mem);
    init.extra_memory_size = 0x600000;
    init._srodata = (u64)__start_rodata;
    init._erodata = (u64)__end_rodata;
    init.large_memory = rkp_support_large_memory;

    rkp_call(RKP_INIT, (u64)&init, 0, 0, 0, 0);
    rkp_started = 1;
}

On the kernel’s side we can see that this command provides RKP with many of the memory ranges belonging to the kernel. In order to figure out the implementation of this command, let’s shift our focus back to RKP. By reverse engineering the implementation of this command within RKP, we arrive at the following approximate high-level logic:

void handle_rkp_init(...) {
   void* kern_text_phys_start = rkp_get_pa(text);
   void* kern_text_phys_end = rkp_get_pa(etext);
   rkp_debug_log("DEFERRED INIT START", 0, 0, 0);

   if (etext & 0x1FFFFF)
       rkp_debug_log("Kernel range is not aligned", 0, 0, 0);

   if (!rkp_s2_range_change_permission(kern_text_phys_start, kern_text_phys_end, 128, 1, 1))
       rkp_debug_log("Failed to make Kernel range RO", 0, 0, 0);
   ...
}

The call to rkp_s2_range_change_permission above is used to modify the stage 2 access permissions for a given PA memory range. Calling the function with these arguments will cause the given memory range to be marked as read-only in the stage 2 translation. This means that shortly after booting the EL1 kernel, RKP does indeed lock down write access to the kernel’s code ranges.

...And yet, something’s still missing here. Remember that RKP should not only prevent the kernel’s code from being modified, but it aims to also prevent attackers from creating new executable code in EL1. Well, while the kernel’s code is indeed being marked as read-only in the stage 2 translation table, does that necessarily prevent us from creating new executable code?

Recall that we’ve previously encountered the presence of KASLR, in which the kernel’s base address (both in the kernel’s VAS and the corresponding physical address) is shifted by a randomised “slide” value. Moreover, since the Linux kernel assumes that the virtual to physical offset for kernel addresses is constant, this means that the same slide value is used for both virtual and physical addresses.

However, there’s a tiny snag here - the address range we examined earlier on, the same one which is marked RWX in both the stage 1 and stage 2 translation tables, is much larger than the kernel’s text segment. This is, partly, in order to allow the kernel to be placed somewhere within that region after the KASLR slide is determined. However, as we’ve just seen, after choosing the KASLR slide, RKP only protects the range spanning from “_text” to “_etext” - that is, only the region containing the kernel’s text after applying the KASLR slide.

[Figure: the kernel’s text segment after the KASLR slide, surrounded by regions left RWX in both translation stages]

This leaves us with two large regions, [0x80000000, “_text”] and [“_etext”, 0x81400000], which remain marked as RWX in both the stage 1 and stage 2 translation tables! Thus, we can simply write new code to these regions and execute it freely within the context of EL1, thereby bypassing the code loading mitigation. I’ve included a small PoC which demonstrates this issue.
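
Under a hypothetical slide and text-segment size (both made up for illustration), the two leftover RWX windows work out as follows:

```python
REGION_START = 0x80000000   # start of the RWX-mapped physical range
REGION_END   = 0x81400000   # end of the RWX-mapped physical range
TEXT_SIZE    = 0x800000     # hypothetical size of the kernel's text segment

def rwx_windows(slide):
    """The regions RKP leaves unprotected after guarding [_text, _etext]."""
    text = REGION_START + slide        # physical _text after the KASLR slide
    etext = text + TEXT_SIZE
    return [(REGION_START, text), (etext, REGION_END)]

low, high = rwx_windows(0x200000)
assert low == (0x80000000, 0x80200000)    # window below the slid kernel text
assert high == (0x80a00000, 0x81400000)   # window above it
```

Note that the larger the region reserved for the slide, the larger these leftover windows become - protecting only [_text, _etext] can never close them.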

Mitigation #3 - Bypassing EL1 Memory Controls

As we’ve just seen in the previous section, some of RKP’s stated goals require memory controls that are enforced not only in the stage 2 translation, but also directly in the stage 1 translation used by EL1. For example, RKP aims to ensure that all pages, with the exception of the kernel’s code, are marked as PXN. These goals require RKP to have some form of control over the contents of the stage 1 translation table.

So how exactly does RKP make sure that these kinds of assurances are kept? This is done by using a combined approach; first, the stage 1 translation tables are placed in a region which is marked as read-only in the stage 2 translation tables. This, in effect, disallows EL1 code from directly modifying the translation tables themselves. Secondly, the kernel is instrumented (a form of paravirtualisation), in order to make it aware of RKP’s existence. This instrumentation is performed so that each write operation to a data structure used in the stage 1 translation process (a PGD, PMD or PTE), will instead call an RKP command, informing it of the requested change.

Putting these two defences together, we arrive at the conclusion that all modifications to the stage 1 translation table must therefore pass through RKP which can, in turn, ensure that they do not violate any of its security goals.

Or do they?

While these rules do prevent modification of the current contents of the stage 1 translation table, they don’t prevent an attacker from using the memory management control registers to circumvent these protections. For example, an attacker could attempt to modify the value of TTBR1_EL1 directly, pointing it at an arbitrary (and unprotected) memory address.

Obviously, such operations cannot be permitted by RKP. In order to allow the hypervisor to deal with such situations, the “Hypervisor Configuration Register” (HCR) can be leveraged. Recall that the HCR allows the hypervisor to disallow certain operations from being performed under EL1. One such operation which can be trapped is the modification of the EL1 memory management control registers.

In the case of RKP on Exynos devices, while it does not set HCR_EL2.TRVM (i.e., it allows all read access to memory control registers), it does indeed set HCR_EL2.TVM, allowing it to trap write access to these registers.

So although we’ve established that RKP does correctly trap write access to the control registers, this still doesn’t guarantee they remain protected. This is actually quite a delicate situation - the Linux kernel requires some access to many of these registers in order to perform regular operations. This means that while some access can be denied by RKP, other operations need to be inspected carefully in order to make sure they do not violate RKP’s safety guarantees, before allowing them to proceed. Once again, we’ll need to reverse engineer RKP’s code to assess the situation.

[Figure: RKP’s handlers for trapped writes to the EL1 memory control registers]

As we can see above, attempts to modify the location of the translation tables themselves result in RKP correctly verifying the entire translation table, making sure it follows the allowed stage 1 translation policy. In contrast, there are a couple of crucial memory control registers which, at the time, weren’t intercepted by RKP at all - TCR_EL1 and SCTLR_EL1!

Inspecting these registers in the ARM reference manual reveals that they both can have profound effects on the stage 1 translation process.

For starters, the System Control Register for EL1 (SCTLR_EL1) provides top-level control over the system, including the memory system, in EL1. One bit of crucial importance in our scenario is the SCTLR_EL1.M bit.

This bit denotes the state of the MMU for stage 1 translation in EL0 and EL1. Therefore, simply by clearing this bit, an attacker can disable the MMU for stage 1 translation. Once the bit is cleared, all memory accesses in EL0 and EL1 map directly to IPAs; more importantly, these translations are no longer subject to any access permission checks, effectively making every memory range RWX in the stage 1 translation. This, in turn, bypasses several of RKP’s assurances, such as ensuring that all pages other than the kernel’s text are marked PXN.
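
Since SCTLR_EL1.M is simply bit 0 of the register, the attack amounts to a single bit clear (shown here as plain arithmetic; on a real device the new value must, of course, be written back via a suitable gadget):

```python
SCTLR_M = 1 << 0   # SCTLR_EL1.M - stage 1 MMU enable for EL0/EL1

def disable_stage1_mmu(sctlr_el1):
    """Return the register value with the stage 1 MMU disabled."""
    return sctlr_el1 & ~SCTLR_M

# Example register value (arbitrary): only bit 0 changes,
# so all other system settings survive the write.
assert disable_stage1_mmu(0x34D5D91D) == 0x34D5D91C
```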

As for the Translation Control Register for EL1 (TCR_EL1), its effects are slightly more subtle. Instead of completely disabling the stage 1 translation’s MMU, this register governs the way in which translation is performed.

Indeed, observing this register more closely reveals certain key ways in which an attacker may leverage it in order to circumvent RKP’s stage 1 protections. For example, the AArch64 translation tables may assume different formats, depending on the translation granule under which the system is operating. Normally, AArch64 Linux kernels use a translation granule of 4KB.

This fact is implicitly acknowledged in RKP. For example, when code in EL1 changes the value of the translation table (e.g., TTBR1_EL1), RKP must protect this PGD in the stage 2 translation in order to make sure that EL1 cannot gain access to it. Indeed, reversing the corresponding code within RKP reveals that it does just that:

[Figure: RKP protecting the newly set translation table in the stage 2 translation]

However, as we can see in the picture above, the stage 2 protection is only performed on a 4KB region (a single page). This is because when using a 4KB translation granule, the top-level translation table occupies a single 4KB page. However, this is where we, as attackers, come in. What if we were to change the translation granule to 64KB, by modifying TCR_EL1.TG0 and TCR_EL1.TG1?

In that case, the translation table will occupy 64KB, as opposed to 4KB under the previous regime. Since RKP uses a hard-coded size of 4KB when protecting the translation table, the bottom 60KB remain unprotected by RKP, allowing an attacker in EL1 to freely modify them in order to map any IPA and, more crucially, with any access permissions and UXN/PXN values.
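
The arithmetic behind the attack is straightforward (TG0 lives at bits [15:14] of TCR_EL1, where the encoding 0b01 selects a 64KB granule for TTBR0_EL1):

```python
GRANULE_4K  = 0x1000    # table size RKP hard-codes when protecting a PGD
GRANULE_64K = 0x10000   # table size after switching to a 64KB granule

def set_tg0_64k(tcr_el1):
    """Set TCR_EL1.TG0 (bits [15:14]) to 0b01, selecting a 64KB granule."""
    return (tcr_el1 & ~(0b11 << 14)) | (0b01 << 14)

# Only the first 4KB of the 64KB table remain stage 2 protected:
unprotected = GRANULE_64K - GRANULE_4K
assert unprotected == 60 * 1024   # 60KB of the table stays EL1-writable
```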

Lastly, it should once more be noted that while the gadgets needed to access these registers aren’t abundant in the kernel’s image, they are present in the embedded RKP binary on Exynos devices. Therefore we can simply execute these gadgets within EL1 in order to modify the registers above. I’ve written a small PoC that demonstrates this issue by disabling the stage 1 MMU in EL1.

Mitigation #4 - Accessing Stage 2 Unmapped Memory

Other than the operating system’s memory, there exist several other memory regions which may contain potentially sensitive information that should not be accessible by code running in EL0 and EL1. For example, peripherals on the SoC may have their firmware stored in the “Normal World”, in physical memory ranges which should never be accessible to Android itself.

In order to enforce such protections, RKP explicitly unmaps a few memory ranges from the stage 2 translation table. By doing so, any attempt to access these PA ranges in EL0 or EL1 will result in a translation fault, consequently crashing the kernel and rebooting the device.

Moreover, RKP’s own memory ranges should also be made inaccessible to lesser privileged code. This is crucial so as to protect RKP from modifications by EL0 and EL1, but also serves to protect the sensitive information that is processed within RKP (such as the “cfprop” key). Indeed, after starting up, RKP explicitly unmaps its own memory ranges in order to prevent such access:

[Figure: RKP unmapping its own memory ranges from the stage 2 translation table during initialisation]
Admittedly, the stage 2 translation table itself is placed within the very region being unmapped from the stage 2 translation table, therefore ensuring that code in EL1 cannot modify it. However, perhaps we could find another way to control stage 2 mappings, by leveraging RKP itself.

For example, as we’ve previously seen, certain operations such as setting TTBR1_EL1 result in changes to the stage 2 translation table. Combing over the RKP binary, we come across one such operation, as follows:

__int64 rkp_set_init_page_ro(args* args_buffer)
{
  unsigned long page_pa = rkp_get_pa(args_buffer->arg0);
  if ( page_pa < rkp_get_pa(text) || page_pa >= rkp_get_pa(etext) )
  {
    if ( !rkp_s2_page_change_permission(page_pa, 128, 0, 0) )
      return rkp_debug_log("Cred: Unable to set permission for init cred", 0LL, 0LL, 0LL);
  }
  else
    rkp_debug_log("Good init CRED is within RO range", 0LL, 0LL, 0LL);
  rkp_debug_log("init cred page", 0LL, 0LL, 0LL);
  return rkp_set_pgt_bitmap(page_pa, 0);
}

As we can see, this command receives a pointer from EL1, verifies it is not within the kernel’s text segment, and if so proceeds to call rkp_s2_page_change_permission in order to modify the access permissions on this range in the stage 2 translation table. Digging deeper into the function reveals that this set of parameters is used to denote the region as read-only and XN.

But, what if we were to supply a page that resides somewhere that is not currently mapped in the stage 2 translation at all, such as RKP’s own memory ranges? Well, in this case, rkp_s2_page_change_permission will happily create a translation entry for the given page, effectively mapping in a previously unmapped region!

This allows us to re-map any stage 2 unmapped region (albeit as read-only and XN) from EL1. I’ve written a small PoC which demonstrates the issue by stage 2 re-mapping RKP’s physical address range and reading it from EL1.

Design Improvements to RKP

Having seen some specific issues highlighting how the different defence mechanisms of RKP can be subverted by an attacker, let’s zoom out for a second and think about some design choices that could serve to strengthen RKP’s security posture against future attacks.

First, RKP on Exynos devices is currently being bootstrapped by EL1 code. This is in contrast to the model used on Qualcomm devices, whereby the EL2 code is both verified by the bootloader, and subsequently booted by EL3. Ideally, I believe the same model used on Qualcomm devices should be adopted for Exynos as well.

Performing the bootstrapping in this order automatically fixes other related security issues “for free”, such as the presence of RKP’s binary within the kernel’s text segment. As we’ve seen, this seemingly harmless detail is in fact very useful to an attacker in several of the scenarios we’ve highlighted in this post. Moreover, it removes other risks, such as attackers exploiting the EL1 kernel early in the boot process and leveraging that access to subvert the initialisation of EL2.

As an interim improvement, RKP decided to zero out the RKP binary resident in EL1’s code during initialisation. This improvement will roll out in the next Nougat milestone release of Samsung devices, and addresses the issue of attackers leveraging the binary for gadgets. However, it doesn’t address the issue regarding potential early exploitation of the EL1 kernel to subvert the initialisation of EL2, which requires a more extensive modification.

Second, RKP’s code segments are currently marked as both writable and executable in TTBR0_EL2. This, combined with the fact that SCTLR_EL2.WXN is not set, allows an attacker to use any memory corruption primitive in EL2 in order to directly overwrite the EL2 code segments, allowing for much easier exploitation of the hypervisor.

While I have not chosen to include these issues in the blog post, I’ve found several memory corruptions, any of which could be used to modify memory within the context of RKP. Combining these two facts together, we can conclude that any of these memory corruptions could be used by an attacker to directly modify RKP’s code itself, therefore gaining code execution within it.

Simply setting SCTLR_EL2.WXN and marking RKP’s code as read-only would not prevent an attacker from gaining access to RKP, but it would make exploitation of such memory corruptions harder and more time consuming.

Third, RKP should lock down all memory control registers, unless they absolutely must be used by the Linux kernel. This would prevent abuse of several of these registers which may subtly affect the system’s behaviour, and in doing so, violate assumptions made by RKP about the kernel. Where these registers have to be modified by EL1, RKP should verify that only the appropriate bits are accessed.

RKP has since locked down access to the two registers mentioned in this blog post. This is a good step in the right direction. Unfortunately, as access to some of these registers must be retained by EL1, simply revoking access to all of them is not a feasible solution. As such, preventing access to other memory control registers remains a long-term goal.

Lastly, there should be some distinction between stage 2 unmapped regions that were simply never mapped in, and those which were explicitly mapped out. This can be achieved by storing the memory ranges corresponding to explicitly unmapped regions, and disallowing any modification that would result in remapping them within RKP. While the issue I highlighted is now fixed, implementing this extra step would prevent further similar issues from cropping up in the future.
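
A minimal sketch of the suggested check (the ranges are invented for illustration): before servicing a permission-change request, a hardened rkp_s2_page_change_permission-style routine could consult a stored list of explicitly unmapped ranges and refuse to touch them:

```python
# Hypothetical list of PA ranges that RKP explicitly mapped out of stage 2.
EXPLICITLY_UNMAPPED = [(0xB0200000, 0xB0600000)]

def may_change_s2_permission(page_pa):
    """Refuse requests that would re-map an explicitly unmapped region."""
    return not any(lo <= page_pa < hi for lo, hi in EXPLICITLY_UNMAPPED)

assert may_change_s2_permission(0x80000000)       # ordinary kernel page: OK
assert not may_change_s2_permission(0xB0200000)   # inside a mapped-out range: denied
```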

Aikido & HolisticInfoSec™

This is the 300th post to the HolisticInfoSec™ blog. Sparta, this isn't, but I thought it important to provide you with content in a warrior/philosopher mindset regardless. 
Your author is an Aikido practitioner, albeit a fledgling in practice, with so, so much to learn. While Aikido is often translated as "the way of unifying with life energy" or as "the way of harmonious spirit", I propose that the philosophies and principles inherent to Aikido have significant bearing on the practice of information security.
In addition to spending time in the dojo, there are numerous reference books specific to Aikido from which a student can learn. Among the best is Adele Westbrook and Oscar Ratti's Aikido and the Dynamic Sphere. All quotes and references that follow are drawn from this fine publication.
As an advocate for the practice of HolisticInfoSec™ (so much so, I trademarked it) the connectivity to Aikido is practically rhetorical, but allow me to provide you some pointed examples. I've tried to connect each of these in what I believe is an appropriate sequence to further your understanding, and aid you in improving your practice. Simply, one could say each of these can lead to the next.
The Practice of Aikido
"The very first requisite for defense is to know the enemy."
So often in information security, we see reference to the much abused The Art of War, wherein Sun Tzu stated "It is said that if you know your enemies and know yourself, you will not be imperiled in a hundred battles." Aikido embraces this as the first requisite, but so too offers the importance of not underestimating your enemy or opponent. For information security, I liken it to this. If you are uninformed on adversary actor types and profiles, their TTPs (tools, tactics, procedures), as well as current vulnerabilities and exploits, along with more general threat intelligence, then you are already at a disadvantage before you even begin to imagine countering your opponent.  

"A positive defensive strategy is further qualified as being specific, immediate, consistent, and powerful." 
Upon learning more about your adversary, a rehearsed, exercised strategy for responding to their attack should be considered the second requisite for defense. To achieve this, your efforts must include:
  • a clear definition and inventory of the assets you're protecting
  • threat modeling of code, services, and infrastructure
  • an incident response plan and SOP, and regular exercise of the IR plan
  • robust security monitoring to include collection, aggregation, detection, correlation, and visualization
  • ideally, a purple team approach that includes testing blue team detection and response capabilities in partnership with a red team. Any red team that follows the "you suck, we rock" approach should be removed from the building and replaced by one who espouses "we exist to identify vulnerabilities and exploits with the goal of helping the organization better mitigate and remediate".
As your detection and response capabilities improve with practice and repetition, your mean time to mitigate (MTM) and mean time to remediate (MTR) should begin to shrink, thus lending to the immediacy, consistency, and power of your defense.
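These two metrics are simple to track once you record incident timestamps. The sketch below (Python, with hypothetical incident records invented for illustration) computes MTM and MTR as the mean hours from detection to mitigation and to remediation, respectively:

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records: detection, mitigation, remediation timestamps.
incidents = [
    {"detected": datetime(2017, 1, 3, 9, 0),
     "mitigated": datetime(2017, 1, 3, 11, 30),
     "remediated": datetime(2017, 1, 4, 16, 0)},
    {"detected": datetime(2017, 1, 10, 14, 0),
     "mitigated": datetime(2017, 1, 10, 15, 0),
     "remediated": datetime(2017, 1, 11, 9, 0)},
]

def mean_hours(records, start_key, end_key):
    """Mean elapsed hours between two timestamps across all incidents."""
    return mean((r[end_key] - r[start_key]).total_seconds() / 3600 for r in records)

mtm = mean_hours(incidents, "detected", "mitigated")   # mean time to mitigate
mtr = mean_hours(incidents, "detected", "remediated")  # mean time to remediate
# For these sample incidents: mtm == 1.75 hours, mtr == 25.0 hours
```

Watching these numbers over quarters, rather than obsessing over any single incident, is what shows whether practice and repetition are actually paying off.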

The Process of Defense and Its Factors
"EVERY process of defense will consist of three stages: perception, evaluation-decision, and reaction."
These should be easy likenesses for you to reconcile.
Perception = detection and awareness
The better and more complete your threat intelligence collection and detection capabilities, the better your situational awareness will be, and as a result your perception of adversary behaviors will improve and become more timely.
Evaluation-decision = triage
It's inevitable...$#!+ happens. Your ability to quickly evaluate adversary actions and be decisive in your response will dictate your level of success as incident responders. Strength at this stage directly impacts the rest of the response process. Incorrect or incomplete evaluation, and the resulting ill-informed decisions, can set back your response process in a manner from which recovery will be very difficult.
Reaction = response
My Aikido sensei, after doing so, likes to remind his students "Don't get hit." :-) The analogy here is to react quickly enough to stay on your feet. Can you move quickly enough to not be hit as hard or as impactfully as your adversary intended? Your reaction and response will determine such outcomes. The connection between kinetic and virtual combat here is profound. Stand still, get hit. Feint or evade, and at least avoid some or all contact. In the digital realm, you're reducing your time to recover with this mindset.

Dynamic Factors
"A defensive aikido strategy begins the moment a would-be attacker takes a step toward you or turns aggressively in your direction. His initial motion (movement) in itself contains the factors you will use to neutralize the action of attack which will spring with explosive force from that motion of convergence."
Continuing on our theme of inevitability, digital adversaries will, beyond the shadow of a doubt, take a step toward you or turn aggressively in your direction. The question for you will be, do you even know when that has occurred in light of our discussion of requisites above? Aikido is all about using your opponent's energy against them, wherein, for those of us in DFIR, our adversary's movement in itself contains the factors we use to neutralize the action of attack. As we improve our capabilities in our defensive processes (perception, evaluation-decision, and reaction), we should be able to respond in a manner that begins the very moment we identify adversarial behavior, and do so quickly enough that our actions pivot directly on our adversary's initial motion.
As an example, your adversary conducts a targeted, nuanced spear phishing campaign. Your detective means identify all intended victims, you immediately react, and you add all intended victims to an enhanced watch list for continuous monitoring. The two victims who engaged the payload are quarantined immediately, and no further adversarial pivoting or escalation is identified. The environment as a whole is raised to a state of heightened awareness, and your user base becomes part of your perception network.

"It will be immediate or instantaneous when your reaction is so swift that you apply a technique of neutralization while the attack is still developing, and at the higher levels of the practice even before an attack has been fully launched."
Your threat intelligence capabilities are robust enough that your active deployment of detections for specific Indicators of Compromise (IOCs) prevented the targeted, nuanced spear phishing campaign from even reaching the intended victims. Your monitoring active lists include known adversary infrastructure such that the moment they launch an attack, you are already aware of its imminence.
You are able to neutralize your opponent before they even launch. This may be unimaginable for some, but it is achievable by certain mature organizations under specific circumstances.
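A minimal sketch of that kind of proactive IOC match, assuming a hypothetical domain feed (the domains and function names here are invented for illustration), might look like this in Python:

```python
from urllib.parse import urlparse

# Hypothetical IOC feed: known-bad domains tied to an expected campaign.
ioc_domains = {"evil-phish.example", "bad-c2.example"}

def flag_message(link_urls, iocs):
    """Return any IOC domains found among a message's embedded links."""
    hosts = {urlparse(u).hostname for u in link_urls}
    return hosts & iocs

hits = flag_message(
    ["http://evil-phish.example/login", "http://example.org/newsletter"],
    ioc_domains,
)
# hits == {'evil-phish.example'}: the message can be quarantined before delivery
```

The real value is not the matching logic, which is trivial, but the freshness of the feed: detections deployed before the campaign launches are what make neutralization "immediate or instantaneous."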

The Principle of Centralization
"Centralization, therefore, means adopting a new point of reference, a new platform from which you can exercise a more objective form of control over events and over yourself."
Some organizations decentralize information security, others centralize it with absolute authority. There are arguments for both, and I do not intend to engage that debate. What I ask you to embrace is the "principle of centralization". The analogy is this: large corporations and organizations often have multiple, and even redundant security teams. Even so, their cooperation is key to success.
  • Is information exchanged openly and freely, with silos avoided? 
  • Are teams capable of joint response? 
  • Are there shared resources that all teams can draw from for consistent IOCs and case data?
  • Are you and your team focused on facts, avoiding FUD, thinking creatively, yet assessing with a critical, objective eye?
Even with a logically decentralized security model, organizations can embrace the principle of centralization and achieve an objective form of control over events. The practice of a joint forces focus defines the platform from which teams can and should operate.

Adversarial conditions, in both the physical realm, and the digital realm in which DFIR practitioners operate, are stressful, challenging, and worrisome. 
Morihei Ueshiba, Aikido's founder, reminds us that "in extreme situations, the entire universe becomes our foe; at such critical times, unity of mind and technique is essential - do not let your heart waver!" That said, perfection is unlikely, perhaps even impossible; this is a practice you must exercise. Again Ueshiba offers that "failure is the key to success; each mistake teaches us something."
Keep learning, be strong of heart. :-)
Cheers...until next time.

Fixing the Nation's CyberSecurity Professionals Shortage Problem

There is no shortage of security vendors. There is not a shortage of good security tools. Whatever tool you need, there are probably a dozen companies that have a tool that fits your need. Automation is necessary, given the huge volume of alerts, logs, and IOCs a security analyst must deal with. But not everything can be automated. Automation is a means to an end, not the end itself. It sorts and reduces the amount of data an intrusion analyst must look at and can point him/her in the right direction. But at the end of the day, it's the analyst, not the tool, that must make the correct assessment. And that takes education, experience, and continual training. Without good analysts looking at the output of the tools, the end result is nothing more than a slightly educated guess. And the protection of our networks and data stores can't rely on guesses based on a tool.
Apprenticeship and mentoring may be one way to speed up the on-boarding of new cyber-security professionals.

Implementing IoT securely in your company – Part 3

This is Part 3 of the series Implementing IoT securely in your company; click here for Part 1 and here for Part 2. As it is quite common that new IoT devices are ordered and maintained by the department that uses them rather than by the IT department, it is important that there is a policy in place.

This policy is especially important in this case, as most non-IT departments don't think about IT security or about maintaining the system. They are used to buying a device and having it run for years, often even longer, without much attention. We in IT, on the other hand, know that buying is the easy part; maintaining is the hard one.

Extend existing security policies

Most companies won’t need to start from scratch, as they most likely have policies for common stuff like passwords, patching and monitoring. The problem here is the scope of the policies and that you’re current able to technically enforce many of them:

  • Most passwords are typically maintained by an identity management system, and the password policy is therefore enforced for the whole company. The service/admin passwords are typically configured and used by members of the IT department. For IoT devices that may not be true, as the devices are managed by the using department and technically enforcing the policy may not be possible.
  • Patching of software is typically done centrally by the IT department, be it by the client or server team. But who is responsible for updating the IoT devices? Who monitors that updates are really done, and how? What happens if a department does not update its devices? What happens if a vendor stops providing security updates for a given device?
  • Services provided centrally by the IT department are generally also monitored by the IT department. Is the IT department responsible for monitoring the IoT devices? Who is responsible for looking into problems?

You should look at this and write it down as a policy that is accepted by the other departments before deploying IoT devices. In the beginning they will say, "Yes, sure, we'll update the devices regularly and replace them before the vendor stops providing security updates," and often they can't remember that promise some years later.

Typical IoT device problems

Besides extending the policies to cover IoT devices, it's also important to check whether the policies fit the IoT space and cover its typical problems. I'll list some of them here, which I've seen done wrong in the past. Sure, some of them also apply to normal IT servers/services, but they are perhaps considered so basic that everyone just does them right, so they may not be covered by your policy.

  • No update possible
    Yes, there are devices out in the wild that can't be updated at all. What does your policy say?
  • Default Logins
    Many IoT devices come with a default login, and as the management of the devices is done via a central (cloud) management system, it is often forgotten that the devices may also have a local administration interface. What does your policy say?
  • Recover from IoT device loss
    Let’s assume that an attacker is able to get into one IoT device or that the IoT device gets stole. Is the same password used on the server? Do all devices use the same password? Will the IT department get informed at all? What does your policy say?
  • Naming and organizing things
    For IT devices it’s clear that we use the DNS structure – works for servers, switches, pc’s. Make sure that the same gets used for IoT device. What does your policy say?
  • Replacing IoT devices
    Think about more than 100 IoT devices running for 4 years; now some break down, and the devices are end of sale. Can you connect new models to the old ones? Does someone keep spare parts? What does your policy say?
  • Self signed certificates
    If the system/device uses TLS (e.g. HTTPS), it needs to be able to use certificates from your internal PKI. Self-signed certificates are basically the same as unencrypted traffic. What does your policy say?
  • Disable unused services
    IoT devices often enable all services by default; I once had a device providing an FTP and a Telnet server, yet only HTTP was ever used for administration. What does your policy say?
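The self-signed-certificate point above can be checked programmatically. Here is a minimal Python sketch that attempts a verified TLS handshake against a device; the host, port, and CA bundle are placeholders you would point at a real device and your internal PKI root:

```python
import socket
import ssl

def device_cert_trusted(host, port=443, ca_bundle=None):
    """Return True if the device presents a certificate that chains to a
    trusted CA (ca_bundle=None means the system trust store)."""
    ctx = ssl.create_default_context(cafile=ca_bundle)
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True  # handshake succeeded, certificate chain verified
    except (ssl.SSLError, ssl.CertificateError, OSError):
        return False  # self-signed/untrusted certificate, or unreachable
```

A device presenting a self-signed certificate fails the handshake and returns False, which an inventory script could record as a policy finding.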

I hope this article series helps you implement IoT devices somewhat securely.


Kelihos infection spreading by Thumb Drive and continues geo-targeting

I've mentioned before how proud I am that my students are extremely passionate about CyberCrime. My guest blogger Arsh Arora is visiting his hometown of New Delhi, India, to attend a wedding. Instead of having fun, he is monitoring the Kelihos botnet from a different geographical location than the US to determine whether its behavior is any different. It seems fairly consistent, but Arsh explains more in this next edition of his Kelihos guest-blogging:

Kelihos botnet geo-targeting Canada and Kazakhstan 

After laying low for a while, the Kelihos botnet is back to its business of providing 'spam as a service'. The Kelihos botnet continues "geo-targeting" based on the ccTLD portion of email addresses. Today, recipients whose email address ends in ".ca" are receiving links to Tangerine Bank phishing pages, while recipients whose email address ends in ".kz" are receiving a link to the Ecstasy website.
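The selection logic behind this kind of geo-targeting is trivial to sketch. Assuming a harvested recipient list (the addresses below are invented for illustration), bucketing by ccTLD takes only a few lines of Python:

```python
from collections import defaultdict

def group_by_cctld(addresses):
    """Bucket recipient addresses by the top-level domain of their mail host."""
    buckets = defaultdict(list)
    for addr in addresses:
        tld = addr.rsplit(".", 1)[-1].lower()
        buckets[tld].append(addr)
    return dict(buckets)

recipients = ["alice@bank.ca", "bob@mail.kz", "carol@shop.ca"]
targets = group_by_cctld(recipients)
# targets == {'ca': ['alice@bank.ca', 'carol@shop.ca'], 'kz': ['bob@mail.kz']}
```

That simplicity is part of why ccTLD targeting is so common: the spammer needs nothing more than the address itself to pick a locale-appropriate lure.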

Tangerine Bank Phish geo-targeted to Canadians

The spam body consists of HTML that is rendered as a webpage urging the user to click a button, with the subject line "TANGERINE online account has been suspended". Tangerine is an internet/telephone-based bank formerly known as ING Direct (Tangerine).

Fig. 1 Raw text of spam message

The HTML version is displayed to the victim receiving the email, instigating the victim to click on the "Learn More" button (the link is "hxxp://tangeerine[dot]com/InitialTangerine/index.php"). Once clicked, the victim is redirected to a phishing site asking them to "Enter your Client Number, Card Number or Username".

Fig. 2 Html version of the Phish
Fig. 3 Redirected link seeking user to enter details

A second version of the similarly-themed message came with the subject line "Your account is disabled. Please verify your information is correct", and the corresponding redirect link once you hit the start button was "hxxp://sec-tangrene[dot]online/".

Fig. 4 Raw Text of second spam message

Fig. 5 Html version of Tangerine Phish
Unfortunately, this link was down and not accessible.

Canadian banks take great pride in their infrastructure and preventive measures. This gives attackers an extra challenge when trying to penetrate these banks. Even so, they keep targeting them, as in previous instances such as the Desjardins phish.

Fcuk Spam geo-targeted to Kazakhstan 

This behavior had not been observed before: the Kelihos botnet was geo-targeting email addresses ending in ".kz". The spam message contained a link (www[dot]almatinki[dot]com) to a Fcuk website with the subject line in Russian, "Глубокий м", which translates to "Deep m". Attached are screenshots of the email message and the website.

Fig. 6 Email message of the spam
Fig. 7 Website

Kelihos spreading via executables copied to flash drives

There is a saying that when an academic has an accident, we call it "research!" After completing a successful infection of Kelihos, a thumb drive was accidentally connected to the virtual machine instead of the host machine. Upon inspection, the thumb drive appeared to have acquired a new hidden executable named "porn.exe", as well as a few shortcuts that were not there before. Further analysis of "porn.exe" revealed that it was a copy of the original Kelihos binary.

Fig. 8 VT analysis of porn.exe

By repeating the process with ProcMon running, we found the CreateFile operation that wrote E:\porn.exe. In the moments leading up to this, several other file names are tried with CreateFile in an attempt to open them. It appears that if none of these files can be opened, the malware defaults to creating a porn.exe file and then writing the binary to it. After the binary is created, the shortcuts for the hidden directories and executables are created.

Fig. 9 Create File of porn.exe
Fig. 10 Various instances of trying to Create File

An Autorun.inf is not created to run this file; however, a shortcut to the file with the command C:\WINDOWS\system32\cmd.exe F/c "start %cd%\porn.exe" can be found on the drive, as well as shortcuts to several other hidden (non-malicious) directories on the drive.

Fig. 11 Executable and shortcut placed on thumb drive
Running porn.exe works like a normal Kelihos run; however, we were unable to infect a thumb drive with this binary. Further analysis is required to determine the mechanism by which thumb drive infection occurs, as this executable appears to be identical to the original binary.
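For anyone wanting to triage a thumb drive that may have been seeded this way, a quick defender-side sketch (this is not how Kelihos itself operates, just a simple check; the drive path is a placeholder) is to walk the drive for executables and shortcut files:

```python
import os

def suspicious_files(drive_root):
    """Walk a mounted drive and list any .exe or .lnk files found,
    including ones hidden inside subdirectories."""
    findings = []
    for root, _dirs, files in os.walk(drive_root):
        for name in files:
            if name.lower().endswith((".exe", ".lnk")):
                findings.append(os.path.join(root, name))
    return findings

# Example usage: suspicious_files("E:\\") on the mounted thumb drive.
```

Any hit on a drive that should contain only documents is worth hashing and submitting to VirusTotal, as was done with porn.exe above.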

Thanks a lot to Eli Brown for sharing great insights on the infection behavior of Kelihos.

We will continue our research on the Kelihos botnet and try to provide as many insights about it as we can.

A few words about ovarian cancer

Cancer sucks. The number of people who are touched by cancer is terrifying, it is rare to find someone who hasn’t had friends or family attacked by cancer if they’ve avoided it themselves. Sometimes, as with my bladder cancer, it’s not that bad- for me I get a rather uncomfortable exam regularly, and sometimes get a small tumor or two removed, no big deal. That makes me lucky, few who face cancer get to shrug it off as a mere annoyance.

Since I’ve recently learned a lot more about ovarian cancer than I ever expected to know, I’d like to share a few things with everyone. Remember, I’m not a medical professional, these are my observations and ideas formed over the two and a half years of my late wife’s struggle with clear cell ovarian cancer.

First, routine tests and doctor visits are unlikely to detect it early.

Second, it’s insidious- many women develop ovarian cancer around the time of menopause, and many of the symptoms of the cancer are also expected conditions that accompany menopause.

There is a blood test which looks for a marker, CA 125, which may help detect ovarian cancer but the test is far from perfect. Many people have suggested it should be a regular test, others think it may lead to a false sense of security. Gilda Radner talked about the test in her autobiography before we lost her to ovarian cancer. Here’s my take- and keep in mind that I’m not a doctor of anything and this isn’t medical advice- I think that CA 125 screening and the symptoms of ovarian cancer are things women should be aware of. I think that routine CA 125 screening probably makes sense for women with a family history of cancer, maybe for a broader population- but only if the test is considered a weak indicator, and is done as part of comprehensive medical care (a low reading does not mean there’s no cancer). If you have a healthy relationship with your doctor it should be part of a conversation, as with most tests. I don’t think much about my prostate, but I do think about symptoms of prostate problems every time my doctor sends me off for a PSA test. Awareness of symptoms, thinking about them honestly, and having real conversations with your doctors is key to minimizing Bad Things.

Note: I was going to prefix this with a note saying this is another personal post with nothing to do with InfoSec, then I realized I'm talking about using weak indicators as a component in a comprehensive detection plan, and that sounds pretty familiar.

I don’t want to watch any more people die of cancer, and neither do you. But we will, so let’s try to spread the word and minimize the suffering.

Finally, I am not a doctor, psychologist, or anyone else who can provide real help- but if you or a loved one are facing ovarian cancer and want someone to talk to, yell at, or commiserate with- reach out to me. There’s email info in the upper right corner of the page.