For many years, I have said that complexity is the worst enemy of security. At CyCon earlier this month, Thomas Dullien gave an excellent talk on the subject with far more detail than I've ever provided. Video. Slides.
The success of blockchain technology in securing cryptocurrencies doesn’t make the technology a good fit for securing the Internet of Things, RSA Security Chief Technology Officer Zulfikar Ramzan says. Check out our exclusive conversation with Zully about IoT, blockchain and the state of the information security industry.
Sometimes a conference just gets it right. Good talks, single track, select engaged attendees, and no sales talks. It’s a recipe for success that Kolide got right on its very first try with QueryCon, the first-ever osquery conference.
It’s no secret that we are huge fans of osquery, Facebook’s award-winning open source endpoint detection tool. From when we ported osquery to Windows in 2016 to our launch of our osquery extension repo this year, we’ve been one of the leading contributors to the tool’s development. This is why we were delighted Kolide invited us to participate in QueryCon!
The two-day conference, hosted at the beautiful Palace of Fine Arts in San Francisco, drew over 120 attendees and 16 speakers. The attendance list was a Who's Who of Big Tech security: teams from Facebook, Airbnb, Yelp, Atlassian, Adobe, Netflix, Salesforce, and more. It was great to finally meet face-to-face with teams we've been collaborating with on osquery for years. It was also exciting to see the technology's widespread adoption manifested in person. Though some teams attended to learn about the tech before deploying, the majority seemed to be committed adopters.
The talks ranged from the big-picture (operational security preparedness by Rob Fry of JASK) to the highly technical (breakdowns of macOS internals by Michael Lynn of Facebook), with consistent levity, epitomized by the brilliantly sulky Ben Hughes of Stripe. Scott Lundgren of Carbon Black gave a report-card-style review of the community from an outsider’s perspective. Longtime osquery evangelist Chris Long of Palantir provided a candid user experience of working with osquery’s audit framework in his organization. It was a well-curated mix of subjects, speakers, and perspectives. They all taught us something new.
What we learned at QueryCon
1. The community is bigger and stronger than we thought
As of this week, osquery's Slack has 1,703 users. Until the sold-out showing at QueryCon, I never thought to check how many of those users were active: 431 in the last 30 days. 120 of those people made it to QueryCon. Dozens more joined the waitlist.
2. Some users are innovating in very cool ways
We came to QueryCon intent on pushing the community to use osquery in new, innovative ways. Turns out, it didn’t need much pushing. Take the security team at Netflix. They’re using osquery in multiple internal open source projects: Diffy, a digital forensics and incident response (DFIR) tool, and Stethoscope, their security detection and recommendation application. We heard many more examples from many more teams.
3. The community really likes our contributions
Many of the talks mentioned our team and our work. We knew we were contributing significant engineering effort, but we hadn’t truly realized how much others had been benefiting. It felt great to hear that work done for our clients truly advances the whole community.
4. The goals are clear, but the way there is not
We gleaned some clear takeaways that are likely common for a first meetup of a new open source project:
- We need to define and broadcast osquery’s guiding principles;
- We need to solidify some best practices for effective collaboration;
- We need to tackle technical debt.
However, we didn’t determine how these will get done. Facebook was clear in defining its role in this process. Their small dedicated osquery team will continue to put in the hard work of testing, managing versions, and holding the community to high standards for both written code and community inclusion. However, it’s up to the community to take care of the rest.
What we shared at QueryCon
Osquery Super Features
Speaker: Lauren Pearl
Abstract: In this talk, we reviewed a user feature wishlist gathered from interviews with five Silicon Valley tech teams who use osquery. From these, we identified Super Features – features that would fundamentally improve the value proposition of osquery. We explained how these developments could transform osquery’s power in technical organizations. Finally, we walked through the high-level development plans for making these Super Features a reality.
Link to Video: To be released soon!
Slides: Super Features PDF
The Osquery Extensions Skunkworks Project: Unconventional Uses for Osquery
Speaker: Mike Myers
Abstract: Facebook created osquery with certain guiding principles: don’t pry into users’ data, don’t change the state of the system, don’t create network traffic to third parties. It was originally intended as a read-only information gatherer. For those that didn’t want to play by these rules, there’s the extension interface. We’ve begun experimenting with extensions that don’t align with mainline osquery: integrating with third-party services, writable tables, host-based firewall administration, malware vaccination, and more. We shared some of our lessons-learned on the challenges of using osquery as a control interface.
Link to Video: To be released soon!
Slides: Skunkworks Extensions PDF
Thank you so much!
This was a great first conference for an emerging technology. It awakened community leaders to issues and opportunities and started the conversation about how to push forward. Attendees left with renewed enthusiasm and commitment to advancing and maintaining the project.
It's hard to believe that this was Kolide's first time hosting such an event. Director of Operations Antigoni Sinanis, who ran the event, has set a high bar for her company to clear next year. We're already looking forward to round two!
I'm at Carnegie Mellon University, at the eleventh Workshop on Security and Human Behavior.
SHB is a small invitational gathering of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, and myself. The 50 or so people in the room include psychologists, economists, computer security researchers, sociologists, political scientists, neuroscientists, designers, lawyers, philosophers, anthropologists, business school professors, and a smattering of others. It's not just an interdisciplinary event; most of the people here are individually interdisciplinary.
The goal is to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to 7-10 minutes. The rest of the time is left to open discussion. Four hour-and-a-half panels per day over two days equals eight panels; six people per panel means that 48 people get to speak. We also have lunches, dinners, and receptions -- all designed so people from different disciplines talk to each other.
I invariably find this to be the most intellectually stimulating conference of my year. It influences my thinking in many different, and sometimes surprising, ways.
This year's program is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is liveblogging the talks. (Ross also maintains a good webpage of psychology and security resources.)
Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, and tenth SHB workshops. Follow those links to find summaries, papers, and occasionally audio recordings of the various workshops.
Next year, I'll be hosting the event at Harvard.
Games and analytics services ran into one another headfirst recently, in a spat related to the game Conan Exiles.
Developers had to remove a tracking service called Red Shell, which allowed game developers to track where Steam players had come from. By generating an API key and integrating it into the game, developers could figure out which ad campaigns (for example) had directed gamers to Steam at first install.
From another game developer’s forum, where they too ended up removing the system:
[UPDATE: Redshell plugin has been removed as a part of a recent patch]
This is a commonly used service that lets us (based on anonymized user footprint) see if those who bought the game came to the Steam store from some link below a Youtube video, from Facebook campaigns etc. It helps us see which marketing campaigns worked and which didn’t – no game data are sent there at all.
Once again – both Treasure Data and Redshell are solutions used by many developers and games (you can google them, if you wish). In both cases, all the data are anonymous and are sent to servers that only we can access.
As far as Conan goes, the system in place ultimately led to bad reviews, and when the bad reviews start rolling in due to third-party apps, there can be only one end result:
From the Community Manager:
The system this review mentions is no longer in Conan Exiles as of this writing. Some gamers are so fed up with third party tracking / analytics in games that they now curate Steam lists purely for games making use of said setups.
What’s fascinating to me is that gamers often have no idea exactly how much data a developer racks up even without third-party tools or analytics in place. Even in the case of Conan Exiles, there’s currently a lack of official servers so people are being encouraged to use third-party servers run by random admins. You’re being asked to place a lot of trust in someone who can simply decide to stop paying for hosting, and who also has full control over the game settings. What happens to all your cool stuff when the admin pulls the plug or decides their friend has a better claim to the land you built your castle on?
The answer is, “It’s probably all just gone out the window, sorry.” Yet as best I can tell, there aren’t as many aggrieved voices raised in relation to rogue admins or even the “anti-tamper” technology in place.
I’ve covered Privacy Policies at length in the past, so I won’t dwell on them here.
What I do want to do is give you a link to a talk I gave at Virus Bulletin 2017, all about the Virtual Worlds of Advergaming.
If you’re even remotely concerned about a system sending “how did they get here?” data to a game developer, you should be aware that marketing, advertisements, and even social engineering designed to throw ads at you in-game have existed for a long time. In fact, it’s already starting to bleed over to augmented and virtual reality.
Did you ever wonder why you’d been funneled down a narrow corridor in a shooter, then forced to crouch behind a branded energy drinks dispenser as the only piece of available cover in a gunfight?
Or why devs would place a huge billboard at the top of a hill you had to spend a few minutes hiking toward?
Maybe you just wondered if an in-game ad network dedicated to virtual titles was potentially susceptible to dubious ads like the mock-up below?
These are all things I’ve endeavored to cover in this talk.
Here’s the pitch:
As adverts in gaming (advergaming) ecosystems continue to become more sophisticated —while the game networks themselves have effectively become social networks—so too do the potential complications for parents, children, and gamers, who just want to play without worrying about where their data is going (and how it is being used). Attempts at blocking ads on closed gaming networks, tablets, and PC games have started to turn into the same type of turf war as seen on PC desktops, and forays into VR gaming have only made this more of an issue—the more potentially realistic the game experience, the harder it becomes to disassociate product advertising from the world around you.
This presentation explains: the different types of in-game ads (static, dynamic, through the line, below the line), how adverts have effectively broken simple processes for good, which specific types of advertising are used on certain platforms, and the gamification of people in the real world. It will also illustrate some of the tricks and techniques used by advertisers to ensure that gamers can’t avoid adverts as part of their gaming experience, and will compare the oldest forms of advergaming with the newest techniques, looking at how gamers trying to block ads have led to unskippable ads which form part of gameplay, and at what the future holds for VR/augmented in-game advertising.
Viewers should come away with a greater understanding of the types of advertising used in the systems they engage with on a daily basis, how that advertising may target family members in specific ways, which types of gaming are least/most susceptible to advergaming, how game developers manipulate gamers into seeing ads at specific times, and the informed choices available to reduce or eliminate forms of in-game ads they may feel uncomfortable with.
I’d like to think I got the job done for the attendees of Virus Bulletin 2017, shining as big a light as I could on some of these practices in the 25 minutes available to me. You can read the full paper about the Virtual Worlds of Advergaming here.
Otherwise, you can simply watch the talk (with a few minutes of the opening missing due to a technical hitch) below.
Now that I've had a week to recover from the annual infosec circus event to end all circus events, I figured it's a good time to be reflective and proffer my thoughts on the event, themes, what I saw, etc, etc, etc.
For starters, holy moly, 43,000+ people?!?!?!?!?! I mean... good grief... the event was about a quarter of that a decade ago. If you've never been to RSA, or if you only started attending in the last couple years, then it's really hard to describe to you how dramatic the change has been since ~2010 when the numbers started growing like this (to be fair, yoy growth from 2016 to 2017 wasn't all that huge).
With that... let's drill into my key highlights...
Why do people like me go to RSA? Because it's the one week in the year where I can see almost every single vendor in the industry, as well as see people I know and like whom I otherwise would never get to see in person (aka "networking"). It truly is an enormous event, and it has definitely passed the threshold of being overwhelming. Several people I've known for years did not make the trip this year, and I suspect this will become a trend, but in the meantime, in many ways it's a "must attend" event.
The down-side to an event this large, and something I learned back in my Gartner days, is that - as someone with nearly 2 decades of industry experience - this is not an event where you're going to find much great content. Talks must, out of necessity, be tuned to the median audience, which means looking backward at what was cutting-edge 5-10 years ago. Sad, but true. There's simply not much room for cutting-edge thinking or discussion at the event any more.
Soooo... why go back? Again, so long as there's business development and networking benefit, it is an essential event, but it's also very costly. Hotel pricing alone makes this an increasingly difficult prospect. For as much as we're spending on hotels each year, I could very likely visit friends in 3-4 different parts of the country and break even on travel costs. It's also increasingly a lot of noise, and much harder to sift value from that noise. I truly believe RSA is nearing the point where they'll have to either break the event into multiple events (kind of like the 3 weeks of SxSW), or at least move to a model where you're attending a conference within a conference (similar to "schools" within a large university). As it stands today, it's simply too easy to get lost in the shuffle and derive diminishing value.
Automation Nearing the Mainstream
We've been talking about security automation and orchestration for several years now, but it's often been with only a handful of examples, and generally quite forward-looking. We're just now finally reaching the point where the automation message is being picked up in the mainstream and more expansive examples are emerging.
One thing I noticed this year is that "automation" was prevalent in many booths. There are now at least a dozen vendors purportedly in the space (up from the days of it being Invotas (FireEye) and Phantom). No, I can't remember any names, but suffice to say, it's a growing list. Also, separately, I've noticed that orgs like Chef and Puppet have also made an attempt to expand their automation appeal to security (not to mention Service Now doing the same).
The point here is this: The mainstream consensus is finally starting to catch up with the reality that we will never be able to scale human resources fast enough to successfully address the rapidly changing threat landscape. Thus, we absolutely must automate as much as possible. We don't need SOC analysts staring at screens, pushing buttons when a color changes from green to red. That can be automated. Instead, we need to think about these processes and make smart decisions about when and where a human actually needs to be in the loop. This is our future, which we should eagerly embrace because it then frees us up to do much more interesting and exciting things.
Since we're talking about automation, it's only natural to pivot briefly into DevOps/DevSecOps/Secure DevOps. This year's Monday event on DevSecOps was ok, if not highly repetitive. However, initial attendance was strong, and feedback has reportedly been good (the schedule got a bit foobar, so attendance declined after lunch, c'est la vie).
Here's what's important: Companies are continuing to reinvent how they operate, and DevOps is the underlying model. As such, we need to push hard to ensure that Dev and Ops teams have security responsibilities in their assigned duties, and that they are held accountable accordingly. A DevOps co-worker recently complained about this "DevSecOps" thing, and I pointed out that the entire reason for it is as a kludge because security has once again been left behind, and neither Dev nor Ops has taken on (or been assigned) security responsibilities, nor are they being held accountable for poor security decisions. THIS IS A CULTURAL FAILING THAT AFFECTS ALMOST EVERY SINGLE COMPANY AROUND.
In DevOps, the norm is always to point to "gold standard" examples like Netflix, Facebook, Etsy, etc. However, what people oftentimes forget in looking at these orgs is that, for the most part, they started out doing DevOps from the early days. There was very little need for cultural transformation because they were already operating in a DevOps manner. For companies that have been around for much, much, much longer, there will be internal opposition and institutional inertia that will slow down transformations. It's imperative that these cultural attributes be supplanted, aggressively if necessary, in order to remove barriers to change. DevOps provides an amazing template for operating an agile, efficient, effective organization... but only if companies fundamentally change how they function, including cultural transformation.
AI, ML, and Big Data Lies
If we were to take all the marketing at face value, then we'd be led to believe that the machines are thinking for themselves and we're a mere small step away from becoming part of The Matrix. Thankfully, that's not really true at all. The majority of companies claiming "AI" today are really being misleading and disingenuous. The simple fact is the majority of products are still based on heuristics or machine learning (ML) - sometimes both.
Heuristics is the traditional pattern matching we've seen for decades upon decades upon decades. Your traditional AV or IDS "solution"? It's primarily based on heuristically matching patterns and signatures to detect "a known bad thing." These are ok, but in the grand scheme they're providing little lasting value.
ML has emerged as an alternative, wherein rather than looking for patterns, we instead model environments or behaviors, and then alert based on either matching or deviating from the models (sometimes both!). The ML approach is actually quite promising, though it's premised on the ability to actually create a discrete model of an environment or behavior. It is also imperative that ML engines constantly rebuild their models to account for changes in an environment or behavior (for example, imagine building a model of your diet starting in mid-October and running through the end of the year, and then trying to apply that same model to your diet Jan-Mar after you've made major life changes, perhaps as part of a New Year's Resolution).
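The model-then-alert-on-deviation idea can be sketched in a few lines. This is a toy illustration only: the baseline numbers and the 3-sigma threshold are invented, and real products use far richer models than a single mean/stddev pair.

```python
# Toy sketch of the ML-style approach: model "normal" behavior,
# then alert on deviation from the model. All values are invented.
from statistics import mean, stdev

def build_model(samples):
    # The "model" here is just a baseline mean and standard deviation.
    return mean(samples), stdev(samples)

def is_anomalous(value, model, z=3.0):
    # Alert when an observation deviates more than z sigmas from baseline.
    mu, sigma = model
    return abs(value - mu) > z * sigma

baseline = [100, 102, 98, 101, 99, 103, 97]   # e.g. daily login counts
model = build_model(baseline)

print(is_anomalous(104, model))   # small deviation: normal
print(is_anomalous(500, model))   # large deviation: alert
```

The diet example above maps directly onto this sketch: if behavior drifts, `build_model` must be re-run on fresh samples or the alerts become meaningless.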
There is a lot of hope in AI, ML, et al, and I think for good reason. Frankly, ML gives us a lot of value when applied to reasonably discrete environments (e.g., containers), and thus I think we'll continue to see great growth and success in this space. I expect that computing environments will also continue to evolve and grow to make modeling of them that much easier. I think there's much promise.
As for AI itself, we'll have to wait and see, but I suspect we're a good decade+ away from true examples of real-world applications. However, that said, if you're in a lower-level role (analyst, basic infrastructure config, etc.), then now is a good time to invest in training/education to improve your skills and raise yourself up to a higher-level job that will be less easily threatened by AI+automation. As I noted above, we really do not need SOC analysts staring at screens clicking buttons according to a set process. Machines can already do that today. Thus there's no job security in it. Instead, become the person who builds and trains these automation tools, or be the higher-level "fixer" who is activated once automation has done all the base enumeration and examination. The world is changing rapidly, and will look quite different in a decade.
The Threat Is Real / Ignore the FUD
One of my favorite tunes from last year is Megadeth's "The Threat Is Real" as it's really quite an appropriate phrase. Hacks are succeeding every day. Breaches are so commonplace that the mainstream media has all but lost interest in reporting on them. Incidents are inevitable. And, yet, in some ways they needn't be so inevitable; at least, not to the degree and severity we continually see. Whether it be massive DDoS attacks built on the back of woefully insecure IoT devices or sizable holes in cloud CDN infrastructure a la Cloudbleed, there are a lot of holes, a lot of bugs, and a lot of undertrained people, all of which will lead to bad days.
That said, we also need to be incredibly mindful and diligent to avoid the FUD. There's too much FUD. It's like running around telling us that "we're all gonna die" as if we don't all accept this as an inevitability. Come on, folks, let's get out of that red mental state (fear/panic/anger) and apply some rational thought. There are tons of things we can be doing to prepare and protect our organizations, our customers/clients, and our resources. We just need to take a deep breath, settle down, and execute.
What should we do? Well, interestingly, it's not all that strange a list. First and foremost, Basic Security Hygiene, which I wrote about while at Gartner nearly 2.5 years ago. Things like robust IAM (centralized, processized, monitored), vuln and patch mgmt, and applying consistent, secure standards for infrastructure and development are all great starting points. Beyond that, it comes down to taking the time to understand your environment and exposures, and investing in tools and techniques that will produce measurable results (measurement is key!!!). A progressive security awareness program can be critical to educating and incentivizing people to make better decisions, and really reflects the overall imperative to transform the business and its underlying culture. We can absolutely make things better, but it requires effort and thoughtfulness.
*whew* Ok... so, there you have it... my thoughts from RSA 2017. All told, it was a so-so week for me personally, but I'll definitely be back for one more year. TBD after that. It's really quite the circus these days. This year was especially difficult with how spread out things were (Moscone South has a major construction project underway, so the Marriott Marquis was enlisted). The wifi and mobile signals in the Marquis dungeon were nonexistent, which was painful. Also painful was the 4 spread out venues for Codebreaker's Bash on Thursday evening. It didn't work. Because people were spread all over, it was difficult to casually run into folks I was hoping to see. Hopefully next year they'll revert to a large single venue (I really, really, really enjoyed the Bash at AT&T Park, though many folks complained about it). Finding a venue for 43k+ people has to be incredibly challenging. Of course, so is finding a hotel room each year, so, ya know, there's that, too. Ha.
Hope you find this interesting/useful! Until next time...
While this is technically a CTF writeup like the ones I frequently do, this one is going to be a bit backwards: this is for a CTF I ran, instead of one I played! I've gotta say, it's been a little while since I played in a CTF, but I had a really good time running the BSidesSF CTF! I just wanted to thank the other organizers - in alphabetical order - @bmenrigh, @cornflakesavage, @itsc0rg1, and @matir. I couldn't have done it without you folks!
The goal of this post is to explain a little bit of the motivation behind the challenges I wrote, and to give basic solutions. It's not going to have a step-by-step walkthrough of each challenge - though you might find that in the writeups list - but, rather, I'll cover what I intended to teach, and some interesting (to me :) ) trivia.
If you want to see the source of the challenges, our notes, and mostly everything else we generated as part of creating this CTF, you can find them here:
- Original sourcecode on github
- Google Drive notes (note that that's not the complete set of notes - some stuff (like comments from our meetings, brainstorming docs, etc) is a little too private, and contains ideas for future challenges :) )
Part of my goal for releasing all of our source + planning documents + deployment files is to a) show others how a CTF can be run, and b) encourage other CTF developers to follow suit and release their stuff!
As of this writing, the scoreboard and challenges are still online. We plan to keep them around for a couple more days before finally shutting them down.
The rest of my team can most definitely confirm this: I'm not an infrastructure kinda guy. I was happy to write challenges, and relied on others for infrastructure bits. The only thing I did was write a Dockerfile for each of my challenges.
As such, I'll defer to my team on this part. I'm hoping that others on my team will post more details about the configurations, which I'll share on my Twitter feed. You can also find all the Dockerfiles and deployment scripts on our Github repository.
What I do know is, we used:
- Google's CTF Scoreboard running on AppEngine for our scoreboard
- Dockerfiles for each challenge that had an online component, and Docker for testing
- docker-compose for testing
- Kubernetes for deployment
- Google Container Engine for running all of that in The Cloud
As I said, all the configurations are on Github. The infrastructure worked great: we had absolutely no traffic or load problems, and only very minor other issues.
I'm also super excited that Google graciously sponsored all of our Google Cloud expenses! The CTF weekend cost us roughly $500 - $600, and as of now we've spent a little over $800.
Just a few numbers:
- We had 728 teams register
- We had 531 teams score at least one point
- We had 354 teams score at least 100 points
- We had 23 teams submit at least one on-site flag (presumably, that many teams played on-site)
Also, the top-10 teams were:
- dcua :: 6773
- OpenToAll :: 5178
- scryptos :: 5093
- Dragon Sector :: 4877
- Antichat :: 4877
- p4 :: 4777
- khack40 :: 4677
- squareroots :: 4643
- ASIS :: 4427
- Ox002147 :: 4397
The top-10 teams on-site were:
- OpenToAll :: 5178
- ▣ :: 3548
- hash_slinging_hackers :: 3278
- NeverTry :: 2912
- 0x41434142 :: 2668
- DevOps Solution :: 1823
- Shadow Cats :: 1532
- HOW BOU DAH :: 1448
- Newbie :: 762
- CTYS :: 694
The full list can be found on our CTFTime.org page.
We had three on-site challenges (none of them created by me):
This was a one-point challenge designed simply to determine who's eligible for on-site prizes. We had a flag taped to the wall. Not super interesting. :)
(Speaking of prizes, I want to give a shout-out to Synack for providing some prizes, and in particular for working with us on a fairly complex set-up for dealing with said prizes. :) )
Shared Secrets 
The Shared Secrets challenge was a last-minute idea. We wanted more on-site challenges, and others on the CTF organizing team came up with using Shamir's Secret Sharing scheme. We posted QR codes containing pieces of a secret around the venue.
It was a "3 of 6" scheme, so only three were actually needed to get the secret.
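For readers unfamiliar with the scheme, here's a minimal sketch of "3 of 6" Shamir secret sharing over a prime field. The threshold and share count match the challenge, but the prime and the code itself are purely illustrative, not what we actually deployed.

```python
# Minimal "3 of 6" Shamir secret sharing sketch (illustrative only).
# The secret is the constant term of a random degree-2 polynomial;
# any 3 of the 6 shares reconstruct it, 2 reveal nothing.
import random

P = 2**127 - 1  # a prime large enough to hold a short secret

def make_shares(secret, k=3, n=6):
    # Random polynomial of degree k-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation evaluated at x = 0 recovers the secret.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

In the challenge, each QR code would carry one `(x, y)` pair; scanning any three is enough to run the reconstruction.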
The quotes on top of each image try to push people towards either "Shamir" or "ACM 22(11)". My favourite was, "Hi, hi, howdy, howdy, hi, hi! While everyone is minus, you could call me multiply", which is a line from a Shamir (the rapper) song. I did not determine if Shamir the rapper and Shamir the cryptographer were the same person. :)
Locker is really cool! We basically set up a padlock with an Arduino and a receipt printer. After successfully picking the lock, you'd get a one-time-use flag printed out by the printer.
(We had some problems with submitting the flag early on, because we forgot to build the database for the one-time-use flags, but got that resolved quickly!)
@bmenrigh developed the lock post, which detected the lock opening, and @matir developed the software for the receipt printer.
I'm not going to go over others' challenges beyond the on-site ones I already covered, since I don't have the insight to comment on them. However, I do want to cover all of my challenges. Not a ton of detail, but enough to understand the context. I'll likely blog about a couple of them specifically later.
I probably don't need to say it, but: challenge spoilers coming!
'easy' challenges [10-40]
I wrote a series of what I called 'easy' challenges. They don't really have a trick to them, but teach a fundamental concept necessary to do CTFs. They're also a teaching tool that I plan to use for years to come. :)
easy  - a couldn't-be-easier reversing challenge. Asks for a password then prints out a flag. You can get both the password and the flag by running strings on the binary.
easyauth  - a web challenge that sets a cookie, and tells you it's setting a cookie. The cookie is simply 'username=guest'. If you change the cookie to 'username=administrator', you're given the flag. This is to force people to learn how to edit cookies in their browser.
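To show the idea without the real challenge code (which isn't reproduced here), the snippet below mocks a server that naively trusts a plain `username` cookie; the flag value, port, and handler are all invented for illustration.

```python
# Self-contained mock of the easyauth concept: the server trusts a
# "username" cookie, so rewriting guest -> administrator yields the flag.
# FLAG and all server details are made up for this sketch.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

FLAG = b"FLAG{hypothetical}"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        cookie = self.headers.get("Cookie", "")
        body = FLAG if "username=administrator" in cookie else b"hello, guest"
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep output quiet

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

# The "attack": resend the request with the edited cookie.
req = urllib.request.Request(url, headers={"Cookie": "username=administrator"})
flag = urllib.request.urlopen(req).read()
server.shutdown()
```

In the actual challenge, players did the cookie edit in their browser's dev tools rather than in code, which is exactly the skill it was meant to teach.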
easyshell and easyshell64 - these are both simple programs where you can send them shellcode, and they run it. It requires the player to figure out what shellcode is and how to use it (e.g., from msfvenom or an online shellcode database). There are both 32- and 64-bit versions.
easyshell and easyshell64 are also good ways to test shellcode, and a place where people can grab libc binaries, if needed.
And finally, easycap  is a simple packet capture, where a flag is sent across the network one packet at a time. I didn't keep my generator, but it's essentially a ruby script that would do a s.send() on each byte of a string.
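I no longer have the original Ruby generator, but the concept can be sketched in a few lines of Python: one `send()` per byte means one packet per byte on the wire. The flag string here is invented, and a local datagram socket pair stands in for the real network capture.

```python
# Sketch of the easycap-style generator: send a flag one byte per
# packet, then reassemble it on the receiving side. Flag is made up.
import socket

flag = "FLAG:hypothetical_easycap_flag"

# A local datagram socket pair preserves message boundaries,
# so each send() below corresponds to one "packet".
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)

for ch in flag:
    a.send(ch.encode())          # one packet per character

received = "".join(b.recv(1).decode() for _ in flag)
print(received)
```

Solving the real challenge was the reverse direction: open the pcap in Wireshark (or similar), extract the one-byte payloads in order, and concatenate them.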
skipper  and skipper2 
Now, we're starting to get into some of the levels that require some amount of specialized knowledge. I wrote skipper and skipper2 for an internal company CTF a long time ago, and have kept them around as useful teaching tools.
One of the first things I ever did in reverse engineering was write a registration bypass for some icon-maker program on 16-bit DOS, using the debug.com command and some dumb luck - the kind of program where you had to find the "Sorry, your registration code is invalid" message and bypass it. I wanted to simulate this, and that's where these came from.
With skipper, you can bypass the checks by just changing the program counter ($eip or $rip) or nop'ing out the checks. skipper2, however, incorporates the results from the checks into the final flag, so they can't be skipped quite so easily. Rather, you have to stop before each check and load the proper value into memory to get the flag. This simulates situations I've legitimately run into while writing keygens.
When I originally conceived of hashecute, I had imagined it being fairly difficult. The idea is, you can send any shellcode you want to the server, but you have to prepend the MD5 of the shellcode to it, and the prepended shellcode runs as well. That's gotta be hard, right? Making an MD5 that's executable??
Except it's not, really. You just need to make sure your checksum starts with a short jump to the end of the checksum (or to a NOP sled if you want to do it even faster!). That's \xeb\x0e (a short jmp over the rest of the digest), as the simplest example (there are practically infinite others). And it's really easy to arrange that by just appending crap to the end of the shellcode: you can see that in my solution.
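The brute-force approach described above can be sketched in a few lines. The placeholder NOP-sled shellcode is an assumption; real shellcode would go in its place:

```python
import hashlib
import os

shellcode = b"\x90" * 8  # placeholder NOP sled standing in for real shellcode

# Append junk bytes until the MD5 digest happens to start with \xeb\x0e
# (jmp short +0x0e), which hops over the remaining 14 bytes of the
# 16-byte digest. Each attempt has a 1-in-65536 chance, so this is fast.
while True:
    candidate = shellcode + os.urandom(4)
    digest = hashlib.md5(candidate).digest()
    if digest[:2] == b"\xeb\x0e":
        break

payload = digest + candidate  # the digest executes as a jump, then the shellcode runs
```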
It does, however, teach a little critical thinking to somebody who might not be super accustomed to dealing with machine code, so I intend to continue using this one as a teaching tool. :)
b-64-b-tuff has the dual honour of both having the stupidest name and being the biggest waste of my own time. :)
So, I came up with the idea of writing this challenge during a conversation with a friend: I said that I know people have written shellcode encoders for unicode and other stuff, but nobody had ever written one for Base64. We should make that a challenge!
So I spent a couple minutes writing the challenge. It's mostly just Base64 code from StackOverflow or something, and the rest is the same skeleton as easyshell/easyshell64.
Then I spent a few hours writing a pure Base64 shellcode encoder. I intend to write a future blog post entirely about that process, because I think it's actually a kind of interesting problem. I eventually got to the point where it worked perfectly, and I was happy that I could prove that this was, indeed, solvable! So I gave it a stupid name and sent out my PR.
That's when I think @matir said, "isn't Base64 just a superset of alphanumeric?".
Yes. Yes it is. I could have used any off-the-shelf alphanumeric shellcode encoder such as msfvenom. D'OH!
But, the process was really interesting, and I do plan to write about it, so it's not a total loss. And I know at least one player did the same (hi @Grazfather! [he graciously shared his code where he encoded it all by hand]), so I feel good about that :-D
I like to joke that I only write challenges to drive traffic to my blog. This is sort of the opposite: it rewards teams that read my blog. :)
A few months ago, while writing the delphi-status challenge (more on that one later), I realized that when encrypting data using a padding oracle, the last block can be arbitrarily chosen! I wrote about it in an off-handed sort of way at that time.
Shortly after, I realized that it could make a neat CTF challenge, and thus was born in-plain-site.
It's kind of a silly little challenge. Like one of those puzzles you get in riddle books. The ciphertext was literally the string "HiddenCiphertext", which I tell you in the description, but of course you probably wouldn't notice that. When you do, it's a groaner. :)
Fun story: I had a guy from the team OpenToAll bring up the blog before we released the challenge, and mention how he was looking for a challenge involving plaintext ciphertext. I had to resist laughing, because I knew it was coming!
This was a silly little level, which once again forces people to get shellcode. You're allowed to send up to 5 bytes of shellcode to the server, where the flag is loaded into memory, and the server executes them.
Obviously, 5 bytes isn't enough to do a proper syscall, so you have to be creative. It's more of a puzzle challenge than anything.
The trick is, I used a bunch of inline assembly when developing the challenge (see the original source, it isn't pretty!) to ensure that the registers are basically set up to make a syscall - all you have to do is move esi (a pointer to the flag) into ecx. I later discovered that you can "link" variables to specific registers in gcc.
The intended method was for people to send \xcc for the shellcode (or similar) and to investigate the registers, determining what the state was, and then to use shellcode along the lines of xchg esi, ecx / int 0x80. And that's what most solvers I talked to did.
One fun thing: eax (which is the syscall number when a syscall is made) is set to len(shellcode) (the return value of read()). Since sys_write, the syscall you want to make, is number 4, you can easily trigger it by sending 4 bytes. If you send 5 bytes, it makes the wrong call.
Several of the solutions I saw had a dec eax instruction in them, however! The irony is, you only need that instruction because you have it. If you had just left it off, eax would already be 4!
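The intended xchg esi, ecx / int 0x80 solution dovetails neatly with the length trick, since it's exactly four bytes. As a sketch (byte encodings taken from standard x86 opcode tables):

```python
# xchg esi, ecx  -> 0x87 0xf1   (move the flag pointer into ecx)
# int 0x80       -> 0xcd 0x80   (make the syscall)
shellcode = b"\x87\xf1\xcd\x80"

# read() returns the number of bytes received, which becomes the syscall
# number in eax: sending exactly 4 bytes means eax = 4 = sys_write.
print(len(shellcode))
```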
delphi-status was another of those levels where I spent way more time on the solution than on the challenge.
It seems common enough to see tools to decrypt data using a padding oracle, but not super common to see challenges where you have to encrypt data with a padding oracle. So I decided to create a challenge where you have to encrypt arbitrary data!
The original goal was to make somebody write a padding oracle encryptor tool for me. That seemed like a good idea!
But, I wanted to make sure this was do-able, and I was just generally curious, so I wrote it myself. Then I updated my tool Poracle to support encryption, and wrote a blog about it. If there wasn't a tool available that could encrypt arbitrary data with a padding oracle, I was going to hold back on releasing the code. But tools do exist, so I just released mine.
It turns out, there was a simpler solution: you could simply xor-out the data from the block when it's only one block, and xor-in arbitrary data. I don't have exact details, but I know it works. Basically, it's a classic stream-cipher-style attack.
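The XOR algebra behind that simpler solution is worth seeing concretely. This sketch uses a toy XOR "block cipher" standing in for AES (any bijection demonstrates the point), and skips the padding-oracle mechanics - it only shows why knowing the intermediate value of one block lets you choose its plaintext:

```python
import os

BLOCK = 16
KEY = os.urandom(BLOCK)

# Toy "block cipher": XOR with a fixed key (a stand-in for AES).
def dec_block(b):
    return bytes(x ^ k for x, k in zip(b, KEY))

iv, c = os.urandom(BLOCK), os.urandom(BLOCK)

# CBC decryption of a single block: P = D(C) XOR IV
p = bytes(x ^ y for x, y in zip(dec_block(c), iv))

# A padding oracle lets you recover the intermediate I = D(C) = P XOR IV.
intermediate = bytes(x ^ y for x, y in zip(p, iv))

# Choosing IV' = I XOR P' makes the same block decrypt to any P' you want:
target = b"HiddenCiphertext"  # conveniently exactly 16 bytes
new_iv = bytes(x ^ y for x, y in zip(intermediate, target))
p2 = bytes(x ^ y for x, y in zip(dec_block(c), new_iv))
print(p2)
```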
And that just demonstrates the Cryptographic Doom Principle :)
ximage might be my favourite level. Some time ago - possibly years - I was chatting with a friend, and steganography came up. I wondered if it was possible to create an image where the very pixels were executable!?
I went home wondering if that was possible, and started trying to think of 3-byte NOP-equivalent instructions. I managed to come up with a large number of workable combinations, including ones that modified registers I didn't care about, plus combinations of 1- and 2-byte NOP-equivalents. By the end, I could reasonably do most colours in an image, including black (though it was slightly greenish) and white. You can find the code here.
(I got totally nerdsniped while writing this, and just spent a couple days trying to find every 3-byte NOP equivalent to see how much I can improve this!)
Originally, I just made the image data executable, so you'd have to ignore the header and run the image body. Eventually, I noticed that the bitmap header, 'BM', was effectively inc edx / dec ebp, which is a NOP as far as I'm concerned. That's followed by a 2-byte length value. I changed that length on every image to be \xeb\x32, which is effectively a jump to the end of the header. That also caused weird errors when reading the image, which I was totally fine with leaving as a hint.
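Laid out as bytes, the start of such a header looks like this (a sketch of the idea described above, with byte values from standard x86 encodings):

```python
# The first four bytes of an ximage-style bitmap:
header = b"BM" + b"\xeb\x32"
# 'B' = 0x42 = inc edx, 'M' = 0x4d = dec ebp  (both harmless here)
# \xeb\x32   = jmp short +0x32, doubling as the bogus length field
print(header.hex())
```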
So what you have is an image that's effectively shellcode; it can be loaded into memory and run. A steganographic method that has probably never been done. :)
beez-fight was an item-duplication vulnerability that was modeled after a similar vulnerability in Diablo 2. I had a friend a lonnnng time ago who discovered a vulnerability in Diablo 2, where when you sold an item it was copied through a buffer, and that buffer could be sold again. I was trying to think of a similar vulnerability, where a buffer wasn't cleared correctly.
I started by writing a simple game engine. While I was creating items, locations, monsters, etc., I didn't really think about how the game was going to be played - browser? A binary I distribute? netcat? Distributing a binary can be fun, because the player has to reverse engineer the protocol. But netcat is easier! The problem is, the vulnerability has to be a bit more subtle in netcat, because I can't depend on a numbered buffer - what you see is what you get!
Eventually, I came upon the idea of equip/unequip being problematic. Not clearing the buffer properly!
Something I see far too much in real life is code that checks if an object exists in a different way in different places. So I decided to replicate that - I had both an item that's NULL-able, and a flag :is_equipped. When you tried to use an item, it would check if the :is_equipped flag is set. But when you unequipped it, it checked if the item was NULL, which never actually happened (unequipping it only toggled the flag). As a result, you could unequip the item multiple times and duplicate it!
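The inconsistent-check pattern can be boiled down to a few lines. This is a simplified Python sketch of the bug (class and method names are illustrative, not the challenge's actual code):

```python
class Player:
    def __init__(self):
        self.inventory = ["sword"]
        self.equipped = None       # the item buffer
        self.is_equipped = False   # separate "is something equipped?" flag

    def equip(self, item):
        self.inventory.remove(item)
        self.equipped = item
        self.is_equipped = True

    def unequip(self):
        # BUG: checks the buffer instead of the flag, and the buffer is
        # never cleared - only the flag gets toggled off.
        if self.equipped is not None:
            self.inventory.append(self.equipped)
            self.is_equipped = False

p = Player()
p.equip("sword")
p.unequip()
p.unequip()  # the stale buffer is copied out again: duplication!
print(p.inventory)
```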
Once that was done, the rest was easy: make a game that's too difficult to reasonably survive, and put a flag in the store that's worth a lot of gold. The only reasonable way to get the flag is to duplicate an item a bunch, then sell it to buy the flag.
I think I got the most positive feedback on this challenge - people seem to enjoy game hacking!
vhash + vhash-fixed 
It all dates back to a conversation I had with @joswr1ght about a SANS Holiday Hack Challenge level I was designing. I suggested using a hash-extension vulnerability, and he said we can't, because of hash_extender, recklessly written by yours truly, ruining hash extension vulnerabilities forever!
I found that funny, and mentioned it to @bmenrigh. We decided to make our own novel hashing algorithm that's vulnerable to an extension attack. We decided to make it extra hard by not giving out the source! Players would have to reverse engineer the algorithm in order to implement the extension attack. PERFECT! Nobody knows as well as I do how difficult it can be to implement a new hash extension attack. :)
Now, this is where it gets a bit fun. I agreed to write the front-end if he wrote the back-end. The front-end was almost exactly easyauth, except the cookie was signed. We decided to use an md5sum-like interface, which was a bit awkward in PHP, but that was fine. I wrote and tested everything with md5sum, and then awaited the vhash binary.
When he sent it, I assumed vhash was a drop-in replacement without thinking too much about it. I updated the hash binary, and could log in just fine, and that was it.
When the challenge came out, the first solve happened in only a couple of minutes. That didn't seem possible! I managed to get in touch with the solver, and he said that he just changed the cookie and ignored the hash. Oh no! Our only big mess-up!
After investigation, we discovered that the agreed md5sum-like interface meant, to @bmenrigh, that the data would come on stdin, and to me it meant that the file would be passed as a parameter. So, we were hashing the empty string every time. Oops!
Luckily, we found it, fixed it, and rolled out an updated version shortly after. The original challenge became an easy 450-pointer for anybody who bothered to try, and the real challenge was only solved by a few, as intended.
dnscap is simply a packet capture from dnscat2, running in unencrypted mode over a laggy connection (coincidentally, I'm writing this writeup at the same bar where I wrote the original challenge!). Over dnscat2, I sent a .png file that contains the dnscat2 logo, as well as the flag. Product placement, anyone?
I assumed it would be fairly difficult to disentangle the packets, which is why we gave it a high point value. Ultimately, it was easier than we'd expected, and people were able to solve it fairly quickly.
And finally, my old friend nibbler.
At some point in the past few months, I had the realization: nibbles (the snake game for QBasic where I learned to program) sounds like nibble (a 4-bit value). I forget where it came from exactly, but I had the idea to build a nibbles-clone with a vulnerability where you'd have to exploit it by collecting the 'fruit' at the right time.
I originally stored the scores in an array, and each 'fruit' would alternate between being worth 00 and FF points. You'd have to overflow the stack and build an exploit by gathering fruit with the snake. You'll notice that the name I ask for at the start uses read() - that's so it can contain NUL bytes, letting you build a ROP chain in your name.
I realized that picking values between 00 and FF would take FOREVER, and wanted to get back to the original idea: nibbles! But I couldn't think of a way to make it realistic while only collecting 4-bit values.
Eventually, I decided to drop the premise of performing an exploit, and instead, just let the user write shellcode that is run directly. As a result, it went from a pwn to a programming challenge, but I didn't re-categorize it, largely because we don't have programming challenges.
It ended up being difficult, but solvable! One of my favourite writeups is here; I HIGHLY recommend reading it. My favourite part is that he named the snakes and drew some damn sexy images!
I just want to give a shout-out to the poor soul, who I won't name here, who solved this level BY HAND, but didn't cat the flag file fast enough. We shouldn't have had the 10-second timeout, but we did. As a result, he didn't get the flag. I'm so sorry. :(
Fun fact: @bmenrigh was confident enough that this level was impossible to solve that he made me a large bet that less than 2 people would solve it. Because we had 9 solvers, I won a lot of alcohol! :)
Hopefully you enjoyed hearing a little about the BSidesSF CTF challenges I wrote! I really enjoyed writing them, and then seeing people working on solving them!
On some of the challenges, I tried to teach something (or have a teachable lesson, something I can use when I teach). On some, I tried to make something pretty difficult. On some, I fell somewhere between. But there's one thing they have in common: I tried to make my own challenges as easy as possible to test and validate. :)
A few weeks ago, SANS hosted a private event at the Smithsonian's Air and Space Museum as part of SANS Hackfest. An evening in the Air and Space Museum just for us! And to sweeten the deal, they set up a scavenger hunt called "Hackers of Gravity" to work on while we were there!
We worked in small teams (I teamed up with Eric, who's also writing this blog with me). All they told us in advance was to bring a phone, so every part of this was solved with our phones and Google.
Each level began with an image, typically with a cipher embedded in it. After decoding the cipher, the solution and the image itself were used together to track down a related artifact.
This is a writeup of that scavenger hunt. :)
Challenge 1: Hacker of Tenacity
The order of the challenges was actually randomized, so this may not be the order that anybody else had (homework: there are 5040 possible orderings of challenges, and about 100 people attending; what are the odds that two people had the same order? The birthday paradox applies).
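The homework can be worked out numerically. With 7 challenges there are 7! = 5040 orderings, and the birthday-paradox calculation for 100 attendees looks like this:

```python
import math

orderings = math.factorial(7)  # 7 challenges -> 5040 possible orders
people = 100

# P(all attendees received distinct orders), birthday-paradox style:
p_distinct = 1.0
for i in range(people):
    p_distinct *= (orderings - i) / orderings

p_collision = 1 - p_distinct
print(f"{p_collision:.2f}")  # roughly 0.63
```

So despite only 100 people drawing from 5040 orderings, a shared order is more likely than not.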
The first challenge was simply text:
Sometimes tenacity is enough to get through a difficult challenge. This Hacker of Gravity never gave up and even purposefully created discomfort to survive their challenge against gravity. Do you possess the tenacity to break this message? T05ZR1M0VEpPUlBXNlpTN081VVdHMjNGT0pQWEdaTEJPUlpRPT09PQ==
Based on the character set, we immediately recognized it as Base64. We found an online decoder, and it decoded to another encoded-looking string.
We recognized that as Base32 - Base64 will never have four "====" signs at the end, and Base32 typically only contains uppercase characters and numbers. (Quick plug: I'm currently working on Base32 support for dnscat2, which is another reason I quickly recognized it!)
Anyway, the Base32 version decoded to spirit_of_wicker_seats, and Eric recognized "Spirit" as a possible clue and searched for "Spirit of St Louis Wicker Seats", which revealed the following quote from the Wikipedia article on the Spirit of St. Louis: "The stiff wicker seat in the cockpit was also purposely uncomfortable".
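The whole two-layer decode fits in a few lines of Python (using the challenge string quoted above):

```python
import base64

msg = "T05ZR1M0VEpPUlBXNlpTN081VVdHMjNGT0pQWEdaTEJPUlpRPT09PQ=="
layer1 = base64.b64decode(msg)   # yields an all-caps Base32 string ending in "===="
flag = base64.b32decode(layer1)
print(flag.decode())  # spirit_of_wicker_seats
```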
The Spirit of St. Louis was one of the first planes we spotted, so we scanned the QR code and found the solution: lots_of_fuel_tanks!
Challenge 2: Hacker of Navigation
We actually got stuck on the second challenge for a while; eventually we got an idea of how these challenges tend to work, after which we came back to it.
We were given a fragment of a letter:
The museum archives have located part of a letter in an old storage locker from some previously lost collection. They'd REALLY like your help finding the author.
You'll note at the bottom-left corner it implies that "A = 50 degrees". We didn't notice that initially. :)
What we did notice was that the degrees were all a) multiples of 10, and b) below 260. That led us to believe that they were numbered letters, times ten (so A = 10, B = 20, C = 30, etc).
The numbers were: 100 50 80 90 80 100 50 230 120 130 190 180 130 230 240 50.
Dividing by 10 gives 10 5 8 9 8 10 5 23 12 13 19 18 13 23 24 5.
Converting that to the corresponding letters gave us JEHIH JEWLMSRMWXE. Clearly not an English sentence, but it looks like a cryptogram (JEHIH looks like "THERE" or "WHERE").
That's when we noticed the "A = 50" in the corner, and realized that things were probably shifted by four (A = 5 after dividing by ten, so plaintext A became E). Instead of manually converting it, we found a shift-cipher bruteforcer that we could use. The result was: FADED FASHIONISTA
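A shift-cipher bruteforcer is only a few lines of Python; trying all 26 shifts against the recovered letters makes the answer jump out:

```python
ciphertext = "JEHIH JEWLMSRMWXE"

def shift(text, n):
    return "".join(
        chr((ord(c) - 65 - n) % 26 + 65) if c.isalpha() else c
        for c in text
    )

# Print every possible shift; shift 4 is the one that reads as English:
for n in range(26):
    print(n, shift(ciphertext, n))
```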
Searching for "Faded Fashionista Air and Space" led us to this Smithsonian article: Amelia Earhart, Fashionista. Neither of us knew where her exhibit was, but eventually we tracked it down on the map and walked around it until we found her Lockheed Vega. The QR code scanned to amelias_vega.
Challenge 3: Hacker of Speed
This was an image of some folks ready to board a plane or something:
This super top secret photo has been censored. The security guys looked at this SO fast, maybe they missed something?
Because of the hint, we started looking for mistakes in the censoring and noticed that they're wearing boots that say "X-15":
We found pictures of the X-15 on the museum's website and remembered seeing the plane on the 2nd floor. We reached the artifact and determined that the QR code read faster_than_superman.
Once we got to the artifact, we noticed that we hadn't broken the code yet. Looking carefully at the image, we saw the text at the bottom, nbdi_tjy_qpjou_tfwfo_uxp.
As an avid cryptogrammer, I recognized tfwfo as likely being "never". Since 'e' is one character before 'f', it seemed likely that it was a single shift ('b'->'a', 'c'->'b', etc). I mentally shifted the first couple letters of the sentence, and it looked right, so I did the entire string while Eric wrote it down: mach_six_point_seven_two.
The funny thing is, the word was "seven", not "never", but the "e"s still matched!
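The single-shift decode described above is trivial to verify in code:

```python
ct = "nbdi_tjy_qpjou_tfwfo_uxp"

# Shift every letter back by one ('b' -> 'a', 'c' -> 'b', ...):
pt = "".join(
    chr((ord(c) - 97 - 1) % 26 + 97) if c.isalpha() else c
    for c in ct
)
print(pt)  # mach_six_point_seven_two
```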
Challenge 4: Hacker of Design
While researching some physics based penetration testing, you find this interesting diagram. You feel like you've seen this device before... maybe somewhere or on something in the Air and Space museum?
The diagram reminded Eric of an engine he saw on an earlier visit, so we headed for the artifact on the other side of the museum:
Unfortunately, there was no QR code, so we decided to work on decoding the challenge to discover the location of the artifact.
Now that we'd seen the hint on Challenge 2, we were more prepared for a diagram to help us! In this case, it was a drawing of an atom and the number "10". We concluded that the numbers probably referred to atomic numbers on the periodic table, and converted them as such:
... and so on.
After decoding the full string, we ended up with:
We actually made a mistake in decoding the string, but managed to find it anyway thanks to search autocorrect. :)
After searching for "schwalbe air and space", we found this article, which led us to the artifact: the Messerschmitt Me 262 A-1a Schwalbe (Swallow). The QR code scanned revealed the_swallow.
Challenge 5: Hacker of Distance
While at the bar, listening to some Dual Core, planning your next conference-fest with some fellow hackers, you find this interesting napkin. Your mind begins to wander. Why doesn't Dual Core have a GOLDEN RECORD?! Also, is this napkin trying to tell you something in a roundabout way?
The hidden text on this one was obvious… Morse code! Typing the code into a phone (not fun!), we ended up with .- -.. .- ... - .-. .- .--. . .-. .- ... .--. . .-. .-, which translates to ADASTRAPERASPERA.
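A lookup table makes the Morse decode mechanical (only the symbols this message needs are included here; a full table would cover A-Z):

```python
MORSE = {".-": "A", "-..": "D", "...": "S", "-": "T",
         ".-.": "R", ".--.": "P", ".": "E"}

code = ".- -.. .- ... - .-. .- .--. . .-. .- ... .--. . .-. .-"
text = "".join(MORSE[sym] for sym in code.split())
print(text)  # ADASTRAPERASPERA
```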
According to Google, that slogan is used by a thousand different organizations, none of which seemed to be space or air related. However, searching for "Golden Record Air and Space" returned several results for the Voyager space probe. We looked at our map and scurried to the exhibit on the other side of the museum:
Once we made it to the exhibit, finding the QR code was easy; scanning it revealed the_princess_is_in_another_castle. The decoy flag!
We tried searching keywords from the napkin, but none of the results seemed promising. After a few frustrating minutes, we saw the museum banquet director and asked him for help. He told us that the plane we were looking for was close to the start of the challenge, so we made a dash for the first floor and found the correct Voyager exhibit:
Scanning the QR code revealed the code, missing_canards.
Challenge 6: Hacker of Guidance
The sixth challenge gave us a map with some information:
You have intercepted this map that appears to target something. The allies would really like to know the location of the target. Also, they'd like to know what on Earth is at that location.
We immediately noticed the hex-encoded numbers on the left, which translate to 54.138852,13.767725. We googled the coordinates, and they turned out to be a location in Germany: Flughafenring, 17449 Peenemünde, Germany.
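The map's hex isn't reproduced here, but assuming the numbers were the ASCII-hex form of the coordinates, the decode would look like this (the hex string below is a hypothetical reconstruction from the known result):

```python
# Hypothetical ASCII-hex form of the coordinates:
hex_numbers = "35342e3133383835322c31332e373637373235"
coords = bytes.fromhex(hex_numbers).decode("ascii")
print(coords)  # 54.138852,13.767725
```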
After many failed searches we tried "Peenemünde ww2 air and space", which led to a reference to the German V2 Rocket. Here is the exhibit and QR code:
Scanning the QR code revealed aggregat_4, the formal name for the V-2 rocket.
Challenge 7: Hacker of Coding
This is an image with a cipher on the right:
Your primary computer's 0.043MHz CPU is currently maxed out with other more important tasks, so converting all these books of source code to assembly is entirely up to you.
On the chalkboard is a cipher:
We couldn't remember what it was called, and ended up searching for "line dot cipher", which immediately identified it as a pigpen cipher. The pigpen cipher can be decoded with this graphic:
Essentially, you find the shape containing the letter that corresponds to the shape in that graphic. So, the first letter is ">" on the chalkboard, which maps to 'T'. The second is the upper three quarters of a square, which matches up with 'H', and the third is a square, which matches to E. And so on.
Initially, we used a version of the graphic that didn't map to the proper English characters and got gibberish. Later, we did it right and found the text "THE BEST SHIP TO COME DOWN THE LINE".
To find the artifact, we googled "0.043MHz", and immediately discovered it was "Apollo 11".
The QR code scanned to the_eleventh_apollo
And that's it!
And that's the end of the cipher portion of the challenge! We took first place by only a few minutes. :)
The last part of the challenge involved throwing wood airplanes. Because our plane didn't go backwards, it wasn't the worst, but it's nothing to write home about!
But in the end, it was a really cool way to see a bunch of artifacts and also break some codes!
Live from the SANS Pentest Summit, I'm excited to announce the latest beta release of dnscat2: 0.04! Besides some minor cleanups and UI improvements, there is one serious improvement: all dnscat2 sessions are now encrypted by default!
Read on for some user information, then some implementation details for those who are interested! For all the REALLY gory information, check out the protocol doc!
Tell me what's new!
By default, when you start a dnscat2 client, it now performs a key exchange with the server, and uses a derived session key to encrypt all traffic. This has the huge advantage that passive surveillance and IDS and such will no longer be able to see your traffic. But the disadvantage is that it's vulnerable to a man-in-the-middle attack - assuming somebody takes the time and effort to perform a man-in-the-middle attack against dnscat2, which would be awesome but seems unlikely. :)
By default, all connections are encrypted, and the server will refuse to allow cleartext connections. If you start the server with --security=open (or run set security=open), then the client decides the security level - including cleartext.
If you pass the server a --secret string (see below), then the server will require clients to authenticate using the same --secret value. That can be turned off by using --security=open or --security=encrypted (or the equivalent set commands).
Let's look at the man-in-the-middle protection...
Short authentication strings
First, by default, a short authentication string is displayed on both the client and the server. Short authentication strings, inspired by ZRTP and Silent Circle, are a visual way to tell if you're the victim of a man-in-the-middle attack.
Essentially, when a new connection is created, the user has to manually match the short authentication strings on the client and the server. If they're the same, then it's a legit connection. Here's what it looks like on the client:
Encrypted session established! For added security, please verify the server also displays this string: Tort Hither Harold Motive Nuns Unwrap
And the server:
New window created: 1 Session 1 security: ENCRYPTED BUT *NOT* VALIDATED For added security, please ensure the client displays the same string: >> Tort Hither Harold Motive Nuns Unwrap
There are 256 different possible words, so six words gives 48 bits of protection. While a 48-bit key can eventually be bruteforced, in this case it has to be done in real time, which is exceedingly unlikely.
Alternatively, a pre-shared secret can be used instead of a short authentication string. When you start the server, you pass in a --secret value, such as --secret=pineapple. Clients with the same secret will create an authenticator string based on the password and the cryptographic keys, and send it to the server, encrypted, after the key exchange. Clients that use the wrong secret will be summarily rejected.
Details on how this is implemented are below.
How stealthy is it?
To be perfectly honest: not completely.
The key exchange is pretty obvious. A 512-bit value has to be sent via DNS, and a 512-bit response has to come back. That's pretty big, and stands out.
After that, every packet has an unencrypted 40-bit (5-byte) header and an unencrypted 16-bit (2-byte) nonce. The header contains three bytes that don't really change, and the nonce is incremental. Any system that knows to look for dnscat2 will be able to find that.
It's conceivable that I could make this more stealthy, but anybody who's already trying to detect dnscat2 traffic will be able to update the signatures that they would have had to write anyway, so it becomes a cat-and-mouse game.
Of course, that doesn't stop people from patching things. :)
The plus side, however, is that none of your data leaks! And somebody would have to be specifically looking for dnscat2 traffic to recognize it.
What are the hidden costs?
Encrypted packets have 64 bits (8 bytes) of extra overhead: a 16-bit (two-byte) nonce and a 48-bit (six-byte) signature on each packet. Since DNS packets have between 200 and 250 bytes of payload space, that means we lose ~4% of our potential bandwidth.
Additionally, there's a key exchange packet and potentially an authentication packet. That's two extra roundtrips over a fairly slow protocol.
Other than that, not much changes, really. The encryption/decryption/signing/validation are super fast, and it uses a stream cipher, so the length of the messages doesn't change.
How do I turn it off?
The server always supports crypto; if you don't WANT crypto, you'll have to manually hack the server or use a version of the dnscat2 server <= 0.03. You'll also have to manually turn off encryption in the client; otherwise, the connection will fail.
Speaking of turning off encryption in the client: you can compile without encryption by using make nocrypto. You can also disable encryption at runtime with dnscat2 --no-encryption. On Visual Studio, you'll have to define "NO_ENCRYPTION". Note that the server, by default, won't allow either of those to connect unless you start it with --security=open.
Give me some technical details!
Your best bet if you're REALLY curious is to check out the protocol doc, where I document the protocol in full.
But I'll summarize it here. :)
The client starts a session by initiating a key exchange with the server. Both sides generate a random, 256-bit private key, then derive a public key using Elliptic Curve Diffie Hellman (ECDH). The client sends the public key to the server, the server sends a public key to the client, and they both agree on a shared secret.
That shared secret is hashed with a number of different values to derive purpose-specific keys - the client encryption key, the server encryption key, the client signing key, the server signing key, etc.
Once the keys are agreed upon, all packets are encrypted and signed. The encryption is Salsa20, using one of the derived keys as well as an incremental nonce. After encryption, the encrypted data, the nonce, and the packet header are signed using SHA3, truncated to 48 bits (6 bytes). 48 bits isn't very long for a signature, but space is at an extreme premium, and for most attacks it would have to be broken in real time.
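The derive-then-sign flow can be sketched as follows. This is illustrative only: the purpose strings and field layout here are assumptions, and the real ones are defined in the protocol doc (Salsa20 encryption is omitted since it's not in the standard library):

```python
import hashlib
import os

shared_secret = os.urandom(32)  # stand-in for the ECDH shared secret

# Hash the shared secret with a purpose string to get purpose-specific keys
# (purpose names here are hypothetical):
def derive_key(purpose):
    return hashlib.sha3_256(shared_secret + purpose).digest()

client_write_key = derive_key(b"client_write_key")
server_write_key = derive_key(b"server_write_key")
client_sign_key  = derive_key(b"client_sign_key")

# Sign header || nonce || ciphertext with SHA3, truncated to 48 bits:
def sign(key, header, nonce, ciphertext):
    return hashlib.sha3_256(key + header + nonce + ciphertext).digest()[:6]

sig = sign(client_sign_key, b"\x00" * 5, b"\x00\x01", b"encrypted-data")
print(sig.hex())
```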
As an aside: I really wanted to encrypt the header instead of just signing it, but because of protocol limitations, that's simply not possible (because I have no way of knowing which packets belong to which session, the session_id has to be plaintext).
Immediately after the key exchange, the client optionally sends an authenticator over the encrypted session. The authenticator is based on a pre-shared secret (passed on the commandline) that the client and server pre-arrange in some way. That secret is hashed with both public keys and the secret (derived) key, as well as a different static string on the client and server. The client sends their authenticator to the server, and the server sends their authenticator to the client. In that way, both sides verify each other without revealing anything.
If the client doesn't send the authenticator, then a short authentication string is generated. It's based on a very similar hash to the authenticator, except without the pre-shared secret. The first 6 bytes are converted into words using a list of 256 English words, and are displayed on the screen. It's up to the user to verify them.
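The bytes-to-words step is straightforward: each of the first 6 bytes of the hash indexes into the 256-word list. A sketch (the word list and hash input here are placeholders, not the real ones):

```python
import hashlib

WORDS = [f"word{i:03d}" for i in range(256)]  # placeholder 256-word list

# Hash of the session material (illustrative input), first 6 bytes -> 6 words:
digest = hashlib.sha3_256(b"both public keys + derived key").digest()
sas = " ".join(WORDS[b] for b in digest[:6])
print(sas)
```

Each byte contributes 8 bits, which is where the 6 x 8 = 48 bits of protection comes from.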
Because the nonce is only 16 bits, only 65536 roundtrips can be performed before running out. As such, the client may, at its own discretion (but before running out), initiate a new key exchange. It's identical to the original key exchange, except that it happens in a signed and encrypted packet. After the renegotiation is finished, both the client and server switch their nonce values back to 0 and stop accepting packets with the old keys.
And... that's about it! Keys are exchanged, an authenticator is sent or a short authentication string is displayed, all messages are signed and encrypted, and that's that!
A few of the challenges I had to work through...
- Because DNS has no concept of connections/sessions, I had to expose more information than I wanted in the packets (and because it's extremely length-limited, I had to truncate signatures)
- I had originally planned to use Curve25519 for the key exchange, but there's no Ruby implementation
- Finding a C implementation of ECC that doesn't require libcrypto or libssl was really hard
- Finding a working SHA3 implementation in Ruby was impossible! I filed bugs against the three most popular implementations, and one of them actually took the time to fix it!
- Dealing with DNS's gratuitous retransmissions and accidental drops was super painful and required some hackier code than I like to see in crypto (for example, an old key can still be used, even after a key exchange, until the new one is used successfully; the more secure alternative can't handle a dropped response packet, otherwise both peers would have different keys)
I just wanted to give a quick shout-out to a few friends who really made this happen by giving me advice and encouragement, or just listening to me complain.
So, in alphabetical order so nobody can claim I play favourites, I want to give mad propz to:
- Alex Weber, who notably convinced me to use a proper key exchange protocol instead of just a static key (and who also wrote the Salsa20 implementation I used)
- Brandon Enright, who gave me a ton of handy crypto advice
- Eric Gershman, who convinced me to work on encryption in the first place, and who listened to my constant complaining about how much I hate implementing crypto
Just some links for your enjoyment:
List of security conferences in 2014
AIDE (Appalachian Institute of Digital Evidence)
- BSides DC 2014
- BSides Chicago 2014
- BSides Nashville 2014
- BSides Augusta 2014
- BSides Huntsville 2014
- BSides Las Vegas 2014
- BSidesDE 2013
- BSidesLV 2013
- BSidesRI 2013
- BSides Cleveland 2012 (BsidesCLE)
- BSides Las Vegas 2012
- Defcon: All Conference CDs and DVDs with Presentation PDF files (updated 2014 for DEF CON 22): Torrent
- Defcon Wireless Village 2014
- Defcon: all other
Digital Bond's S4x14
Circle City Con
GrrCON Information Security Summit & Hacker Conference
- 2011 https://www.youtube.com/
- 2012 https://www.youtube.com/
- 2013 https://www.youtube.com/playlist?list=PL3UAg9Zuj1yK5nePRJCq1Y3gVLkoqVj9a
- 2014 https://www.youtube.com/playlist?list=PL3UAg9Zuj1yLmemIKw-domjg5UkbN-pLc
- Adrian Crenshaw. Intro to Darknets: Tor and I2P Workshop
- Installing the I2P darknet software in Linux
- Adrian Crenshaw. Installing Nessus on Kali Linux and Doing a Credentialed Scan
- Intro to Metasploit Class at IU Southeast
- Louisville ISSA Web PenTesting Workshop
- Louisville Nmap Class 2014
- ISSA Kentuckiana - RESTful Web Services - Jeremy Druin - @webpwnized
- Introduction to HTML Injection (HTMLi) and Cross Site Scripting (XSS) Using Mutillidae
- Introduction to Pen Testing Simple Network Management Protocol (SNMP) - ISSA Kentuckiana workshop 9 - Jeremy Druin
- Liam Randall- Shmoocon 2013: Bro IDS and the Bro Network Programming Language
- Basics of using sqlmap - ISSA Kentuckiana workshop 8 - Jeremy Druin
- SQL Server Hacking from ISSA Kentuckiana workshop 7 - Jeremy Druin
- Introduction to buffer overflows from ISSA KY workshop 6 - Jeremy Druin
- The potential impact of Software Defined Networking on security - Brent Salisbury
- Into to Metasploit - Jeremy Druin
- Traceroute and Scapy Jeremy Druin @webpwnized
- Basic Setup of Security-Onion: Snort, Snorby, Barnyard, PulledPork, Daemonlogger
- NetworkMiner Professional for Network Forensics