Author Archives: Bruce Schneier

Why Technologists Need to Get Involved in Public Policy

Last month, I gave a 15-minute talk in London titled "Why technologists need to get involved in public policy."

In it, I try to make the case for public-interest technologists. (I also maintain a public-interest tech resources page, which has pretty much everything I can find in this space. If I'm missing something, please let me know.)

Boing Boing post.

Adding a Hardware Backdoor to a Networked Computer

Interesting proof of concept:

At the CS3sthlm security conference later this month, security researcher Monta Elkins will show how he created a proof-of-concept version of that hardware hack in his basement. He intends to demonstrate just how easily spies, criminals, or saboteurs with even minimal skills, working on a shoestring budget, can plant a chip in enterprise IT equipment to offer themselves stealthy backdoor access.... With only a $150 hot-air soldering tool, a $40 microscope, and some $2 chips ordered online, Elkins was able to alter a Cisco firewall in a way that he says most IT admins likely wouldn't notice, yet would give a remote attacker deep control.

Using Machine Learning to Detect IP Hijacking

This is interesting research:

In a BGP hijack, a malicious actor convinces nearby networks that the best path to reach a specific IP address is through their network. That's unfortunately not very hard to do, since BGP itself doesn't have any security procedures for validating that a message is actually coming from the place it says it's coming from.

[...]

To better pinpoint serial attacks, the group first pulled data from several years' worth of network operator mailing lists, as well as historical BGP data taken every five minutes from the global routing table. From that, they observed particular qualities of malicious actors and then trained a machine-learning model to automatically identify such behaviors.

The system flagged networks that had several key characteristics, particularly with respect to the nature of the specific blocks of IP addresses they use:

  • Volatile changes in activity: Hijackers' address blocks seem to disappear much faster than those of legitimate networks. The average duration of a flagged network's prefix was under 50 days, compared to almost two years for legitimate networks.

  • Multiple address blocks: Serial hijackers tend to advertise many more blocks of IP addresses, also known as "network prefixes."

  • IP addresses in multiple countries: Most networks don't have foreign IP addresses. In contrast, for the networks that serial hijackers advertised that they had, they were much more likely to be registered in different countries and continents.
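The three signals above can be sketched as a toy feature extractor over BGP-announcement records. This is illustrative only -- the record format, field names, and thresholds here are my assumptions, not taken from the paper (the ASNs are from the documentation-reserved range):

```python
from datetime import date

# Each record is one observed prefix advertisement (hypothetical format):
# (origin_asn, prefix, first_seen, last_seen, registration_country)
announcements = [
    (64500, "203.0.113.0/24",  date(2019, 1, 1), date(2019, 1, 20), "US"),
    (64500, "198.51.100.0/24", date(2019, 2, 1), date(2019, 2, 15), "RU"),
    (64500, "192.0.2.0/24",    date(2019, 3, 1), date(2019, 3, 10), "BR"),
    (64501, "198.18.0.0/15",   date(2017, 1, 1), date(2019, 1, 1),  "US"),
]

def asn_features(records):
    """Aggregate the paper's three signals per origin ASN."""
    by_asn = {}
    for asn, prefix, first, last, country in records:
        f = by_asn.setdefault(asn, {"durations": [], "prefixes": set(),
                                    "countries": set()})
        f["durations"].append((last - first).days)   # prefix lifetime
        f["prefixes"].add(prefix)                    # distinct prefixes
        f["countries"].add(country)                  # registration spread
    return by_asn

def flag_suspects(records, max_avg_days=50, min_prefixes=3, min_countries=2):
    """Flag ASNs matching all three signals; thresholds are illustrative."""
    suspects = []
    for asn, f in asn_features(records).items():
        avg = sum(f["durations"]) / len(f["durations"])
        if (avg < max_avg_days and len(f["prefixes"]) >= min_prefixes
                and len(f["countries"]) >= min_countries):
            suspects.append(asn)
    return suspects

print(flag_suspects(announcements))  # [64500]: short-lived, many, scattered
```

The actual research trains a machine-learning model over features like these rather than hard-coding thresholds.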

Note that this is much more likely to detect criminal attacks than nation-state activities. But it's still good work.

Academic paper.

Cracking the Passwords of Early Internet Pioneers

Lots of them weren't very good:

Unix co-inventor Dennis Ritchie, for instance, used "dmac" (his middle name was MacAlistair); Stephen R. Bourne, creator of the Bourne shell command line interpreter, chose "bourne"; Eric Schmidt, an early developer of Unix software and now the executive chairman of Google parent company Alphabet, relied on "wendy!!!" (the name of his wife); and Stuart Feldman, author of Unix automation tool make and the first Fortran compiler, used "axolotl" (the name of a Mexican salamander).

Weakest of all was the password for Unix contributor Brian W. Kernighan: "/.,/.," representing a three-character string repeated twice using adjacent keys on a QWERTY keyboard. (None of the passwords included the quotation marks.)

I don't remember any of my early passwords, but they probably weren't much better.

Factoring 2048-bit Numbers Using 20 Million Qubits

This theoretical paper shows how to factor 2048-bit RSA moduli with a 20-million qubit quantum computer in eight hours. It's interesting work, but I don't want to overstate the risk.

We know from Shor's Algorithm that both factoring and discrete logs are easy to solve on a large, working quantum computer. Both of those are currently beyond our technological abilities. We barely have quantum computers with 50 to 100 qubits. Extending this requires advances not only in the number of qubits we can work with, but in making the system stable enough to read any answers. You'll hear this called "error rate" or "coherence" -- this paper talks about "noise."
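To make the "factoring is easy on a quantum computer" claim concrete: the number-theoretic core of Shor's algorithm is order finding, and everything else is classical post-processing. Here is a toy sketch for a small modulus, where the order is found by brute force -- that loop is exactly the step a quantum computer performs exponentially faster:

```python
from math import gcd

def find_order(a, n):
    """Smallest r > 0 with a^r = 1 (mod n). This brute-force search is
    the part Shor's algorithm replaces with quantum period finding."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n, a):
    """Classical post-processing from Shor's algorithm: given the order
    r of a mod n (r even), gcd(a^(r/2) - 1, n) reveals a factor."""
    if gcd(a, n) != 1:
        return gcd(a, n)          # lucky guess: a already shares a factor
    r = find_order(a, n)
    if r % 2:
        return None               # odd order: retry with another a
    f = gcd(pow(a, r // 2) - 1, n)
    return f if 1 < f < n else None

print(shor_factor(15, 7))  # 3: order of 7 mod 15 is 4, and gcd(7^2 - 1, 15) = 3
```

For 2048-bit moduli the classical loop is hopeless, which is why the quantum resource estimates in the paper matter.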

Advances are hard. At this point, we don't know if they're "send a man to the moon" hard or "faster-than-light travel" hard. If I were guessing, I would say they're the former, but still harder than we can accomplish with our current understanding of physics and technology.

I write about all this generally, and in detail, here. (Short summary: Our work on quantum-resistant algorithms is outpacing our work on quantum computers, so we'll be fine in the short run. But future theoretical work on quantum computing could easily change what "quantum resistant" means, so it's possible that public-key cryptography will simply not be possible in the long run. That's not terrible, though; we have a lot of good scalable secret-key systems that do much the same things.)

Friday Squid Blogging: Apple Fixes Squid Emoji

Apple fixed the squid emoji in iOS 13.1:

A squid's siphon helps it move, breathe, and discharge waste, so having the siphon in back makes more sense than having it in front. Now, the poor squid emoji will look like it should, without a siphon on its front.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

I Have a New Book: We Have Root

I just published my third collection of essays: We Have Root. This book covers essays from 2013 to 2017. (The first two are Schneier on Security and Carry On.)

There is nothing in this book that is not available for free on my website; but if you'd like these essays in an easy-to-carry paperback book format, you can order a signed copy here. External vendor links, including for ebook versions, here.

Details on Uzbekistan Government Malware: SandCat

Kaspersky has uncovered an Uzbek hacking operation, mostly due to incompetence on the part of the government hackers.

The group's lax operational security includes using the name of a military group with ties to the SSS to register a domain used in its attack infrastructure; installing Kaspersky's antivirus software on machines it uses to write new malware, allowing Kaspersky to detect and grab malicious code still in development before it's deployed; and embedding a screenshot of one of its developer's machines in a test file, exposing a major attack platform as it was in development. The group's mistakes led Kaspersky to discover four zero-day exploits SandCat had purchased from third-party brokers to target victim machines, effectively rendering those exploits ineffective. And the mistakes not only allowed Kaspersky to track the Uzbek spy agency's activity but also the activity of other nation-state groups in Saudi Arabia and the United Arab Emirates who were using some of the same exploits SandCat was using.

New Reductor Nation-State Malware Compromises TLS

Kaspersky has a detailed blog post about a new piece of sophisticated malware that it's calling Reductor. The malware compromises TLS traffic by substituting a hacked TLS engine on the fly, "marking" infected TLS handshakes by compromising the underlying random-number generator, and adding new digital certificates. The result is that the attacker can identify, intercept, and decrypt TLS traffic from the infected computer.
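To see why marking the random-number generator is useful to an attacker, here is an illustrative sketch -- emphatically not Reductor's actual scheme -- of how a patched PRNG could hide a host tag inside the 32-byte TLS ClientHello random, so that a passive observer holding the key can recognize marked traffic while everyone else sees noise:

```python
import hashlib
import hmac
import os

ATTACKER_KEY = b"operator-secret"   # hypothetical key shared with the observer

def marked_client_random(host_id: bytes) -> bytes:
    """Build a 32-byte 'random' whose last 8 bytes are an HMAC tag over
    the host ID and the noise -- indistinguishable from randomness
    without the key, recognizable to an eavesdropper who has it."""
    noise = os.urandom(24)
    tag = hmac.new(ATTACKER_KEY, host_id + noise, hashlib.sha256).digest()[:8]
    return noise + tag

def is_marked(client_random: bytes, host_id: bytes) -> bool:
    """What the attacker's passive collector would compute per handshake."""
    noise, tag = client_random[:24], client_random[24:]
    expect = hmac.new(ATTACKER_KEY, host_id + noise, hashlib.sha256).digest()[:8]
    return hmac.compare_digest(tag, expect)

cr = marked_client_random(b"victim-1234")
print(is_marked(cr, b"victim-1234"))              # True: marked handshake
print(is_marked(os.urandom(32), b"victim-1234"))  # False: genuine randomness
```

The point is that the field is supposed to be unpredictable, so a covert channel hidden in it is essentially invisible without the key.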

The Kaspersky Attribution Engine shows strong code similarities between this family and the COMPfun Trojan. Moreover, further research showed that the original COMpfun Trojan most probably is used as a downloader in one of the distribution schemes. Based on these similarities, we're quite sure the new malware was developed by the COMPfun authors.

The COMpfun malware was initially documented by G-DATA in 2014. Although G-DATA didn't identify which actor was using this malware, Kaspersky tentatively linked it to the Turla APT, based on the victimology. Our telemetry indicates that the current campaign using Reductor started at the end of April 2019 and remained active at the time of writing (August 2019). We identified targets in Russia and Belarus.

[...]

Turla has in the past shown many innovative ways to accomplish its goals, such as using hijacked satellite infrastructure. This time, if we're right that Turla is the actor behind this new wave of attacks, then with Reductor it has implemented a very interesting way to mark a host's encrypted TLS traffic by patching the browser without parsing network packets. The victimology for this new campaign aligns with previous Turla interests.

We didn't observe any MitM functionality in the analyzed malware samples. However, Reductor is able to install digital certificates and mark the targets' TLS traffic. It uses infected installers for initial infection through HTTP downloads from warez websites. The fact the original files on these sites are not infected also points to evidence of subsequent traffic manipulation.

The attribution chain from Reductor to COMPfun to Turla is thin. Speculation is that the attacker behind all of this is Russia.

Wi-Fi Hotspot Tracking

Free Wi-Fi hotspots can track your location, even if you don't connect to them. This is because your phone or computer broadcasts a unique MAC address.

What distinguishes location-based marketing hotspot providers like Zenreach and Euclid is that the personal information you enter in the captive portal -- like your email address, phone number, or social media profile -- can be linked to your laptop or smartphone's Media Access Control (MAC) address. That's the unique alphanumeric ID that devices broadcast when Wi-Fi is switched on.

As Euclid explains in its privacy policy, "...if you bring your mobile device to your favorite clothing store today that is a Location -- and then a popular local restaurant a few days later that is also a Location -- we may know that a mobile device was in both locations based on seeing the same MAC Address."

MAC addresses alone don't contain identifying information besides the make of a device, such as whether a smartphone is an iPhone or a Samsung Galaxy. But as long as a device's MAC address is linked to someone's profile, and the device's Wi-Fi is turned on, the movements of its owner can be followed by any hotspot from the same provider.

"After a user signs up, we associate their email address and other personal information with their device's MAC address and with any location history we may previously have gathered (or later gather) for that device's MAC address," according to Zenreach's privacy policy.

The defense is to turn Wi-Fi off on your phone when you're not using it.
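A related mitigation: recent phone operating systems randomize the MAC address used while scanning. By IEEE 802 convention, randomized addresses set the locally-administered bit (0x02 in the first octet), which distinguishes them from globally unique, vendor-assigned addresses -- the kind these hotspot providers can link to a profile. A minimal check:

```python
def is_locally_administered(mac: str) -> bool:
    """True if the locally-administered bit (0x02 of the first octet)
    is set -- the convention randomized Wi-Fi MACs use, as opposed to
    globally unique vendor-assigned (OUI) addresses."""
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x02)

# A vendor-assigned (trackable) address vs. a randomized one:
print(is_locally_administered("F0:18:98:aa:bb:cc"))  # False: global OUI
print(is_locally_administered("DA:A1:19:aa:bb:cc"))  # True: randomized
```

Randomization only helps while scanning, though; once you actually join the hotspot and fill in the captive portal, the linkage described above still happens.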

EDITED TO ADD: Note that the article is from 2018. Not that I think anything is different today....

Cheating at Professional Poker

Interesting story about someone who is almost certainly cheating at professional poker.

But then I start to see things that seem so obvious, but I wonder whether they aren't just paranoia after hours and hours of digging into the mystery. Like the fact that he starts wearing a hat that has a strange bulge around the brim -- one that vanishes after the game when he's doing an interview in the booth. Is it a bone-conducting headset, as some online have suggested, sending him messages directly to his inner ear by vibrating on his skull? Of course it is! How could it be anything else? It's so obvious! Or the fact that he keeps his keys in the same place on the table all the time. Could they contain a secret camera that reads electronic sensors on the cards? I can't see any other possibility! It is all starting to make sense.

In the end, though, none of this additional evidence is even necessary. The gaggle of online Jim Garrisons have simply picked up more momentum than is required and they can't stop themselves. The fact is, the mystery was solved a long time ago. It's just like De Niro's Ace Rothstein says in Casino when the yokel slot attendant gets hit for three jackpots in a row and tells his boss there was no way for him to know he was being scammed. "Yes there is," Ace replies. "An infallible way. They won." According to one poster on TwoPlusTwo, in 69 sessions on Stones Live, Postle has won in 62 of them, for a profit of over $250,000 in 277 hours of play. Given that he plays such a large number of hands, and plays such an erratic and, by his own admission, high-variance style, one would expect to see more, well, variance. His results just aren't possible even for the best players in the world, which, if he isn't cheating, he definitely is among. Add to this the fact that it has been alleged that Postle doesn't play in other nonstreamed live games at Stones, or anywhere else in the Sacramento area, and hasn't been known to play in any sizable no-limit games anywhere in a long time, and that he always picks up his chips and leaves as soon as the livestream ends. I don't really need any more evidence than that. If you know poker players, you know that this is the most damning evidence against him. Poker players like to play poker. If any of the poker players I know had the win rate that Mike Postle has, you'd have to pry them up from the table with a crowbar. The guy is making nearly a thousand dollars an hour! He should be wearing adult diapers so he doesn't have to take a bathroom break and cost himself $250.

This isn't the first time someone has been accused of cheating because they are simply playing significantly better than computer simulations predict that even the best player would play.
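The "they won" argument can be made quantitative. Even granting an honest world-class player a 60% session win rate -- a generous assumption of mine, not a figure from the article -- the binomial tail probability of winning 62 or more of 69 sessions is vanishing:

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 62 or more winning sessions out of 69, at an assumed 60% session win rate:
p = binom_tail(69, 62, 0.60)
print(f"{p:.2e}")  # on the order of one in tens of millions
```

In other words, the win record alone is evidence of cheating at overwhelming odds, before any of the hat-and-keys speculation.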

News article. Boing Boing post.

Illegal Data Center Hidden in Former NATO Bunker

Interesting:

German investigators said Friday they have shut down a data processing center installed in a former NATO bunker that hosted sites dealing in drugs and other illegal activities. Seven people were arrested.

[...]

Thirteen people aged 20 to 59 are under investigation in all, including three German and seven Dutch citizens, Brauer said.

Authorities arrested seven of them, citing the danger of flight and collusion. They are suspected of membership in a criminal organization because of a tax offense, as well as being accessories to hundreds of thousands of offenses involving drugs, counterfeit money and forged documents, and accessories to the distribution of child pornography. Authorities didn't name any of the suspects.

The data center was set up as what investigators described as a "bulletproof hoster," meant to conceal illicit activities from authorities' eyes.

Investigators say the platforms it hosted included "Cannabis Road," a drug-dealing portal; the "Wall Street Market," which was one of the world's largest online criminal marketplaces for drugs, hacking tools and financial-theft wares until it was taken down earlier this year; and sites such as "Orange Chemicals" that dealt in synthetic drugs. A botnet attack on German telecommunications company Deutsche Telekom in late 2016 that knocked out about 1 million customers' routers also appears to have come from the data center in Traben-Trarbach, Brauer said.

EDITED TO ADD (10/9): This is a better article.

Speakers Censored at AISA Conference in Melbourne

Two speakers were censored at the Australian Information Security Association's annual conference this week in Melbourne. Thomas Drake, former NSA employee and whistleblower, was scheduled to give a talk on the golden age of surveillance, both government and corporate. Suelette Dreyfus, lecturer at the University of Melbourne, was scheduled to give a talk on her work -- funded by the EU -- on anonymous whistleblowing technologies like SecureDrop and how they reduce corruption in countries where that is a problem.

Both were put on the program months ago. But just before the event, the Australian government's ACSC (the Australian Cyber Security Centre) demanded they both be removed from the program.

It's really kind of stupid. Australia has been benefiting a lot from whistleblowers in recent years -- exposing corruption and bad behavior on the part of the government -- and the government doesn't like it. It's cracking down on the whistleblowers and reporters who write their stories. My guess is that someone high up in ACSC saw the word "whistleblower" in the descriptions of those two speakers and talks and panicked.

You can read details of their talks, including abstracts and slides, here. Of course, now everyone is writing about the story. The two censored speakers spent a lot of the day yesterday on the phone with reporters, and they have a bunch of TV and radio interviews today.

I am at this conference, speaking on Wednesday morning (today in Australia, as I write this). ACSC used to have its own government cybersecurity conference. This is the first year it combined with AISA. I hope it's the last. And that AISA invites the two speakers back next year to give their censored talks.

EDITED TO ADD (10/9): More on the censored talks, and my comments from the stage at the conference.

Slashdot thread.

New Unpatchable iPhone Exploit Allows Jailbreaking

A new iOS exploit allows jailbreaking of pretty much all versions of the iPhone. This is a huge deal for Apple, but at least it doesn't allow someone to remotely hack people's phones.

Some details:

I wanted to learn how Checkm8 will shape the iPhone experience -- particularly as it relates to security -- so I spoke at length with axi0mX on Friday. Thomas Reed, director of Mac offerings at security firm Malwarebytes, joined me. The takeaways from the long-ranging interview are:

  • Checkm8 requires physical access to the phone. It can't be remotely executed, even if combined with other exploits.

  • The exploit allows only tethered jailbreaks, meaning it lacks persistence. The exploit must be run each time an iDevice boots.

  • Checkm8 doesn't bypass the protections offered by the Secure Enclave and Touch ID.

  • All of the above means people will be able to use Checkm8 to install malware only under very limited circumstances. The above also means that Checkm8 is unlikely to make it easier for people who find, steal or confiscate a vulnerable iPhone, but don't have the unlock PIN, to access the data stored on it.

  • Checkm8 is going to benefit researchers, hobbyists, and hackers by providing a way not seen in almost a decade to access the lowest levels of iDevices.

Also:

"The main people who are likely to benefit from this are security researchers, who are using their own phone in controlled conditions. This process allows them to gain more control over the phone and so improves visibility into research on iOS or other apps on the phone," Wood says. "For normal users, this is unlikely to have any effect, there are too many extra hurdles currently in place that they would have to get over to do anything significant."

If a regular person with no prior knowledge of jailbreaking wanted to use this exploit to jailbreak their iPhone, they would find it extremely difficult, simply because Checkm8 just gives you access to the exploit, but not a jailbreak in itself. It's also a 'tethered exploit', meaning that the jailbreak can only be triggered when connected to a computer via USB and will become untethered once the device restarts.


Edward Snowden’s Memoirs

Ed Snowden has published a book of his memoirs: Permanent Record. I have not read it yet, but I want to point you all towards two pieces of writing about the book. The first is an excellent review of the book and Snowden in general by SF writer and essayist Jonathan Lethem, who helped make a short film about Snowden in 2014. The second is an essay looking back at the Snowden revelations and what they mean. Both are worth reading.

As to the book, there are lots of other reviews.

The US government has sued to seize Snowden's royalties from book sales.

More Cryptanalysis of Solitaire

In 1999, I invented the Solitaire encryption algorithm, designed to manually encrypt data using a deck of cards. It was written into the plot of Neal Stephenson's novel Cryptonomicon, and I even wrote an afterword to the book describing the cipher.

I don't talk about it much, mostly because I made a dumb mistake that resulted in the algorithm not being reversible. Even so, for the short message lengths you're likely to use a manual cipher for, it remains secure and will likely stay that way.

Here's some new cryptanalysis:

Abstract: The Solitaire cipher was designed by Bruce Schneier as a plot point in the novel Cryptonomicon by Neal Stephenson. The cipher is intended to fit the archetype of a modern stream cipher whilst being implementable by hand using a standard deck of cards with two jokers. We find a model for repetitions in the keystream in the stream cipher Solitaire that accounts for the large majority of the repetition bias. Other phenomena merit further investigation. We have proposed modifications to the cipher that would reduce the repetition bias, but at the cost of increasing the complexity of the cipher (probably beyond the goal of allowing manual implementation). We have argued that the state update function is unlikely to lead to cycles significantly shorter than those of a random bijection.
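For readers who want to experiment, the keystream generator can be sketched in a few dozen lines of Python, following the published description of the cipher. With an unkeyed deck (cards 1-52 in order, then the two jokers), encrypting AAAAAAAAAA yields the published test vector EXKYIZSGEH:

```python
JA, JB = 53, 54        # the two jokers; both count as 53 when used as values

def card_value(c):
    return 53 if c >= 53 else c

def move_down(deck, card, steps):
    """Move a joker down the deck one card at a time, wrapping past the
    bottom to just below the top card (the deck is treated as circular)."""
    for _ in range(steps):
        i = deck.index(card)
        if i == len(deck) - 1:
            deck.pop()
            deck.insert(1, card)
        else:
            deck[i], deck[i + 1] = deck[i + 1], deck[i]

def keystream(deck, n):
    """Generate n keystream numbers in 1..26, mutating the deck state."""
    out = []
    while len(out) < n:
        move_down(deck, JA, 1)                    # step 1: A joker down one
        move_down(deck, JB, 2)                    # step 2: B joker down two
        i, j = sorted((deck.index(JA), deck.index(JB)))
        deck[:] = deck[j + 1:] + deck[i:j + 1] + deck[:i]  # step 3: triple cut
        v = card_value(deck[-1])                  # step 4: count cut on
        deck[:] = deck[v:-1] + deck[:v] + [deck[-1]]       #   bottom card value
        c = deck[card_value(deck[0])]             # step 5: output card
        if c < 53:                                # jokers produce no output
            out.append((c - 1) % 26 + 1)
    return out

def encrypt(plaintext, deck):
    nums = [ord(ch) - 64 for ch in plaintext]     # A=1 .. Z=26
    ks = keystream(deck, len(nums))
    return "".join(chr((p + k - 1) % 26 + 65) for p, k in zip(nums, ks))

deck = list(range(1, 55))                         # unkeyed deck: 1..52, A, B
print(encrypt("AAAAAAAAAA", deck))                # EXKYIZSGEH
```

The repetition bias the paper analyzes comes from the state-update steps above; note how little of the deck many of them actually move.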

Measuring the Security of IoT Devices

In August, CyberITL completed a large-scale survey of software security practices in the IoT environment, by looking at the compiled software.

Data Collected:

  • 22 Vendors
  • 1,294 Products
  • 4,956 Firmware versions
  • 3,333,411 Binaries analyzed
  • Date range of data: 2003-03-24 to 2019-01-24 (varies by vendor, most up to 2018 releases)

[...]

This dataset contains products such as home routers, enterprise equipment, smart cameras, security devices, and more. It represents a wide range of devices found in home, enterprise, or government deployments.

Vendors include Asus, Belkin, DLink, Linksys, Moxa, Tenda, Trendnet, and Ubiquiti.

CyberITL's methodology is not source code analysis. They look at the actual firmware. And they don't look for vulnerabilities; they look for secure coding practices that indicate that the company is taking security seriously, and whose lack pretty much guarantees that there will be vulnerabilities. These include address space layout randomization and stack guards.
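Checks of this kind can be done statically on the binary. Here is a minimal sketch -- my own illustration, not CITL's actual tooling -- that reads two such signals from an ELF file: the ET_DYN object type (position-independent code, a precondition for ASLR relocating the binary) and the presence of the `__stack_chk_fail` symbol (stack canaries were compiled in):

```python
import struct

def elf_hardening(data: bytes) -> dict:
    """Minimal static check of two hardening signals in an ELF binary."""
    if data[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    little_endian = data[5] == 1                  # e_ident[EI_DATA]
    fmt = "<H" if little_endian else ">H"
    (e_type,) = struct.unpack_from(fmt, data, 16)  # offset 16: object type
    return {
        "pie": e_type == 3,                        # 3 == ET_DYN
        "stack_protector": b"__stack_chk_fail" in data,
    }

# Synthetic 'binary' for demonstration: ELF magic, 64-bit little-endian
# e_ident, e_type = ET_DYN, and the canary symbol name in the blob.
fake = (b"\x7fELF" + bytes([2, 1, 1]) + bytes(9)   # 16-byte e_ident
        + struct.pack("<H", 3)                     # e_type = ET_DYN
        + b"\x00" * 32 + b"__stack_chk_fail\x00")
print(elf_hardening(fake))  # {'pie': True, 'stack_protector': True}
```

Real tooling does much more (RELRO, NX, fortified functions, per-architecture quirks), but the principle is the same: these properties are visible in the shipped firmware without any source code.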

A summary of their results.

CITL identified a number of important takeaways from this study:

  • On average, updates were more likely to remove hardening features than add them.
  • Within our 15 year data set, there have been no positive trends from any one vendor.
  • MIPS is both the most common CPU architecture and least hardened on average.
  • There are a large number of duplicate binaries across multiple vendors, indicating a common build system or toolchain.

Their website contains the raw data.

New Research into Russian Malware

There's some interesting new research about Russian APT malware:

The Russian government has fostered competition among the three agencies, which operate independently from one another, and compete for funds. This, in turn, has resulted in each group developing and hoarding its tools, rather than sharing toolkits with their counterparts, a common sight among Chinese and North Korean state-sponsored hackers.

"Every actor or organization under the Russian APT umbrella has its own dedicated malware development teams, working for years in parallel on similar malware toolkits and frameworks," researchers said.

"While each actor does reuse its code in different operations and between different malware families, there is no single tool, library or framework that is shared between different actors."

Researchers say these findings suggest that Russia's cyber-espionage apparatus is investing a lot of effort into its operational security.

"By avoiding different organizations re-using the same tools on a wide range of targets, they overcome the risk that one compromised operation will expose other active operations," researchers said.

This is no different from the US. The NSA malware released by the Shadow Brokers looked nothing like the CIA "Vault 7" malware released by WikiLeaks.

The work was done by Check Point and Intezer Labs. They have a website with an interactive map.

NSA on the Future of National Cybersecurity

Glenn Gerstell, the General Counsel of the NSA, wrote a long and interesting op-ed for the New York Times where he outlined a long list of cyber risks facing the US.

There are four key implications of this revolution that policymakers in the national security sector will need to address:

The first is that the unprecedented scale and pace of technological change will outstrip our ability to effectively adapt to it. Second, we will be in a world of ceaseless and pervasive cyberinsecurity and cyberconflict against nation-states, businesses and individuals. Third, the flood of data about human and machine activity will put such extraordinary economic and political power in the hands of the private sector that it will transform the fundamental relationship, at least in the Western world, between government and the private sector. Finally, and perhaps most ominously, the digital revolution has the potential for a pernicious effect on the very legitimacy and thus stability of our governmental and societal structures.

He then goes on to explain these four implications. It's all interesting, and it's the sort of stuff you don't generally hear from the NSA. He talks about technological changes causing social changes, and the need for people who understand that. (Hooray for public-interest technologists.) He talks about national security infrastructure in private hands, at least in the US. He talks about a massive geopolitical restructuring -- a fundamental change in the relationship between private tech corporations and government. He talks about recalibrating the Fourth Amendment (of course).

The essay is more about the problems than the solutions, but there is a bit at the end:

The first imperative is that our national security agencies must quickly accept this forthcoming reality and embrace the need for significant changes to address these challenges. This will have to be done in short order, since the digital revolution's pace will soon outstrip our ability to deal with it, and it will have to be done at a time when our national security agencies are confronted with complex new geopolitical threats.

Much of what needs to be done is easy to see -- developing the requisite new technologies and attracting and retaining the expertise needed for that forthcoming reality. What is difficult is executing the solution to those challenges, most notably including whether our nation has the resources and political will to effect that solution. The roughly $60 billion our nation spends annually on the intelligence community might have to be significantly increased during a time of intense competition over the federal budget. Even if the amount is indeed so increased, spending additional vast sums to meet the challenges in an effective way will be a daunting undertaking. Fortunately, the same digital revolution that presents these novel challenges also sometimes provides the new tools (A.I., for example) to deal with them.

The second imperative is we must adapt to the unavoidable conclusion that the fundamental relationship between government and the private sector will be greatly altered. The national security agencies must have a vital role in reshaping that balance if they are to succeed in their mission to protect our democracy and keep our citizens safe. While there will be good reasons to increase the resources devoted to the intelligence community, other factors will suggest that an increasing portion of the mission should be handled by the private sector. In short, addressing the challenges will not necessarily mean that the national security sector will become massively large, with the associated risks of inefficiency, insufficient coordination and excessively intrusive surveillance and data retention.

A smarter approach would be to recognize that as the capabilities of the private sector increase, the scope of activities of the national security agencies could become significantly more focused, undertaking only those activities in which government either has a recognized advantage or must be the only actor. A greater burden would then be borne by the private sector.

It's an extraordinary essay, less for its contents and more for the speaker. This is not the sort of thing the NSA publishes. The NSA doesn't opine on broad technological trends and their social implications. It doesn't publicly try to predict the future. It doesn't philosophize for 6000 unclassified words. And, given how hard it would be to get something like this approved for public release, I am left to wonder what the purpose of the essay is. Is the NSA trying to lay the groundwork for some policy initiative? Some legislation? A budget request? What?

Charlie Warzel has a snarky response. His conclusion about the purpose:

He argues that the piece "is not in the spirit of forecasting doom, but rather to sound an alarm." Translated: Congress, wake up. Pay attention. We've seen the future and it is a sweaty, pulsing cyber night terror. So please give us money (the word "money" doesn't appear in the text, but the word "resources" appears eight times and "investment" shows up 11 times).

Susan Landau has a more considered response, which is well worth reading. She calls the essay a proposal for a moonshot (which is another way of saying "they want money"). And she has some important pushbacks on the specifics.

I don't expect the general counsel and I will agree on what the answers to these questions should be. But I strongly concur on the importance of the questions and that the United States does not have time to waste in responding to them. And I thank him for raising these issues in so public a way.

I agree with Landau.

Slashdot thread.

Supply-Chain Security and Trust

The United States government's continuing disagreement with the Chinese company Huawei underscores a much larger problem with computer technologies in general: We have no choice but to trust them completely, and it's impossible to verify that they're trustworthy. Solving this problem -- which is increasingly a national security issue -- will require us to both make major policy changes and invent new technologies.

The Huawei problem is simple to explain. The company is based in China and subject to the rules and dictates of the Chinese government. The government could require Huawei to install back doors into the 5G routers it sells abroad, allowing the government to eavesdrop on communications or -- even worse -- take control of the routers during wartime. Since the United States will rely on those routers for all of its communications, we become vulnerable by building our 5G backbone on Huawei equipment.

It's obvious that we can't trust computer equipment from a country we don't trust, but the problem is much more pervasive than that. The computers and smartphones you use are not built in the United States. Their chips aren't made in the United States. The engineers who design and program them come from over a hundred countries. Thousands of people have the opportunity, acting alone, to slip a back door into the final product.

There's more. Open-source software packages are increasingly targeted by groups installing back doors. Fake apps in the Google Play store illustrate vulnerabilities in our software distribution systems. The NotPetya worm was distributed by a fraudulent update to a popular Ukrainian accounting package, illustrating vulnerabilities in our update systems. Hardware chips can be back-doored at the point of fabrication, even if the design is secure. The National Security Agency exploited the shipping process to subvert Cisco routers intended for the Syrian telephone company. The overall problem is that of supply-chain security, because every part of the supply chain can be attacked.
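The fraudulent-update vector works because clients trust the update channel itself. A minimal partial defense is pinning: verify a digest obtained out of band before installing anything. This is a sketch with a hypothetical payload -- real update systems layer code signing on top, and pinning alone doesn't help if the legitimate build process itself is compromised:

```python
import hashlib

def verify_update(payload: bytes, pinned_sha256: str) -> bool:
    """Refuse an update unless its SHA-256 digest matches one published
    out of band (e.g. in signed release notes) -- a compromised download
    server then can't substitute a trojaned build unnoticed."""
    return hashlib.sha256(payload).hexdigest() == pinned_sha256

update = b"example update payload"           # hypothetical update blob
good = hashlib.sha256(update).hexdigest()    # digest from a trusted channel

print(verify_update(update, good))                # True: digests match
print(verify_update(update + b"backdoor", good))  # False: tampered build
```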

And while nation-state threats like China and Huawei -- or Russia and the antivirus company Kaspersky a couple of years earlier -- make the news, many of the vulnerabilities I described above are being exploited by cybercriminals.

Policy solutions involve forcing companies to open their technical details to inspection, including the source code of their products and the designs of their hardware. Huawei and Kaspersky have offered this sort of openness as a way to demonstrate that they are trustworthy. This is not a worthless gesture, and it helps, but it's not nearly enough. Too many back doors can evade this kind of inspection.

Technical solutions fall into two basic categories, both currently beyond our reach. One is to improve the technical inspection processes for products whose designers provide source code and hardware design specifications, and for products that arrive without any transparency information at all. In both cases, we want to verify that the end product is secure and free of back doors. Sometimes we can do this for some classes of back doors: We can inspect source code -- this is how a Linux back door was discovered and removed in 2003 -- or the hardware design, which becomes a cleverness battle between attacker and defender.

This is an area that needs more research. Today, the advantage goes to the attacker. It's hard to ensure that the hardware and software you examine is the same as what you get, and it's too easy to create back doors that slip past inspection. And while we can find and correct some of these supply-chain attacks, we won't find them all. It's a needle-in-a-haystack problem, except we don't know what a needle looks like. We need technologies, possibly based on artificial intelligence, that can inspect systems more thoroughly and faster than humans can do. We need them quickly.

The other solution is to build a secure system, even though any of its parts can be subverted. This is what the former Deputy Director of National Intelligence Sue Gordon meant in April when she said about 5G, "You have to presume a dirty network." Or, more precisely: can we build trustworthy systems out of untrustworthy parts?

It sounds ridiculous on its face, but the Internet itself was a solution to a similar problem: a reliable network built out of unreliable parts. This was the result of decades of research. That research continues today, and it's how we can have highly resilient distributed systems like Google's network even though none of the individual components are particularly good. It's also the philosophy behind much of the cybersecurity industry today: systems watching one another, looking for vulnerabilities and signs of attack.

Security is a lot harder than reliability. We don't even really know how to build secure systems out of secure parts, let alone out of parts and processes that we can't trust and that are almost certainly being subverted by governments and criminals around the world. Current security technologies are nowhere near good enough to defend against these increasingly sophisticated attacks. So while this is an important part of the solution, and something we need to focus research on, it's not going to solve our near-term problems.

At the same time, all of these problems are getting worse as computers and networks become more critical to personal and national security. The value of 5G isn't for you to watch videos faster; it's for things talking to things without bothering you. These things -- cars, appliances, power plants, smart cities -- increasingly affect the world in a direct physical manner. They're increasingly autonomous, using A.I. and other technologies to make decisions without human intervention. The risk from Chinese back doors into our networks and computers isn't that their government will listen in on our conversations; it's that they'll turn the power off or make all the cars crash into one another.

All of this doesn't leave us with many options for today's supply-chain problems. We still have to presume a dirty network -- as well as back-doored computers and phones -- and we can clean up only a fraction of the vulnerabilities. Citing the lack of non-Chinese alternatives for some of the communications hardware, some are already calling to abandon attempts to secure 5G from Chinese back doors and instead work on having secure American or European alternatives for 6G networks. It's not nearly enough to solve the problem, but it's a start.

Perhaps these half-solutions are the best we can do. Live with the problem today, and accelerate research to solve the problem for the future. These are research projects on a par with the Internet itself. They need government funding, like the Internet itself. And, also like the Internet, they're critical to national security.

Critically, these systems must be as secure as we can make them. As former FCC Commissioner Tom Wheeler has explained, there's a lot more to securing 5G than keeping Chinese equipment out of the network. This means we have to give up the fantasy that law enforcement can have back doors to aid criminal investigations without also weakening these systems. The world uses one network, and there can only be one answer: Either everyone gets to spy, or no one gets to spy. And as these systems become more critical to national security, a network secure from all eavesdroppers becomes more important.

This essay previously appeared in the New York Times.

Superhero Movies and Security Lessons

A paper I co-wrote was just published in Security Journal: "Superheroes on screen: real life lessons for security debates":

Abstract: Superhero films and episodic shows have existed since the early days of those media, but since 9/11, they have become one of the most popular and most lucrative forms of popular culture. These fantastic tales are not simple amusements but nuanced explorations of fundamental security questions. Their treatment of social issues of power, security and control are here interrogated using the Film Studies approach of close reading to showcase this relevance to the real-life considerations of the legitimacy of security approaches. By scrutinizing three specific pieces -- Daredevil Season 2, Captain America: Civil War, and Batman v Superman: Dawn of Justice -- superhero tales are framed (by the authors) as narratives which significantly influence the general public's understanding of security, often encouraging them to view expansive power critically -- to luxuriate within omnipotence while also recognizing the possibility as well as the need for limits, be they ethical or legal.

This was my first collaboration with Fareed Ben-Youssef, a film studies scholar. (And with Andrew Adams and Kiyoshi Murata.) It was fun to think about and write.

On Chinese "Spy Trains"

The trade war with China has reached a new industry: subway cars. Congress is considering legislation that would prevent the world's largest train maker, the Chinese-owned CRRC Corporation, from competing on new contracts in the United States.

Part of the reasoning behind this legislation is economic, and stems from worries about Chinese industries undercutting the competition and dominating key global industries. But another part involves fears about national security. News articles talk about "spy trains," and the possibility that the train cars might surreptitiously monitor their passengers' faces, movements, conversations or phone calls.

This is a complicated topic. There is definitely a national security risk in buying computer infrastructure from a country you don't trust. That's why there is so much worry about Chinese-made equipment for the new 5G wireless networks.

It's also why the United States has blocked the cybersecurity company Kaspersky from selling its Russian-made antivirus products to US government agencies. Meanwhile, the chairman of China's technology giant Huawei has pointed to NSA spying disclosed by Edward Snowden as a reason to mistrust US technology companies.

The reason these threats are so real is that it's not difficult to hide surveillance or control infrastructure in computer components, and if they're not turned on, they're very difficult to find.

Like every other piece of modern machinery, modern train cars are filled with computers, and while it's certainly possible to produce a subway car with enough surveillance apparatus to turn it into a "spy train," in practice it doesn't make much sense. The risk of discovery is too great, and the payoff would be too low. Like the United States, China is more likely to try to get data from the US communications infrastructure, or from the large Internet companies that already collect data on our every move as part of their business model.

While it's unlikely that China would bother spying on commuters using subway cars, it would be much less surprising if a tech company offered free Internet on subways in exchange for surveillance and data collection. Or if the NSA used those corporate systems for their own surveillance purposes (just as the agency has spied on in-flight cell phone calls, according to an investigation by the Intercept and Le Monde, citing documents provided by Edward Snowden). That's an easier, and more fruitful, attack path.

We have credible reports that the Chinese hacked Gmail around 2010, and there are ongoing concerns about both censorship and surveillance by the Chinese social-networking company TikTok. (TikTok's parent company has told the Washington Post that the app doesn't send American users' info back to Beijing, and that the Chinese government does not influence the app's use in the United States.)

Even so, these examples illustrate an important point: there's no escaping the technology of inevitable surveillance. You have little choice but to rely on the companies that build your computers and write your software, whether in your smartphones, your 5G wireless infrastructure, or your subway cars. And those systems are so complicated that they can be secretly programmed to operate against your interests.

Last year, Le Monde reported that the Chinese government bugged the computer network of the headquarters of the African Union in Addis Ababa. China had built and outfitted the organization's new headquarters as a foreign aid gift, reportedly secretly configuring the network to send copies of confidential data to Shanghai every night between 2012 and 2017. China denied having done so, of course.

If there's any lesson from all of this, it's that everybody spies using the Internet. The United States does it. Our allies do it. Our enemies do it. Many countries do it to each other, with their success largely dependent on how sophisticated their tech industries are.

China dominates the subway car manufacturing industry because of its low prices -- the same reason it dominates the 5G hardware industry. Whether these low prices are because the companies are more efficient than their competitors or because they're being unfairly subsidized by the Chinese government is a matter to be determined at trade negotiations.

Finally, Americans must understand that higher prices are an inevitable result of banning cheaper tech products from China.

We might willingly pay the higher prices because we want domestic control of our telecommunications infrastructure. We might willingly pay more because of some protectionist belief that global trade is somehow bad. But we need to make these decisions to protect ourselves deliberately and rationally, recognizing both the risks and the costs. And while I'm worried about our 5G infrastructure built using Chinese hardware, I'm not worried about our subway cars.

This essay originally appeared on CNN.com.

EDITED TO ADD: I had a lot of trouble with CNN's legal department with this essay. They were very reluctant to call out the US and its allies for similar behavior, and spent a lot more time adding caveats to statements that I didn't think needed them. They wouldn't let me link to this Intercept article talking about US, French, and German infiltration of supply chains, or even the NSA document from the Snowden archives that proved the statements.

Russians Hack FBI Comms System

Yahoo News reported that the Russians have successfully targeted an FBI communications system:

American officials discovered that the Russians had dramatically improved their ability to decrypt certain types of secure communications and had successfully tracked devices used by elite FBI surveillance teams. Officials also feared that the Russians may have devised other ways to monitor U.S. intelligence communications, including hacking into computers not connected to the internet. Senior FBI and CIA officials briefed congressional leaders on these issues as part of a wide-ranging examination on Capitol Hill of U.S. counterintelligence vulnerabilities.

These compromises, the full gravity of which became clear to U.S. officials in 2012, gave Russian spies in American cities including Washington, New York and San Francisco key insights into the location of undercover FBI surveillance teams, and likely the actual substance of FBI communications, according to former officials. They provided the Russians opportunities to potentially shake off FBI surveillance and communicate with sensitive human sources, check on remote recording devices and even gather intelligence on their FBI pursuers, the former officials said.

It's unclear whether the Russians were able to recover encrypted data or just perform traffic analysis. The Yahoo story implies the former; the NBC News story says otherwise. It's hard to tell if the reporters truly understand the difference. We do know, from research Matt Blaze and others did almost ten years ago, that at least one FBI radio system was horribly insecure in practice -- but not in a way that breaks the encryption. Its poor design just encourages users to turn off the encryption.

A Feminist Take on Information Privacy

Maria Farrell has a really interesting framing of information/device privacy:

What our smartphones and relationship abusers share is that they both exert power over us in a world shaped to tip the balance in their favour, and they both work really, really hard to obscure this fact and keep us confused and blaming ourselves. Here are some of the ways our unequal relationship with our smartphones is like an abusive relationship:

  • They isolate us from deeper, competing relationships in favour of superficial contact -- 'user engagement' -- that keeps their hold on us strong. Working with social media, they insidiously curate our social lives, manipulating us emotionally with dark patterns to keep us scrolling.

  • They tell us the onus is on us to manage their behavior. It's our job to tiptoe around them and limit their harms. Spending too much time on a literally-designed-to-be-behaviorally-addictive phone? They send company-approved messages about our online time, but ban from their stores the apps that would really cut our use. We just need to use willpower. We just need to be good enough to deserve them.

  • They betray us, leaking data / spreading secrets. What we shared privately with them is suddenly public. Sometimes this destroys lives, but hey, we only have ourselves to blame. They fight nasty and under-handed, and are so, so sorry when they get caught that we're meant to feel bad for them. But they never truly change, and each time we take them back, we grow weaker.

  • They love-bomb us when we try to break away, piling on the free data or device upgrades, making us click through page after page of dark pattern, telling us no one understands us like they do, no one else sees everything we really are, no one else will want us.

  • It's impossible to just cut them off. They've wormed themselves into every part of our lives, making life without them unimaginable. And anyway, the relationship is complicated. There is love in it, or there once was. Surely we can get back to that if we just manage them the way they want us to?

Nope. Our devices are basically gaslighting us. They tell us they work for and care about us, and if we just treat them right then we can learn to trust them. But all the evidence shows the opposite is true.

Crown Sterling Claims to Factor RSA Keylengths First Factored Twenty Years Ago

Earlier this month, I made fun of a company called Crown Sterling, for...for...for being a company that deserves being made fun of.

This morning, the company announced that they "decrypted two 256-bit asymmetric public keys in approximately 50 seconds from a standard laptop computer." Really. They did. This keylength is so small it has never been considered secure. It was too small to be part of the RSA Factoring Challenge when it was introduced in 1991. In 1977, when Ron Rivest, Adi Shamir, and Leonard Adleman first described RSA, they included a challenge with a 426-bit key. (It was factored in 1994.)

The press release goes on: "Crown Sterling also announced the consistent decryption of 512-bit asymmetric public key in as little as five hours also using standard computing." They didn't demonstrate it, but if they're right they've matched a factoring record set in 1999. Five hours is significantly less than the 5.2 months it took in 1999, but slower than would be expected if Crown Sterling just used the 1999 techniques with modern CPUs and networks.
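To give a sense of just how weak such keys are: even textbook algorithms factor small "RSA" moduli almost instantly. The sketch below uses Pollard's rho -- far cruder than the number field sieve methods behind the 1999 512-bit record -- on a toy 60-bit semiprime. The specific primes are my own illustrative choice, not anything from Crown Sterling's announcement.

```python
import math
import random

def pollard_rho(n: int) -> int:
    """Return a nontrivial factor of composite n (Pollard's rho cycle method)."""
    if n % 2 == 0:
        return 2
    while True:
        x = random.randrange(2, n)
        c = random.randrange(1, n)
        y, d = x, 1
        while d == 1:
            x = (x * x + c) % n           # tortoise: one step
            y = (y * y + c) % n
            y = (y * y + c) % n           # hare: two steps
            d = math.gcd(abs(x - y), n)
        if d != n:                        # d == n means a bad cycle; retry
            return d

# A toy "RSA modulus" built from two 30-bit primes. Keys this small
# (and 256-bit keys, for that matter) fall to a laptop in moments --
# which is why factoring them demonstrates nothing.
p, q = 1000000007, 1000000009
n = p * q
f = pollard_rho(n)
print(sorted([f, n // f]))  # → [1000000007, 1000000009]
```

Pollard's rho runs in roughly O(p^(1/2)) modular multiplications for smallest factor p, so it is hopeless against real 2048-bit RSA moduli; the point is only that sub-512-bit keys are a solved problem and have been for decades.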

Is anyone taking this company seriously anymore? I honestly wouldn't be surprised if this was a hoax press release. It's not currently on the company's website. (And, if it is a hoax, I apologize to Crown Sterling. I'll post a retraction as soon as I hear from you.)

EDITED TO ADD: First, the press release is real. And second, I forgot to include the quote from CEO Robert Grant: "Today's decryptions demonstrate the vulnerabilities associated with the current encryption paradigm. We have clearly demonstrated the problem which also extends to larger keys."

People, this isn't hard. Find an RSA Factoring Challenge number that hasn't been factored yet and factor it. Once you do, the entire world will take you seriously. Until you do, no one will. And, bonus, you won't have to reveal your super-secret world-destabilizing cryptanalytic techniques.

EDITED TO ADD (9/21): Others are laughing at this, too.

EDITED TO ADD (9/24): More commentary.

I’m Looking to Hire a Strategist to Help Figure Out Public-Interest Tech

I am in search of a strategic thought partner: a person who can work closely with me over the next 9 to 12 months in assessing what's needed to advance the practice, integration, and adoption of public-interest technology.

All of the details are in the RFP. The selected strategist will work closely with me on a number of clear deliverables. This is a contract position that could possibly become a salaried position in a subsequent phase, and under a different agreement.

I'm working with the team at Yancey Consulting, who will follow up with all proposers and manage the process. Please email Lisa Yancey at lisa@yanceyconsulting.com.

Another Side Channel in Intel Chips

Not that serious, but interesting:

In late 2011, Intel introduced a performance enhancement to its line of server processors that allowed network cards and other peripherals to connect directly to a CPU's last-level cache, rather than following the standard (and significantly longer) path through the server's main memory. By avoiding system memory, Intel's DDIO -- short for Data-Direct I/O -- increased input/output bandwidth and reduced latency and power consumption.

Now, researchers are warning that, in certain scenarios, attackers can abuse DDIO to obtain keystrokes and possibly other types of sensitive data that flow through the memory of vulnerable servers. The most serious form of attack can take place in data centers and cloud environments that have both DDIO and remote direct memory access enabled to allow servers to exchange data. A server leased by a malicious hacker could abuse the vulnerability to attack other customers. To prove their point, the researchers devised an attack that allows a server to steal keystrokes typed into the protected SSH (or secure shell session) established between another server and an application server.

Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.

When Biology Becomes Software

All of life is based on the coordinated action of genetic parts (genes and their controlling sequences) found in the genomes (the complete DNA sequence) of organisms.

Genes and genomes are based on code -- just like the digital language of computers. But instead of zeros and ones, four DNA letters -- A, C, T, G -- encode all of life. (Life is messy, and there are actually all sorts of edge cases, but ignore that for now.) If you have the sequence that encodes an organism, in theory, you could recreate it. If you can write new working code, you can alter an existing organism or create a novel one.
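The "code is code" parallel is quite literal: each of the four bases carries two bits of information, so any digital data can be transcribed into a DNA-letter string and back. The toy codec below is purely illustrative (real genetic encoding works through codons and gene regulation, not raw bit-packing), but it shows why the analogy holds.

```python
BASES = "ACGT"  # each base carries two bits: A=00, C=01, G=10, T=11

def bytes_to_dna(data: bytes) -> str:
    """Encode arbitrary bytes as a DNA-letter string, four bases per byte."""
    return "".join(BASES[(b >> s) & 0b11] for b in data for s in (6, 4, 2, 0))

def dna_to_bytes(seq: str) -> bytes:
    """Invert bytes_to_dna: pack each run of four bases back into one byte."""
    vals = [BASES.index(c) for c in seq]
    return bytes(
        (vals[i] << 6) | (vals[i + 1] << 4) | (vals[i + 2] << 2) | vals[i + 3]
        for i in range(0, len(vals), 4)
    )

msg = b"hi"
dna = bytes_to_dna(msg)
print(dna)                   # → CGGACGGC
print(dna_to_bytes(dna))     # → b'hi'
```

Researchers have in fact stored digital archives in synthesized DNA using schemes in this spirit (with error-correcting codes layered on top), which is part of why the software framing of biology is more than a metaphor.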

If this sounds to you a lot like software coding, you're right. As synthetic biology looks more like computer technology, the risks of the latter become the risks of the former. Code is code, but because we're dealing with molecules -- and sometimes actual forms of life -- the risks can be much greater.

Imagine a biological engineer trying to increase the expression of a gene that maintains normal gene function in blood cells. Even though it's a relatively simple operation by today's standards, it'll almost certainly take multiple tries to get it right. Were this computer code, the only damage those failed tries would do is to crash the computer they're running on. With a biological system, the code could instead increase the likelihood of multiple types of leukemias and wipe out cells important to the patient's immune system.

We have known the mechanics of DNA for some 60 plus years. The field of modern biotechnology began in 1972 when Paul Berg joined one virus gene to another and produced the first "recombinant" virus. Synthetic biology arose in the early 2000s when biologists adopted the mindset of engineers; instead of moving single genes around, they designed complex genetic circuits.

In 2010 Craig Venter and his colleagues recreated the genome of a simple bacterium. More recently, researchers at the Medical Research Council Laboratory of Molecular Biology in Britain created a new, more streamlined version of E. coli. In both cases the researchers created what could arguably be called new forms of life.

This is the new bioengineering, and it will only get more powerful. Today you can write DNA code in the same way a computer programmer writes computer code. Then you can use a DNA synthesizer or order DNA from a commercial vendor, and then use precision editing tools such as CRISPR to "run" it in an already existing organism, from a virus to a wheat plant to a person.

In the future, it may be possible to build an entire complex organism such as a dog or cat, or recreate an extinct mammoth (currently underway). Today, biotech companies are developing new gene therapies, and international consortia are addressing the feasibility and ethics of making changes to human genomes that could be passed down to succeeding generations.

Within the biological science community, urgent conversations are occurring about "cyberbiosecurity," an admittedly contested term for the space between biological and information systems, where vulnerabilities in one can affect the other. These can include the security of DNA databanks, the fidelity of transmission of those data, and information hazards associated with specific DNA sequences that could encode novel pathogens for which no cures exist.

These risks have occupied not only learned bodies -- the National Academies of Sciences, Engineering, and Medicine published at least a half dozen reports on biosecurity risks and how to address them proactively -- but have made it to mainstream media: genome editing was a major plot element in Netflix's Season 3 of "Designated Survivor."

Our worries are more prosaic. As synthetic biology "programming" reaches the complexity of traditional computer programming, the risks of computer systems will transfer to biological systems. The difference is that biological systems have the potential to cause much greater, and far more lasting, damage than computer systems.

Programmers write software through trial and error. Because computer systems are so complex and there is no real theory of software, programmers repeatedly test the code they write until it works properly. This makes sense, because the cost of getting it wrong is low and trying again is easy. There are even jokes about this: a programmer would diagnose a car crash by putting another car in the same situation and seeing if it happens again.

Even finished code still has problems. Again due to the complexity of modern software systems, "works properly" doesn't mean that it's perfectly correct. Modern software is full of bugs -- thousands of software flaws -- that occasionally affect performance or security. That's why any piece of software you use is regularly updated; the developers are still fixing bugs, even after the software is released.

Bioengineering will be largely the same: writing biological code will have these same reliability properties. Unfortunately, the software solution of making lots of mistakes and fixing them as you go doesn't work in biology.

In nature, a similar type of trial and error is handled by "the survival of the fittest" and occurs slowly over many generations. But human-generated code from scratch doesn't have that kind of correction mechanism. Inadvertent or intentional release of these newly coded "programs" may result in pathogens of expanded host range (just think swine flu) or organisms that wreck delicate ecological balances.

Unlike computer software, there's no way so far to "patch" biological systems once released to the wild, although researchers are trying to develop one. Nor are there ways to "patch" the humans (or animals or crops) susceptible to such agents. Stringent biocontainment helps, but no containment system provides zero risk.

Opportunities for mischief and malfeasance often occur when expertise is siloed, fields intersect only at the margins, and when the gathered knowledge of small, expert groups doesn't make its way into the larger body of practitioners who have important contributions to make.

Good starts have been made by biologists, security agencies, and governance experts. But these efforts have tended to be siloed, in either the biological and digital spheres of influence, classified and solely within the military, or exchanged only among a very small set of investigators.

What we need is more opportunities for integration between the two disciplines. We need to share information and experiences, classified and unclassified. Our digital and biological communities already have tools to identify and mitigate biological risks, and tools to write and deploy secure computer systems.

Those opportunities will not occur without effort or financial support. Let's find those resources, public, private, philanthropic, or any combination. And then let's use those resources to set up some novel opportunities for digital geeks and bionerds -- as well as ethicists and policymakers -- to share experiences, concerns, and come up with creative, constructive solutions to these problems that are more than just patches.

These are overarching problems; let's not let siloed thinking or funding get in the way of breaking down barriers between communities. And let's not let technology of any kind get in the way of the public good.

This essay previously appeared on CNN.com.

Fabricated Voice Used in Financial Fraud

This seems to be an identity theft first:

Criminals used artificial intelligence-based software to impersonate a chief executive's voice and demand a fraudulent transfer of €220,000 ($243,000) in March in what cybercrime experts described as an unusual case of artificial intelligence being used in hacking.

Another news article.

More on Law Enforcement Backdoor Demands

The Carnegie Endowment for International Peace and Princeton University's Center for Information Technology Policy convened an Encryption Working Group to attempt progress on the "going dark" debate. They have released their report: "Moving the Encryption Policy Conversation Forward."

The main contribution seems to be that attempts to backdoor devices like smartphones shouldn't also backdoor communications systems:

Conclusion: There will be no single approach for requests for lawful access that can be applied to every technology or means of communication. More work is necessary, such as that initiated in this paper, to separate the debate into its component parts, examine risks and benefits in greater granularity, and seek better data to inform the debate. Based on our attempt to do this for one particular area, the working group believes that some forms of access to encrypted information, such as access to data at rest on mobile phones, should be further discussed. If we cannot have a constructive dialogue in that easiest of cases, then there is likely none to be had with respect to any of the other areas. Other forms of access to encrypted information, including encrypted data-in-motion, may not offer an achievable balance of risk vs. benefit, and as such are not worth pursuing and should not be the subject of policy changes, at least for now. We believe that to be productive, any approach must separate the issue into its component parts.

I don't believe that backdoor access to encrypted data at rest offers "an achievable balance of risk vs. benefit" either, but I agree that the two aspects should be treated independently.

EDITED TO ADD (9/12): This report does an important job moving the debate forward. It advises that policymakers break the issues into component parts. Instead of talking about restricting all encryption, it separates encrypted data at rest (storage) from encrypted data in motion (communication). It advises that policymakers pick the problems they have some chance of solving, and not demand systems that put everyone in danger. For example: no key escrow, and no use of software updates to break into devices.

Data in motion poses challenges that are not present for data at rest. For example, modern cryptographic protocols for data in motion use a separate "session key" for each message, unrelated to the private/public key pairs used to initiate communication, to preserve the message's secrecy independent of other messages (consistent with a concept known as "forward secrecy"). While there are potential techniques for recording, escrowing, or otherwise allowing access to these session keys, by their nature, each would break forward secrecy and related concepts and would create a massive target for criminal and foreign intelligence adversaries. Any technical steps to simplify the collection or tracking of session keys, such as linking keys to other keys or storing keys after they are used, would represent a fundamental weakening of all the communications.
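The report's point about session keys can be made concrete with a toy ephemeral Diffie-Hellman exchange. This is a deliberately insecure sketch -- a small Mersenne prime modulus and a simplistic key derivation, chosen only so the example is self-contained -- not a real protocol. What it shows is the property at stake: each session derives its key from fresh secrets that are then discarded, so no long-term key links one session's key to the next. Any escrow or logging of these session keys would destroy exactly that property.

```python
import hashlib
import secrets

# Toy group parameters: P is the Mersenne prime 2^127 - 1, far too small
# for real use (real protocols use 2048-bit+ groups or elliptic curves).
P = 2**127 - 1
G = 3

def session_key() -> str:
    """One ephemeral Diffie-Hellman exchange between two parties.

    Both sides draw fresh random secrets, exchange only the public
    values, derive the same shared key, and then forget the secrets.
    A later compromise of either party's stored keys reveals nothing
    about this session -- the essence of forward secrecy.
    """
    a = secrets.randbelow(P - 2) + 1      # Alice's ephemeral secret
    b = secrets.randbelow(P - 2) + 1      # Bob's ephemeral secret
    A = pow(G, a, P)                      # public value Alice sends
    B = pow(G, b, P)                      # public value Bob sends
    k_alice = pow(B, a, P)
    k_bob = pow(A, b, P)
    assert k_alice == k_bob               # both derive the same secret
    return hashlib.sha256(str(k_alice).encode()).hexdigest()

# Every session yields an unrelated key; there is no master key that
# unlocks them all, and so nothing an escrow scheme could usefully hold
# without recording each session individually.
print(session_key() != session_key())  # → True
```

This is why the working group treats data in motion differently from data at rest: for stored data, some long-lived key necessarily exists; for well-designed communications, by construction, it does not.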

These are all big steps forward given who signed on to the report. Not just the usual suspects, but also Jim Baker, former general counsel of the FBI, and Chris Inglis, former deputy director of the NSA.

On Cybersecurity Insurance

Good paper on cybersecurity insurance: both the history and the promise for the future. From the conclusion:

Policy makers have long held high hopes for cyber insurance as a tool for improving security. Unfortunately, the available evidence so far should give policymakers pause. Cyber insurance appears to be a weak form of governance at present. Insurers writing cyber insurance focus more on organisational procedures than technical controls, rarely include basic security procedures in contracts, and offer discounts that only offer a marginal incentive to invest in security. However, the cost of external response services is covered, which suggests insurers believe ex-post responses to be more effective than ex-ante mitigation. (Alternatively, they can more easily translate the costs associated with ex-post responses into manageable claims.)

The private governance role of cyber insurance is limited by market dynamics. Competitive pressures drive a race-to-the-bottom in risk assessment standards and prevent insurers including security procedures in contracts. Policy interventions, such as minimum risk assessment standards, could solve this collective action problem. Policy-holders and brokers could also drive this change by looking to insurers who conduct rigorous assessments. Doing otherwise ensures adverse selection and moral hazard will increase costs for firms with responsible security postures. Moving toward standardised risk assessment via proposal forms or external scans supports the actuarial base in the long-term. But there is a danger policyholders will succumb to Goodhart's law by internalising these metrics and optimising the metric rather than minimising risk. This is particularly likely given these assessments are constructed by private actors with their own incentives. Search-light effects may drive the scores towards being based on what can be measured, not what is important.

EDITED TO ADD (9/11): BoingBoing post.