New DNS Hijacking Attacks

DNS hijacking isn't new, but this seems to be an attack of unprecedented scale:

Researchers at Cisco's Talos security division on Wednesday revealed that a hacker group it's calling Sea Turtle carried out a broad campaign of espionage via DNS hijacking, hitting 40 different organizations. In the process, they went so far as to compromise multiple country-code top-level domains -- the suffixes like .co.uk or .ru that end a foreign web address -- putting all the traffic of every domain in multiple countries at risk.

The hackers' victims include telecoms, internet service providers, and domain registrars responsible for implementing the domain name system. But the majority of the victims and the ultimate targets, Cisco believes, were a collection of mostly governmental organizations, including ministries of foreign affairs, intelligence agencies, military targets, and energy-related groups, all based in the Middle East and North Africa. By corrupting the internet's directory system, hackers were able to silently use "man in the middle" attacks to intercept all internet data from email to web traffic sent to those victim organizations.

[...]

Cisco Talos said it couldn't determine the nationality of the Sea Turtle hackers, and declined to name the specific targets of their spying operations. But it did provide a list of the countries where victims were located: Albania, Armenia, Cyprus, Egypt, Iraq, Jordan, Lebanon, Libya, Syria, Turkey, and the United Arab Emirates. Cisco's Craig Williams confirmed that Armenia's .am top-level domain was one of the "handful" that were compromised, but wouldn't say which of the other countries' top-level domains were similarly hijacked.
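
One partial defense is simply watching for this: query your domains through several independent resolvers and alarm on any disagreement (registry locks and DNSSEC help more). Here's a minimal sketch of that kind of monitoring, assuming the third-party dnspython package; the resolvers and domain are placeholders:

    # Minimal monitoring sketch: cross-check a domain's NS records
    # against several independent resolvers and flag any disagreement,
    # the kind of signal that can catch a DNS hijack early. Assumes
    # dnspython 2.x (pip install dnspython); a real monitor would pin
    # known-good values rather than trust a baseline resolver.
    import dns.resolver

    RESOLVERS = ["8.8.8.8", "1.1.1.1", "9.9.9.9"]  # Google, Cloudflare, Quad9

    def lookup(domain, rtype, resolver_ip):
        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [resolver_ip]
        return sorted(rr.to_text() for rr in r.resolve(domain, rtype))

    def check(domain, rtype="NS"):
        answers = {ip: lookup(domain, rtype, ip) for ip in RESOLVERS}
        baseline = answers[RESOLVERS[0]]
        for ip, ans in answers.items():
            if ans != baseline:
                print(f"MISMATCH via {ip}: {ans} != {baseline}")
                return False
        print(f"{domain} {rtype} consistent across resolvers: {baseline}")
        return True

    check("example.com")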

Another news article.

A "Department of Cybersecurity"

Presidential candidate John Delaney has announced a plan to create a Department of Cybersecurity.

I have long been in favor of a new federal agency to deal with Internet -- and especially Internet-of-Things -- security. The devil is in the details, of course, and it's really easy to get this wrong. I outline a strawman proposal in Click Here to Kill Everybody; I call it the "National Cyber Office" and model it on the Office of the Director of National Intelligence. But regardless of what you think of this idea, I'm glad that at least someone is talking about it.

Slashdot thread. News story.

EDITED TO ADD: Yes, this post is perilously close to presidential politics. Any comment that opines on the qualifications of this, or any other, presidential candidate will be deleted.

Vulnerabilities in the WPA3 Wi-Fi Security Protocol

Researchers have found several vulnerabilities in the WPA3 Wi-Fi security protocol:

The design flaws we discovered can be divided in two categories. The first category consists of downgrade attacks against WPA3-capable devices, and the second category consists of weaknesses in the Dragonfly handshake of WPA3, which in the Wi-Fi standard is better known as the Simultaneous Authentication of Equals (SAE) handshake. The discovered flaws can be abused to recover the password of the Wi-Fi network, launch resource consumption attacks, and force devices into using weaker security groups. All attacks are against home networks (i.e. WPA3-Personal), where one password is shared among all users.

News article. Research paper: "Dragonblood: A Security Analysis of WPA3's SAE Handshake":

Abstract: The WPA3 certification aims to secure Wi-Fi networks, and provides several advantages over its predecessor WPA2, such as protection against offline dictionary attacks and forward secrecy. Unfortunately, we show that WPA3 is affected by several design flaws, and analyze these flaws both theoretically and practically. Most prominently, we show that WPA3's Simultaneous Authentication of Equals (SAE) handshake, commonly known as Dragonfly, is affected by password partitioning attacks. These attacks resemble dictionary attacks and allow an adversary to recover the password by abusing timing or cache-based side-channel leaks. Our side-channel attacks target the protocol's password encoding method. For instance, our cache-based attack exploits SAE's hash-to-curve algorithm. The resulting attacks are efficient and low cost: brute-forcing all 8-character lowercase passwords requires less than $125 in Amazon EC2 instances. In light of ongoing standardization efforts on hash-to-curve, Password-Authenticated Key Exchanges (PAKEs), and Dragonfly as a TLS handshake, our findings are also of more general interest. Finally, we discuss how to mitigate our attacks in a backwards-compatible manner, and explain how minor changes to the protocol could have prevented most of our attacks.
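
The core idea of a password-partitioning attack is easy to demonstrate: each side-channel observation is consistent with only a fraction of the dictionary, so the attacker keeps intersecting the remaining candidates until one survives. A toy sketch, with a stand-in hash playing the role of the Dragonfly password-encoding leak:

    # Toy sketch of a password-partitioning attack: a side channel
    # leaks a few bits about how the password was encoded, and each
    # observation lets the attacker discard inconsistent dictionary
    # entries. The "leak" below is a stand-in, not the real Dragonfly
    # hash-to-curve computation.
    import hashlib

    def leaked_bits(password: str, session_nonce: bytes) -> int:
        # Stand-in for what timing/cache measurements reveal about the
        # password encoding in one handshake (here: 2 bits of a hash).
        h = hashlib.sha256(password.encode() + session_nonce).digest()
        return h[0] & 0b11

    def partition_attack(dictionary, observations):
        # observations: list of (session_nonce, measured_bits) pairs
        candidates = set(dictionary)
        for nonce, measured in observations:
            candidates = {p for p in candidates
                          if leaked_bits(p, nonce) == measured}
            print(f"{len(candidates)} candidates remain")
        return candidates

    dictionary = [f"password{i}" for i in range(100000)]
    secret = "password31337"
    # The attacker passively observes several handshakes.
    observations = [(bytes([i]), leaked_bits(secret, bytes([i])))
                    for i in range(10)]
    print(partition_attack(dictionary, observations))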

China Spying on Undersea Internet Cables

Supply chain security is an insurmountably hard problem. The recent focus is on Chinese 5G equipment, but the problem is much broader. This opinion piece looks at undersea communications cables:

But now the Chinese conglomerate Huawei Technologies, the leading firm working to deliver 5G telephony networks globally, has gone to sea. Under its Huawei Marine Networks component, it is constructing or improving nearly 100 submarine cables around the world. Last year it completed a cable stretching nearly 4,000 miles from Brazil to Cameroon. (The cable is partly owned by China Unicom, a state-controlled telecom operator.) Rivals claim that Chinese firms are able to lowball the bidding because they receive subsidies from Beijing.

Just as the experts are justifiably concerned about the inclusion of espionage "back doors" in Huawei's 5G technology, Western intelligence professionals oppose the company's engagement in the undersea version, which provides a much bigger bang for the buck because so much data rides on so few cables.

This shouldn't surprise anyone. For years, the US and the Five Eyes have had a monopoly on spying on the Internet around the globe. Other countries want in.

As I have repeatedly said, we need to decide if we are going to build our future Internet systems for security or surveillance. Either everyone gets to spy, or no one gets to spy. And I believe we must choose security over surveillance, and implement a defense-dominant strategy.

Maliciously Tampering with Medical Imagery

In what I am sure is only the first of many similar demonstrations, researchers are able to add or remove cancer signs from CT scans. The results easily fool radiologists.

I don't think the medical device industry has thought at all about data integrity and authentication issues. In a world where sensor data of all kinds is undetectably manipulatable, they're going to have to start.
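
The fix is conceptually simple even if deployment isn't: authenticate the data at the sensor and verify it before use. A minimal sketch, using a symmetric HMAC as a stand-in for what should really be an asymmetric signature made inside the scanner:

    # Minimal sketch of sensor-data authentication: the imaging device
    # tags each scan with a MAC, and the viewing workstation verifies
    # it before display. A real system would use asymmetric signatures
    # so the viewer holds no signing key; the key here is a placeholder.
    import hashlib
    import hmac
    import os

    DEVICE_KEY = os.urandom(32)  # provisioned into the scanner at manufacture

    def sign_scan(pixels: bytes, metadata: bytes) -> bytes:
        return hmac.new(DEVICE_KEY, metadata + pixels, hashlib.sha256).digest()

    def verify_scan(pixels: bytes, metadata: bytes, tag: bytes) -> bool:
        expected = hmac.new(DEVICE_KEY, metadata + pixels, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

    scan, meta = os.urandom(512 * 512), b"patient=123;slice=42"
    tag = sign_scan(scan, meta)
    assert verify_scan(scan, meta, tag)
    tampered = bytearray(scan)
    tampered[0] ^= 0xFF                      # inject a fake "tumor" pixel
    assert not verify_scan(bytes(tampered), meta, tag)
    print("tampering detected")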

Research paper. Slashdot thread.

New Version of Flame Malware Discovered

Flame was discovered in 2012, linked to Stuxnet, and believed to be American in origin. It has recently been linked to more modern malware through new analysis tools that find linkages between different software.

Seems that Flame did not disappear after it was discovered, as was previously thought. (Its controllers used a kill switch to disable and erase it.) It was rewritten and reintroduced.

Note that the article claims that Flame was believed to be Israeli in origin. That's wrong; most people who have an opinion believe it is from the NSA.

TajMahal Spyware

Kaspersky has released details about a sophisticated nation-state spyware it calls TajMahal:

The TajMahal framework's 80 modules, Shulmin says, comprise not only the typical keylogging and screengrabbing features of spyware, but also never-before-seen and obscure tricks. It can intercept documents in a printer queue, and keep track of "files of interest," automatically stealing them if a USB drive is inserted into the infected machine. And that unique spyware toolkit, Kaspersky says, bears none of the fingerprints of any known nation-state hacker group.

It was found on the servers of an "embassy of a Central Asian country." No speculation on who wrote and controls it.

More details.

How the Anonymous Artist Banksy Authenticates His or Her Work

Interesting scheme:

It all starts off with a fairly bog standard gallery style certificate. Details of the work, the authenticating agency, a bit of embossing and a large impressive signature at the bottom. Exactly the sort of things that can be easily copied by someone on a mission to create the perfect fake.

That torn-in-half banknote though? Never mind signatures, embossing or wax seals. The Di Faced Tenner is doing all the authentication heavy lifting here.

The tear is what uniquely separates the private key, the half of the note kept secret under lock and key at Pest Control, from the public key. The public key is the half of the note attached to the authentication certificate which gets passed on with the print, and allows its authenticity to be easily verified.

We have no idea what has been written on Pest Control's private half of the note. Which means it can't be easily recreated, and that empowers Pest Control to keep the authoritative list of who currently owns each authenticated Banksy work.
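
In cryptographic terms it's a physical commitment scheme: the issuer keeps one half secret, publishes the other, and only the unforgeable tear binds the pair. A rough digital analogy, with all values made up:

    # Rough digital analogy of the torn-banknote scheme: the issuer
    # keeps a secret half, attaches the public half to the certificate,
    # and records a commitment binding the two. Verification checks a
    # presented public half against the registered commitment. All
    # values here are made up for illustration.
    import hashlib
    import secrets

    registry = {}  # work_id -> (private_half, commitment), held by the issuer

    def issue(work_id: str) -> str:
        private_half = secrets.token_hex(16)   # stays locked away
        public_half = secrets.token_hex(16)    # attached to the certificate
        commitment = hashlib.sha256((private_half + public_half).encode()).hexdigest()
        registry[work_id] = (private_half, commitment)
        return public_half

    def verify(work_id: str, presented_half: str) -> bool:
        private_half, commitment = registry[work_id]
        rebuilt = hashlib.sha256((private_half + presented_half).encode()).hexdigest()
        return rebuilt == commitment

    half = issue("di-faced-tenner-042")
    print(verify("di-faced-tenner-042", half))      # True: halves match
    print(verify("di-faced-tenner-042", "f" * 32))  # False: a forged half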

Hey Secret Service: Don’t Plug Suspect USB Sticks into Random Computers

I just noticed this bit from the incredibly weird story of the Chinese woman arrested at Mar-a-Lago:

Secret Service agent Samuel Ivanovich, who interviewed Zhang on the day of her arrest, testified at the hearing. He stated that when another agent put Zhang's thumb drive into his computer, it immediately began to install files, a "very out-of-the-ordinary" event that he had never seen happen before during this kind of analysis. The agent had to immediately stop the analysis to halt any further corruption of his computer, Ivanovich testified. The analysis is ongoing but still inconclusive, he said.

This is what passes for forensics at the Secret Service? I expect better.

EDITED TO ADD (4/9): I know this post is peripherally related to Trump. I know some readers can't help themselves from talking about broader issues surrounding Trump, Russia, and so on. Please do not post those comments; I will delete them as soon as I see them.

EDITED TO ADD (4/9): Ars Technica has more detail.

Unhackable Cryptography?

A recent article overhyped the release of EverCrypt, a cryptography library created using formal methods to prove security against specific attacks.

The Quanta magazine article sets off a series of "snake-oil" alarm bells. The author's GitHub README is more measured and accurate, and illustrates what a cool project this really is. But it's not "hacker-proof cryptographic code."

Former Mozilla CTO Harassed at the US Border

This is a pretty awful story of how Andreas Gal, former Mozilla CTO and US citizen, was detained and threatened at the US border. CBP agents demanded that he unlock his phone and computer.

Know your rights when you enter the US. The EFF publishes a handy guide. And if you want to encrypt your computer so that you are unable to unlock it on demand, here's my guide. Remember not to lie to a customs officer; that's a crime all by itself.

Adversarial Machine Learning against Tesla’s Autopilot

Researchers have been able to fool Tesla's autopilot in a variety of ways, including convincing it to drive into oncoming traffic. It requires the placement of stickers on the road.

Abstract: Keen Security Lab has maintained the security research work on Tesla vehicle and shared our research results on Black Hat USA 2017 and 2018 in a row. Based on the ROOT privilege of the APE (Tesla Autopilot ECU, software version 18.6.1), we did some further interesting research work on this module. We analyzed the CAN messaging functions of APE, and successfully got remote control of the steering system in a contact-less way. We used an improved optimization algorithm to generate adversarial examples of the features (autowipers and lane recognition) which make decisions purely based on camera data, and successfully achieved the adversarial example attack in the physical world. In addition, we also found a potential high-risk design weakness of the lane recognition when the vehicle is in Autosteer mode. The whole article is divided into four parts: first a brief introduction of Autopilot, after that we will introduce how to send control commands from APE to control the steering system when the car is driving. In the last two sections, we will introduce the implementation details of the autowipers and lane recognition features, as well as our adversarial example attacking methods in the physical world. In our research, we believe that we made three creative contributions:

  1. We proved that we can remotely gain the root privilege of APE and control the steering system.
  2. We proved that we can disturb the autowipers function by using adversarial examples in the physical world.
  3. We proved that we can mislead the Tesla car into the reverse lane with minor changes on the road.
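
Those stickers are physical adversarial examples, and the standard recipe behind them is gradient-based. Here's a toy sketch of the fast gradient sign method (FGSM) against a stand-in logistic-regression "lane classifier" -- Keen Lab's actual optimization is more sophisticated:

    # Toy sketch of the fast gradient sign method (FGSM), the basic
    # recipe behind many physical adversarial examples: nudge each
    # input feature in the direction that most increases the model's
    # error. The "lane classifier" here is a stand-in logistic
    # regression with random weights.
    import numpy as np

    rng = np.random.default_rng(0)
    w, b = rng.normal(size=64), 0.1  # pretend these are trained weights

    def predict(x):
        return 1.0 / (1.0 + np.exp(-(x @ w + b)))  # P(lane marking present)

    def fgsm(x, true_label, eps=0.1):
        # For logistic regression, the gradient of the cross-entropy
        # loss w.r.t. the input is (prediction - label) * w.
        grad = (predict(x) - true_label) * w
        return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

    x = rng.uniform(size=64)          # a patch of "road" the model sees
    print("before:", predict(x))      # confidence on the clean input
    x_adv = fgsm(x, true_label=1.0)   # small, structured perturbation
    print("after: ", predict(x_adv))  # confidence collapses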

You can see the stickers in this photo. They're unobtrusive.

This is machine learning's big problem, and I think solving it is a lot harder than many believe.

How Political Campaigns Use Personal Data

Really interesting report from Tactical Tech.

Data-driven technologies are an inevitable feature of modern political campaigning. Some argue that they are a welcome addition to politics as normal and a necessary and modern approach to democratic processes; others say that they are corrosive and diminish trust in already flawed political systems. The use of these technologies in political campaigning is not going away; in fact, we can only expect their sophistication and prevalence to grow. For this reason, the techniques and methods need to be reviewed outside the dichotomy of 'good' or 'bad' and beyond the headlines of 'disinformation campaigns'.

All the data-driven methods presented in this guide would not exist without the commercial digital marketing and advertising industry. From analysing behavioural data to A/B testing and from geotargeting to psychometric profiling, political parties are using the same techniques to sell political candidates to voters that companies use to sell shoes to consumers. The question is, is that appropriate? And what impact does it have not only on individual voters, who may or may not be persuaded, but on the political environment as a whole?

The practice of political strategists selling candidates as brands is not new. Vance Packard wrote about the 'depth probing' techniques of 'political persuaders' as early as 1957. In his book, 'The Hidden Persuaders', Packard described political strategies designed to sell candidates to voters 'like toothpaste', and how public relations directors at the time boasted that 'scientific methods take the guesswork out of politics'. In this sense, what we have now is a logical progression of the digitisation of marketing techniques and political persuasion techniques.

Hacking Instagram to Get Free Meals in Exchange for Positive Reviews

This is a fascinating hack:

In today's digital age, a large Instagram audience is considered a valuable currency. I had also heard through the grapevine that I could monetize a large following -- or in my desired case -- use it to have my meals paid for. So I did just that.

I created an Instagram page that showcased pictures of New York City's skylines, iconic spots, elegant skyscrapers ­-- you name it. The page has amassed a following of over 25,000 users in the NYC area and it's still rapidly growing.

I reach out to restaurants in the area either via Instagram's direct messaging or email and offer to post a positive review in return for a free entree or at least a discount. Almost every restaurant I've messaged came back at me with a compensated meal or a gift card. Most places have an allocated marketing budget for these types of things so they were happy to offer me a free dining experience in exchange for a promotion. I've ended up giving some of these meals away to my friends and family because at times I had too many queued up to use myself.

The beauty of this all is that I automated the whole thing. And I mean 100% of it. I wrote code that finds these pictures or videos, makes a caption, adds hashtags, credits where the picture or video comes from, weeds out bad or spammy posts, posts them, follows and unfollows users, likes pictures, monitors my inbox, and most importantly -- both direct messages and emails restaurants about a potential promotion. Since its inception, I haven't even really logged into the account. I spend zero time on it. It's essentially a robot that operates like a human, but the average viewer can't tell the difference. And as the programmer, I get to sit back and admire its (and my) work.
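
We don't have the author's code, but the pipeline he describes is straightforward to sketch. Everything below is a stand-in -- toy data instead of Instagram's actual APIs:

    # Guess at the bot's skeleton: source content, filter it, post it,
    # and handle restaurant outreach. The author published no code, so
    # every piece here is hypothetical, with in-memory toy data where
    # the real bot would scrape and call Instagram.
    candidate_posts = [{"media": f"skyline_{i}.jpg",
                        "credit": f"@photographer{i}",
                        "spammy": i % 7 == 0} for i in range(10)]
    inbox = [{"from": "cool_nyc_bistro", "is_restaurant": True},
             {"from": "random_follower", "is_restaurant": False}]

    def make_caption(post):
        tags = "#nyc #skyline #newyork"
        return f"Golden hour over Manhattan. {tags} (credit: {post['credit']})"

    def run_cycle():
        for post in candidate_posts:
            if post["spammy"]:            # weed out bad or spammy posts
                continue
            print(f"POST {post['media']}: {make_caption(post)}")
        for msg in inbox:                 # monitor DMs and email
            if msg["is_restaurant"]:      # the monetization step
                print(f"DM {msg['from']}: positive review for a free meal?")

    run_cycle()  # a real bot would run this on a schedule, untouched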

So much going on in this project.

Recovering Smartphone Typing from Microphone Sounds

Yet another side-channel attack on smartphones: "Hearing your touch: A new acoustic side channel on smartphones," by Ilia Shumailov, Laurent Simon, Jeff Yan, and Ross Anderson.

Abstract: We present the first acoustic side-channel attack that recovers what users type on the virtual keyboard of their touch-screen smartphone or tablet. When a user taps the screen with a finger, the tap generates a sound wave that propagates on the screen surface and in the air. We found the device's microphone(s) can recover this wave and "hear" the finger's touch, and the wave's distortions are characteristic of the tap's location on the screen. Hence, by recording audio through the built-in microphone(s), a malicious app can infer text as the user enters it on their device. We evaluate the effectiveness of the attack with 45 participants in a real-world environment on an Android tablet and an Android smartphone. For the tablet, we recover 61% of 200 4-digit PIN-codes within 20 attempts, even if the model is not trained with the victim's data. For the smartphone, we recover 9 words of size 7-13 letters with 50 attempts in a common side-channel attack benchmark. Our results suggest that it is not always sufficient to rely on isolation mechanisms such as TrustZone to protect user input. We propose and discuss hardware, operating-system and application-level mechanisms to block this attack more effectively. Mobile devices may need a richer capability model, a more user-friendly notification system for sensor usage and a more thorough evaluation of the information leaked by the underlying hardware.

Blog post.

NSA-Inspired Vulnerability Found in Huawei Laptops

This is an interesting story of a serious vulnerability in a Huawei driver that Microsoft found. The vulnerability is similar in style to the NSA's DOUBLEPULSAR that was leaked by the Shadow Brokers -- believed to be the Russian government -- and it's obvious that this attack copied that technique.

What is less clear is whether the vulnerability -- which has been fixed -- was put into the Huawei driver accidentally or on purpose.

Malware Installed in Asus Computers through Hacked Update Process

Kaspersky Lab is reporting on a new supply chain attack they call "ShadowHammer."

In January 2019, we discovered a sophisticated supply chain attack involving the ASUS Live Update Utility. The attack took place between June and November 2018 and according to our telemetry, it affected a large number of users.

[...]

The goal of the attack was to surgically target an unknown pool of users, which were identified by their network adapters' MAC addresses. To achieve this, the attackers had hardcoded a list of MAC addresses in the trojanized samples and this list was used to identify the actual intended targets of this massive operation. We were able to extract more than 600 unique MAC addresses from over 200 samples used in this attack. Of course, there might be other samples out there with different MAC addresses in their list.

We believe this to be a very sophisticated supply chain attack, which matches or even surpasses the Shadowpad and the CCleaner incidents in complexity and techniques. The reason that it stayed undetected for so long is partly due to the fact that the trojanized updaters were signed with legitimate certificates (eg: "ASUSTeK Computer Inc."). The malicious updaters were hosted on the official liveupdate01s.asus[.]com and liveupdate01.asus[.]com ASUS update servers.

The sophistication of the attack leads to the speculation that a nation-state -- and one of the cyber powers -- is responsible.

As I have previously written, supply chain security is "an incredibly complex problem." These attacks co-opt the very mechanisms we need to trust for our security. And the international nature of our industry results in an array of vulnerabilities that are very hard to secure.

Kim Zetter has a really good article on this. Check if your computer is infected here, or use this diagnostic tool from Asus.
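
The check itself is trivial once you have the indicator list: collect your adapters' MAC addresses and compare (the real samples reportedly carried hashed MACs). Here's a minimal sketch with a placeholder list; use Kaspersky's or Asus's official tools for a real answer:

    # Minimal sketch of the "am I a target?" check: hash the machine's
    # MAC address and compare it against the attackers' hardcoded
    # list. The target hash below is a placeholder, not a real
    # indicator of compromise; Kaspersky's own checker is authoritative.
    import hashlib
    import uuid

    TARGET_MAC_HASHES = {
        "0f343b0931126a20f133d67c2b018a3b",  # placeholder, made up
    }

    def local_macs():
        # uuid.getnode() returns one interface's MAC as a 48-bit int;
        # a thorough check would enumerate every adapter.
        mac = uuid.getnode()
        return [":".join(f"{(mac >> s) & 0xFF:02x}" for s in range(40, -8, -8))]

    def targeted() -> bool:
        return any(hashlib.md5(mac.encode()).hexdigest() in TARGET_MAC_HASHES
                   for mac in local_macs())

    print("targeted!" if targeted() else "MAC not on the (placeholder) list")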

Another news article.

Programmers Who Don’t Understand Security Are Poor at Security

A university study confirmed the obvious: if you pay a random bunch of freelance programmers a small amount of money to write security software, they're not going to do a very good job at it.

In an experiment that involved 43 programmers hired via the Freelancer.com platform, University of Bonn academics have discovered that developers tend to take the easy way out and write code that stores user passwords in an unsafe manner.

For their study, the German academics asked a group of 260 Java programmers to write a user registration system for a fake social network.

Of the 260 developers, only 43 took up the job, which involved using technologies such as Java, JSF, Hibernate, and PostgreSQL to create the user registration component.

Of the 43, academics paid half of the group with €100, and the other half with €200, to determine if higher pay made a difference in the implementation of password security features.

Further, they divided the developer group a second time, prompting half of the developers to store passwords in a secure manner, and leaving the other half to store passwords in their preferred method -- hence forming four quarters of developers paid €100 and prompted to use a secure password storage method (P100), developers paid €200 and prompted to use a secure password storage method (P200), devs paid €100 but not prompted for password security (N100), and those paid €200 but not prompted for password security (N200).

I don't know why anyone would expect this group of people to implement a good secure password system. Look at how they were hired. Look at the scope of the project. Look at what they were paid. I'm sure they grabbed the first thing they found on GitHub that did the job.
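
For the record, doing it right is not much code -- the hard part is knowing that it's required. A minimal sketch using the standard library's memory-hard scrypt; the cost parameters are illustrative and should be tuned:

    # Minimal sketch of the study's task done right: never store the
    # password, only a salted, memory-hard hash of it. Uses the
    # standard library's scrypt (Python 3.6+); cost parameters follow
    # common guidance but should be tuned to your hardware.
    import hashlib
    import hmac
    import os

    def hash_password(password: str):
        salt = os.urandom(16)
        digest = hashlib.scrypt(password.encode(), salt=salt,
                                n=2**14, r=8, p=1)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.scrypt(password.encode(), salt=salt,
                                   n=2**14, r=8, p=1)
        return hmac.compare_digest(candidate, digest)

    salt, digest = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, digest))  # True
    print(verify_password("Tr0ub4dor&3", salt, digest))                   # False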

I'm not very impressed with the study or its conclusions.

Mail Fishing

Not email, paper mail:

Thieves, often at night, use string to lower glue-covered rodent traps or bottles coated with an adhesive down the chute of a sidewalk mailbox. This bait attaches to the envelopes inside, and the fish in this case -- mail containing gift cards, money orders or checks, which can be altered with chemicals and cashed -- are reeled out slowly.

In response, the US Post Office is introducing a more secure mailbox:

The mail slots are only large enough for letters, meaning sending even small packages will require a trip to the post office. The opening is also equipped with a mechanism that grabs at a letter once inserted, making it difficult to retract.

The crime has become more common in the past few years.

Friday Squid Blogging: New Research on Squid Camouflage

From the New York Times:

Now, a paper published last week in Nature Communications suggests that their chromatophores, previously thought to be mainly pockets of pigment embedded in their skin, are also equipped with tiny reflectors made of proteins. These reflectors aid the squid to produce such a wide array of colors, including iridescent greens and blues, within a second of passing in front of a new background. The research reveals that by using tricks found in other parts of the animal kingdom -- like shimmering butterflies and peacocks -- squid are able to combine multiple approaches to produce their vivid camouflage.

Researchers studied Doryteuthis pealeii, or the longfin squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

First Look Media Shutting Down Access to Snowden NSA Archives

The Daily Beast is reporting that First Look Media -- home of The Intercept and Glenn Greenwald -- is shutting down access to the Snowden archives.

The Intercept had been the home for Greenwald's subset of Snowden's NSA documents since 2014, after he parted ways with the Guardian the year before. I don't know the details of how the archive was stored, but it was offline and well secured -- and it was available to journalists for research purposes. Many stories were published based on those archives over the years, albeit fewer in recent years.

The article doesn't say what "shutting down access" means, but my guess is that it means that First Look Media will no longer make the archive available to outside journalists, and probably not to staff journalists, either. Reading between the lines, I think they will delete what they have.

This doesn't mean that we're done with the documents. Glenn Greenwald tweeted:

Both Laura & I have full copies of the archives, as do others. The Intercept has given full access to multiple media orgs, reporters & researchers. I've been looking for the right partner -- an academic institution or research facility -- that has the funds to robustly publish.

I'm sure there are still stories in those NSA documents, but with many of them a decade or more old, they are increasingly history and decreasingly current events. Every capability discussed in the documents needs to be read with a "and then they had ten years to improve this" mentality.

Eventually it'll all become public, but not before it is 100% history and 0% current events.

Zipcar Disruption

This isn't a security story, but it easily could have been. Last Saturday, Zipcar had a system outage: "an outage experienced by a third party telecommunications vendor disrupted connections between the company's vehicles and its reservation software."

That didn't just mean people couldn't get cars they reserved. Sometimes it meant they couldn't get the cars they were already driving to work:

Andrew Jones of Roxbury was stuck on hold with customer service for at least a half-hour while he and his wife waited inside a Zipcar that would not turn back on after they stopped to fill it up with gas.

"We were just waiting and waiting for the call back," he said.

Customers in other states, including New York, California, and Oregon, reported a similar problem. One user who tweeted about issues with a Zipcar vehicle listed his location as Toronto.

Some, like Jones, stayed with the inoperative cars. Others, including Tina Penman in Portland, Ore., and Heather Reid in Cambridge, abandoned their Zipcar. Penman took an Uber home, while Reid walked from the grocery store back to her apartment.

This is a reliability issue that turns into a safety issue. Systems that touch the direct physical world like this need better fail-safe defaults.

An Argument that Cybersecurity Is Basically Okay

Andrew Odlyzko's new essay is worth reading -- "Cybersecurity is not very important":

Abstract: There is a rising tide of security breaches. There is an even faster rising tide of hysteria over the ostensible reason for these breaches, namely the deficient state of our information infrastructure. Yet the world is doing remarkably well overall, and has not suffered any of the oft-threatened giant digital catastrophes. This continuing general progress of society suggests that cyber security is not very important. Adaptations to cyberspace of techniques that worked to protect the traditional physical world have been the main means of mitigating the problems that occurred. This "chewing gum and baling wire" approach is likely to continue to be the basic method of handling problems that arise, and to provide adequate levels of security.

I am reminded of these two essays. And, as I said in the blog post about those two essays:

This is true, and is something I worry will change in a world of physically capable computers. Automation, autonomy, and physical agency will make computer security a matter of life and death, and not just a matter of data.

CAs Reissue Over One Million Weak Certificates

Turns out that the software a bunch of CAs used to generate public-key certificates was flawed: they created random serial numbers with only 63 bits instead of the required 64. That may not seem like a big deal to the layman, but that one bit change means that the serial numbers only have half the required entropy. This really isn't a security problem; the serial numbers are to protect against attacks that involve weak hash functions, and we don't allow those weak hash functions anymore. Still, it's a good thing that the CAs are reissuing the certificates. The point of a standard is that it's to be followed.
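
The bug itself is instructive: generate a 64-bit random value and then force it positive, and you've pinned the top bit, leaving only 63 random bits. The fix is to draw more bits than the minimum. A sketch of the difference:

    # Sketch of the serial-number bug: the rules require at least 64
    # bits of CSPRNG output in a certificate serial, but generating a
    # 64-bit value and forcing it positive pins the top bit, so only
    # 63 random bits remain. Drawing more bits than needed fixes it.
    import secrets

    def broken_serial() -> int:
        # 64 random bits, then clear the sign bit so the integer is
        # positive: the top bit is now constant -- 63 bits of entropy.
        return secrets.randbits(64) & 0x7FFFFFFFFFFFFFFF

    def compliant_serial() -> int:
        # Draw well over 64 bits (here 71); pinning the low bit to
        # guarantee a nonzero value still leaves 70 random bits.
        return secrets.randbits(71) | 1

    print(f"broken:    {broken_serial():x}")
    print(f"compliant: {compliant_serial():x}")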

I Was Cited in a Court Decision

An article I co-wrote -- my first law journal article -- was cited by the Massachusetts Supreme Judicial Court -- the state supreme court -- in a case on compelled decryption.

Here's the first, in footnote 1:

We understand the word "password" to be synonymous with other terms that cell phone users may be familiar with, such as Personal Identification Number or "passcode." Each term refers to the personalized combination of letters or digits that, when manually entered by the user, "unlocks" a cell phone. For simplicity, we use "password" throughout. See generally, Kerr & Schneier, Encryption Workarounds, 106 Geo. L.J. 989, 990, 994, 998 (2018).

And here's the second, in footnote 5:

We recognize that ordinary cell phone users are likely unfamiliar with the complexities of encryption technology. For instance, although entering a password "unlocks" a cell phone, the password itself is not the "encryption key" that decrypts the cell phone's contents. See Kerr & Schneier, supra at 995. Rather, "entering the [password] decrypts the [encryption] key, enabling the key to be processed and unlocking the phone. This two-stage process is invisible to the casual user." Id. Because the technical details of encryption technology do not play a role in our analysis, they are not worth belaboring. Accordingly, we treat the entry of a password as effectively decrypting the contents of a cell phone. For a more detailed discussion of encryption technology, see generally Kerr & Schneier, supra.
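
The two-stage process the court describes is standard key wrapping: the password derives a key-encryption key, which decrypts the data-encryption key that actually protects the contents. A minimal sketch using the third-party cryptography package; all parameters are illustrative:

    # Minimal sketch of the two-stage unlock the court describes: the
    # password derives a key-encryption key (KEK), which unwraps the
    # data-encryption key (DEK) that actually encrypts the data. Uses
    # the third-party "cryptography" package (pip install cryptography).
    import os
    from cryptography.hazmat.primitives.kdf.scrypt import Scrypt
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def kek_from_password(password: bytes, salt: bytes) -> bytes:
        return Scrypt(salt=salt, length=32, n=2**14, r=8, p=1).derive(password)

    # Device provisioning: a random DEK encrypts the data; the DEK is
    # stored only in wrapped (encrypted) form.
    salt, n1, n2 = os.urandom(16), os.urandom(12), os.urandom(12)
    dek = AESGCM.generate_key(bit_length=256)
    ciphertext = AESGCM(dek).encrypt(n1, b"the phone's contents", None)
    wrapped = AESGCM(kek_from_password(b"hunter2", salt)).encrypt(n2, dek, None)

    # Unlock: entering the password decrypts the key, and the key
    # decrypts the data -- "invisible to the casual user."
    dek2 = AESGCM(kek_from_password(b"hunter2", salt)).decrypt(n2, wrapped, None)
    print(AESGCM(dek2).decrypt(n1, ciphertext, None))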

Critical Flaw in Swiss Internet Voting System

Researchers have found a critical flaw in the Swiss Internet voting system. I was going to write an essay about how this demonstrates that Internet voting is a stupid idea and should never be attempted -- and that this system in particular should never be deployed, even if the found flaw is fixed -- but Cory Doctorow beat me to it:

The belief that companies can be trusted with this power defies all logic, but it persists. Someone found Swiss Post's embrace of the idea too odious to bear, and they leaked the source code that Swiss Post had shared under its nondisclosure terms, and then an international team of some of the world's top security experts (including some of our favorites, like Matthew Green) set about analyzing that code, and (as every security expert who doesn't work for an e-voting company has predicted since the beginning of time), they found an incredibly powerful bug that would allow a single untrusted party at Swiss Post to undetectably alter the election results.

And, as everyone who's ever advocated for the right of security researchers to speak in public without permission from the companies whose products they were assessing has predicted since the beginning of time, Swiss Post and Scytl downplayed the importance of this objectively very, very, very important bug. Swiss Post's position is that since the bug only allows elections to be stolen by Swiss Post employees, it's not a big deal, because Swiss Post employees wouldn't steal an election.

But when Swiss Post agreed to run the election, they promised an e-voting system based on "zero knowledge" proofs that would allow voters to trust the outcome of the election without having to trust Swiss Post. Swiss Post is now moving the goalposts, saying that it wouldn't be such a big deal if you had to trust Swiss Post implicitly to trust the outcome of the election.

You might be thinking, "Well, what is the big deal? If you don't trust the people administering an election, you can't trust the election's outcome, right?" Not really: we design election systems so that multiple, uncoordinated people all act as checks and balances on each other. To suborn a well-run election takes massive coordination at many polling- and counting-places, as well as independent scrutineers from different political parties, as well as outside observers, etc.

Read the whole thing. It's excellent.

More info.

DARPA Is Developing an Open-Source Voting System

This sounds like a good development:

...a new $10 million contract the Defense Department's Defense Advanced Research Projects Agency (DARPA) has launched to design and build a secure voting system that it hopes will be impervious to hacking.

The first-of-its-kind system will be designed by an Oregon-based firm called Galois, a longtime government contractor with experience in designing secure and verifiable systems. The system will use fully open source voting software, instead of the closed, proprietary software currently used in the vast majority of voting machines, which no one outside of voting machine testing labs can examine. More importantly, it will be built on secure open source hardware, made from special secure designs and techniques developed over the last year as part of a special program at DARPA. The voting system will also be designed to create fully verifiable and transparent results so that voters don't have to blindly trust that the machines and election officials delivered correct results.

But DARPA and Galois won't be asking people to blindly trust that their voting systems are secure -- as voting machine vendors currently do. Instead they'll be publishing source code for the software online and bringing prototypes of the systems to the Def Con Voting Village this summer and next, so that hackers and researchers will be able to freely examine the systems themselves and conduct penetration tests to gauge their security. They'll also be working with a number of university teams over the next year to have them examine the systems in formal test environments.

Judging Facebook’s Privacy Shift

Facebook is making a new and stronger commitment to privacy. Last month, the company hired three of its most vociferous critics and installed them in senior technical positions. And on Wednesday, Mark Zuckerberg wrote that the company will pivot to focus on private conversations over the public sharing that has long defined the platform, even while conceding that "frankly we don't currently have a strong reputation for building privacy protective services."

There is ample reason to question Zuckerberg's pronouncement: The company has made -- and broken -- many privacy promises over the years. And if you read his 3,000-word post carefully, Zuckerberg says nothing about changing Facebook's surveillance capitalism business model. All the post discusses is making private chats more central to the company, which seems to be a play for increased market dominance and to counter the Chinese company WeChat.

In security and privacy, the devil is always in the details -- and Zuckerberg's post provides none. But we'll take him at his word and try to fill in some of the details here. What follows is a list of changes we should expect if Facebook is serious about changing its business model and improving user privacy.

How Facebook treats people on its platform

Increased transparency over advertiser and app access to user data. Today, Facebook users can download and view much of the data the company has about them. This is important, but it doesn't go far enough. The company could be more transparent about what data it shares with advertisers and others and how it allows advertisers to select the users they show ads to. Facebook could use its substantial skills in usability testing to help people understand the mechanisms advertisers use to show them ads or the reasoning behind what it chooses to show in user timelines. It could deliver on promises in this area.

Better -- and more usable -- privacy options. Facebook users have limited control over how their data is shared with other Facebook users and almost no control over how it is shared with Facebook's advertisers, which are the company's real customers. Moreover, the controls are buried deep behind complex and confusing menu options. To be fair, some of this is because privacy is complex, and it's hard to understand the results of different options. But much of this is deliberate; Facebook doesn't want its users to make their data private from other users.

The company could give people better control over how -- and whether -- their data is used, shared, and sold. For example, it could allow users to turn off individually targeted news and advertising. By this, we don't mean simply making those advertisements invisible; we mean turning off the data flows into those tailoring systems. Finally, since most users stick to the default options when it comes to configuring their apps, a changing Facebook could tilt those defaults toward more privacy, requiring less tailoring most of the time.

More user protection from stalking. "Facebook stalking" is often thought of as "stalking light," or "harmless." But stalkers are rarely harmless. Facebook should acknowledge this class of misuse and work with experts to build tools that protect all of its users, especially its most vulnerable ones. Such tools should guide normal people away from creepiness and give victims power and flexibility to enlist aid from sources ranging from advocates to police.

Fully ending real-name enforcement. Facebook's real-names policy, requiring people to use their actual legal names on the platform, hurts people such as activists, victims of intimate partner violence, police officers whose work makes them targets, and anyone with a public persona who wishes to have control over how they identify to the public. There are many ways Facebook can improve on this, from ending enforcement to allowing verified pseudonyms for everyone -- not just celebrities like Lady Gaga. Doing so would mark a clear shift.

How Facebook runs its platform

Increased transparency of Facebook's business practices. One of the hard things about evaluating Facebook is the effort needed to get good information about its business practices. When violations are exposed by the media, as they regularly are, we are all surprised at the different ways Facebook violates user privacy. Most recently, the company used phone numbers provided for two-factor authentication for advertising and networking purposes. Facebook needs to be both explicit and detailed about how and when it shares user data. In fact, a move from discussing "sharing" to discussing "transfers," "access to raw information," and "access to derived information" would be a visible improvement.

Increased transparency regarding censorship rules. Facebook makes choices about what content is acceptable on its site. Those choices are controversial, implemented by thousands of low-paid workers quickly applying unclear rules. These are tremendously hard problems without clear solutions. Even obvious rules like banning hateful words run into challenges when people try to legitimately discuss certain important topics. Whatever Facebook does in this regard, the company needs to be more transparent about its processes. It should allow regulators and the public to audit the company's practices. Moreover, Facebook should share any innovative engineering solutions with the world, much as it currently shares its data center engineering.

Better security for collected user data. There have been numerous examples of attackers targeting cloud service platforms to gain access to user data. Facebook has a large and skilled product security team that says some of the right things. That team needs to be involved in the design trade-offs for features and not just review the near-final designs for flaws. Shutting down a feature based on internal security analysis would be a clear message.

Better data security so Facebook sees less. Facebook eavesdrops on almost every aspect of its users' lives. On the other hand, WhatsApp -- purchased by Facebook in 2014 -- provides users with end-to-end encrypted messaging. While Facebook knows who is messaging whom and how often, Facebook has no way of learning the contents of those messages. Recently, Facebook announced plans to combine WhatsApp, Facebook Messenger, and Instagram, extending WhatsApp's security to the consolidated system. Changing course here would be a dramatic and negative signal.

Collecting less data from outside of Facebook. Facebook doesn't just collect data about you when you're on the platform. Because its "like" button is on so many other pages, the company can collect data about you when you're not on Facebook. It even collects what it calls "shadow profiles" -- data about you even if you're not a Facebook user. This data is combined with other surveillance data the company buys, including health and financial data. Collecting and saving less of this data would be a strong indicator of a new direction for the company.

Better use of Facebook data to prevent violence. There is a trade-off between Facebook seeing less and Facebook doing more to prevent hateful and inflammatory speech. Dozens of people have been killed by mob violence because of fake news spread on WhatsApp. If Facebook were doing a convincing job of controlling fake news without end-to-end encryption, then we would expect to hear how it could use patterns in metadata to handle encrypted fake news.

How Facebook manages for privacy

Create a team measured on privacy and trust. Where companies spend their money tells you what matters to them. Facebook has a large and important growth team, but what team, if any, is responsible for privacy, not as a matter of compliance or pushing the rules, but for engineering? Transparency in how it is staffed relative to other teams would be telling.

Hire a senior executive responsible for trust. Facebook's current team has been focused on growth and revenue. Its chief security officer, Alex Stamos, was not replaced when he left in 2018, which may indicate that having an advocate for security on the leadership team led to debate and disagreement. Retaining a voice for security and privacy issues at the executive level, before those issues affected users, was a good thing. Now that responsibility is diffuse. It's unclear how Facebook measures and assesses its own progress and who might be held accountable for failings. Facebook can begin the process of fixing this by designating a senior executive who is responsible for trust.

Engage with regulators. Much of Facebook's posturing seems to be an attempt to forestall regulation. Facebook sends lobbyists to Washington and other capitals, and until recently the company sent support staff to politicians' offices. It has secret lobbying campaigns against privacy laws. And Facebook has repeatedly violated a 2011 Federal Trade Commission consent order regarding user privacy. Regulating big technical projects is not easy. Most of the people who understand how these systems work understand them because they build them. Societies will regulate Facebook, and the quality of that regulation requires real education of legislators and their staffs. While businesses often want to avoid regulation, any focus on privacy will require strong government oversight. If Facebook is serious about privacy being a real interest, it will accept both government regulation and community input.

User privacy is traditionally against Facebook's core business interests. Advertising is its business model, and targeted ads sell better and more profitably -- and that requires users to engage with the platform as much as possible. Increased pressure on Facebook to manage propaganda and hate speech could easily lead to more surveillance. But there is pressure in the other direction as well, as users equate privacy with increased control over how they present themselves on the platform.

We don't expect Facebook to abandon its advertising business model, relent in its push for monopolistic dominance, or fundamentally alter its social networking platforms. But the company can give users important privacy protections and controls without abandoning surveillance capitalism. While some of these changes will reduce profits in the short term, we hope Facebook's leadership realizes that they are in the best long-term interest of the company.

Facebook talks about community and bringing people together. These are admirable goals, and there's plenty of value (and profit) in having a sustainable platform for connecting people. But as long as the most important measure of success is short-term profit, doing things that help strengthen communities will fall by the wayside. Surveillance, which allows individually targeted advertising, will be prioritized over user privacy. Outrage, which drives engagement, will be prioritized over feelings of belonging. And corporate secrecy, which allows Facebook to evade both regulators and its users, will be prioritized over societal oversight. If Facebook now truly believes that these latter options are critical to its long-term success as a company, we welcome the changes that are forthcoming.

This essay was co-authored with Adam Shostack, and originally appeared on Medium OneZero. We wrote a similar essay in 2002 about judging Microsoft's then newfound commitment to security.

On Surveillance in the Workplace

Data & Society just published a report entitled "Workplace Monitoring & Surveillance":

This explainer highlights four broad trends in employee monitoring and surveillance technologies:

  • Prediction and flagging tools that aim to predict characteristics or behaviors of employees or that are designed to identify or deter perceived rule-breaking or fraud. Touted as useful management tools, they can augment biased and discriminatory practices in workplace evaluations and segment workforces into risk categories based on patterns of behavior.

  • Biometric and health data of workers collected through tools like wearables, fitness tracking apps, and biometric timekeeping systems as a part of employer-provided health care programs, workplace wellness, and tools that digitally track work shifts. Tracking non-work-related activities and information, such as health data, may challenge the boundaries of worker privacy, open avenues for discrimination, and raise questions about consent and workers' ability to opt out of tracking.

  • Remote monitoring and time-tracking used to manage workers and measure performance remotely. Companies may use these tools to decentralize and lower costs by hiring independent contractors, while still being able to exert control over them like traditional employees with the aid of remote monitoring tools. More advanced time-tracking can generate itemized records of on-the-job activities, which can be used to facilitate wage theft or allow employers to trim what counts as paid work time.

  • Gamification and algorithmic management of work activities through continuous data collection. Technology can take on management functions, such as sending workers automated "nudges" or adjusting performance benchmarks based on a worker's real-time progress, while gamification renders work activities into competitive, game-like dynamics driven by performance metrics. However, these practices can create punitive work environments that place pressures on workers to meet demanding and shifting efficiency benchmarks.

In a blog post about this report, Cory Doctorow mentioned "the adoption curve for oppressive technology, which goes, 'refugee, immigrant, prisoner, mental patient, children, welfare recipient, blue collar worker, white collar worker.'" I don't agree with the ordering, but the sentiment is correct. These technologies are generally used first against people with diminished rights: prisoners, children, the mentally ill, and soldiers.

Russia Is Testing Online Voting

This is a bad idea:

A second innovation will allow "electronic absentee voting" within voters' home precincts. In other words, Russia is set to introduce its first online voting system. The system will be tested in a Moscow neighborhood that will elect a single member to the capital's city council in September. The details of how the experiment will work are not yet known; the State Duma's proposal on Internet voting does not include logistical specifics. The Central Election Commission's reference materials on the matter simply reference "absentee voting, blockchain technology." When Dmitry Vyatkin, one of the bill's co-sponsors, attempted to describe how exactly blockchains would be involved in the system, his explanation was entirely disconnected from the actual functions of that technology. A discussion of this new type of voting is planned for an upcoming public forum in Moscow.

Surely the Russians know that online voting is insecure. Do they not care, or do they think the surveillance is worth the risk?

Videos and Links from the Public-Interest Technology Track at the RSA Conference

Yesterday at the RSA Conference, I gave a keynote talk about the role of public-interest technologists in cybersecurity. (Video here.)

I also hosted a one-day mini-track on the topic. We had six panels, and they were all great. If you missed it live, we have videos:

  • How Public Interest Technologists are Changing the World: Matt Mitchell, Tactical Tech; Bruce Schneier, Fellow and Lecturer, Harvard Kennedy School; and J. Bob Alotta, Astraea Foundation (Moderator). (Video here.)

  • Public Interest Tech in Silicon Valley: Mitchell Baker, Chairwoman, Mozilla Corporation; Cindy Cohn, EFF; and Lucy Vasserman, Software Engineer, Google. (Video here.)

  • Working in Civil Society: Sarah Aoun, Digital Security Technologist; Peter Eckersley, Partnership on AI; Harlo Holmes, Director of Newsroom Digital Security, Freedom of the Press Foundation; and John Scott-Railton, Senior Researcher, Citizen Lab. (Video here.)

  • Government Needs You: Travis Moore, TechCongress; Hashim Mteuzi, Senior Manager, Network Talent Initiative, Code for America; Gigi Sohn, Distinguished Fellow, Georgetown Law Institute for Technology, Law and Policy; and Ashkan Soltani, Independent Consultant. (Video here.)

  • Changing Academia: Latanya Sweeney, Harvard; Deirdre Mulligan, UC Berkeley; and Danny Weitzner, MIT CSAIL. (Video here.)

  • The Future of Public Interest Tech: Bruce Schneier, Fellow and Lecturer, Harvard Kennedy School; Ben Wizner, ACLU; and Jenny Toomey, Director, Internet Freedom, Ford Foundation (Moderator). (Video here.)

I also conducted eight short video interviews with different people involved in public-interest technology: independent security technologist Sarah Aoun, TechCongress's Travis Moore, Ford Foundation's Jenny Toomey, Citizen Lab's John Scott-Railton, Deirdre Mulligan from UC Berkeley, ACLU's Jon Callas, Matt Mitchell of Tactical Tech, and Kelley Misata from Sightline Security.

Here is my blog post about the event. Here's Ford Foundation's blog post on why they helped me organize the event.

We got some good press coverage about the event. (Hey MeriTalk: you spelled my name wrong.)

Related: Here's my longer essay on the need for public-interest technologists in Internet security, and my public-interest technology resources page.

And just so we have all the URLs in one place, here is a page from the RSA Conference website with links to all of the videos.

If you liked this mini-track, please rate it highly on your RSA Conference evaluation form. I'd like to do it again next year.

Cybersecurity Insurance Not Paying for NotPetya Losses

This will complicate things:

To complicate matters, having cyber insurance might not cover everyone's losses. Zurich American Insurance Company refused to pay out a $100 million claim from Mondelez, saying that since the U.S. and other governments labeled the NotPetya attack as an action by the Russian military, their claim was excluded under the "hostile or warlike action in time of peace or war" exemption.

I get that $100 million is real money, but the insurance industry needs to figure out how to properly insure commercial networks against this sort of thing.

Detecting Shoplifting Behavior

This system claims to detect suspicious behavior that indicates shoplifting:

Vaak, a Japanese startup, has developed artificial intelligence software that hunts for potential shoplifters, using footage from security cameras to look for fidgeting, restlessness, and other potentially suspicious body language.

The article has no detail or analysis, so we don't know how well it works. But this kind of thing is surely the future of video surveillance.