
License Plate "NULL"

There was a DefCon talk by someone with the vanity plate "NULL." The California system matched every citation written without a license plate to his plate, sticking him with $12,000 in fines.

Although the initial $12,000 worth of fines was dismissed, the private company that administers the database never fixed the underlying issue, and new NULL tickets are still showing up.
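We don't know the internals of California's system, but the failure mode looks like a classic sentinel-value collision: somewhere in the pipeline, "no plate recorded" is evidently represented by the literal string "NULL," which then matches a real vanity plate. Here is a minimal, hypothetical Python sketch of how such a conflation arises; every name and record in it is invented.

    # Hypothetical sketch: "no plate" is encoded as the string "NULL",
    # which collides with a real vanity plate of the same spelling.
    citations = [
        {"id": 1, "plate": None},       # officer recorded no plate
        {"id": 2, "plate": "NULL"},     # the actual vanity plate
        {"id": 3, "plate": "7ABC123"},  # an ordinary plate
    ]

    def normalize(plate):
        # The buggy step: a missing plate becomes the string "NULL",
        # indistinguishable from the vanity plate itself.
        return "NULL" if plate is None else plate

    def citations_for(plate):
        return [c["id"] for c in citations if normalize(c["plate"]) == plate]

    print(citations_for("NULL"))  # [1, 2] -- the no-plate ticket matches too

The durable fix is to keep missing values missing -- None in Python, NULL in SQL, neither of which should ever compare equal to real data -- rather than smuggling them around as text.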

The unanswered question is: now that he has a way to get parking fines removed, can he park anywhere for free?

And this isn't the first time this sort of thing has happened. Wired has a roundup of people whose license plates read things like "NOPLATE," "NO TAG," and "XXXXXXX."

Modifying a Tesla to Become a Surveillance Platform

From DefCon:

At the Defcon hacker conference today, security researcher Truman Kain debuted what he calls the Surveillance Detection Scout. The DIY computer fits into the middle console of a Tesla Model S or Model 3, plugs into its dashboard USB port, and turns the car's built-in cameras -- the same dash and rearview cameras providing a 360-degree view used for Tesla's Autopilot and Sentry features -- into a system that spots, tracks, and stores license plates and faces over time. The tool uses open source image recognition software to automatically put an alert on the Tesla's display and the user's phone if it repeatedly sees the same license plate. When the car is parked, it can track nearby faces to see which ones repeatedly appear. Kain says the intent is to offer a warning that someone might be preparing to steal the car, tamper with it, or break into the driver's nearby home.
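Wired doesn't publish Kain's code, but the alerting logic it describes -- flag any plate that keeps reappearing -- is easy to sketch. Below is a minimal, hypothetical Python version; the plate stream stands in for the output of whatever open source ALPR (automatic license plate recognition) software the real tool runs, and the alert threshold is invented.

    from collections import Counter

    ALERT_THRESHOLD = 3  # invented: repeat sightings before warning the driver

    def watch(plate_stream, threshold=ALERT_THRESHOLD):
        # plate_stream: iterable of plate strings, as an ALPR library might
        # emit them frame by frame (the recognition step itself is assumed).
        sightings = Counter()
        for plate in plate_stream:
            sightings[plate] += 1
            if sightings[plate] == threshold:
                yield f"plate {plate} seen {threshold} times -- possible tail"

    # Simulated detections; a real deployment would feed in camera output.
    demo = ["7XYZ999", "8ABC123", "8ABC123", "7XYZ999", "8ABC123"]
    for warning in watch(demo):
        print(warning)

The same counting idea, applied to faces instead of plates, covers the parked-car case the article describes; the real tool also timestamps and stores each sighting.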

Surveillance as a Condition for Humanitarian Aid

Excellent op-ed on the growing trend to tie humanitarian aid to surveillance.

Despite the best intentions, the decision to deploy technology like biometrics is built on a number of unproven assumptions, such as, technology solutions can fix deeply embedded political problems. And that auditing for fraud requires entire populations to be tracked using their personal data. And that experimental technologies will work as planned in a chaotic conflict setting. And last, that the ethics of consent don't apply for people who are starving.

Influence Operations Kill Chain

Influence operations are elusive to define. The Rand Corp.'s definition is as good as any: "the collection of tactical information about an adversary as well as the dissemination of propaganda in pursuit of a competitive advantage over an opponent." Basically, we know it when we see it, from bots controlled by the Russian Internet Research Agency to Saudi attempts to plant fake stories and manipulate political debate. These operations have been run by Iran against the United States, Russia against Ukraine, China against Taiwan, and probably lots more besides.

Since the 2016 US presidential election, there has been an endless series of ideas about how countries can defend themselves. It's time to pull those together into a comprehensive approach to defending the public sphere and the institutions of democracy.

Influence operations don't come out of nowhere. They exploit a series of predictable weaknesses -- and fixing those holes should be the first step in fighting them. In cybersecurity, this is known as a "kill chain." That can work in fighting influence operations, too -- laying out the steps of an attack and building the taxonomy of countermeasures.

In an exploratory blog post, I first laid out a straw man information operations kill chain. I started with the seven commandments, or steps, laid out in a 2018 New York Times opinion video series on "Operation Infektion," a 1980s Russian disinformation campaign. The information landscape has changed since the 1980s, and these operations have changed as well. Based on my own research and feedback from that initial attempt, I have modified those steps to bring them into the present day. I have also changed the name from "information operations" to "influence operations," because the former is traditionally defined by the US Department of Defense in ways that don't really suit these sorts of attacks.

Step 1: Find the cracks in the fabric of society -- the social, demographic, economic, and ethnic divisions. For campaigns that just try to weaken collective trust in government's institutions, lots of cracks will do. But for influence operations that are more directly focused on a particular policy outcome, only those related to that issue will be effective.

Countermeasures: There will always be open disagreements in a democratic society, but one defense is to shore up the institutions that make that society possible. Elsewhere I have written about the "common political knowledge" necessary for democracies to function. That shared knowledge has to be strengthened, thereby making it harder to exploit the inevitable cracks. It needs to be made unacceptable -- or at least costly -- for domestic actors to use these same disinformation techniques in their own rhetoric and political maneuvering, and cooperation should be highlighted and encouraged when politicians honestly work across party lines. The public must learn to become reflexively suspicious of information that makes them angry at fellow citizens. These cracks can't be entirely sealed, as they emerge from the diversity that makes democracies strong, but they can be made harder to exploit. Much of the work on "norms" falls here, although this is essentially an unfixable problem. This makes the countermeasures in the later steps even more important.

Step 2: Build audiences, either by directly controlling a platform (like RT) or by cultivating relationships with people who will be receptive to those narratives. In 2016, this consisted of creating social media accounts run either by human operatives or automatically by bots, making them seem legitimate, gathering followers. In the years following, this has gotten subtler. As social media companies have gotten better at deleting these accounts, two separate tactics have emerged. The first is microtargeting, where influence accounts join existing social circles and only engage with a few different people. The other is influencer influencing, where these accounts only try to affect a few proxies (see step 6) -- either journalists or other influencers -- who can carry their message for them.

Countermeasures: This is where social media companies have made all the difference. By allowing groups of like-minded people to find and talk to each other, these companies have given propagandists the ability to find audiences who are receptive to their messages. Social media companies need to detect and delete accounts belonging to propagandists as well as bots and groups run by those propagandists. Troll farms exhibit particular behaviors that the platforms need to be able to recognize. It would be best to delete accounts early, before those accounts have the time to establish themselves.

This might involve normally competitive companies working together, since operations and account names often cross platforms, and cross-platform visibility is an important tool for identifying them. Taking down accounts as early as possible is important, because it takes time to establish the legitimacy and reach of any one account. The NSA and US Cyber Command worked with the FBI and social media companies to take down Russian propaganda accounts during the 2018 midterm elections. It may be necessary to pass laws requiring Internet companies to do this. While many social networking companies have reversed their "we don't care" attitudes since the 2016 election, there's no guarantee that they will continue to remove these accounts -- especially since their profits depend on engagement and not accuracy.

Step 3: Seed distortion by creating alternative narratives. In the 1980s, this was a single "big lie," but today it is more about many contradictory alternative truths -- a "firehose of falsehood" -- that distort the political debate. These can be fake or heavily slanted news stories, extremist blog posts, fake stories on real-looking websites, deepfake videos, and so on.

Countermeasures: Fake news and propaganda are viruses; they spread through otherwise healthy populations. Fake news has to be identified and labeled as such by social media companies and others, including recognizing and identifying manipulated videos known as deepfakes. Facebook is already making moves in this direction. Educators need to teach better digital literacy, as Finland is doing. All of this will help people recognize propaganda campaigns when they occur, so they can inoculate themselves against their effects. This alone cannot solve the problem, as much sharing of fake news is about social signaling, and those who share it care more about how it demonstrates their core beliefs than whether or not it is true. Still, it is part of the solution.

Step 4: Wrap those narratives in kernels of truth. A core of fact makes falsehoods more believable and helps them spread. The releases of stolen emails from Hillary Clinton's campaign chairman John Podesta and the Democratic National Committee, and of documents from Emmanuel Macron's campaign in France, were both examples of that kernel of truth. Releasing stolen emails with a few deliberate falsehoods embedded among them is an even more effective tactic.

Countermeasures: Defenses involve exposing the untruths and distortions, but this is also complicated to put into practice. Fake news sows confusion just by being there. Psychologists have demonstrated that an inadvertent effect of debunking a piece of fake news is to amplify the message of that debunked story. Hence, it is essential to replace the fake news with accurate narratives that counter the propaganda. That kernel of truth is part of a larger true narrative. The media needs to learn skepticism about the chain of information and to exercise caution in how they approach debunked stories.

Step 5: Conceal your hand. Make it seem as if the stories came from somewhere else.

Countermeasures: Here the answer is attribution, attribution, attribution. The quicker an influence operation can be pinned on an attacker, the easier it is to defend against it. This will require efforts by both the social media platforms and the intelligence community, not just to detect influence operations and expose them but also to be able to attribute attacks. Social media companies need to be more transparent about how their algorithms work and make source publications more obvious for online articles. Even small measures like the Honest Ads Act, requiring transparency in online political ads, will help. Where companies lack business incentives to do this, regulation will be the only answer.

Step 6: Cultivate proxies who believe and amplify the narratives. Traditionally, these people have been called "useful idiots." Encourage them to take action outside of the Internet, like holding political rallies, and to adopt positions even more extreme than they would otherwise.

Countermeasures: We can mitigate the influence of people who disseminate harmful information, even if they are unaware they are amplifying deliberate propaganda. This does not mean that the government needs to regulate speech; corporate platforms already employ a variety of systems to amplify and diminish particular speakers and messages. Additionally, the antidote to the ignorant people who repeat and amplify propaganda messages is other influencers who respond with the truth -- in the words of one report, we must "make the truth louder." Of course, there will always be true believers for whom no amount of fact-checking or counter-speech will suffice; this is not intended for them. Focus instead on persuading the persuadable.

Step 7: Deny involvement in the propaganda campaign, even if the truth is obvious -- although, since one major goal is to convince people that nothing can be trusted, rumors of involvement can be beneficial to the attacker. Denial was Russia's tactic during the 2016 US presidential election; embracing the rumors was its tactic during the 2018 midterm elections.

Countermeasures: When attack attribution relies on secret evidence, it is easy for the attacker to deny involvement. Public attribution of information attacks must be accompanied by convincing evidence. This will be difficult when attribution involves classified intelligence information, but there is no alternative. Trusting the government without evidence, as the NSA's Rob Joyce recommended in a 2016 talk, is not enough. Governments will have to disclose.

Step 8: Play the long game. Strive for long-term impact over immediate effects. Engage in multiple operations; most won't be successful, but some will.

Countermeasures: Counterattacks can disrupt the attacker's ability to maintain influence operations, as US Cyber Command did during the 2018 midterm elections. The NSA's new policy of "persistent engagement" (see the article by, and interview with, US Cyber Command Commander Paul Nakasone here) is a strategy to achieve this. So are targeted sanctions and indicting individuals involved in these operations. While there is little hope of bringing them to the United States to stand trial, the possibility of not being able to travel internationally for fear of being arrested will lead some people to refuse to do this kind of work. More generally, we need to better encourage both politicians and social media companies to think beyond the next election cycle or quarterly earnings report.
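One way to make that taxonomy concrete is as a lookup table from attack step to candidate defenses, so an observed operation can be checked against countermeasures step by step. The Python sketch below encodes the eight steps; the one-line countermeasure summaries are my own abbreviations of the discussion above, not an authoritative list.

    # The influence operations kill chain as a lookup table. Step names
    # follow the essay; countermeasure summaries are abbreviated.
    KILL_CHAIN = {
        1: ("Find the cracks", ["strengthen common political knowledge",
                                "raise the cost of domestic disinformation"]),
        2: ("Build audiences", ["detect and delete propagandist accounts",
                                "share indicators across platforms"]),
        3: ("Seed distortion", ["label fake news", "teach digital literacy"]),
        4: ("Wrap in kernels of truth", ["replace fakes with accurate narratives"]),
        5: ("Conceal your hand", ["fast public attribution", "ad transparency"]),
        6: ("Cultivate proxies", ["demote amplifiers", "make the truth louder"]),
        7: ("Deny involvement", ["publish convincing attribution evidence"]),
        8: ("Play the long game", ["persistent engagement", "sanctions and indictments"]),
    }

    def countermeasures(step):
        name, defenses = KILL_CHAIN[step]
        return f"Step {step} ({name}): " + "; ".join(defenses)

    print(countermeasures(5))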

Permeating all of this is the importance of deterrence. Deterring these attacks will require a different theory. It will require, as the political scientist Henry Farrell and I have postulated, thinking of democracy itself as an information system and understanding "Democracy's Dilemma": how the very tools of a free and open society can be subverted to attack that society. We need to adjust our theories of deterrence to the realities of the information age and the democratization of attackers. If we can mitigate the effectiveness of influence operations, if we can publicly attribute, if we can respond either diplomatically or otherwise -- we can deter these attacks from nation-states.

None of these defensive actions is sufficient on its own. Steps overlap and in some cases can be skipped. Steps can be conducted simultaneously or out of order. A single operation can span multiple targets or be an amalgamation of multiple attacks by multiple actors. Unlike a cyberattack, disrupting an influence operation will require more than disrupting any particular step. It will require a coordinated effort between government, Internet platforms, the media, and others.

Also, this model is not static, of course. Influence operations have already evolved since the 2016 election and will continue to evolve over time -- especially as countermeasures are deployed and attackers figure out how to evade them. We need to be prepared for wholly different kinds of influence operations during the 2020 US presidential election. The goal of this kill chain is to be general enough to encompass a panoply of tactics but specific enough to illuminate countermeasures. But even if this particular model doesn't fit every influence operation, it's important to start somewhere.

Others have worked on similar ideas. Anthony Soules, a former NSA employee who now leads cybersecurity strategy for Amgen, presented this concept at a private event. Clint Watts of the Alliance for Securing Democracy is thinking along these lines as well. The Credibility Coalition's Misinfosec Working Group proposed a "misinformation pyramid." The US Justice Department developed a "Malign Foreign Influence Campaign Cycle," with associated countermeasures.

The threat from influence operations is real and important, and it deserves more study. At the same time, there's no reason to panic. Just as overly optimistic technologists were wrong that the Internet was the single technology that was going to overthrow dictators and liberate the planet, so pessimists are also probably wrong that it is going to empower dictators and destroy democracy. If we deploy countermeasures across the entire kill chain, we can defend ourselves from these attacks.

But Russian interference in the 2016 presidential election shows not just that such actions are possible but also that they're surprisingly inexpensive to run. As these tactics continue to be democratized, more people will attempt them. And as more people, and multiple parties, conduct influence operations, they will increasingly be seen as how the game of politics is played in the information age. This means that the line will increasingly blur between influence operations and politics as usual, and that domestic influencers will be using them as part of campaigning. Defending democracy against foreign influence also necessitates making our own political debate healthier.

This essay previously appeared in Foreign Policy.

Friday Squid Blogging: Robot Squid Propulsion

Interesting research:

The squid robot is powered primarily by compressed air, which it stores in a cylinder in its nose (do squids have noses?). The fins and arms are controlled by pneumatic actuators. When the robot wants to move through the water, it opens a valve to release a modest amount of compressed air; releasing the air all at once generates enough thrust to fire the robot squid completely out of the water.

The jumping that you see at the end of the video is preliminary work; we're told that the robot squid can travel between 10 and 20 meters by jumping, whereas using its jet underwater will take it just 10 meters. At the moment, the squid can only fire its jet once, but the researchers plan to replace the compressed air with something a bit denser, like liquid CO2, which will allow for extended operation and multiple jumps. There's also plenty of work to do with using the fins for dynamic control, which the researchers say will "reveal the superiority of the natural flying squid movement."

I can't find the paper online.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Software Vulnerabilities in the Boeing 787

Boeing left its software unprotected, and researchers have analyzed it for vulnerabilities:

At the Black Hat security conference today in Las Vegas, Santamarta, a researcher for security firm IOActive, plans to present his findings, including the details of multiple serious security flaws in the code for a component of the 787 known as a Crew Information Service/Maintenance System. The CIS/MS is responsible for applications like maintenance systems and the so-called electronic flight bag, a collection of navigation documents and manuals used by pilots. Santamarta says he found a slew of memory corruption vulnerabilities in that CIS/MS, and he claims that a hacker could use those flaws as a foothold inside a restricted part of a plane's network. An attacker could potentially pivot, Santamarta says, from the in-flight entertainment system to the CIS/MS to send commands to far more sensitive components that control the plane's safety-critical systems, including its engine, brakes, and sensors. Boeing maintains that other security barriers in the 787's network architecture would make that progression impossible.

Santamarta admits that he doesn't have enough visibility into the 787's internals to know if those security barriers are circumventable. But he says his research nonetheless represents a significant step toward showing the possibility of an actual plane-hacking technique. "We don't have a 787 to test, so we can't assess the impact," Santamarta says. "We're not saying it's doomsday, or that we can take a plane down. But we can say: This shouldn't happen."

Boeing denies that there's any problem:

In a statement, Boeing said it had investigated IOActive's claims and concluded that they don't represent any real threat of a cyberattack. "IOActive's scenarios cannot affect any critical or essential airplane system and do not describe a way for remote attackers to access important 787 systems like the avionics system," the company's statement reads. "IOActive reviewed only one part of the 787 network using rudimentary tools, and had no access to the larger system or working environments. IOActive chose to ignore our verified results and limitations in its research, and instead made provocative statements as if they had access to and analyzed the working system. While we appreciate responsible engagement from independent cybersecurity researchers, we're disappointed in IOActive's irresponsible presentation."

This being Black Hat and Las Vegas, I'll say it this way: I would bet money that Boeing is wrong. I don't have an opinion about whether or not it's lying.

Bypassing Apple FaceID’s Liveness Detection Feature

Apple's FaceID has a liveness detection feature, which prevents someone from unlocking a victim's phone by putting it in front of his face while he's sleeping. That feature has been hacked:

Researchers on Wednesday during Black Hat USA 2019 demonstrated an attack that allowed them to bypass a victim's FaceID and log into their phone simply by putting a pair of modified glasses on their face. By merely placing tape carefully over the lenses of a pair of glasses and placing them on the victim's face, the researchers demonstrated how they could bypass Apple's FaceID in a specific scenario. The attack itself is difficult, given the bad actor would need to figure out how to put the glasses on an unconscious victim without waking them up.

Attorney General Barr and Encryption

Last month, Attorney General William Barr gave a major speech on encryption policy -- what is commonly known as "going dark." Speaking at Fordham University in New York, he admitted that adding backdoors decreases security but argued that it is worth it.

Some hold this view dogmatically, claiming that it is technologically impossible to provide lawful access without weakening security against unlawful access. But, in the world of cybersecurity, we do not deal in absolute guarantees but in relative risks. All systems fall short of optimality and have some residual risk of vulnerability -- a point which the tech community acknowledges when they propose that law enforcement can satisfy its requirements by exploiting vulnerabilities in their products. The real question is whether the residual risk of vulnerability resulting from incorporating a lawful access mechanism is materially greater than those already in the unmodified product. The Department does not believe this can be demonstrated.

Moreover, even if there was, in theory, a slight risk differential, its significance should not be judged solely by the extent to which it falls short of theoretical optimality. Particularly with respect to encryption marketed to consumers, the significance of the risk should be assessed based on its practical effect on consumer cybersecurity, as well as its relation to the net risks that offering the product poses for society. After all, we are not talking about protecting the Nation's nuclear launch codes. Nor are we necessarily talking about the customized encryption used by large business enterprises to protect their operations. We are talking about consumer products and services such as messaging, smart phones, e-mail, and voice and data applications. If one already has an effective level of security -- say, by way of illustration, one that protects against 99 percent of foreseeable threats -- is it reasonable to incur massive further costs to move slightly closer to optimality and attain a 99.5 percent level of protection? A company would not make that expenditure; nor should society. Here, some argue that, to achieve at best a slight incremental improvement in security, it is worth imposing a massive cost on society in the form of degraded safety. This is untenable. If the choice is between a world where we can achieve a 99 percent assurance against cyber threats to consumers, while still providing law enforcement 80 percent of the access it might seek; or a world, on the other hand, where we have boosted our cybersecurity to 99.5 percent but at a cost reducing law enforcements [sic] access to zero percent -- the choice for society is clear.

I think this is a major change in government position. Previously, the FBI, the Justice Department and so on had claimed that backdoors for law enforcement could be added without any loss of security. They maintained that technologists just need to figure out how -- an approach we have derisively named "nerd harder."

With this change, we can finally have a sensible policy conversation. Yes, adding a backdoor increases our collective security because it allows law enforcement to eavesdrop on the bad guys. But adding that backdoor also decreases our collective security because the bad guys can eavesdrop on everyone. This is exactly the policy debate we should be having -- not the fake one about whether or not we can have both security and surveillance.

Barr makes the point that this is about "consumer cybersecurity" and not "nuclear launch codes." This is true, but it ignores the huge amount of national security-related communications between those two poles. The same consumer communications and computing devices are used by our lawmakers, CEOs, legislators, law enforcement officers, nuclear power plant operators, election officials and so on. There's no longer a difference between consumer tech and government tech -- it's all the same tech.

Barr also says:

Further, the burden is not as onerous as some make it out to be. I served for many years as the general counsel of a large telecommunications concern. During my tenure, we dealt with these issues and lived through the passage and implementation of CALEA -- the Communications Assistance for Law Enforcement Act. CALEA imposes a statutory duty on telecommunications carriers to maintain the capability to provide lawful access to communications over their facilities. Companies bear the cost of compliance but have some flexibility in how they achieve it, and the system has by and large worked. I therefore reserve a heavy dose of skepticism for those who claim that maintaining a mechanism for lawful access would impose an unreasonable burden on tech firms -- especially the big ones. It is absurd to think that we would preserve lawful access by mandating that physical telecommunications facilities be accessible to law enforcement for the purpose of obtaining content, while allowing tech providers to block law enforcement from obtaining that very content.

That telecommunications company was GTE -- which became Verizon. Barr conveniently ignores that CALEA-enabled phone switches were used to spy on government officials in Greece in 2003 -- which seems to have been a National Security Agency operation -- and on a variety of people in Italy in 2006. Moreover, in 2012 every CALEA-enabled switch sold to the Defense Department had security vulnerabilities. (I wrote about all this, and more, in 2013.)

The final thing I noticed about the speech is that it is not about iPhones and data at rest. It is about communications -- data in transit. The "going dark" debate has bounced back and forth between those two aspects for decades. It seems to be bouncing once again.

I hope that Barr's latest speech signals that we can finally move on from the fake security vs. privacy debate, and to the real security vs. security debate. I know where I stand on that: As computers continue to permeate every aspect of our lives, society, and critical infrastructure, it is much more important to ensure that they are secure from everybody -- even at the cost of law enforcement access -- than it is to allow access at the cost of security. Barr is wrong; it kind of is like these systems are protecting nuclear launch codes.

This essay previously appeared on Lawfare.com.

Exploiting GDPR to Get Private Information

A researcher abused the GDPR to get information on his fiancee:

It is one of the first tests of its kind to exploit the EU's General Data Protection Regulation (GDPR), which came into force in May 2018. The law shortened the time organisations had to respond to data requests, added new types of information they have to provide, and increased the potential penalty for non-compliance.

"Generally if it was an extremely large company -- especially tech ones -- they tended to do really well," he told the BBC.

"Small companies tended to ignore me.

"But the kind of mid-sized businesses that knew about GDPR, but maybe didn't have much of a specialised process [to handle requests], failed."

He declined to identify the organisations that had mishandled the requests, but said they had included:

  • a UK hotel chain that shared a complete record of his partner's overnight stays

  • two UK rail companies that provided records of all the journeys she had taken with them over several years

  • a US-based educational company that handed over her high school grades, mother's maiden name and the results of a criminal background check survey.

Cybersecurity for the Public Interest

The Crypto Wars have been raging off and on for a quarter-century. On one side is law enforcement, which wants to be able to break encryption, to access devices and communications of terrorists and criminals. On the other are almost every cryptographer and computer security expert, repeatedly explaining that there's no way to provide this capability without also weakening the security of every user of those devices and communications systems.

It's an impassioned debate, acrimonious at times, but there are real technologies that can be brought to bear on the problem: key-escrow technologies, code obfuscation technologies, and backdoors with different properties. Pervasive surveillance capitalism -- as practiced by the Internet companies that are already spying on everyone -- matters. So do society's underlying security needs. There is a security benefit to giving access to law enforcement, even though it would inevitably and invariably also give that access to others. However, there is also a security benefit of having these systems protected from all attackers, including law enforcement. These benefits are mutually exclusive. Which is more important, and to what degree?

The problem is that almost no policymakers are discussing this policy issue from a technologically informed perspective, and very few technologists truly understand the policy contours of the debate. The result is both sides consistently talking past each other, and policy proposals -- that occasionally become law -- that are technological disasters.

This isn't sustainable, either for this issue or any of the other policy issues surrounding Internet security. We need policymakers who understand technology, but we also need cybersecurity technologists who understand -- and are involved in -- policy. We need public-interest technologists.

Let's pause at that term. The Ford Foundation defines public-interest technologists as "technology practitioners who focus on social justice, the common good, and/or the public interest." A group of academics recently wrote that public-interest technologists are people who "study the application of technology expertise to advance the public interest, generate public benefits, or promote the public good." Tim Berners-Lee has called them "philosophical engineers." I think of public-interest technologists as people who combine their technological expertise with a public-interest focus: by working on tech policy, by working on a tech project with a public benefit, or by working as a traditional technologist for an organization with a public benefit. Maybe it's not the best term -- and I know not everyone likes it -- but it's a decent umbrella term that can encompass all these roles.

We need public-interest technologists in policy discussions. We need them on congressional staff, in federal agencies, at non-governmental organizations (NGOs), in academia, inside companies, and as part of the press. In our field, we need them to get involved in not only the Crypto Wars, but everywhere cybersecurity and policy touch each other: the vulnerability equities debate, election security, cryptocurrency policy, Internet of Things safety and security, big data, algorithmic fairness, adversarial machine learning, critical infrastructure, and national security. When you broaden the definition of Internet security, many additional areas fall within the intersection of cybersecurity and policy. Our particular expertise and way of looking at the world is critical for understanding a great many technological issues, such as net neutrality and the regulation of critical infrastructure. I wouldn't want to formulate public policy about artificial intelligence and robotics without a security technologist involved.

Public-interest technology isn't new. Many organizations are working in this area, from older organizations like EFF and EPIC to newer ones like Verified Voting and Access Now. Many academic classes and programs combine technology and public policy. My cybersecurity policy class at the Harvard Kennedy School is just one example. Media startups like The Markup are doing technology-driven journalism. There are even programs and initiatives related to public-interest technology inside for-profit corporations.

This might all seem like a lot, but it's really not. There aren't enough people doing it, there aren't enough people who know it needs to be done, and there aren't enough places to do it. We need to build a world where there is a viable career path for public-interest technologists.

There are many barriers. There's a report titled A Pivotal Moment that includes this quote: "While we cite individual instances of visionary leadership and successful deployment of technology skill for the public interest, there was a consensus that a stubborn cycle of inadequate supply, misarticulated demand, and an inefficient marketplace stymie progress."

That quote speaks to the three places for intervention. One: the supply side. There just isn't enough talent to meet the eventual demand. This is especially acute in cybersecurity, which has a talent problem across the field. Public-interest technologists are a diverse and multidisciplinary group of people. Their backgrounds come from technology, policy, and law. We also need to foster diversity within public-interest technology; the populations using the technology must be represented in the groups that shape the technology. We need a variety of ways for people to engage in this sphere: ways people can do it on the side, for a couple of years between more traditional technology jobs, or as a full-time rewarding career. We need public-interest technology to be part of every core computer-science curriculum, with "clinics" at universities where students can get a taste of public-interest work. We need technology companies to give people sabbaticals to do this work, and then value what they've learned and done.

Two: the demand side. This is our biggest problem right now; not enough organizations understand that they need technologists doing public-interest work. We need jobs to be funded across a wide variety of NGOs. We need staff positions throughout the government: executive, legislative, and judiciary branches. President Obama's US Digital Service should be expanded and replicated; so should Code for America. We need more press organizations that perform this kind of work.

Three: the marketplace. We need job boards, conferences, and skills exchanges -- places where people on the supply side can learn about the demand.

Major foundations are starting to provide funding in this space: the Ford and MacArthur Foundations in particular, but others as well.

This problem in our field has an interesting parallel with the field of public-interest law. In the 1960s, there was no such thing as public-interest law. The field was deliberately created, funded by organizations like the Ford Foundation. They financed legal aid clinics at universities, so students could learn housing, discrimination, or immigration law. They funded fellowships at organizations like the ACLU and the NAACP. They created a world where public-interest law is valued, where all the partners at major law firms are expected to have done some public-interest work. Today, when the ACLU advertises for a staff attorney, paying one-third to one-tenth of a normal salary, it gets hundreds of applicants. Today, 20% of Harvard Law School graduates go into public-interest law, and the school has soul-searching seminars because that percentage is so low. Meanwhile, the percentage of computer-science graduates going into public-interest work is basically zero.

This is bigger than computer security. Technology now permeates society in a way it didn't just a couple of decades ago, and governments move too slowly to take this into account. That means technologists now are relevant to all sorts of areas that they had no traditional connection to: climate change, food safety, future of work, public health, bioengineering.

More generally, technologists need to understand the policy ramifications of their work. There's a pervasive myth in Silicon Valley that technology is politically neutral. It's not, and I hope most people reading this today know that. We built a world where programmers felt they had an inherent right to code the world as they saw fit. We were allowed to do this because, until recently, it didn't matter. Now, too many issues are being decided in an unregulated capitalist environment where significant social costs are too often not taken into account.

This is where the core issues of society lie. The defining political question of the 20th century was: "What should be governed by the state, and what should be governed by the market?" This defined the difference between East and West, and the difference between political parties within countries. The defining political question of the first half of the 21st century is: "How much of our lives should be governed by technology, and under what terms?" In the last century, economists drove public policy. In this century, it will be technologists.

The future is coming faster than our current set of policy tools can deal with. The only way to fix this is to develop a new set of policy tools with the help of technologists. We need to be in all aspects of public-interest work, from informing policy to creating tools -- all building the future. The world needs all of our help.

This essay previously appeared in the January/February 2019 issue of IEEE Security & Privacy. I maintain a public-interest tech resources page here.

Why Isn’t GDPR Being Enforced?

Politico has a long article making the case that the lead GDPR regulator, Ireland, has too cozy a relationship with Silicon Valley tech companies to effectively regulate their privacy practices.

Despite its vows to beef up its threadbare regulatory apparatus, Ireland has a long history of catering to the very companies it is supposed to oversee, having wooed top Silicon Valley firms to the Emerald Isle with promises of low taxes, open access to top officials, and help securing funds to build glittering new headquarters.

Now, data-privacy experts and regulators in other countries alike are questioning Ireland's commitment to policing imminent privacy concerns like Facebook's reintroduction of facial recognition software and data sharing with its recently purchased subsidiary WhatsApp, and Google's sharing of information across its burgeoning number of platforms.

On Security Tokens

Mark Risher of Google extols the virtues of security keys:

I'll say it again for the people in the back: with Security Keys, instead of the *user* needing to verify the site, the *site* has to prove itself to the key. Good security these days is about human factors; we have to take the onus off of the user as much as we can.

Furthermore, this "proof" from the site to the key is only permitted over close physical proximity (like USB, NFC, or Bluetooth). Unless the phisher is in the same room as the victim, they can't gain access to the second factor.

This is why I keep using words like "transformative," "revolutionary," and "lit" (not so much anymore): SKs basically shrink your threat model from "anyone anywhere in the world who knows your password" to "people in the room with you right now." Huge!
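The property Risher describes comes from origin binding: the browser includes the web origin in the data the key signs, so an assertion phished on a look-alike site is cryptographically tied to the wrong origin and fails verification at the real one. The toy Python sketch below illustrates only that idea -- it uses an HMAC in place of the public-key signature a real FIDO2/WebAuthn key would use, and all names are invented.

    import hashlib
    import hmac
    import os

    # Toy stand-in for the security key. A real key signs with a per-site
    # key pair; HMAC keeps this sketch self-contained and runnable.
    SECRET = os.urandom(32)

    def key_sign(challenge: bytes, origin: str) -> bytes:
        # The crucial detail: the origin the browser reports is part of
        # the signed message. The user never has to check the URL.
        return hmac.new(SECRET, challenge + origin.encode(), hashlib.sha256).digest()

    def server_verify(challenge: bytes, signature: bytes) -> bool:
        expected = key_sign(challenge, "https://real-bank.example")
        return hmac.compare_digest(signature, expected)

    challenge = os.urandom(16)

    # Legitimate login: the browser is on the real site.
    assert server_verify(challenge, key_sign(challenge, "https://real-bank.example"))

    # Phishing: the victim's browser is on the look-alike site, so the
    # origin baked into the signature doesn't match, and replay fails.
    assert not server_verify(challenge, key_sign(challenge, "https://rea1-bank.example"))

    print("origin binding defeats the phished assertion")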

Cory Doctorow makes a critical point, that the system is only as good as its backup system:

I agree, but there's an important caveat. Security keys usually have fallback mechanisms -- some way to attach a new key to your account for when you lose or destroy your old key. These mechanisms may also rely on security keys, but chances are that they don't (and somewhere down the line, there's probably a fallback mechanism that uses SMS, or Google Authenticator, or an email confirmation loop, or a password, or an administrator who can be sweet talked by a social engineer).

So while the insight about traditional 2FA is that it's really "something you know and something else you know, albeit only very recently," security keys are "something you know and something you have, which someone else can have, if they know something you know."

And just because there are vulnerabilities in cell phone-based two-factor authentication systems doesn't mean that they are useless. They're still much better than traditional password-only authentication systems.

Defending Democracies Against Information Attacks

To better understand influence attacks, we proposed an approach that models democracy itself as an information system and explains how democracies are vulnerable to certain forms of information attacks that autocracies naturally resist. Our model combines ideas from both international security and computer security, avoiding the limitations of both in explaining how influence attacks may damage democracy as a whole.

Our initial account is necessarily limited. Building a truly comprehensive understanding of democracy as an information system will be a Herculean labor, involving the collective endeavors of political scientists and theorists, computer scientists, scholars of complexity, and others.

In this short paper, we undertake a more modest task: providing policy advice to improve the resilience of democracy against these attacks. Specifically, we can show how policy makers not only need to think about how to strengthen systems against attacks, but also need to consider how these efforts intersect with public beliefs -- or common political knowledge -- about these systems, since public beliefs may themselves be an important vector for attacks.

In democracies, many important political decisions are taken by ordinary citizens (typically, in electoral democracies, by voting for political representatives). This means that citizens need to have some shared understandings about their political system, and that society needs some means of generating shared information regarding who its citizens are and what they want. We call this common political knowledge, and it is largely generated through mechanisms of social aggregation (and the institutions that implement them), such as voting, censuses, and the like. These are imperfect mechanisms, but essential to the proper functioning of democracy. They are often compromised or non-existent in autocratic regimes, since they are potentially threatening to the rulers.

In modern democracies, the most important such mechanism is voting, which aggregates citizens' choices over competing parties and politicians to determine who is to control executive power for a limited period. Another important mechanism is the census process, which plays an important role in the US and in other democracies: in providing broad information about the population, in shaping the electoral system (through the allocation of seats in the House of Representatives), and in policy making (through the allocation of government spending and resources). Of lesser import are public commenting processes, through which individuals and interest groups can comment on significant public policy and regulatory decisions.

All of these systems are vulnerable to attack. Elections are vulnerable to a variety of illegal manipulations, including vote rigging. However, many kinds of manipulation are currently legal in the US, including many forms of gerrymandering, gimmicking voting time, allocating polling booths and resources so as to advantage or disadvantage particular populations, imposing onerous registration and identity requirements, and so on.

Censuses may be manipulated through the provision of bogus information or, more plausibly, through the skewing of policy or resources so that some populations are undercounted. Many of the political battles over the census over the past few decades have been waged over whether the census should undertake statistical measures to counter undersampling bias for populations who are statistically less likely to return census forms, such as minorities and undocumented immigrants. Current efforts to include a question about immigration status may make it less likely that undocumented or recent immigrants will return completed forms.

Finally, public commenting systems too are vulnerable to attacks intended to misrepresent the support for or opposition to specific proposals, including the formation of astroturf (artificial grassroots) groups and the misuse of fake or stolen identities in large-scale mail, fax, email or online commenting systems.

All these attacks are relatively well understood, even if policy choices might be improved by a better understanding of their relationship to shared political knowledge. For example, some voting ID requirements are rationalized through appeals to security concerns about voter fraud. While political scientists have suggested that these concerns are largely unwarranted, we currently lack a framework for evaluating the trade-offs, if any. Computer security concepts such as confidentiality, integrity, and availability could be combined with findings from political science and political theory to provide such a framework.

Even so, the relationship between social aggregation institutions and public beliefs is far less well understood by policy makers. Even when social aggregation mechanisms and institutions are robust against direct attacks, they may be vulnerable to more indirect attacks aimed at destabilizing public beliefs about them.

Democratic societies are vulnerable to (at least) two kinds of knowledge attacks that autocratic societies are not. First are flooding attacks that create confusion among citizens about what other citizens believe, making it far more difficult for them to organize among themselves. Second are confidence attacks. These attempt to undermine public confidence in the institutions of social aggregation, so that their results are no longer broadly accepted as legitimate representations of the citizenry.

Most obviously, democracies will function poorly when citizens do not believe that voting is fair. This makes democracies vulnerable to attacks aimed at destabilizing public confidence in voting institutions. For example, some of Russia's hacking efforts against the 2016 presidential election were designed to undermine citizens' confidence in the result. Russian hacking attacks against Ukraine, which targeted the systems through which election results were reported out, were intended to create confusion among voters about what the outcome actually was. Similarly, the "Guccifer 2.0" hacking identity, which has been attributed to Russian military intelligence, sought to suggest that the US electoral system had been compromised by the Democrats in the days immediately before the presidential vote. If, as expected, Donald Trump had lost the election, these claims could have been combined with the actual evidence of hacking to create the appearance that the election was fundamentally compromised.

Similar attacks against the perception of fairness are likely to be employed against the 2020 US census. Should efforts to include a citizenship question fail, some political actors who are disadvantaged by demographic changes such as increases in foreign-born residents and population shift from rural to urban and suburban areas will mount an effort to delegitimize the census results. Again, the genuine problems with the census, which include not only the citizenship question controversy but also serious underfunding, may help to bolster these efforts.

Mechanisms that allow interested actors and ordinary members of the public to comment on proposed policies are similarly vulnerable. For example, the Federal Communication Commission (FCC) announced in 2017 that it was proposing to repeal its net neutrality ruling. Interest groups backing the FCC rollback correctly anticipated a widespread backlash from a politically active coalition of net neutrality supporters. The result was warfare through public commenting. More than 22 million comments were filed, most of which appeared to be either automatically generated or form letters. Millions of these comments were apparently fake, and attached unsuspecting people's names and email addresses to comments supporting the FCC's repeal efforts. The vast majority of comments that were not either form letters or automatically generated opposed the FCC's proposed ruling. The furor around the commenting process was magnified by claims from inside the FCC (later discredited) that the commenting process had also been subjected to a cyberattack.
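How does one conclude that millions of comments were form letters or machine-generated? Analyses of the docket generally worked by normalizing comment text and clustering duplicates and near-duplicates. Here is a toy Python sketch of the exact-duplicate version; the data and threshold are invented.

    import re
    from collections import Counter

    def normalize(text):
        # Lowercase, strip punctuation, and collapse whitespace so that
        # trivially altered copies of a form letter reduce to one string.
        return " ".join(re.sub(r"[^a-z0-9 ]+", " ", text.lower()).split())

    def form_letter_clusters(comments, min_copies=1000):
        # Invented threshold: clusters this large are unlikely to be
        # independently written comments.
        counts = Counter(normalize(c) for c in comments)
        return {text: n for text, n in counts.items() if n >= min_copies}

    # Toy data standing in for the 22 million filed comments.
    demo = ["I support net neutrality!"] * 3 + ["Repeal Title II now."] * 2
    print(form_letter_clusters(demo, min_copies=2))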

We do not yet know the identity and motives of the actors behind the flood of fake comments, although the New York State Attorney-General's office has issued subpoenas for records from a variety of lobbying and advocacy organizations. However, by demonstrating that the commenting process was readily manipulated, the attack made it less likely that the apparently genuine comments of those opposing the FCC's proposed ruling would be treated as useful evidence of what the public believed. The furor over purported cyberattacks, and the FCC's own unwillingness to investigate the attack, have further undermined confidence in an online commenting system that was intended to make the FCC more open to the US public.

We do not know nearly enough about how democracies function as information systems. Generating a better understanding is itself a major policy challenge, which will require substantial resources and, even more importantly, common understandings and shared efforts across a variety of fields of knowledge that currently don't really engage with each other.

However, even this basic sketch of democracy's informational aspects can provide policy makers with some key lessons. The most important is that it may be as important to bolster shared public beliefs about key institutions such as voting, public commenting, and census taking against attack, as to bolster the mechanisms and related institutions themselves.

Specifically, many efforts to mitigate attacks against democratic systems begin with spreading public awareness and alarm about their vulnerabilities. This has the benefit of increasing awareness about real problems, but it may -- especially if exaggerated for effect -- damage public confidence in the very social aggregation institutions it means to protect. This may mean, for example, that public awareness efforts about Russian hacking that are based on flawed analytic techniques may themselves damage democracy by exaggerating the consequences of attacks.

More generally, this poses important challenges for policy efforts to secure social aggregation institutions against attacks. How can one best secure the systems themselves without damaging public confidence in them? At a minimum, successful policy measures will not simply identify problems in existing systems, but provide practicable, publicly visible, and readily understandable solutions to mitigate them.

We have focused on the problem of confidence attacks in this short essay, because they are both more poorly understood and more profound than flooding attacks. Given historical experience, democracy can probably survive some amount of disinformation about citizens' beliefs better than it can survive attacks aimed at its core institutions of aggregation. Policy makers need a better understanding of the relationship between political institutions and social beliefs: specifically, the importance of the social aggregation institutions that allow democracies to understand themselves.

There are some low-hanging fruit. Very often, hardening these institutions against attacks on their confidence will go hand in hand with hardening them against attacks more generally. Thus, for example, reforms to voting that require permanent paper ballots and random auditing would not only better secure voting against manipulation, but would have moderately beneficial consequences for public beliefs too.

There are likely broadly similar solutions for public commenting systems. Here, the informational trade-offs are less profound than for voting, since there is no need to balance the requirement for anonymity (so that no-one can tell who voted for whom ex post) against other requirements (to ensure that no-one votes twice or more, no votes are changed and so on). Instead, the balance to be struck is between general ease of access and security, making it easier, for example, to leverage secondary sources to validate identity.

Both the robustness of and public confidence in the US census and the other statistical systems that guide the allocation of resources could be improved by insulating them better from political control. For example, the director of the census could be appointed through a system similar to that used for the US Comptroller-General, requiring bipartisan agreement for appointment and making it hard to exert post-appointment pressure on the official.

Our arguments also illustrate how some well-intentioned efforts to combat social influence operations may have perverse consequences for general social beliefs. The perception of security is at least as important as the reality of security, and any defenses against information attacks need to address both.

However, we need far better developed intellectual tools if we are to properly understand the trade-offs, propose clearly beneficial policies, and avoid straightforward mistakes. Forging such tools will require computer security specialists to start thinking systematically about public beliefs as an integral part of the systems that they seek to defend. It will mean that more military-oriented cybersecurity specialists need to think deeply about the functioning of democracy and the capacity of internal as well as external actors to disrupt it, rather than reaching for their standard toolkit of state-level deterrence tools. Finally, specialists in the workings of democracy have to learn how to think about democracy and its trade-offs in specifically informational terms.

This essay was written with Henry Farrell, and has previously appeared on Defusing Disinfo.

Friday Squid Blogging: Toraiz SQUID Digital Sequencer

Pioneer DJ has a new sequencer: the Toraiz SQUID: Sequencer Inspirational Device.

The 16-track sequencer is designed around jamming and performance with a host of features to create "happy accidents" and trigger random sequences, modulations and chords. There are 16 RGB pads for playing in your melodies and beats, and up to 64 patterns per each of the 16 tracks. There are eight notes of polyphony per track too, and a Harmonizer section to quickly input pre-determined chord shapes into your pattern, with up to six saved chords.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Towards an Information Operations Kill Chain

Cyberattacks don't magically happen; they involve a series of steps. And far from being helpless, defenders can disrupt the attack at any of those steps. This framing has led to something called the "cybersecurity kill chain": a way of thinking about cyber defense in terms of disrupting the attacker's process.

On a similar note, it's time to conceptualize the "information operations kill chain." Information attacks against democracies, whether they're attempts to polarize political processes or to increase mistrust in social institutions, also involve a series of steps. And enumerating those steps will clarify possibilities for defense.

I first heard of this concept from Anthony Soules, a former National Security Agency (NSA) employee who now leads cybersecurity strategy for Amgen. He used the steps from the 1980s Russian "Operation Infektion," designed to spread the rumor that the U.S. created the HIV virus as part of a weapons research program. A 2018 New York Times opinion video series on the operation described the Russian disinformation playbook in a series of seven "commandments," or steps. The information landscape has changed since 1980, and information operations have changed as well. I have updated, and added to, those steps to bring them into the present day:

  • Step 1: Find the cracks in the fabric of society -- the social, demographic, economic and ethnic divisions.

  • Step 2: Seed distortion by creating alternative narratives. In the 1980s, this was a single "big lie," but today it is more about many contradictory alternative truths -- a "firehose of falsehood" -- that distorts the political debate.

  • Step 3: Wrap those narratives in kernels of truth. A core of fact helps the falsities spread.

  • Step 4: (This step is new.) Build audiences, either by directly controlling a platform (like RT) or by cultivating relationships with people who will be receptive to those narratives.

  • Step 5: Conceal your hand; make it seem as if the stories came from somewhere else.

  • Step 6: Cultivate "useful idiots" who believe and amplify the narratives. Encourage them to take positions even more extreme than they would otherwise.

  • Step 7: Deny involvement, even if the truth is obvious.

  • Step 8: Play the long game. Strive for long-term impact over immediate impact.

These attacks have been so effective in part because, as victims, we weren't aware of how they worked. Identifying these steps makes it possible to conceptualize and develop countermeasures designed to disrupt information operations. The result is the information operations kill chain:

  • Step 1: Find the cracks. There will always be open disagreements in a democratic society, but one defense is to shore up the institutions that make that society possible. Elsewhere I have written about the "common political knowledge" necessary for democracies to function. We need to strengthen that shared knowledge, thereby making it harder to exploit the inevitable cracks. We need to make it unacceptable -- or at least costly -- for domestic actors to use these same disinformation techniques in their own rhetoric and political maneuvering, and to highlight and encourage cooperation when politicians honestly work across party lines. We need to become reflexively suspicious of information that makes us angry at our fellow citizens. We cannot entirely fix the cracks, as they emerge from the diversity that makes democracies strong; but we can make them harder to exploit.

  • Step 2: Seed distortion. We need to teach better digital literacy. This alone cannot solve the problem, as much sharing of fake news is about social signaling, and those who share it care more about how it demonstrates their core beliefs than whether or not it is true. Still, it is part of the solution.

  • Step 3: Wrap the narratives in kernels of truth. Defenses involve exposing the untruths and distortions, but this is also complicated to put into practice. Psychologists have demonstrated that an inadvertent effect of debunking a piece of fake news is to amplify the message of that debunked story. Hence, it is essential to replace the fake news with accurate narratives that counter the propaganda. That kernel of truth is part of a larger true narrative. We need to ensure that the true narrative is legitimized and promoted.

  • Step 4: Build audiences. This is where social media companies have made all the difference. By allowing groups of like-minded people to find and talk to each other, these companies have given propagandists the ability to find audiences who are receptive to their messages. Here, the defenses center around making disinformation efforts less effective. Social media companies need to detect and delete accounts belonging to propagandists and bots and groups run by those propagandists.

  • Step 5: Conceal your hand. Here the answer is attribution, attribution, attribution. The quicker we can publicly attribute information operations, the more effectively we can defend against them. This will require efforts by both the social media platforms and the intelligence community, not just to detect information operations and expose them but also to be able to attribute attacks. Social media companies need to be more transparent about how their algorithms work and make source publications more obvious for online articles. Even small measures like the Honest Ads Act, requiring transparency in online political ads, will help. Where companies lack business incentives to do this, regulation will be the only answer.

  • Step 6: Cultivate useful idiots. We can mitigate the influence of people who disseminate harmful information, even if they are unaware they are amplifying deliberate propaganda. This does not mean that the government needs to regulate speech; corporate platforms already employ a variety of systems to amplify and diminish particular speakers and messages. Additionally, the antidote to the ignorant people who repeat and amplify propaganda messages is other influencers who respond with the truth -- in the words of one report, we must "make the truth louder." Of course, there will always be true believers whom no amount of fact-checking or counter-speech will convince; this is not intended for them. Focus instead on persuading the persuadable.

  • Step 7: Deny everything. When attack attribution relies on secret evidence, it is easy for the attacker to deny involvement. Public attribution of information attacks must be accompanied by convincing evidence. This will be difficult when attribution involves classified intelligence information, but there is no alternative. Trusting the government without evidence, as the NSA's Rob Joyce recommended in a 2016 talk, is not enough. Governments will have to disclose.

  • Step 8: Play the long game. Counterattacks can disrupt the attacker's ability to maintain information operations, as U.S. Cyber Command did during the 2018 midterm elections. The NSA's new policy of "persistent engagement" (see the article by, and interview with, U.S. Cyber Command Commander Gen. Paul Nakasone here) is a strategy to achieve this. Defenders can play the long game, too. We need to better encourage people to think for the long term: beyond the next election cycle or quarterly earnings report.

Permeating all of this is the importance of deterrence. Yes, we need to adjust our theories of deterrence to the realities of the information age and the democratization of attackers. If we can mitigate the effectiveness of information operations, if we can publicly attribute, and if we can respond either diplomatically or otherwise, we can deter these attacks from nation-states. But Russian interference in the 2016 presidential election shows not just that such actions are possible but also that they're surprisingly inexpensive to run. As these tactics continue to be democratized, more people will attempt them. Deterring them will require a different theory.

None of these defensive actions is sufficient on its own. In this way, the information operations kill chain differs significantly from the more traditional cybersecurity kill chain. The latter defends against a series of steps taken sequentially by the attacker against a single target -- a network or an organization -- and disrupting any one of those steps disrupts the entire attack. The information operations kill chain is fuzzier. Steps overlap. They can be conducted out of order. It's a patchwork that can span multiple social media sites and news channels. It requires, as Henry Farrell and I have postulated, thinking of democracy itself as an information system. Disrupting an information operation will require more than disrupting one step at one time. The parallel isn't perfect, but it's a taxonomy by which to consider the range of possible defenses.
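To illustrate the structural difference, here is a toy model of my own in Python -- an illustration, not a formal framework. A traditional kill chain behaves like an ordered sequence where one broken link defeats the attack; an influence operation behaves more like a set of overlapping activities that defenders can only degrade piecewise:

    # Toy model: a sequential kill chain versus a fuzzy, overlapping one.
    CYBER_KILL_CHAIN = ["recon", "weaponize", "deliver", "exploit",
                        "install", "command-and-control", "act"]

    def cyber_attack_succeeds(disrupted: set) -> bool:
        # Break any single link and the whole sequential attack fails.
        return not any(step in disrupted for step in CYBER_KILL_CHAIN)

    INFLUENCE_STEPS = {"find cracks", "seed distortion", "wrap in truth",
                       "build audiences", "conceal hand",
                       "cultivate useful idiots", "deny involvement",
                       "play the long game"}

    def influence_op_effectiveness(disrupted: set) -> float:
        # Steps overlap and run out of order; disrupting some of them
        # only degrades the operation rather than stopping it outright.
        return len(INFLUENCE_STEPS - disrupted) / len(INFLUENCE_STEPS)

The point of the model is the return types: the first function is all-or-nothing, the second is a fraction that defenders can push down but rarely drive to zero.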

This information operations kill chain is a work in progress. If you have other ideas for disrupting any of these steps, please comment below. I will update it in a future essay.

This essay previously appeared on Lawfare.com.

G7 Comes Out in Favor of Encryption Backdoors

From a G7 meeting of interior ministers in Paris this month, an "outcome document":

Encourage Internet companies to establish lawful access solutions for their products and services, including data that is encrypted, for law enforcement and competent authorities to access digital evidence, when it is removed or hosted on IT servers located abroad or encrypted, without imposing any particular technology and while ensuring that assistance requested from internet companies is underpinned by the rule of law and due process protection. Some G7 countries highlight the importance of not prohibiting, limiting, or weakening encryption;

There is a weird belief amongst policy makers that hacking an encryption system's key management system is fundamentally different than hacking the system's encryption algorithm. The difference is only technical; the effect is the same. Both are ways of weakening encryption.
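To make the equivalence concrete, here is a minimal sketch in Python using the third-party cryptography library's Fernet interface. The escrow database and message ID are hypothetical illustrations, not any actual proposal's design; the point is that the cipher is untouched, yet the mandated key-management path gives whoever reaches it the same power as a broken algorithm:

    from cryptography.fernet import Fernet

    # The encryption algorithm itself is strong and unmodified.
    session_key = Fernet.generate_key()
    ciphertext = Fernet(session_key).encrypt(b"private message")

    # A "lawful access" mandate adds a second decryption path: a copy
    # of the key held by someone other than the communicating parties.
    escrow_database = {"msg-001": session_key}  # hypothetical access point

    # Anyone who compromises the escrow -- insider, foreign intelligence
    # service, criminal -- decrypts the traffic without ever touching
    # the algorithm.
    stolen_key = escrow_database["msg-001"]
    assert Fernet(stolen_key).decrypt(ciphertext) == b"private message"

Attacking the escrow and attacking the cipher yield identical plaintext; only the engineering details differ.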

Excellent Analysis of the Boeing 737 Max Software Problems

This is the best analysis of the software causes of the Boeing 737 MAX disasters that I have read.

Technically this is safety and not security; there was no attacker. But the fields are closely related and there are a lot of lessons for IoT security -- and the security of complex socio-technical systems in general -- in here.

EDITED TO ADD (4/30): A rebuttal of sorts.

New DNS Hijacking Attacks

DNS hijacking isn't new, but this seems to be an attack of unprecedented scale:

Researchers at Cisco's Talos security division on Wednesday revealed that a hacker group it's calling Sea Turtle carried out a broad campaign of espionage via DNS hijacking, hitting 40 different organizations. In the process, they went so far as to compromise multiple country-code top-level domains -- the suffixes like .co.uk or .ru that end a foreign web address -- putting all the traffic of every domain in multiple countries at risk.

The hackers' victims include telecoms, internet service providers, and domain registrars responsible for implementing the domain name system. But the majority of the victims and the ultimate targets, Cisco believes, were a collection of mostly governmental organizations, including ministries of foreign affairs, intelligence agencies, military targets, and energy-related groups, all based in the Middle East and North Africa. By corrupting the internet's directory system, hackers were able to silently use "man in the middle" attacks to intercept all internet data from email to web traffic sent to those victim organizations.

[...]

Cisco Talos said it couldn't determine the nationality of the Sea Turtle hackers, and declined to name the specific targets of their spying operations. But it did provide a list of the countries where victims were located: Albania, Armenia, Cyprus, Egypt, Iraq, Jordan, Lebanon, Libya, Syria, Turkey, and the United Arab Emirates. Cisco's Craig Williams confirmed that Armenia's .am top-level domain was one of the "handful" that were compromised, but wouldn't say which of the other countries' top-level domains were similarly hijacked.
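For individual organizations, one modest defense is to watch your own DNS records for silent changes. A minimal sketch, assuming the third-party dnspython library and a known-good baseline recorded out of band (the domain and name servers below are placeholders):

    import dns.resolver  # third-party: dnspython

    # Known-good name servers, recorded out of band; placeholders here.
    EXPECTED_NS = {"ns1.example.com.", "ns2.example.com."}

    def ns_records_look_hijacked(domain: str) -> bool:
        answers = dns.resolver.resolve(domain, "NS")
        observed = {rr.target.to_text() for rr in answers}
        # Sea Turtle worked by silently swapping exactly these records,
        # so any answer outside the baseline deserves investigation.
        return not observed.issubset(EXPECTED_NS)

    print(ns_records_look_hijacked("example.com"))

Registry locks and DNSSEC address the same risk more systematically; a check like this only detects a hijack after the fact.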

Another news article.

A "Department of Cybersecurity"

Presidential candidate John Delaney has announced a plan to create a Department of Cybersecurity.

I have long been in favor of a new federal agency to deal with Internet -- and especially Internet of Things -- security. The devil is in the details, of course, and it's really easy to get this wrong. In Click Here to Kill Everybody, I outline a strawman proposal; I call it the "National Cyber Office" and model it on the Office of the Director of National Intelligence. But regardless of what you think of this idea, I'm glad that at least someone is talking about it.

Slashdot thread. News story.

EDITED TO ADD: Yes, this post is perilously close to presidential politics. Any comment that opines on the qualifications of this, or any other, presidential candidate will be deleted.

Vulnerabilities in the WPA3 Wi-Fi Security Protocol

Researchers have found several vulnerabilities in the WPA3 Wi-Fi security protocol:

The design flaws we discovered can be divided into two categories. The first category consists of downgrade attacks against WPA3-capable devices, and the second category consists of weaknesses in the Dragonfly handshake of WPA3, which in the Wi-Fi standard is better known as the Simultaneous Authentication of Equals (SAE) handshake. The discovered flaws can be abused to recover the password of the Wi-Fi network, launch resource consumption attacks, and force devices into using weaker security groups. All attacks are against home networks (i.e. WPA3-Personal), where one password is shared among all users.

News article. Research paper: "Dragonblood: A Security Analysis of WPA3's SAE Handshake":

Abstract: The WPA3 certification aims to secure Wi-Fi networks, and provides several advantages over its predecessor WPA2, such as protection against offline dictionary attacks and forward secrecy. Unfortunately, we show that WPA3 is affected by several design flaws, and analyze these flaws both theoretically and practically. Most prominently, we show that WPA3's Simultaneous Authentication of Equals (SAE) handshake, commonly known as Dragonfly, is affected by password partitioning attacks. These attacks resemble dictionary attacks and allow an adversary to recover the password by abusing timing or cache-based side-channel leaks. Our side-channel attacks target the protocol's password encoding method. For instance, our cache-based attack exploits SAE's hash-to-curve algorithm. The resulting attacks are efficient and low cost: brute-forcing all 8-character lowercase passwords requires less than $125 in Amazon EC2 instances. In light of ongoing standardization efforts on hash-to-curve, Password-Authenticated Key Exchanges (PAKEs), and Dragonfly as a TLS handshake, our findings are also of more general interest. Finally, we discuss how to mitigate our attacks in a backwards-compatible manner, and explain how minor changes to the protocol could have prevented most of our attacks.
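The timing side channel in Dragonfly's "hunting and pecking" password-element derivation is easy to see in miniature. The Python sketch below is a simplification, not the actual SAE specification -- the prime and the key-derivation call are stand-ins -- but it shows why the loop count, and therefore the handshake's running time, depends on the password:

    import hashlib
    import hmac

    P = 2**255 - 19  # stand-in prime; real groups are negotiated

    def hunt_and_peck(password: bytes, mac_a: bytes, mac_b: bytes) -> int:
        # Derive candidate elements until one falls inside the group.
        counter = 1
        while True:
            seed = hmac.new(mac_a + mac_b,
                            password + bytes([counter]),
                            hashlib.sha256).digest()
            candidate = int.from_bytes(seed, "big")
            if candidate < P:  # succeeds with probability ~1/2 per try
                return candidate
            # The iteration count -- and thus the timing -- varies with
            # the password; that variation is the leak.
            counter += 1

An eavesdropper who measures handshake durations learns a constraint on the password from each measurement; the Dragonblood authors turn many such measurements into an efficient dictionary attack.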

China Spying on Undersea Internet Cables

Supply chain security is an insurmountably hard problem. The recent focus is on Chinese 5G equipment, but the problem is much broader. This opinion piece looks at undersea communications cables:

But now the Chinese conglomerate Huawei Technologies, the leading firm working to deliver 5G telephony networks globally, has gone to sea. Under its Huawei Marine Networks component, it is constructing or improving nearly 100 submarine cables around the world. Last year it completed a cable stretching nearly 4,000 miles from Brazil to Cameroon. (The cable is partly owned by China Unicom, a state-controlled telecom operator.) Rivals claim that Chinese firms are able to lowball the bidding because they receive subsidies from Beijing.

Just as the experts are justifiably concerned about the inclusion of espionage "back doors" in Huawei's 5G technology, Western intelligence professionals oppose the company's engagement in the undersea version, which provides a much bigger bang for the buck because so much data rides on so few cables.

This shouldn't surprise anyone. For years, the US and the Five Eyes have had a monopoly on spying on the Internet around the globe. Other countries want in.

As I have repeatedly said, we need to decide if we are going to build our future Internet systems for security or surveillance. Either everyone gets to spy, or no one gets to spy. And I believe we must choose security over surveillance, and implement a defense-dominant strategy.

Maliciously Tampering with Medical Imagery

In what I am sure is only the first of many similar demonstrations, researchers are able to add or remove cancer signs from CT scans. The results easily fool radiologists.

I don't think the medical device industry has thought at all about data integrity and authentication issues. In a world where sensor data of all kinds can be undetectably manipulated, it's going to have to start.
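A first step is surprisingly cheap: authenticate images at capture time so that downstream tampering is detectable. A minimal sketch, assuming a shared secret provisioned to both the scanner and the viewing workstation (the key below is a placeholder; a real deployment would want asymmetric signatures and proper key management):

    import hashlib
    import hmac

    DEVICE_KEY = b"provisioned-at-manufacture"  # placeholder secret

    def sign_scan(image_bytes: bytes) -> bytes:
        # Computed by the CT scanner the moment the image is captured.
        return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).digest()

    def verify_scan(image_bytes: bytes, tag: bytes) -> bool:
        # Checked by the radiologist's workstation before display; any
        # modification of the image in transit invalidates the tag.
        expected = hmac.new(DEVICE_KEY, image_bytes,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

This doesn't stop an attacker who compromises the scanner itself, but it closes the in-transit tampering path the researchers exploited.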

Research paper. Slashdot thread.

New Version of Flame Malware Discovered

Flame was discovered in 2012, linked to Stuxnet, and believed to be American in origin. It has recently been linked to more modern malware through new analysis tools that find linkages between different software.

Seems that Flame did not disappear after it was discovered, as was previously thought. (Its controllers used a kill switch to disable and erase it.) It was rewritten and reintroduced.

Note that the article claims that Flame was believed to be Israeli in origin. That's wrong; most people who have an opinion believe it is from the NSA.

TajMahal Spyware

Kaspersky has released details about a sophisticated nation-state spyware it calls TajMahal:

The TajMahal framework's 80 modules, Shulmin says, comprise not only the typical keylogging and screengrabbing features of spyware, but also never-before-seen and obscure tricks. It can intercept documents in a printer queue, and keep track of "files of interest," automatically stealing them if a USB drive is inserted into the infected machine. And that unique spyware toolkit, Kaspersky says, bears none of the fingerprints of any known nation-state hacker group.

It was found on the servers of an "embassy of a Central Asian country." No speculation on who wrote and controls it.

More details.

How the Anonymous Artist Banksy Authenticates His or Her Work

Interesting scheme:

It all starts off with a fairly bog standard gallery style certificate. Details of the work, the authenticating agency, a bit of embossing and a large impressive signature at the bottom. Exactly the sort of things that can be easily copied by someone on a mission to create the perfect fake.

That torn-in-half banknote though? Never mind signatures, embossing or wax seals. The Di Faced Tenner is doing all the authentication heavy lifting here.

The tear is what uniquely separates the private key -- the half of the note kept secret under lock and key at Pest Control -- from the public key. The public key is the half of the note attached to the authentication certificate which gets passed on with the print, and allows its authenticity to be easily verified.

We have no idea what has been written on Pest Control's private half of the note. Which means it can't be easily recreated, and that empowers Pest Control to keep the authoritative list of who currently owns each authenticated Banksy work.
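In cryptographic terms, the torn note works like a split secret plus a public commitment. Here is a rough digital analogy in Python -- my sketch, not Pest Control's actual process. The certificate carries one half of a unique secret, the issuer retains the other, and a published hash of the whole note lets anyone confirm that the two halves belong together:

    import hashlib
    import secrets

    # The unique banknote, modeled as a random 32-byte secret.
    note = secrets.token_bytes(32)
    public_half, private_half = note[:16], note[16:]
    commitment = hashlib.sha256(note).hexdigest()  # on the certificate

    def authenticate(presented_half: bytes, issuer_half: bytes,
                     commitment_hex: str) -> bool:
        # Only the issuer's retained half completes the original note.
        rejoined = presented_half + issuer_half
        return hashlib.sha256(rejoined).hexdigest() == commitment_hex

    assert authenticate(public_half, private_half, commitment)

A forger can copy the certificate and the public half, but cannot produce a private half that completes the commitment -- just as no one can recreate the other side of the tear.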