Author Archives: Bruce Schneier

Machine Learning Will Transform How We Detect Software Vulnerabilities

No one doubts that artificial intelligence (AI) and machine learning will transform cybersecurity. We just don’t know how or when. While the literature generally focuses on the different uses of AI by attackers and defenders — and the resultant arms race between the two — I want to talk about software vulnerabilities.

All software contains bugs. The reason is basically economic: The market doesn’t want to pay for quality software. With a few exceptions, such as the space shuttle, the market prioritizes fast and cheap over good. The result is that any large modern software package contains hundreds or thousands of bugs.

Some percentage of bugs are also vulnerabilities, and a percentage of those are exploitable vulnerabilities, meaning an attacker who knows about them can attack the underlying system in some way. And some percentage of those are discovered and used. This is why your computer and smartphone software is constantly being patched; software vendors are fixing bugs that are also vulnerabilities that have been discovered and are being used.

Everything would be better if software vendors found and fixed all bugs during the design and development process, but, as I said, the market doesn’t reward that kind of delay and expense. AI, and machine learning (ML) in particular, has the potential to forever change this trade-off.

Machine Learning Can Help Nip Vulnerabilities in the Bud

The problem of finding software vulnerabilities seems well-suited for ML systems. Going through code line by line is just the sort of tedious problem that computers excel at, if we can only teach them what a vulnerability looks like. There are challenges with that, of course, but there is already a healthy amount of academic literature on the topic and research is continuing. There’s every reason to expect ML systems to get better at this as time goes on, and some reason to expect them to eventually become very good at it.
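
To make that concrete, here is a toy sketch of the general approach: treat source code as text, label examples as risky or safe, and train a classifier to score new lines. The snippets, labels, and model choice below are invented purely for illustration; real research systems use far richer program representations than this.

    # Toy illustration only -- not a real vulnerability scanner.
    # Train a text classifier on hand-labeled code lines and score new ones.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    snippets = [
        "strcpy(buf, user_input);",                    # unbounded copy
        "gets(line);",                                 # never safe
        "sprintf(out, fmt, user_input);",              # overflow risk
        "system(user_supplied_command);",              # injection risk
        "strncpy(buf, user_input, sizeof(buf) - 1);",
        "fgets(line, sizeof(line), stdin);",
        'snprintf(out, sizeof(out), "%s", user_input);',
        "execve(path, argv, envp);",
    ]
    labels = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = looks vulnerable, 0 = looks safe

    model = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(snippets, labels)

    for line in ["strcpy(dest, argv[1]);", "fgets(buf, sizeof(buf), fp);"]:
        print(f"{model.predict_proba([line])[0][1]:.2f}  {line}")

The point is not this particular model but the workflow: anything that can be labeled and featurized can, in principle, be learned.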

Finding vulnerabilities can benefit both attackers and defenders, but it’s not a fair fight. When an attacker’s ML system finds a vulnerability in software, the attacker can use it to compromise systems. When a defender’s ML system finds the same vulnerability, he or she can try to patch the system or program network defenses to watch for and block code that tries to exploit it.

But when the same system is in the hands of a software developer who uses it to find the vulnerability before the software is ever released, the developer fixes it so it can never be used in the first place. The ML system will probably be part of his or her software design tools and will automatically find and fix vulnerabilities while the code is still in development.

What Will the Future of Vulnerability Management Look Like?

Fast-forward a decade or so into the future. We might say to each other, “Remember those years when software vulnerabilities were a thing, before ML vulnerability finders were built into every compiler and fixed them before the software was ever released? Wow, those were crazy years.” Not only is this future possible, but I would bet on it.

Getting from here to there will be a dangerous ride, though. Those vulnerability finders will first be unleashed on existing software, giving attackers hundreds if not thousands of vulnerabilities to exploit in real-world attacks. Sure, defenders can use the same systems, but many of today’s Internet of Things (IoT) systems have no engineering teams to write patches and no ability to download and install patches. The result will be hundreds of vulnerabilities that attackers can find and use.

But if we look far enough into the horizon, we can see a future where software vulnerabilities are a thing of the past. Then we’ll just have to worry about whatever new and more advanced attack techniques those AI systems come up with.

This essay previously appeared on Security Intelligence.

New Shamoon Variant

A new variant of the Shamoon malware has destroyed significant amounts of data at a UAE "heavy engineering company" and the Italian oil and gas contractor Saipem.

Shamoon is the Iranian malware that was targeted against the Saudi Arabian oil company, Saudi Aramco, in 2012 and 2016. We have no idea if this new variant is also Iranian in origin, or if it is someone else entirely using the old Iranian code base.

Real-Time Attacks Against Two-Factor Authentication

Attackers are targeting two-factor authentication systems:

Attackers working on behalf of the Iranian government collected detailed information on targets and used that knowledge to write spear-phishing emails that were tailored to the targets' level of operational security, researchers with security firm Certfa Lab said in a blog post. The emails contained a hidden image that alerted the attackers in real time when targets viewed the messages. When targets entered passwords into a fake Gmail or Yahoo security page, the attackers would almost simultaneously enter the credentials into a real login page. In the event targets' accounts were protected by 2fa, the attackers redirected targets to a new page that requested a one-time password.

This isn't new. I wrote about this exact attack in 2005 and 2009.

Marriott Hack Reported as Chinese State-Sponsored

The New York Times and Reuters are reporting that China was behind the recent hack of Marriott Hotels. Note that this is still unconfirmed, but interesting if it is true.

Reuters:

Private investigators looking into the breach have found hacking tools, techniques and procedures previously used in attacks attributed to Chinese hackers, said three sources who were not authorized to discuss the company's private probe into the attack.

That suggests that Chinese hackers may have been behind a campaign designed to collect information for use in Beijing's espionage efforts and not for financial gain, two of the sources said.

While China has emerged as the lead suspect in the case, the sources cautioned it was possible somebody else was behind the hack because other parties had access to the same hacking tools, some of which have previously been posted online.

Identifying the culprit is further complicated by the fact that investigators suspect multiple hacking groups may have simultaneously been inside Starwood's computer networks since 2014, said one of the sources.

I used to have opinions about whether these attributions are true or not. These days, I tend to wait and see.

New Australian Backdoor Law

Last week, Australia passed a law giving the government the ability to demand backdoors in computers and communications systems. Details are still to be defined, but it's really bad.

Note: Many people e-mailed me to ask why I haven't blogged this yet. One, I was busy with other things. And two, there's nothing I can say that I haven't said many times before.

If there are more good links or commentary, please post them in the comments.

EDITED TO ADD (12/13): The Australian government response is kind of embarrassing.

Banks Attacked through Malicious Hardware Connected to the Local Network

Kaspersky is reporting on a series of bank hacks -- called DarkVishnya -- perpetrated through malicious hardware being surreptitiously installed into the target network:

In 2017-2018, Kaspersky Lab specialists were invited to research a series of cybertheft incidents. Each attack had a common springboard: an unknown device directly connected to the company's local network. In some cases, it was the central office, in others a regional office, sometimes located in another country. At least eight banks in Eastern Europe were the targets of the attacks (collectively nicknamed DarkVishnya), which caused damage estimated in the tens of millions of dollars.

Each attack can be divided into several identical stages. At the first stage, a cybercriminal entered the organization's building under the guise of a courier, job seeker, etc., and connected a device to the local network, for example, in one of the meeting rooms. Where possible, the device was hidden or blended into the surroundings, so as not to arouse suspicion.

The devices used in the DarkVishnya attacks varied in accordance with the cybercriminals' abilities and personal preferences. In the cases we researched, it was one of three tools:

  • netbook or inexpensive laptop
  • Raspberry Pi computer
  • Bash Bunny, a special tool for carrying out USB attacks

Inside the local network, the device appeared as an unknown computer, an external flash drive, or even a keyboard. Combined with the fact that Bash Bunny is comparable in size to a USB flash drive, this seriously complicated the search for the entry point. Remote access to the planted device was via a built-in or USB-connected GPRS/3G/LTE modem.
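
One modest defensive takeaway: these devices only worked because nobody noticed a new, unapproved device on the network. Below is a minimal sketch of that kind of inventory check, assuming you can export observed MAC addresses (from DHCP leases, switch tables, or a NAC tool) to a text file; the file names and format are hypothetical.

    # Minimal sketch: flag LAN devices not on a known-hardware allowlist.
    # Assumes one MAC address per line in each file; file names are made up.
    def load_macs(path):
        with open(path) as f:
            return {line.strip().lower() for line in f if line.strip()}

    known = load_macs("known_devices.txt")     # approved hardware inventory
    observed = load_macs("observed_macs.txt")  # current snapshot of the LAN

    for mac in sorted(observed - known):
        print("Unrecognized device:", mac)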

Slashdot thread.

Your Personal Data is Already Stolen

In an excellent blog post, Brian Krebs makes clear something I have been saying for a while:

Likewise for individuals, it pays to accept two unfortunate and harsh realities:

Reality #1: Bad guys already have access to personal data points that you may believe should be secret but which nevertheless aren't, including your credit card information, Social Security number, mother's maiden name, date of birth, address, previous addresses, phone number, and yes -- even your credit file.

Reality #2: Any data point you share with a company will in all likelihood eventually be hacked, lost, leaked, stolen or sold -- usually through no fault of your own. And if you're an American, it means (at least for the time being) your recourse to do anything about that when it does happen is limited or nil.

[...]

Once you've owned both of these realities, you realize that expecting another company to safeguard your security is a fool's errand, and that it makes far more sense to focus instead on doing everything you can to proactively prevent identity thieves, malicious hackers or other ne'er-do-wells from abusing access to said data.

His advice is good.

Bad Consumer Security Advice

There are lots of articles out there telling people how to better secure their computers and online accounts. While I agree with some of it, this article contains some particularly bad advice:

1. Never, ever, ever use public (unsecured) Wi-Fi such as the Wi-Fi in a café, hotel or airport. To remain anonymous and secure on the Internet, invest in a Virtual Private Network account, but remember, the bad guys are very smart, so by the time this column runs, they may have figured out a way to hack into a VPN.

I get that unsecured Wi-Fi is a risk, but does anyone actually follow this advice? I think twice about accessing my online bank account from a public Wi-Fi network, and I do use a VPN regularly. But I can't imagine offering this as advice to the general public.

2. If you or someone you know is 18 or older, you need to create a Social Security online account. Today! Go to www.SSA.gov.

This is actually good advice. Brian Krebs calls it planting a flag, and it's basically claiming your own identity before some fraudster does it for you. But why limit it to the Social Security Administration? Do it for the IRS and the USPS. And while you're at it, do it for your mobile phone provider and your Internet service provider.

3. Add multifactor verifications to ALL online accounts offering this additional layer of protection, including mobile and cable accounts. (Note: Have the codes sent to your email, as SIM card "swapping" is becoming a huge, and thus far unstoppable, security problem.)

Yes. Two-factor authentication is important, and I use it on some of my more important online accounts. But I don't have it installed on everything. And I'm not sure why having the codes sent to your e-mail helps defend against SIM-card swapping; I'm sure you get your e-mail on your phone like everyone else. (Here's some better advice about that.)
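
As an aside, the authenticator-app codes that avoid SMS entirely are typically RFC 6238 TOTP values: a shared secret plus the current time, run through HMAC. Here is a minimal standard-library sketch; the base32 secret is a made-up placeholder, not anything real.

    # Minimal RFC 6238 TOTP sketch using only the Python standard library.
    # The base32 secret is a placeholder; real secrets come from the setup
    # key or QR code your provider shows when you enroll.
    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, digits=6, period=30, now=None):
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int((time.time() if now is None else now) // period)
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # a new 6-digit code every 30 seconds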

4. Create hard-to-crack 12-character passwords. NOT your mother's maiden name, not the last four digits of your Social Security number, not your birthday and not your address. Whenever possible, use a "pass-phrase" as your answer to account security questions -- such as "Youllneverguessmybrotherinlawsmiddlename."

I'm a big fan of random impossible-to-remember passwords, and nonsense answers to secret questions. It would be great if she suggested a password manager to remember them all.
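
If you want to generate those yourself, Python's secrets module is designed for exactly this. A minimal sketch follows; the length, character set, and word list are arbitrary choices of mine, not recommendations from the column.

    # Minimal sketch: random passwords and nonsense security-question answers
    # using the standard-library secrets module (cryptographically strong).
    import secrets
    import string

    ALPHABET = string.ascii_letters + string.digits + string.punctuation

    def random_password(length=20):
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    def nonsense_answer(words=4):
        wordlist = ["plinth", "gazebo", "otter", "quasar", "mango", "violet"]
        return "".join(secrets.choice(wordlist).capitalize() for _ in range(words))

    print(random_password())   # store it in a password manager
    print(nonsense_answer())   # a made-up "mother's maiden name"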

5. Avoid the temptation to use the same user name and password for every account. Whenever possible, change your passwords every six months.

Yes to the first part. No, no, no -- a thousand times no -- to the second.

6. To prevent "new account fraud" (i.e., someone trying to open an account using your date of birth and Social Security number), place a security freeze on all three national credit bureaus (Equifax, Experian and TransUnion). There is no charge for this service.

I am a fan of security freezes.

7. Never plug your devices (mobile phone, tablet and/or laptop) into an electrical outlet in an airport. Doing so will make you more susceptible to being hacked. Instead, travel with an external battery charger to keep your devices charged.

Seriously? Yes, I've read the articles about hacked charging stations, but I wouldn't think twice about using a wall jack at an airport. If you're really worried, buy a USB condom.

Click Here to Kill Everybody News

My latest book is doing well. And I've been giving lots of talks and interviews about it. (I can recommend three interviews: the Cyberlaw podcast with Stewart Baker, the Lawfare podcast with Ben Wittes, and Le Show with Harry Shearer.) My book talk at Google is also available.

The Audible version was delayed for reasons that were never adequately explained to me, but it's finally out.

I still have signed copies available. Be aware that this is both slower and more expensive than online bookstores.

That Bloomberg Supply-Chain-Hack Story

Back in October, Bloomberg reported that China has managed to install backdoors into server equipment that ended up in networks belonging to -- among others -- Apple and Amazon. Pretty much everybody has denied it (including the US DHS and the UK NCSC). Bloomberg has stood by its story -- and is still standing by it.

I don't think it's real. Yes, it's plausible. But first of all, if someone actually surreptitiously put malicious chips onto motherboards en masse, we would have seen a photo of the alleged chip already. And second, there are easier, more effective, and less obvious ways of adding backdoors to networking equipment.

FBI Takes Down a Massive Advertising Fraud Ring

The FBI announced that it dismantled a large Internet advertising fraud network, and arrested eight people:

A 13-count indictment was unsealed today in federal court in Brooklyn charging Aleksandr Zhukov, Boris Timokhin, Mikhail Andreev, Denis Avdeev, Dmitry Novikov, Sergey Ovsyannikov, Aleksandr Isaev and Yevgeniy Timchenko with criminal violations for their involvement in perpetrating widespread digital advertising fraud. The charges include wire fraud, computer intrusion, aggravated identity theft and money laundering. Ovsyannikov was arrested last month in Malaysia; Zhukov was arrested earlier this month in Bulgaria; and Timchenko was arrested earlier this month in Estonia, all pursuant to provisional arrest warrants issued at the request of the United States. They await extradition. The remaining defendants are at large.

It looks like an impressive piece of police work.

Details of the forensics that led to the arrests.

Distributing Malware By Becoming an Admin on an Open-Source Project

The module "event-stream" was infected with malware by an anonymous contributor who became an admin on the project.

Cory Doctorow points out that this is a clever new attack vector:

Many open source projects attain a level of "maturity" where no one really needs any new features and there aren't a lot of new bugs being found, and the contributors to these projects dwindle, often to a single maintainer who is generally grateful for developers who take an interest in these older projects and offer to share the choresome, intermittent work of keeping the projects alive.

Ironically, these are often projects with millions of users, who trust them specifically because of their stolid, unexciting maturity.

This presents a scary social-engineering vector for malware: A malicious person volunteers to help maintain the project, makes some small, positive contributions, gets commit access to the project, and releases a malicious patch, infecting millions of users and apps.

Propaganda and the Weakening of Trust in Government

On November 4, 2016, the hacker "Guccifer 2.0," a front for Russia's military intelligence service, claimed in a blog post that the Democrats were likely to use vulnerabilities to hack the presidential elections. On November 9, 2018, President Donald Trump started tweeting about the senatorial elections in Florida and Arizona. Without any evidence whatsoever, he said that Democrats were trying to steal the election through "FRAUD."

Cybersecurity experts would say that posts like Guccifer 2.0's are intended to undermine public confidence in voting: a cyber-attack against the US democratic system. Yet Donald Trump's actions are doing far more damage to democracy. So far, his tweets on the topic have been retweeted over 270,000 times, eroding confidence far more effectively than any foreign influence campaign.

We need new ideas to explain how public statements on the Internet can weaken American democracy. Cybersecurity today is not only about computer systems. It's also about the ways attackers can use computer systems to manipulate and undermine public expectations about democracy. Not only do we need to rethink attacks against democracy; we also need to rethink the attackers.

This is one key reason why we wrote a new research paper which uses ideas from computer security to understand the relationship between democracy and information. These ideas help us understand attacks which destabilize confidence in democratic institutions or debate.

Our research implies that insider attacks from within American politics can be more pernicious than attacks from other countries. They are more sophisticated, employ tools that are harder to defend against, and lead to harsh political tradeoffs. The US can threaten charges or impose sanctions when Russian trolling agencies attack its democratic system. But what punishments can it use when the attacker is the US president?

People who think about cybersecurity build on ideas about confrontations between states during the Cold War. Intellectuals such as Thomas Schelling developed deterrence theory, which explained how the US and USSR could maneuver to limit each other's options without ever actually going to war. Deterrence theory, and related concepts about the relative ease of attack and defense, seemed to explain the tradeoffs that the US and rival states faced, as they started to use cyber techniques to probe and compromise each other's information networks.

However, these ideas fail to acknowledge one key difference between the Cold War and today. Nearly all states -- whether democratic or authoritarian -- are entangled on the Internet. This creates both new tensions and new opportunities. The US assumed that the Internet would help spread American liberal values, and that this was a good and uncontroversial thing. Illiberal states like Russia and China feared that Internet freedom was a direct threat to their own systems of rule. Opponents of the regime might use social media and online communication to coordinate among themselves, and appeal to the broader public, perhaps toppling their governments, as happened in Tunisia during the Arab Spring.

This led illiberal states to develop new domestic defenses against open information flows. As scholars like Molly Roberts have shown, states like China and Russia discovered how they could "flood" internet discussion with online nonsense and distraction, making it impossible for their opponents to talk to each other, or even to distinguish between truth and falsehood. These flooding techniques stabilized authoritarian regimes, because they demoralized and confused the regime's opponents. Libertarians often argue that the best antidote to bad speech is more speech. What Vladimir Putin discovered was that the best antidote to more speech was bad speech.

Russia saw the Arab Spring and efforts to encourage democracy in its neighborhood as direct threats, and began experimenting with counter-offensive techniques. When a Russia-friendly government in Ukraine collapsed due to popular protests, Russia tried to destabilize new, democratic elections by hacking the system through which the election results would be announced. The clear intention was to discredit the election results by announcing fake voting numbers that would throw public discussion into disarray.

This attack on public confidence in election results was thwarted at the last moment. Even so, it provided the model for a new kind of attack. Hackers don't have to secretly alter people's votes to affect elections. All they need to do is to damage public confidence that the votes were counted fairly. As researchers have argued, "simply put, the attacker might not care who wins; the losing side believing that the election was stolen from them may be equally, if not more, valuable."

These two kinds of attacks -- "flooding" attacks aimed at destabilizing public discourse, and "confidence" attacks aimed at undermining public belief in elections -- were weaponized against the US in 2016. Russian social media trolls, hired by the "Internet Research Agency," flooded online political discussions with rumors and counter-rumors in order to create confusion and political division. Peter Pomerantsev describes how in Russia, "one moment [Putin's media wizard] Surkov would fund civic forums and human rights NGOs, the next he would quietly support nationalist movements that accuse the NGOs of being tools of the West." Similarly, Russian trolls tried to get Black Lives Matter protesters and anti-Black Lives Matter protesters to march at the same time and place, to create conflict and the appearance of chaos. Guccifer 2.0's blog post was surely intended to undermine confidence in the vote, preparing the ground for a wider destabilization campaign after Hillary Clinton won the election. Neither Putin nor anyone else anticipated that Trump would win, ushering in chaos on a vastly greater scale.

We do not know how successful these attacks were. A new book by John Sides, Michael Tesler and Lynn Vavreck suggests that Russian efforts had no measurable long-term consequences. Detailed research on the flow of news articles through social media by Yochai Benkler, Robert Faris, and Hal Roberts agrees, showing that Fox News was far more influential in the spread of false news stories than any Russian effort.

However, global adversaries like the Russians aren't the only actors who can use flooding and confidence attacks. US actors can use just the same techniques. Indeed, they can arguably use them better, since they have a better understanding of US politics, more resources, and are far more difficult for the government to counter without raising First Amendment issues.

For example, when the Federal Communications Commission asked for comments on its proposal to get rid of "net neutrality," it was flooded by fake comments supporting the proposal. Nearly every real person who commented was in favor of net neutrality, but their arguments were drowned out by a flood of spurious comments purportedly made by identities stolen from porn sites, by people whose names and email addresses had been harvested without their permission, and, in some cases, from dead people. This was done not just to generate fake support for the FCC's controversial proposal. It was to devalue public comments in general, making the general public's support for net neutrality politically irrelevant. FCC decision making on issues like net neutrality used to be dominated by industry insiders, and many would like to go back to the old regime.

Trump's efforts to undermine confidence in the Florida and Arizona votes work on a much larger scale. There are clear short-term benefits to asserting fraud where no fraud exists. This may sway judges or other public officials to make concessions to the Republicans to preserve their legitimacy. Yet they also destabilize American democracy in the long term. If Republicans are convinced that Democrats win by cheating, they will feel that their own manipulation of the system (by purging voter rolls, making voting more difficult and so on) is legitimate, and will very probably cheat even more flagrantly in the future. This will trash collective institutions and leave everyone worse off.

It is notable that some Arizonan Republicans -- including Martha McSally -- have so far stayed firm against pressure from the White House and the Republican National Committee to claim that cheating is happening. They presumably see more long-term value in preserving existing institutions than in undermining them. Very plausibly, Donald Trump has exactly the opposite incentives. By weakening public confidence in the vote today, he makes it easier to claim fraud and perhaps plunge American politics into chaos if he is defeated in 2020.

If experts who see Russian flooding and confidence measures as cyberattacks on US democracy are right, then these attacks are just as dangerous -- and perhaps more dangerous -- when they are used by domestic actors. The risk is that over time they will destabilize American democracy so that it comes closer to Russia's managed democracy -- where nothing is real any more, and ordinary people feel a mixture of paranoia, helplessness and disgust when they think about politics. Paradoxically, Russian interference is far too ineffectual to get us there -- but domestically mounted attacks by all-American political actors might.

To protect against that possibility, we need to start thinking more systematically about the relationship between democracy and information. Our paper provides one way to do this, highlighting the vulnerabilities of democracy against certain kinds of information attack. More generally, we need to build levees against flooding while shoring up public confidence in voting and other public information systems that are necessary to democracy.

The first may require radical changes in how we regulate social media companies. Modernization of government commenting platforms to make them robust against flooding is only a very minimal first step. Up until very recently, companies like Twitter have won market advantage from bot infestations -- even when it couldn't make a profit, it seemed that user numbers were growing. CEOs like Mark Zuckerberg have begun to worry about democracy, but their worries will likely only go so far. It is difficult to get a man to understand something when his business model depends on not understanding it. Sharp -- and legally enforceable -- limits on automated accounts are a first step. Radical redesign of networks and of trending indicators so that flooding attacks are less effective may be a second.

The second requires general standards for voting at the federal level, and a constitutional guarantee of the right to vote. Technical experts nearly universally favor robust voting systems that would combine paper records with random post-election auditing, to prevent fraud and secure public confidence in voting. Other steps to ensure proper ballot design and to standardize vote counting and reporting will take more time and discussion -- yet the record of other countries shows that they are not impossible.

The US is nearly unique among major democracies in the persistent flaws of its election machinery. Yet voting is not the only important form of democratic information. Apparent efforts to deliberately skew the US census against counting undocumented immigrants show the need for a more general audit of the political information systems that we need if democracy is to function properly.

It's easier to respond to Russian hackers through sanctions, counter-attacks and the like than to domestic political attacks that undermine US democracy. To preserve the basic political freedoms of democracy requires recognizing that these freedoms are sometimes going to be abused by politicians such as Donald Trump. The best that we can do is to minimize the possibilities of abuse up to the point where they encroach on basic freedoms and harden the general institutions that secure democratic information against attacks intended to undermine them.

This essay was co-authored with Henry Farrell, and previously appeared on Motherboard, with a terrible headline that I was unable to get changed.

How Surveillance Inhibits Freedom of Expression

In my book Data and Goliath, I write about the value of privacy. I talk about how it is essential for political liberty and justice, and for commercial fairness and equality. I talk about how it increases personal freedom and individual autonomy, and how the lack of it makes us all less secure. But this is probably the most important argument as to why society as a whole must protect privacy: it allows society to progress.

We know that surveillance has a chilling effect on freedom. People change their behavior when they live their lives under surveillance. They are less likely to speak freely and act individually. They self-censor. They become conformist. This is obviously true for government surveillance, but is true for corporate surveillance as well. We simply aren't as willing to be our individual selves when others are watching.

Let's take an example: hearing that parents and children are being separated as they cross the US border, you want to learn more. You visit the website of an international immigrants' rights group, a fact that is available to the government through mass Internet surveillance. You sign up for the group's mailing list, another fact that is potentially available to the government. The group then calls or e-mails to invite you to a local meeting. Same. Your license plates can be collected as you drive to the meeting; your face can be scanned and identified as you walk into and out of the meeting. If, instead of visiting the website, you visit the group's Facebook page, Facebook knows that you did and that feeds into its profile of you, available to advertisers and political activists alike. Ditto if you like their page, share a link with your friends, or just post about the issue.

Maybe you are an immigrant yourself, documented or not. Or maybe some of your family is. Or maybe you have friends or coworkers who are. How likely are you to get involved if you know that your interest and concern can be gathered and used by government and corporate actors? What if the issue you are interested in is pro- or anti-gun control, anti-police violence or in support of the police? Does that make a difference?

Maybe the issue doesn't matter, and you would never be afraid to be identified and tracked based on your political or social interests. But even if you are so fearless, you probably know someone who has more to lose, and thus more to fear, from their personal, sexual, or political beliefs being exposed.

This isn't just hypothetical. In the months and years after the 9/11 terrorist attacks, many of us censored what we spoke about on social media or what we searched on the Internet. We know from a 2013 PEN study that writers in the United States self-censored their browsing habits out of fear the government was watching. And this isn't exclusively an American event; Internet self-censorship is prevalent across the globe, China being a prime example.

Ultimately, this fear stagnates society in two ways. The first is that the presence of surveillance means society cannot experiment with new things without fear of reprisal, and that means those experiments -- if found to be inoffensive or even essential to society -- cannot slowly become commonplace, moral, and then legal. If surveillance nips that process in the bud, change never happens. All social progress -- from ending slavery to fighting for women's rights -- began as ideas that were, quite literally, dangerous to assert. Yet without the ability to safely develop, discuss, and eventually act on those assertions, our society would not have been able to further its democratic values in the way that it has.

Consider the decades-long fight for gay rights around the world. Within our lifetimes we have made enormous strides to combat homophobia and increase acceptance of queer folks' right to marry. Queer relationships slowly progressed from being viewed as immoral and illegal, to being viewed as somewhat moral and tolerated, to finally being accepted as moral and legal.

In the end, it was the public nature of those activities that eventually slayed the bigoted beast, but the ability to act in private was essential in the beginning for the early experimentation, community building, and organizing.

Marijuana legalization is going through the same process: it's currently sitting between somewhat moral, and -- depending on the state or country in question -- tolerated and legal. But, again, for this to have happened, someone decades ago had to try pot and realize that it wasn't really harmful, either to themselves or to those around them. Then it had to become a counterculture, and finally a social and political movement. If pervasive surveillance meant that those early pot smokers would have been arrested for doing something illegal, the movement would have been squashed before inception. Of course the story is more complicated than that, but the ability for members of society to privately smoke weed was essential for putting it on the path to legalization.

We don't yet know which subversive ideas and illegal acts of today will become political causes and positive social change tomorrow, but they're around. And they require privacy to germinate. Take away that privacy, and we'll have a much harder time breaking down our inherited moral assumptions.

The second way surveillance hurts our democratic values is that it encourages society to make more things illegal. Consider the things you do -- the different things each of us does -- that portions of society find immoral. Not just recreational drugs and gay sex, but gambling, dancing, public displays of affection. All of us do things that are deemed immoral by some groups, but are not illegal because they don't harm anyone. But it's important that these things can be done out of the disapproving gaze of those who would otherwise rally against such practices.

If there is no privacy, there will be pressure to change. Some people will recognize that their morality isn't necessarily the morality of everyone -- and that that's okay. But others will start demanding legislative change, or using less legal and more violent means, to force others to match their idea of morality.

It's easy to imagine the more conservative (in the small-c sense, not in the sense of the named political party) among us getting enough power to make illegal what they would otherwise be forced to witness. In this way, privacy helps protect the rights of the minority from the tyranny of the majority.

This is how we got Prohibition in the 1920s, and if we had had today's surveillance capabilities in the 1920s, it would have been far more effectively enforced. Recipes for making your own spirits would have been much harder to distribute. Speakeasies would have been impossible to keep secret. The criminal trade in illegal alcohol would also have been more effectively suppressed. There would have been less discussion about the harms of Prohibition, less "what if we didn't?" thinking. Political organizing might have been difficult. In that world, the law might have stuck to this day.

China serves as a cautionary tale. The country has long been a world leader in the ubiquitous surveillance of its citizens, with the goal not of crime prevention but of social control. They are about to further enhance their system, giving every citizen a "social credit" rating. The details are yet unclear, but the general concept is that people will be rated based on their activities, both online and off. Their political comments, their friends and associates, and everything else will be assessed and scored. Those who are conforming, obedient, and apolitical will be given high scores. People without those scores will be denied privileges like access to certain schools and foreign travel. If the program is half as far-reaching as early reports indicate, the subsequent pressure to conform will be enormous. This social surveillance system is precisely the sort of surveillance designed to maintain the status quo.

For social norms to change, people need to deviate from these inherited norms. People need the space to try alternate ways of living without risking arrest or social ostracization. People need to be able to read critiques of those norms without anyone's knowledge, discuss them without their opinions being recorded, and write about their experiences without their names attached to their words. People need to be able to do things that others find distasteful, or even immoral. The minority needs protection from the tyranny of the majority.

Privacy makes all of this possible. Privacy encourages social progress by giving the few room to experiment free from the watchful eye of the many. Even if you are not personally chilled by ubiquitous surveillance, the society you live in is, and the personal costs are unequivocal.

This essay originally appeared in McSweeney's issue #54: "The End of Trust." It was reprinted on Wired.com.

Using Machine Learning to Create Fake Fingerprints

Researchers are able to create fake fingerprints that result in a 20% false-positive rate.

The problem is that these sensors obtain only partial images of users' fingerprints -- at the points where they make contact with the scanner. The paper noted that since partial prints are not as distinctive as complete prints, the chances of one partial print getting matched with another is high.

The artificially generated prints, dubbed DeepMasterPrints by the researchers, capitalize on the aforementioned vulnerability to accurately imitate one in five fingerprints in a database. The database was originally supposed to have only an error rate of one in a thousand.

Another vulnerability exploited by the researchers was the high prevalence of some natural fingerprint features such as loops and whorls, compared to others. With this understanding, the team generated some prints that contain several of these common features. They found that these artificial prints were more likely to match with other prints than would be normally possible.
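
Some back-of-the-envelope arithmetic (mine, not the paper's) shows why this matters: even at the nominal one-in-a-thousand false-match rate, a single spoof tried against a large database will eventually match someone, and a master print that fools one matcher in five makes a match nearly certain. The sketch below assumes independent comparisons, which is a simplification.

    # Illustrative only: probability a single spoofed print matches at least
    # one of N enrolled fingerprints, assuming independent comparisons.
    def p_any_match(false_match_rate, n_enrolled):
        return 1 - (1 - false_match_rate) ** n_enrolled

    for fmr in (0.001, 0.20):          # nominal 1-in-1,000 vs. ~1-in-5
        for n in (100, 1000):
            print(f"FMR={fmr:<5}  enrolled={n:<4}  "
                  f"P(at least one match) = {p_any_match(fmr, n):.1%}")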

If this result is robust -- and I assume it will be improved upon over the coming years -- it will make the current generation of fingerprint readers obsolete as secure biometrics. It also opens a new chapter in the arms race between biometric authentication systems and fake biometrics that can fool them.

More interestingly, I wonder if similar techniques can be brought to bear against other biometrics as well.

Research paper.

Slashdot thread.

Information Attacks against Democracies

Democracy is an information system.

That's the starting place of our new paper: "Common-Knowledge Attacks on Democracy." In it, we look at democracy through the lens of information security, trying to understand the current waves of Internet disinformation attacks. Specifically, we wanted to explain why the same disinformation campaigns that act as a stabilizing influence in Russia are destabilizing in the United States.

The answer revolves around the different ways autocracies and democracies work as information systems. We start by differentiating between two types of knowledge that societies use in their political systems. The first is common political knowledge, which is the body of information that people in a society broadly agree on. People agree on who the rulers are and what their claim to legitimacy is. People agree broadly on how their government works, even if they don't like it. In a democracy, people agree about how elections work: how districts are created and defined, how candidates are chosen, and that their votes count -- even if only roughly and imperfectly.

We contrast this with a very different form of knowledge that we call contested political knowledge, which is, broadly, things that people in society disagree about. Examples are easy to bring to mind: how much of a role the government should play in the economy, what the tax rules should be, what sorts of regulations are beneficial and what sorts are harmful, and so on.

This seems basic, but it gets interesting when we contrast both of these forms of knowledge across autocracies and democracies. These two forms of government have incompatible needs for common and contested political knowledge.

For example, democracies draw upon the disagreements within their population to solve problems. Different political groups have different ideas of how to govern, and those groups vie for political influence by persuading voters. There is also long-term uncertainty about who will be in charge and able to set policy goals. Ideally, this is the mechanism through which a polity can harness the diversity of perspectives of its members to better solve complex policy problems. When no one knows who is going to be in charge after the next election, different parties and candidates will vie to persuade voters of the benefits of different policy proposals.

But in order for this to work, there needs to be common knowledge both of how government functions and how political leaders are chosen. There also needs to be common knowledge of who the political actors are, what they and their parties stand for, and how they clash with each other. Furthermore, this knowledge is decentralized across a wide variety of actors -- an essential element, since ordinary citizens play a significant role in political decision making.

Contrast this with an autocracy. There, common political knowledge about who is in charge over the long term and what their policy goals are is a basic condition of stability. Autocracies do not require common political knowledge about the efficacy and fairness of elections, and strive to maintain a monopoly on other forms of common political knowledge. They actively suppress common political knowledge about potential groupings within their society, their levels of popular support, and how they might form coalitions with each other. On the other hand, they benefit from contested political knowledge about nongovernmental groups and actors in society. If no one really knows which other political parties might form, what they might stand for, and what support they might get, that itself is a significant barrier to those parties ever forming.

This difference has important consequences for security. Authoritarian regimes are vulnerable to information attacks that challenge their monopoly on common political knowledge. They are vulnerable to outside information that demonstrates that the government is manipulating common political knowledge to their own benefit. And they are vulnerable to attacks that turn contested political knowledge -- uncertainty about potential adversaries of the ruling regime, their popular levels of support and their ability to form coalitions -- into common political knowledge. As such, they are vulnerable to tools that allow people to communicate and organize more easily, as well as tools that provide citizens with outside information and perspectives.

For example, before the first stirrings of the Arab Spring, the Tunisian government had extensive control over common knowledge. It required everyone to publicly support the regime, making it hard for citizens to know how many other people hated it, and it prevented potential anti-regime coalitions from organizing. However, it didn't pay attention in time to Facebook, which allowed citizens to talk more easily about how much they detested their rulers, and, when an initial incident sparked a protest, to rapidly organize mass demonstrations against the regime. The Arab Spring faltered in many countries, but it is no surprise that countries like Russia see the Internet openness agenda as a knife at their throats.

Democracies, in contrast, are vulnerable to information attacks that turn common political knowledge into contested political knowledge. If people disagree on the results of an election, or whether a census process is accurate, then democracy suffers. Similarly, if people lose any sense of what the other perspectives in society are, who is real and who is not real, then the debate and argument that democracy thrives on will be degraded. This seems to be Russia's aim in its information campaigns against the US: to weaken our collective trust in the institutions and systems that hold our country together. This is also the situation that writers like Adrian Chen and Peter Pomerantsev describe in today's Russia, where no one knows which parties or voices are genuine, and which are puppets of the regime, creating general paranoia and despair.

This difference explains how the same policy measure can increase the stability of one form of regime and decrease the stability of the other. We have already seen that open information flows have benefited democracies while at the same time threatening autocracies. In our language, they transform regime-supporting contested political knowledge into regime-undermining common political knowledge. And much more recently, we have seen other uses of the same information flows undermining democracies by turning regime-supported common political knowledge into regime-undermining contested political knowledge.

In other words, the same fake news techniques that benefit autocracies by making everyone unsure about political alternatives undermine democracies by making people question the common political systems that bind their society.

This framework not only helps us understand how different political systems are vulnerable and how they can be attacked, but also how to bolster security in democracies. First, we need to better defend the common political knowledge that democracies need to function. That is, we need to bolster public confidence in the institutions and systems that maintain a democracy. Second, we need to make it harder for outside political groups to cooperate with inside political groups and organize disinformation attacks, through measures like transparency in political funding and spending. And finally, we need to treat attacks on common political knowledge by insiders as being just as threatening as the same attacks by foreigners.

There's a lot more in the paper.

This essay was co-authored by Henry Farrell, and previously appeared on Lawfare.com.

The PCLOB Needs a Director

The US Privacy and Civil Liberties Oversight Board is looking for a director. Among other things, this board has some oversight role over the NSA. More precisely, it can examine what any executive-branch agency is doing about counterterrorism. So it can examine the program of TSA watchlists, NSA anti-terrorism surveillance, and FBI counterterrorism activities.

The PCLOB was established in 2004 (when it didn't do much), disappeared from 2007 to 2012, and was reconstituted in 2012. It issued a major report on NSA surveillance in 2014. It has dwindled since then, having as few as one member. Last month, the Senate confirmed three new members, including Ed Felten.

So, potentially an important job if anyone out there is interested.

What Happened to Cyber 9/11?

A recent article in the Atlantic asks why we haven't seen a "cyber 9/11" in the past fifteen or so years. (I, too, remember the increasingly frantic and fearful warnings of a "cyber Pearl Harbor," "cyber Katrina" -- when that was a thing -- or "cyber 9/11." I made fun of those warnings back then.) The author's answer:

Three main barriers are likely preventing this. For one, cyberattacks can lack the kind of drama and immediate physical carnage that terrorists seek. Identifying the specific perpetrator of a cyberattack can also be difficult, meaning terrorists might have trouble reaping the propaganda benefits of clear attribution. Finally, and most simply, it's possible that they just can't pull it off.

Commenting on the article, Rob Graham adds:

I think there are lots of warning from so-called "experts" who aren't qualified to make such warnings, that the press errs on the side of giving such warnings credibility instead of challenging them.

I think mostly the reason why cyberterrorism doesn't happen is that which motivates violent people is different than that which motivates technical people, pulling apart the groups who would want to commit cyberterrorism from those who can.

These are all good reasons, but I think both authors missed the most important one: there simply aren't a lot of terrorists out there. Let's ask the question more generally: why hasn't there been another 9/11 since 2001? I also remember dire predictions that large-scale terrorism was the new normal, and that we would see 9/11-scale attacks regularly. But since then, nothing. We could credit the fantastic counterterrorism work of the US and other countries, but a more reasonable explanation is that there are very few terrorists and even fewer organized ones. Our fear of terrorism is far greater than the actual risk.

This isn't to say that cyberterrorism can never happen. Of course it will, sooner or later. But I don't foresee it becoming a preferred terrorism method anytime soon. Graham again:

In the end, if your goal is to cause major power blackouts, your best bet is to bomb power lines and distribution centers, rather than hack them.

Worst-Case Thinking Breeds Fear and Irrationality

Here's a crazy story from the UK. Basically, someone sees a man and a little girl leaving a shopping center. Instead of thinking "it must be a father and daughter, which happens millions of times a day and is perfectly normal," he thinks "this is obviously a case of child abduction and I must alert the authorities immediately." And the police, instead of thinking "why in the world would this be a kidnapping and not a normal parental activity," think "oh my god, we must all panic immediately." And they do, scrambling helicopters, searching cars leaving the shopping center, and going door-to-door looking for clues. Seven hours later, the police finally realized that the girl was safe asleep in bed.

Lenore Skenazy writes further:

Can we agree that something is wrong when we leap to the worst possible conclusion upon seeing something that is actually nice? In an email Furedi added that now, "Some fathers told me that they think and look around before they kiss their kids in public. Society is all too ready to interpret the most innocent of gestures as a prelude to abusing a child."

So our job is to try to push the re-set button.

If you see an adult with a child in plain daylight, it is not irresponsible to assume they are caregiver and child. Remember the stat from David Finkelhor, head of the Crimes Against Children Research Center at the University of New Hampshire. He has heard of NO CASE of a child kidnapped from its parents in public and sold into sex trafficking.

We are wired to see "Taken" when we're actually witnessing something far less exciting called Everyday Life. Let's tune in to reality.

This is the problem with the "see something, say something" mentality. As I wrote back in 2007:

If you ask amateurs to act as front-line security personnel, you shouldn't be surprised when you get amateur security.

And the police need to understand the base-rate fallacy better.
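
A toy Bayes calculation, with numbers invented purely for illustration, shows why: when the underlying event is vanishingly rare, even a fairly reliable "that looks suspicious" report is almost always a false alarm.

    # Illustrative base-rate arithmetic with invented numbers.
    def posterior(prior, true_positive_rate, false_positive_rate):
        p_report = prior * true_positive_rate + (1 - prior) * false_positive_rate
        return prior * true_positive_rate / p_report

    prior = 1e-7         # assumed chance a random adult/child pair is an abduction
    tpr, fpr = 0.9, 0.1  # assumed accuracy of a bystander's suspicion
    print(f"P(abduction | report) = {posterior(prior, tpr, fpr):.6%}")
    # About 0.00009 percent -- overwhelmingly likely to be everyday life.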

Israeli Surveillance Gear

The Israeli Defense Force mounted a botched raid in Gaza. They were attempting to install surveillance gear, which they ended up leaving behind. (There are photos -- scroll past the video.) Israeli media is claiming that the capture of this gear by Hamas causes major damage to Israeli electronic surveillance capabilities. The Israelis themselves destroyed the vehicle the commandos used to enter Gaza. I'm guessing they did so because there was more gear in it they didn't want falling into the Palestinians' hands.

Can anyone intelligently speculate about what the photos show? And if there are other photos on the Internet, please post them.

Mailing Tech Support a Bomb

I understand his frustration, but this is extreme:

When police asked Cryptopay what could have motivated Salonen to send the company a pipe bomb -- or, rather, two pipe bombs, which is what investigators found when they picked apart the explosive package -- the only thing the company could think of was that it had declined his request for a password change.

In August 2017, Salonen, a customer of Cryptopay, emailed their customer services team to ask for a new password. They refused, given that it was against the company's privacy policy.

A fair point, as it's never a good idea to send a new password in an email. A password-reset link is safer all round, although it's not clear if Cryptopay offered this option to Salonen.
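
For readers unfamiliar with how reset links work: the site emails a single-use, expiring random token tied to the account rather than the password itself. Here is a minimal sketch, with made-up names and an arbitrary 30-minute lifetime; a real service would persist tokens server-side and send the link by email.

    # Minimal sketch of issuing and redeeming a password-reset token.
    # Storage, names, and the 30-minute lifetime are illustrative assumptions.
    import secrets
    import time

    RESET_TOKENS = {}          # token -> (user_id, expiry timestamp)
    TOKEN_LIFETIME = 30 * 60   # seconds

    def issue_reset_link(user_id):
        token = secrets.token_urlsafe(32)   # unguessable, single-use value
        RESET_TOKENS[token] = (user_id, time.time() + TOKEN_LIFETIME)
        return f"https://example.com/reset?token={token}"  # emailed to the user

    def redeem_reset_token(token):
        entry = RESET_TOKENS.pop(token, None)   # single use: remove on redemption
        if entry is None:
            return None
        user_id, expires = entry
        return user_id if time.time() < expires else None

    link = issue_reset_link("customer-42")
    print(redeem_reset_token(link.split("token=")[1]))  # -> customer-42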

Hidden Cameras in Streetlights

Both the US Drug Enforcement Administration (DEA) and Immigration and Customs Enforcement (ICE) are hiding surveillance cameras in streetlights.

According to government procurement data, the DEA has paid a Houston, Texas company called Cowboy Streetlight Concealments LLC roughly $22,000 since June 2018 for "video recording and reproducing equipment." ICE paid out about $28,000 to Cowboy Streetlight Concealments over the same period of time.

It's unclear where the DEA and ICE streetlight cameras have been installed, or where the next deployments will take place. ICE offices in Dallas, Houston, and San Antonio have provided funding for recent acquisitions from Cowboy Streetlight Concealments; the DEA's most recent purchases were funded by the agency's Office of Investigative Technology, which is located in Lorton, Virginia.

Fifty thousand dollars doesn't buy a lot of streetlight surveillance cameras, so either this is a pilot program or there are a lot more procurements elsewhere that we don't know about.

Chip Cards Fail to Reduce Credit Card Fraud in the US

A new study finds that credit card fraud has not declined since the introduction of chip cards in the US. The majority of stolen card information comes from hacked point-of-sale terminals.

The reasons seem to be twofold. One, the US uses chip-and-signature instead of chip-and-PIN, obviating the most critical security benefit of the chip. And two, US merchants still accept magnetic stripe cards, meaning that thieves can steal credentials from a chip card and create a working cloned mag stripe card.

Boing Boing post.

More Spectre/Meltdown-Like Attacks

Back in January, we learned about a class of vulnerabilities against microprocessors that leverages various performance and efficiency shortcuts for attack. I wrote that the first two attacks would be just the start:

It shouldn't be surprising that microprocessor designers have been building insecure hardware for 20 years. What's surprising is that it took 20 years to discover it. In their rush to make computers faster, they weren't thinking about security. They didn't have the expertise to find these vulnerabilities. And those who did were too busy finding normal software vulnerabilities to examine microprocessors. Security researchers are starting to look more closely at these systems, so expect to hear about more vulnerabilities along these lines.

Spectre and Meltdown are pretty catastrophic vulnerabilities, but they only affect the confidentiality of data. Now that they -- and the research into the Intel ME vulnerability -- have shown researchers where to look, more is coming -- and what they'll find will be worse than either Spectre or Meltdown. There will be vulnerabilities that will allow attackers to manipulate or delete data across processes, potentially fatal in the computers controlling our cars or implanted medical devices. These will be similarly impossible to fix, and the only strategy will be to throw our devices away and buy new ones.

We saw several variants over the year. And now researchers have discovered seven more.

Researchers say they've discovered the seven new CPU attacks while performing "a sound and extensible systematization of transient execution attacks" -- a catch-all term the research team used to describe attacks on the various internal mechanisms that a CPU uses to process data, such as the speculative execution process, the CPU's internal caches, and other internal execution stages.

The research team says they've successfully demonstrated all seven attacks with proof-of-concept code. Experiments to confirm six other Meltdown attacks did not succeed, according to a graph published by the researchers.
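
To make "transient execution" concrete, here is a minimal sketch of the classic Spectre variant 1 (bounds-check bypass) gadget in C -- an illustration of the general pattern these attacks exploit, not one of the seven new variants. The function and array names are hypothetical.

  /* Minimal sketch of a Spectre v1 (bounds-check bypass) gadget.
   * Names and sizes are hypothetical; this illustrates the transient
   * execution pattern, not any specific published exploit. */
  #include <stdint.h>
  #include <stddef.h>

  #define ARRAY1_SIZE 16

  uint8_t array1[ARRAY1_SIZE];   /* legitimate, in-bounds data */
  uint8_t array2[256 * 512];     /* probe array: one cache line per byte value */

  /* The branch below is almost always taken, so the branch predictor
   * learns to take it. If an attacker then supplies an out-of-bounds x
   * while array1_size is not cached, the CPU speculatively reads
   * array1[x] -- a secret byte -- and touches the line of array2 indexed
   * by that byte. The speculative work is discarded, but the cache
   * footprint remains and can be recovered with a flush+reload timing
   * measurement from another context. */
  void victim_function(size_t x, size_t array1_size) {
      if (x < array1_size) {
          volatile uint8_t tmp = array2[array1[x] * 512];
          (void)tmp;
      }
  }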

Microprocessor designers have spent the year rethinking the security of their architectures. My guess is that they have a lot more rethinking to do.

Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:


The list is maintained on this page.

Oracle and "Responsible Disclosure"

I've been writing about "responsible disclosure" for over a decade; here's an essay from 2007. Basically, it's a tacit agreement between researchers and software vendors. Researchers agree to withhold their work until software companies fix the vulnerabilities, and software vendors agree not to harass researchers and fix the vulnerabilities quickly.

When that agreement breaks down, things go bad quickly. This story is about a researcher who published an Oracle zero-day because Oracle has a history of harassing researchers and ignoring vulnerabilities.

Software vendors might not like responsible disclosure, but it's the best solution we have. Making it illegal to publish vulnerabilities without the vendor's consent means that they won't get fixed quickly -- and everyone will be less secure. It also means less security research.

This will become even more critical with software that affects the world in a direct physical manner, like cars and airplanes. Responsible disclosure makes us safer, but it only works if software vendors take the vulnerabilities seriously and fix them quickly. Without any regulations that enforce that, the threat of disclosure is the only incentive we can impose on software vendors.

New IoT Security Regulations

Due to ever-evolving technological advances, manufacturers are connecting consumer goods -- from toys to light bulbs to major appliances -- to the Internet at breakneck speeds. This is the Internet of Things, and it's a security nightmare.

The Internet of Things fuses products with communications technology to make daily life more effortless. Think Amazon's Alexa, which not only answers questions and plays music but allows you to control your home's lights and thermostat. Or the current generation of implanted pacemakers, which can both receive commands and send information to doctors over the Internet.

But like nearly all innovation, there are risks involved. And for products born out of the Internet of Things, this means the risk of having personal information stolen or devices being overtaken and controlled remotely. For devices that affect the world in a direct physical manner -- cars, pacemakers, thermostats -- the risks include loss of life and property.

Manufacturers could avoid most of these hacks by developing more advanced security features and building them into their products. The problem is that there is no monetary incentive for companies to invest in the cybersecurity measures needed to keep their products secure. Consumers will buy products without proper security features, unaware that their information is vulnerable. And current liability laws make it hard to hold companies accountable for shoddy software security.

It falls upon lawmakers to create laws that protect consumers. While the US government is largely absent in this area of consumer protection, the state of California has recently stepped in and started regulating Internet of Things, or "IoT," devices sold in the state -- and the effects will soon be felt worldwide.

California's new SB 327 law, which will take effect in January 2020, requires all "connected devices" to have a "reasonable security feature." The good news is that the term "connected devices" is broadly defined to include just about everything connected to the Internet. The not-so-good news is that "reasonable security" remains so vaguely defined that companies trying to avoid compliance can argue that the law is unenforceable.

The legislation requires that security features must be able to protect the device and the information on it from a variety of threats and be appropriate to both the nature of the device and the information it collects. California's attorney general will interpret the law and define the specifics, which will surely be the subject of much lobbying by tech companies.

There's just one specific in the law that's not subject to the attorney general's interpretation: default passwords are not allowed. This is a good thing; they are a terrible security practice. But it's just one of dozens of awful "security" measures commonly found in IoT devices.
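
As a rough illustration of the alternative the law points toward, here is a minimal sketch, assuming a hypothetical Linux-based device with /dev/urandom, of firmware generating a unique per-device setup password on first boot instead of shipping with a hard-coded default. The function name, character set, and length are illustrative, not taken from the law or any particular product.

  /* Sketch: generate a unique per-device setup password on first boot.
   * Assumes a Linux-style device with /dev/urandom available. */
  #include <stdio.h>

  #define PASSWORD_LEN 16

  /* Unambiguous character set (no 0/O, 1/l/i) for a printed setup code. */
  static const char charset[] = "abcdefghjkmnpqrstuvwxyz23456789";

  /* Fills out[0..len-1] and NUL-terminates; the caller must supply a
   * buffer of at least len + 1 bytes. The slight modulo bias is
   * acceptable for a one-time setup code. Returns 0 on success. */
  int generate_device_password(char *out, size_t len) {
      FILE *urandom = fopen("/dev/urandom", "rb");
      if (!urandom)
          return -1;
      for (size_t i = 0; i < len; i++) {
          unsigned char b;
          if (fread(&b, 1, 1, urandom) != 1) {
              fclose(urandom);
              return -1;
          }
          out[i] = charset[b % (sizeof(charset) - 1)];
      }
      out[len] = '\0';
      fclose(urandom);
      return 0;
  }

The generated password would be printed on the device's label or shown during setup, and the owner required to replace it on first login -- the two paths the law accepts in place of a preprogrammed default: a password unique to each device, or a forced credential change before first use.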

This law is not a panacea. But we have to start somewhere, and it is a start.

Though the legislation covers only the state of California, its effects will reach much further. All of us -- in the United States or elsewhere -- are likely to benefit because of the way software is written and sold.

Automobile manufacturers sell their cars worldwide, but they are customized for local markets. The car you buy in the United States is different from the same model sold in Mexico, because the local environmental laws are not the same and manufacturers optimize engines based on where the product will be sold. The economics of building and selling automobiles easily allows for this differentiation.

But software is different. Once California forces minimum security standards on IoT devices, manufacturers will have to rewrite their software to comply. At that point, it won't make sense to have two versions: one for California and another for everywhere else. It's much easier to maintain the single, more secure version and sell it everywhere.

The European General Data Protection Regulation (GDPR), which implemented the annoying warnings and agreements that pop up on websites, is another example of a law that extends well beyond physical borders. You might have noticed an increase in websites that force you to acknowledge you've read and agreed to the website's privacy policies. This is because it is tricky to differentiate between users who are subject to the protections of the GDPR -- people physically in the European Union, and EU citizens wherever they are -- and those who are not. It's easier to extend the protection to everyone.

Once this kind of sorting is possible, companies will, in all likelihood, return to their profitable surveillance capitalism practices on those who are still fair game. Surveillance is still the primary business model of the Internet, and companies want to spy on us and our activities as much as they can so they can sell us more things and monetize what they know about our behavior.

Insecurity is profitable only if you can get away with it worldwide. Once you can't, you might as well make a virtue out of necessity. So everyone will benefit from the California regulation, as they would from similar security regulations enacted in any market around the world large enough to matter, just like everyone will benefit from the portion of GDPR compliance that involves data security.

Most importantly, laws like these spur innovations in cybersecurity. Right now, we have a market failure. Because the courts have traditionally not held software manufacturers liable for vulnerabilities, and because consumers don't have the expertise to differentiate between a secure product and an insecure one, manufacturers have prioritized low prices, getting devices to market quickly, and additional features over security.

But once a government steps in and imposes more stringent security regulations, companies have an incentive to meet those standards as quickly, cheaply, and effectively as possible. This means more security innovation, because now there's a market for new ideas and new products. We've seen this pattern again and again in safety and security engineering, and we'll see it with the Internet of Things as well.

IoT devices are more dangerous than our traditional computers because they sense the world around us, and affect that world in a direct physical manner. Increasing the cybersecurity of these devices is paramount, and it's heartening to see both individual states and the European Union step in where the US federal government is abdicating responsibility. But we need more, and soon.

This essay previously appeared on CNN.com.

The Pentagon Is Publishing Foreign Nation-State Malware

This is a new thing:

The Pentagon has suddenly started uploading malware samples from APTs and other nation-state sources to the website VirusTotal, which is essentially a malware zoo that's used by security pros and antivirus/malware detection engines to gain a better understanding of the threat landscape.

This feels like an example of the US's new strategy of actively harassing foreign government actors. By making their malware public, the US is forcing them to continually find and use new vulnerabilities.

EDITED TO ADD (11/13): This is another good article. And here is some background on the malware.
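
For defenders, the practical effect is that these samples become searchable like any other: the sketch below, assuming a hypothetical API key and file hash, looks up a hash against the VirusTotal v3 REST API using libcurl and prints the JSON analysis to stdout. It's an illustration of the lookup, not a complete tool.

  /* Sketch: query VirusTotal (API v3) for a file hash with libcurl.
   * The hash and API key are placeholders; build with -lcurl. */
  #include <stdio.h>
  #include <curl/curl.h>

  int main(void) {
      const char *hash = "PUT_SHA256_HERE";              /* placeholder */
      const char *apikey = "x-apikey: YOUR_API_KEY";     /* placeholder */
      char url[256];
      snprintf(url, sizeof(url),
               "https://www.virustotal.com/api/v3/files/%s", hash);

      curl_global_init(CURL_GLOBAL_DEFAULT);
      CURL *curl = curl_easy_init();
      if (!curl)
          return 1;

      struct curl_slist *headers = curl_slist_append(NULL, apikey);
      curl_easy_setopt(curl, CURLOPT_URL, url);
      curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
      /* libcurl's default write callback sends the JSON response to stdout. */
      CURLcode res = curl_easy_perform(curl);
      if (res != CURLE_OK)
          fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

      curl_slist_free_all(headers);
      curl_easy_cleanup(curl);
      curl_global_cleanup();
      return res == CURLE_OK ? 0 : 1;
  }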

Privacy and Security of Data at Universities

Interesting paper: "Open Data, Grey Data, and Stewardship: Universities at the Privacy Frontier," by Christine Borgman:

Abstract: As universities recognize the inherent value in the data they collect and hold, they encounter unforeseen challenges in stewarding those data in ways that balance accountability, transparency, and protection of privacy, academic freedom, and intellectual property. Two parallel developments in academic data collection are converging: (1) open access requirements, whereby researchers must provide access to their data as a condition of obtaining grant funding or publishing results in journals; and (2) the vast accumulation of "grey data" about individuals in their daily activities of research, teaching, learning, services, and administration. The boundaries between research and grey data are blurring, making it more difficult to assess the risks and responsibilities associated with any data collection. Many sets of data, both research and grey, fall outside privacy regulations such as HIPAA, FERPA, and PII. Universities are exploiting these data for research, learning analytics, faculty evaluation, strategic decisions, and other sensitive matters. Commercial entities are besieging universities with requests for access to data or for partnerships to mine them. The privacy frontier facing research universities spans open access practices, uses and misuses of data, public records requests, cyber risk, and curating data for privacy protection. This Article explores the competing values inherent in data stewardship and makes recommendations for practice by drawing on the pioneering work of the University of California in privacy and information security, data governance, and cyber risk.

iOS 12.1 Vulnerability

This is just to point out that computer security is really hard:

Almost as soon as Apple released iOS 12.1 on Tuesday, a Spanish security researcher discovered a bug that exploits group FaceTime calls to give anyone access to an iPhone user's contact information with no need for a passcode.

[...]

A bad actor would need physical access to the phone that they are targeting and would have a few options for viewing the victim's contact information. They would need to either call the phone from another iPhone or have the phone call itself. Once the call connects, they would need to:

  • Select the FaceTime icon
  • Select "Add Person"
  • Select the plus icon
  • Scroll through the contacts and use 3D Touch on a name to view all contact information that's stored

Making the phone call itself without entering a passcode can be accomplished either by telling Siri the phone number or, if the attacker doesn't know the number, by saying "call my phone." We tested this with both the owner's voice and a stranger's voice; in both cases, Siri initiated the call.