
Speakers Censored at AISA Conference in Melbourne

Two speakers were censored at the Australian Information Security Association's annual conference this week in Melbourne. Thomas Drake, former NSA employee and whistleblower, was scheduled to give a talk on the golden age of surveillance, both government and corporate. Suelette Dreyfus, lecturer at the University of Melbourne, was scheduled to give a talk on her EU-funded work on anonymous whistleblowing technologies like SecureDrop, and how they reduce corruption in countries where that is a problem.

Both were put on the program months ago. But just before the event, the Australian government's ACSC (the Australian Cyber Security Centre) demanded they both be removed from the program.

It's really kind of stupid. Australia has been benefiting a lot from whistleblowers in recent years -- exposing corruption and bad behavior on the part of the government -- and the government doesn't like it. It's cracking down on the whistleblowers and reporters who write their stories. My guess is that someone high up in ACSC saw the word "whistleblower" in the descriptions of those two speakers and talks and panicked.

You can read details of their talks, including abstracts and slides, here. Of course, now everyone is writing about the story. The two censored speakers spent a lot of the day yesterday on the phone with reporters, and they have a bunch of TV and radio interviews today.

I am at this conference, speaking on Wednesday morning (today in Australia, as I write this). ACSC used to have its own government cybersecurity conference. This is the first year it combined with AISA. I hope it's the last. And that AISA invites the two speakers back next year to give their censored talks.

EDITED TO ADD (10/9): More on the censored talks, and my comments from the stage at the conference.

Slashdot thread.

How I Learned to Stop Worrying and Love Vendor Risk

Insider risk, supply chain vulnerability and vendor risk all boil down to the same thing: the more people have access to your data, the more vulnerable it is to being leaked or breached.

This summer brought an interesting twist to that straightforward situation: Can data leaked by an employee or a contractor be a good thing?

In July, a Belgian contractor who had been hired to transcribe Google Home recordings shared several of them with news outlet VRT. The leak revealed that customers were being recorded without their consent, often after unintentionally triggering their devices. Google's response was immediate: the company went after the contractor. (Never mind that Google had been doing something it had denied doing; the leaked recordings, it claimed, were for research!)

“Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again,” the company said in a press release.

Translation: We’re not sorry we got caught doing whatever we want, but we are sorry we hired the wrong vendor and will try not to do that again.

An Apple contractor shared a similar story with the Guardian a short time later. Recordings taken from the company's voice assistant Siri were also being transcribed by third-party contractors. This time the news was worse: the Apple Watch was consistently recording users without any explicit prompting. Weeks later, a contractor for Microsoft went to Vice with what had by this point become a familiar story, this time in connection with both Skype and Cortana.

Whistleblower or Data Leak?

The typical narrative is that someone with inside knowledge of a company or its technology is able to exploit it to some sort of ill purpose. The accused hacker behind the recent Capital One data breach had previously worked for Amazon Web Services and was able to exploit her knowledge of a common firewall misconfiguration to steal customer data: more than 100 million records. Anthem and Boeing similarly suffered large-scale breaches perpetrated by insiders.

What makes the rash of recent data leaks noteworthy is that external contractors had access to data that they didn't think they should have, and they did something about it. In each case, the leaked data was passed along to press outlets for the express purpose of protecting customers. And it worked, at least in the short term: Apple and Google suspended their use of human transcribers, and Microsoft has made its privacy policy more explicit.

HR or IT?

What’s interesting here (other than the revelation that just about every major IoT speech-recognition product on the market has been spying on us without telling us) is what it reveals about insider risk.

It seems increasingly apparent that insider risk has as much to do with a company's HR department as it does its cybersecurity policy. A single disgruntled employee with an axe to grind is a familiar scenario, and one that can be mitigated through careful data management, but widespread unhappiness with a company's ethical practices is significantly more difficult to manage. It brings to mind Google's now-retired company motto: Don't be evil. Or rather: be nicer, and make yourself less of a target.

Google has had to contend with internal protests over everything from its involvement with Chinese censorship to its work with U.S. border and immigration agencies. Both Amazon and Microsoft experienced similar unrest among employees over their contracts with ICE. While none of these protests has yet led to a large-scale data breach, the prospect of thousands of employees and contractors with access to sensitive information and a motive to leak is a matter of serious concern.

The new law of the cyber jungle: Widespread disapproval exponentially increases one’s attackable surface.

While employee whistleblowers are nothing new (just ask Enron or Big Tobacco), they represent semi-terra incognita in our era of massive data breaches. We're used to thinking of any kind of data breach or data leak as a bad thing, and they usually are. But there is a grey area when companies are not playing by the rules in an environment where people are highly motivated to call them out for bad behavior.

What’s the Takeaway?

From a strictly technical perspective, even a well-intentioned data leak has the unfortunate side effect of showing where in the supply chain companies are most vulnerable. If hackers weren’t aware that organizations were entrusting intimate customer data to external contractors, they most certainly know it now.
